[
  {
    "path": ".gitignore",
    "content": ".pypirc\n*.pyc\n*.swp\n*.rpm\n*.egg\n*.yaml\n*.bak\nprestoadmin.egg-info/\n.tox/\n.coverage\nhtmlcov/\nlog/\ntmp/\n\n# Ignore generated Sphinx docs\ndocs/prestoadmin.*\ndocs/modules.rst\ndocs/_build\n\n# Ignore build folders\nbuild/\ndist/\n.eggs/\n.idea/\n*.iml\n*.egg/\n\n# tmp backup files\n*~\n\\#*#\n.#*\n\n#mvn targets\npresto-admin-test/target\n\n# presto yarn package for product tests\npresto-yarn-package.zip\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: python\npython: \"2.7\"\nsudo: required\ngroup: deprecated-2017Q2\ndist: trusty\nservices:\n  - docker\nenv:\n  global:\n    - PYTHONPATH=$PYTHONPATH:$(pwd)\n    - LONG_PRODUCT_TESTS=\"tests/product/test_server_install.py tests/product/test_status.py tests/product/test_collect.py tests/product/test_catalog.py tests/product/test_control.py tests/product/test_server_uninstall.py\"\n  matrix:\n    - ARTIFACTS=true\n    - OTHER_TESTS=true\n    - SHORT_PRODUCT_TEST_GROUP=0\n    - LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN=\"tests/product/test_server_install.py\"\n    - LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO=\"tests/product/test_status.py\"\n    - LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO=\"tests/product/test_collect.py tests/product/test_server_uninstall.py\"\n    - LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO=\"tests/product/test_catalog.py\"\n    - LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO=\"tests/product/test_control.py\"\nbefore_install:\n  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\n  - sudo add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\"\n  - sudo apt-get update\n  - sudo apt-get -y install docker-ce\ninstall:\n  - pip install --upgrade pip==9.0.1\n  - pip install -r requirements.txt\n  - pip install tox==3.0.0 tox-travis==0.10\nbefore_script:\n  - make docker-images\n  - make presto-server-rpm.rpm\nscript:\n  - |\n    if [ -v ARTIFACTS ]; then\n      ./bin/build-artifacts-in-docker.sh\n    elif [ -v SHORT_PRODUCT_TEST_GROUP ]; then\n      ALL_PRODUCT_TESTS=$(find tests/product/ -name 'test_*py' | grep -v __init__ | xargs wc -l | sort -n | head -n -1 | awk '{print $2}' | tr '\\n' ' ')\n      for LONG_PRODUCT_TEST in ${LONG_PRODUCT_TESTS[@]}; do\n        ALL_PRODUCT_TESTS=${ALL_PRODUCT_TESTS//$LONG_PRODUCT_TEST/};\n        if [ $? 
-ne 0 ]; then\n          exit 1\n        fi\n      done\n      SHORT_PRODUCT_TESTS=$(echo $ALL_PRODUCT_TESTS | tr ' ' '\\n' | awk \"NR % 1 == $SHORT_PRODUCT_TEST_GROUP\" | tr '\\n' ' ')\n      ./bin/ci-product.sh ${SHORT_PRODUCT_TESTS};\n    elif [ -v LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN ]; then\n      export IMAGE_NAMES=\"standalone_presto_admin\"\n      ./bin/ci-product.sh ${LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN};\n    elif [ -v LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO ]; then\n      export IMAGE_NAMES=\"standalone_presto standalone_presto_admin\"\n      ./bin/ci-product.sh ${LONG_PRODUCT_TEST_GROUP_PRESTO_ADMIN_AND_PRESTO}\n    elif [ -v OTHER_TESTS ]; then\n      ./bin/ci-basic.sh\n    else\n      echo \"Unknown test\"\n      exit 1\n    fi\n"
  },
  {
    "path": "CONTRIBUTING.rst",
    "content": "============\nContributing\n============\n\nContributions are welcome, and they are greatly appreciated! Every\nlittle bit helps, and credit will always be given.\n\nYou can contribute in many ways:\n\nTypes of Contributions\n----------------------\n\nReport Bugs\n~~~~~~~~~~~\n\nReport bugs at https://github.com/prestodb/presto-admin/issues.\n\nIf you are reporting a bug, please include:\n\n* Your operating system name and version.\n* Any details about your local setup that might be helpful in troubleshooting.\n* Detailed steps to reproduce the bug.\n\nFix Bugs\n~~~~~~~~\n\nLook through the GitHub issues for bugs. Anything tagged with \"bug\"\nis open to whomever wants to implement it.\n\nImplement Features\n~~~~~~~~~~~~~~~~~~\n\nLook through the GitHub issues for features. Anything tagged with \"feature\"\nis open to whomever wants to implement it.\n\nWrite Documentation\n~~~~~~~~~~~~~~~~~~~\n\npresto-admin could always use more documentation, whether as part of the\nofficial presto-admin docs, in docstrings, or even on the web in blog posts,\narticles, and such.\n\nSubmit Feedback\n~~~~~~~~~~~~~~~\n\nThe best way to send feedback is to file an issue at https://github.com/prestodb/presto-admin/issues.\n\nIf you are proposing a feature:\n\n* Explain in detail how it would work.\n* Keep the scope as narrow as possible, to make it easier to implement.\n\nGet Started!\n------------\n\nReady to contribute? Here's how to set up `presto-admin` for local development.\n\n1. Fork the `presto-admin` repo on GitHub, https://github.com/prestodb/presto-admin.\n2. Clone your fork locally::\n\n    $ git clone git@github.com:your_name_here/presto-admin.git\n\n3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::\n\n    $ mkvirtualenv prestoadmin\n    $ cd prestoadmin/\n    $ python setup.py develop\n\n4. 
Create a branch for local development::\n\n    $ git checkout -b name-of-your-bugfix-or-feature\n\n   Now you can make your changes locally.\n\n5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox.\n   To run the tests, you need Docker installed. You may also need to run ``pip install wheel`` in your virtualenv. To install and start Docker, use::\n\n    $ wget -qO- https://get.docker.com/ | sh\n\n    # Add current user to Docker group to run without sudo\n    $ sudo gpasswd -a ${USER} docker\n    $ sudo service docker restart\n\n   Now, to run the presto-admin tests::\n\n    $ make lint\n    $ make test-all\n\n6. Commit your changes and push your branch to GitHub::\n\n    $ git add .\n    $ git commit -m \"Your detailed description of your changes.\"\n    $ git push origin name-of-your-bugfix-or-feature\n\n7. Submit a pull request through the GitHub website.\n\nPull Request Guidelines\n-----------------------\n\nBefore you submit a pull request, check that it meets these guidelines:\n\n1. The pull request should include tests.\n2. If the pull request adds functionality, the docs should be updated. Put\n   your new functionality into a function with a docstring, and add the\n   feature to the presto-admin docs.\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "MANIFEST.in",
    "content": "include CONTRIBUTING.rst\ninclude HISTORY.rst\ninclude LICENSE\ninclude README.md\n\nrecursive-include tests *\nrecursive-exclude * __pycache__\nrecursive-exclude * *.py[co]\nrecursive-include prestoadmin *.ini\n\nrecursive-include docs *.rst conf.py Makefile make.bat\n"
  },
  {
    "path": "Makefile",
    "content": ".PHONY: clean-all clean clean-eggs clean-build clean-pyc clean-test-containers clean-test \\\n\tclean-docs lint smoke test test-all test-images test-rpm docker-images coverage docs \\\n\topen-docs release release-builds dist dist-online dist-offline wheel install precommit \\\n\tclean-test-all smoke-configurable-cluster test-all-configurable-cluster _clean_tmp\n\nhelp:\n\t@echo \"precommit - run \\`quick' tests and tasks that should pass or succeed prior to pushing\"\n\t@echo \"clean-all - clean everything; effectively resets repo as if it was just checked out\"\n\t@echo \"clean - remove build, test, coverage and Python artifacts except for the cache Presto RPM\"\n\t@echo \"clean-eggs - remove *.egg and *.egg-info files and directories\"\n\t@echo \"clean-build - remove build artifacts\"\n\t@echo \"clean-pyc - remove Python file artifacts\"\n\t@echo \"clean-test-containers - remove Docker containers used during tests\"\n\t@echo \"clean-test - remove test and coverage artifacts for unit and integration tests\"\n\t@echo \"clean-test-all - remove test and coverage artifacts for all tests\"\n\t@echo \"clean-docs - remove doc artifacts\"\n\t@echo \"lint - check style with flake8\"\n\t@echo \"smoke - run tests annotated with attr smoke using nosetests\"\n\t@echo \"smoke-configurable-cluster - same target as smoke but doesn't build the Docker images as the tests will run on a configurable cluster\"\n\t@echo \"test - run tests quickly with Python 2.6 and 2.7\"\n\t@echo \"test-all - run tests on every Python version with tox. Specify TEST_SUITE env variable to run only a given suite.\"\n\t@echo \"test-all-configurable-cluster - same target as test-all but doesn't build the Docker images as the tests will run on a configurable cluster\"\n\t@echo \"test-images - create product test image(s). Specify IMAGE_NAMES env variable to create only certain images.\"\n\t@echo \"test-rpm - run tests for the RPM package\"\n\t@echo \"docker-images - pull docker image(s). 
Specify the DOCKER_IMAGES variable to pull specific images.\"\n\t@echo \"coverage - check code coverage quickly with the default Python\"\n\t@echo \"docs - generate Sphinx HTML documentation, including API docs\"\n\t@echo \"open-docs - open the root document (index.html) using xdg-open\"\n\t@echo \"release - package and upload a release\"\n\t@echo \"release-builds - run all targets associated with a release (clean-build clean-pyc dist dist-offline docs)\"\n\t@echo \"dist - alias for dist-online\"\n\t@echo \"dist-online - package and build installer that requires an Internet connection\"\n\t@echo \"dist-offline - package and build installer that does not require an Internet connection\"\n\t@echo \"wheel - build wheel only\"\n\t@echo \"install - install the package to the active Python's site-packages\"\n\nprecommit: clean dist lint docs test\n\nclean-all: clean\n\trm -f presto*.rpm\n\nclean: clean-build clean-pyc clean-test-all clean-eggs clean-docs\n\nclean-eggs:\n\trm -fr .eggs/\n\tfind . -name '*.egg-info' -exec rm -fr {} +\n\tfind . -name '*.egg' -type f -exec rm -rf {} +\n\tfind . -name '*.egg' -type d -exec rm -rf {} +\n\nclean-build:\n\trm -fr build/\n\trm -fr dist/\n\nclean-pyc:\n\tfind . -name '*.pyc' -exec rm -f {} +\n\tfind . -name '*.pyo' -exec rm -f {} +\n\tfind . -name '*~' -exec rm -f {} +\n\tfind . 
-name '__pycache__' -exec rm -fr {} +\n\nclean-test-containers:\n\t for c in $$(docker ps --format \"{{.ID}} {{.Image}}\" | awk '/teradatalabs\\/pa_test/ { print $$1 }'); do docker kill $$c; done\n\nclean-test:\n\trm -fr .tox/\n\trm -f .coverage\n\trm -fr htmlcov/\n\nclean-test-all: clean-test _clean_tmp\n\tfor image in $$(docker images | awk '/teradatalabs\\/pa_test/ {print $$1}'); do docker rmi -f $$image ; done\n\t@echo \"\\n\\tYou can kill running containers that caused errors removing images by running \\`make clean-test-containers'\\n\"\n\n_clean_tmp:\n\trm -rf tmp\n\nclean-docs:\n\trm -rf docs/prestoadmin.*\n\trm -f docs/modules.rst\n\trm -rf docs/_build\n\nlint:\n\tflake8 prestoadmin packaging tests\n\nTEST_PRESTO_RPM_URL?=https://repository.sonatype.org/service/local/artifact/maven/content?r=central-proxy&g=com.facebook.presto&a=presto-server-rpm&e=rpm&v=RELEASE\n\npresto-server-rpm.rpm:\n\tif echo '${TEST_PRESTO_RPM_URL}' | grep -q '^http'; then       \\\n\t\techo \"Downloading presto-rpm from ${TEST_PRESTO_RPM_URL}\";   \\\n\t\twget -q '${TEST_PRESTO_RPM_URL}' -O $@;                      \\\n\telse                                                           \\\n\t\techo \"Using local presto-rpm from ${TEST_PRESTO_RPM_URL}\";   \\\n\t\tcp '${TEST_PRESTO_RPM_URL}' $@;                              \\\n\tfi\n\nsmoke: clean-test-all test-images _smoke\n\n# Configurable cluster requires the base Docker images to build the\n# presto-admin installer\nsmoke-configurable-cluster: clean-test _clean_tmp docker-images _smoke\n\n_smoke:\n\ttox -e py26 -- -a smoketest,'!quarantine'\n\ntest: clean-test\n\ttox -- -s tests.unit\n\ttox -- -s tests.integration\n\nTEST_SUITE?=tests.product\n\ntest-all: clean-test-all test-images _test-all\n\n# Configurable cluster requires the base Docker images to build the\n# presto-admin installer\ntest-all-configurable-cluster: clean-test _clean_tmp docker-images _test-all\n\n_test-all:\n\ttox -- -s tests.unit\n\ttox -- -s 
tests.integration\n\ttox -e py26 -- -s ${TEST_SUITE} -a '!quarantine'\n\n# Can take any space-separated combination of:\n# standalone_presto, standalone_presto_admin, standalone_bare,\n# yarn_slider_presto_admin, all\nIMAGE_NAMES?=\"all\"\n\n#\n# The build process and product tests rely on several base Docker images.\n# Teradata builds and releases a number of Docker images from the same\n# repository, all versioned together. This makes it simple to verify that your\n# test environment is sane: if all of the images are the same version, they\n# should work together.\n#\n# As part of the process of releasing those images, we tag all of the images\n# with the version number of the release. This means that anything that uses\n# the images can reference them as `teradatalabs/image_name:version'. The\n# Makefile needs to know that to pull the images, and the python code needs to\n# know that for various reasons.\n#\n# base-images-tag.json is the canonical source of the tag information for the\n# repository. 
The python code parses it properly with the json module, and the\n# Makefile parses it adequately with awk ;-)\n#\nBASE_IMAGES_TAG := $(shell awk '/base_images_tag/ \\\n\t{split($$NF, a, \"\\\"\"); print a[2]}' base-images-tag.json)\n\ntest-images: docker-images presto-server-rpm.rpm\n\tpython tests/product/image_builder.py $(IMAGE_NAMES)\n\nDOCKER_IMAGES := \\\n\tprestodb/centos6-presto-admin-tests-build:$(BASE_IMAGES_TAG)\n\ndocker-images:\n\tfor image in $(DOCKER_IMAGES); do docker pull $$image || exit 1; done\n\ntest-rpm: clean-test-all test-images\n\ttox -e py26 -- -s tests.rpm -a '!quarantine'\n\ncoverage:\n\tcoverage run --source prestoadmin setup.py test -s tests.unit\n\tcoverage report -m\n\tcoverage html\n\techo `pwd`/htmlcov/index.html\n\ndocs: clean-docs\n\tsphinx-apidoc -o docs/ prestoadmin\n\t$(MAKE) -C docs clean\n\t$(MAKE) -C docs html\n\nopen-docs:\n\txdg-open docs/_build/html/index.html\n\nrelease: clean\n\tpython setup.py sdist upload -r pypi_internal\n\tpython setup.py bdist_wheel upload -r pypi_internal\n\nrelease-builds: clean-build clean-pyc dist dist-offline docs\n\ndist: dist-online\n\ndist-online: clean-build clean-pyc\n\tpython setup.py bdist_prestoadmin --online-install\n\tls -l dist\n\ndist-offline: clean-build clean-pyc\n\tpython setup.py bdist_prestoadmin\n\tls -l dist\n\nwheel: clean\n\tpython setup.py bdist_wheel\n\tls -l dist\n\ninstall: clean\n\tpython setup.py install\n"
  },
  {
    "path": "README.md",
    "content": "# presto-admin [![Build Status](https://travis-ci.org/prestodb/presto-admin.svg?branch=master)](https://travis-ci.org/prestodb/presto-admin)\n\npresto-admin installs, configures, and manages Presto installations.\n\nComprehensive documentation can be found [here](http://prestodb.github.io/presto-admin/).\n\n## Requirements\n\n1. Python 2.6 or 2.7\n2. [Docker](https://www.docker.com/). (Only required for development, if you want to run the system tests)\n    * If you DO NOT have Docker already installed, you can run the `install-docker.sh`\n      script in the `bin` directory of this project. That script has only been tested on\n      Ubuntu 14.04.\n    * If you have Docker already installed, you need to make sure that your user has\n      been added to the docker group. This will enable you to run commands without `sudo`,\n      which is a requirement for some of the unit tests. To enable sudoless docker access\n      run the following:\n\n            $ sudo groupadd docker\n            $ sudo gpasswd -a ${USER} docker\n            $ sudo service docker restart\n\n      If the user you added to the docker group is the same one you're logged in as, you will\n      need to log out and back in so that the changes can take effect.\n\n## Building\n\nPresto-admin makes use of `make` as its build tool. `make` in turn calls out to various utilities (e.g.\n`tox`, `flake8`, `sphinx-apidoc`, `python`) in order to perform the requested actions.\n\nIn order to get started with `presto-admin`,\n\n1. Fork the `presto-admin` repo on GitHub, https://github.com/prestodb/presto-admin.\n2. Clone your fork locally ::\n\n        $ git clone git@github.com:your_name_here/presto-admin.git\n\n3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development ::\n\n        $ mkvirtualenv prestoadmin\n        $ cd prestoadmin/\n        $ python setup.py develop\n\n4. 
Create a branch for local development:\n\n        $ git checkout -b name-of-your-bugfix-or-feature\n\n     Now you can make your changes locally.\n\n5. When you're done making changes, check that your changes pass `make clean lint test`, which runs flake8 and the unit tests (which test both Python 2.6 and 2.7).\nTo run the product tests (`make test-all`), you need Docker installed. You may also need to run `pip install wheel` in your virtualenv. To install and start Docker, use:\n\n        $ wget -qO- https://get.docker.com/ | sh\n\n        # Add current user to Docker group to run without sudo\n        $ sudo gpasswd -a ${USER} docker\n        $ sudo service docker restart\n\n\n### Building the installer\n\nThe two tasks used to build the presto-admin installer are `dist` and\n`dist-offline`. The `dist` task builds an installer that requires internet\nconnectivity during installation. The `dist-offline` task builds an installer\nthat does not require internet connectivity during installation. Instead, the\noffline installer downloads all dependencies at build time and points `pip` to\nthose dependencies during installation.\n\n## License\n\nFree software: Apache License Version 2.0 (APLv2).\n"
  },
  {
    "path": "base-images-tag.json",
    "content": "{\n\t\"base_images_tag\": \"latest\"\n}\n"
  },
  {
    "path": "bin/build-artifacts-in-docker.sh",
    "content": "#!/usr/bin/env bash\n\nset -e\nset -o pipefail\nset -x\n\nROOT_DIR=$(readlink -f $(dirname $0)/..)\n\nif [[ -z \"${BASE_IMAGE_NAME}\" ]]; then\n  BASE_IMAGE_NAME=\"prestodb/centos6-presto-admin-tests\"\nfi\n\nBASE_IMAGE_NAME=${BASE_IMAGE_NAME}-build\n\nif [[ -z \"${BASE_IMAGE_TAG}\" ]]; then\n  BASE_IMAGE_TAG=$(cat ${ROOT_DIR}/base-images-tag.json | python -c 'import sys, json; print json.load(sys.stdin)[\"base_images_tag\"]')\nfi\n\necho Building presto-admin-artifacts in container ${BASE_IMAGE_NAME}:${BASE_IMAGE_TAG}\n\nCONTAINER_NAME=\"presto-admin-build-$(date '+%s')\"\nCONTAINER_DIR=\"/mnt/presto-admin\"\n\ndocker run --name ${CONTAINER_NAME} -v ${ROOT_DIR}:${CONTAINER_DIR} --rm -i ${BASE_IMAGE_NAME}:${BASE_IMAGE_TAG} \\\n  env CONTAINER_DIR=\"${CONTAINER_DIR}\" bash <<\"EOF\"\n    cd ${CONTAINER_DIR}\n    pip install --upgrade pip==9.0.1\n    pip install tox-travis==0.10\n    # use explicit versions of dependent packages\n    pip install pycparser==2.18\n    pip install Babel==2.5.3\n    pip install cffi==1.11.5\n    pip install PyNaCl==1.2.1\n    pip install cryptography==2.1.1\n    pip install -r requirements.txt\n    export PYTHONPATH=${PYTHONPATH}:$(pwd)\n    make dist dist-offline\nEOF\n"
  },
  {
    "path": "bin/ci-basic.sh",
    "content": "#!/bin/bash -xe\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nmake clean lint dist docs\ntox -- -s tests.unit\ntox -- -s tests.integration\n"
  },
  {
    "path": "bin/ci-product.sh",
    "content": "#!/bin/bash -xe\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nmake test-images \nnosetests --with-timer --timer-ok 60s --timer-warning 300s -a '!quarantine' \"$@\"\n"
  },
  {
    "path": "bin/install-docker.sh",
    "content": "#!/bin/bash -x\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Install docker on Ubuntu 14.04\nwget -qO- https://get.docker.com/ | sh\n\n# Add current user to Docker group to run without sudo\nsudo gpasswd -a ${USER} docker\n\nsudo sh -c \"echo 'DOCKER_OPTS=\\\"--dns 153.65.2.111 --dns 8.8.8.8\\\"' >> /etc/default/docker\"\n\nsudo service docker restart\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHINXBUILD   = sphinx-build\nPAPER         =\nBUILDDIR      = _build\n\n# User-friendly check for sphinx-build\nifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)\n$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)\nendif\n\n# Internal variables.\nPAPEROPT_a4     = -D latex_paper_size=a4\nPAPEROPT_letter = -D latex_paper_size=letter\nALLSPHINXOPTS   = -W -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .\n# the i18n builder cannot share the environment and doctrees with the others\nI18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .\n\n.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext\n\nhelp:\n\t@echo \"Please use \\`make <target>' where <target> is one of\"\n\t@echo \"  html       to make standalone HTML files\"\n\t@echo \"  dirhtml    to make HTML files named index.html in directories\"\n\t@echo \"  singlehtml to make a single large HTML file\"\n\t@echo \"  pickle     to make pickle files\"\n\t@echo \"  json       to make JSON files\"\n\t@echo \"  htmlhelp   to make HTML files and a HTML help project\"\n\t@echo \"  qthelp     to make HTML files and a qthelp project\"\n\t@echo \"  devhelp    to make HTML files and a Devhelp project\"\n\t@echo \"  epub       to make an epub\"\n\t@echo \"  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\"\n\t@echo \"  latexpdf   to make LaTeX files and run them through pdflatex\"\n\t@echo \"  latexpdfja to make LaTeX files and run them through platex/dvipdfmx\"\n\t@echo \"  text 
      to make text files\"\n\t@echo \"  man        to make manual pages\"\n\t@echo \"  texinfo    to make Texinfo files\"\n\t@echo \"  info       to make Texinfo files and run them through makeinfo\"\n\t@echo \"  gettext    to make PO message catalogs\"\n\t@echo \"  changes    to make an overview of all changed/added/deprecated items\"\n\t@echo \"  xml        to make Docutils-native XML files\"\n\t@echo \"  pseudoxml  to make pseudoxml-XML files for display purposes\"\n\t@echo \"  linkcheck  to check all external links for integrity\"\n\t@echo \"  doctest    to run all doctests embedded in the documentation (if enabled)\"\n\nclean:\n\trm -rf $(BUILDDIR)/*\n\nhtml:\n\t$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/html.\"\n\ndirhtml:\n\t$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/dirhtml.\"\n\nsinglehtml:\n\t$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml\n\t@echo\n\t@echo \"Build finished. 
The HTML page is in $(BUILDDIR)/singlehtml.\"\n\npickle:\n\t$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle\n\t@echo\n\t@echo \"Build finished; now you can process the pickle files.\"\n\njson:\n\t$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json\n\t@echo\n\t@echo \"Build finished; now you can process the JSON files.\"\n\nhtmlhelp:\n\t$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp\n\t@echo\n\t@echo \"Build finished; now you can run HTML Help Workshop with the\" \\\n\t      \".hhp project file in $(BUILDDIR)/htmlhelp.\"\n\nqthelp:\n\t$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp\n\t@echo\n\t@echo \"Build finished; now you can run \"qcollectiongenerator\" with the\" \\\n\t      \".qhcp project file in $(BUILDDIR)/qthelp, like this:\"\n\t@echo \"# qcollectiongenerator $(BUILDDIR)/qthelp/prestoadmin.qhcp\"\n\t@echo \"To view the help file:\"\n\t@echo \"# assistant -collectionFile $(BUILDDIR)/qthelp/prestoadmin.qhc\"\n\ndevhelp:\n\t$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp\n\t@echo\n\t@echo \"Build finished.\"\n\t@echo \"To view the help file:\"\n\t@echo \"# mkdir -p $$HOME/.local/share/devhelp/prestoadmin\"\n\t@echo \"# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/prestoadmin\"\n\t@echo \"# devhelp\"\n\nepub:\n\t$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub\n\t@echo\n\t@echo \"Build finished. 
The epub file is in $(BUILDDIR)/epub.\"\n\nlatex:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo\n\t@echo \"Build finished; the LaTeX files are in $(BUILDDIR)/latex.\"\n\t@echo \"Run \\`make' in that directory to run these through (pdf)latex\" \\\n\t      \"(use \\`make latexpdf' here to do that automatically).\"\n\nlatexpdf:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through pdflatex...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\nlatexpdfja:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through platex and dvipdfmx...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\ntext:\n\t$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text\n\t@echo\n\t@echo \"Build finished. The text files are in $(BUILDDIR)/text.\"\n\nman:\n\t$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man\n\t@echo\n\t@echo \"Build finished. The manual pages are in $(BUILDDIR)/man.\"\n\ntexinfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo\n\t@echo \"Build finished. The Texinfo files are in $(BUILDDIR)/texinfo.\"\n\t@echo \"Run \\`make' in that directory to run these through makeinfo\" \\\n\t      \"(use \\`make info' here to do that automatically).\"\n\ninfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo \"Running Texinfo files through makeinfo...\"\n\tmake -C $(BUILDDIR)/texinfo info\n\t@echo \"makeinfo finished; the Info files are in $(BUILDDIR)/texinfo.\"\n\ngettext:\n\t$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale\n\t@echo\n\t@echo \"Build finished. 
The message catalogs are in $(BUILDDIR)/locale.\"\n\nchanges:\n\t$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes\n\t@echo\n\t@echo \"The overview file is in $(BUILDDIR)/changes.\"\n\nlinkcheck:\n\t$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck\n\t@echo\n\t@echo \"Link check complete; look for any errors in the above output \" \\\n\t      \"or in $(BUILDDIR)/linkcheck/output.txt.\"\n\ndoctest:\n\t$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest\n\t@echo \"Testing of doctests in the sources finished, look at the \" \\\n\t      \"results in $(BUILDDIR)/doctest/output.txt.\"\n\nxml:\n\t$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml\n\t@echo\n\t@echo \"Build finished. The XML files are in $(BUILDDIR)/xml.\"\n\npseudoxml:\n\t$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml\n\t@echo\n\t@echo \"Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml.\"\n"
  },
  {
    "path": "docs/conf.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n#\n# presto-admin documentation build configuration file\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n\nimport sys\nimport os\n\n# If extensions (or modules to document with autodoc) are in another\n# directory, add these directories to sys.path here. If the directory is\n# relative to the documentation root, use os.path.abspath to make it\n# absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# Get the project root dir, which is the parent dir of this\ncwd = os.getcwd()\nproject_root = os.path.dirname(cwd)\n\n# Insert the project root dir as the first element in the PYTHONPATH.\n# This lets us ensure that the source package is imported, and that its\n# version is used.\nsys.path.insert(0, project_root)\n\nimport prestoadmin\n\n# -- General configuration ---------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.napoleon']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'presto-admin'\n\n# The version info for the project you're documenting, acts as replacement\n# for |version| and |release|, also used in various other places throughout\n# the built documents.\n#\n# The short X.Y version.\nversion = prestoadmin.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = prestoadmin.__version__\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to\n# some non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built\n# documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output -------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\nhtml_theme = 'classic'\n\n# Theme options are theme-specific and customize the look and feel of a\n# theme further.  For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents.  If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar.  Default is the same as\n# html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the\n# top of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon\n# of the docs.  This file should be a Windows icon file (.ico) being\n# 16x16 or 32x32 pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets)\n# here, relative to this directory. 
They are copied after the builtin\n# static files, so a file named \"default.css\" will overwrite the builtin\n# \"default.css\".\n#html_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page\n# bottom, using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names\n# to template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer.\n# Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer.\n# Default is True.\nhtml_show_copyright = False\n\n# If true, an OpenSearch description file will be output, and all pages\n# will contain a <link> tag referring to it.  The value of this option\n# must be the base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'prestoadmindoc'\n\n\n# -- Options for LaTeX output ------------------------------------------\n\nlatex_elements = {\n    # The paper size ('letterpaper' or 'a4paper').\n    #'papersize': 'letterpaper',\n\n    # The font size ('10pt', '11pt' or '12pt').\n    #'pointsize': '10pt',\n\n    # Additional stuff for the LaTeX preamble.\n    #'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass\n# [howto/manual]).\nlatex_documents = [\n    ('index', 'prestoadmin.tex',\n     u'presto-admin Documentation',\n     '', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at\n# the top of the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings\n# are parts, not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n    ('index', 'prestoadmin',\n     u'presto-admin Documentation',\n     [u''], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ----------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [\n    ('index', 'prestoadmin',\n     u'presto-admin Documentation',\n     u'',\n     'prestoadmin',\n     'One line description of project.',\n     'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n"
  },
  {
    "path": "docs/contributing.rst",
    "content": ".. include:: ../CONTRIBUTING.rst\n"
  },
  {
    "path": "docs/emr.rst",
    "content": ".. _presto-admin-on-emr-label:\n..\n.. If you modify this file, you will have to modify the NOTEs in the following files:\n.. docs/installation/java-installation.rst\n.. docs/installation/presto-admin-configuration.rst\n.. docs/installation/presto-admin-installation.rst\n..\n\n================================================\nSetting up Presto Admin on an Amazon EMR cluster\n================================================\n\nTo install, configure and run Presto Admin on an Amazon EMR cluster, follow the instructions in :ref:`quick-start-guide-label`, but pay attention to the notes or sections specific to EMR clusters. We reiterate these EMR-specific caveats below:\n\n- To install Presto Admin on an Amazon EMR cluster, follow the instructions in :ref:`presto-admin-installation-label` except for the following difference:\n\n\t- Use the online installer instead of the offline installer (see the explanation in :ref:`presto-admin-installation-label`).\n\n- To configure Presto Admin on an Amazon EMR cluster, follow the instructions in :ref:`presto-admin-configuration-label`. Specifically, we recommend the following property values during the configuration:\n\n\t- Use ``hadoop`` as the ``username`` instead of the default username ``root`` in the ``config.json`` file.\n\n\t- Use the host name of the EMR master node as the ``coordinator`` in the ``config.json`` file.\n\n- To run Presto Admin on EMR, see the sections starting from :ref:`presto-server-installation-label` onwards in :ref:`quick-start-guide-label`, except for the following caveats:\n\n        - The default version of Java installed on an EMR cluster (up to EMR 4.4.0) is 1.7, whereas Presto requires Java 1.8. 
Install Java 1.8 on the EMR cluster by following the instructions in :ref:`java-installation-label`.\n\n        - For running Presto Admin commands on an EMR cluster, do the following:\n                * Copy the ``.pem`` file associated with the Amazon EC2 key pair to the Presto Admin installation node of the EMR cluster.\n                * Use the ``-i <path to .pem file>`` input argument when running presto-admin commands on the node.\n\t\t  ::\n\n\t\t   </path/to/presto-admin> -i </path/to/your.pem> <presto_admin_command>\n"
  },
  {
    "path": "docs/index.rst",
    "content": "Presto-Admin\n============\n\n`Mailing list <http://groups.google.com/group/presto-users>`_ |\n`Issues <https://github.com/prestodb/presto-admin/issues>`_ |\n`GitHub <https://github.com/prestodb/presto-admin>`_\n\nIntroduction\n------------\nPresto-Admin is a tool for installing and managing the Presto query engine on a\ncluster. It provides easy-to-use commands to:\n\n    * Install and uninstall Presto across your cluster\n    * Configure your Presto cluster\n    * Start and stop the Presto servers\n    * Gather status and log information from your Presto cluster\n\nContent\n-------\n\n.. toctree::\n   :maxdepth: 3\n\n   software-requirements\n   user-guide\n   contributing\n   release\n\n.. toctree::\n   :hidden:\n\n   modules\n\nIndices and tables\n------------------\n\n* :ref:`search`\n\n"
  },
  {
    "path": "docs/installation/advanced-installation-options.rst",
    "content": "=============================\nAdvanced Installation Options\n=============================\n\nSpecifying a Certificate Authority for the Online Installer\n-----------------------------------------------------------\nThe online installer downloads its dependencies from ``pypi.python.org``, the\nstandard Python location for hosting packages. For some operating systems,\nthe certificate for pypi.python.org is not included in the CA cert bundle,\nso our installation scripts specify ``--trusted-host pypi.python.org`` when\ndownloading the dependencies.\n\nIf using ``--trusted-host`` is not suitable for your security needs, it is\npossible to supply your own certificates to use to authenticate to\n``pypi.python.org``.  Please note that if these certificates do not work to\naccess ``pypi.python.org``, the installation will fail. For example, to install\nwith your own certificates:\n\n::\n\n ./install-prestoadmin.sh <path_to_cert>/cacert.pem\n\nCoordinator failover\n--------------------\nPresto does not yet support automatic failover for the coordinator. You can\nmigrate to a new coordinator using the ``presto-admin`` -H and -x flags\nto include and exclude hosts in your command, respectively.\n\nTo view these ``presto-admin`` options, use the ``--extended-help`` flag.\n\nYou can switch to a new coordinator by following the steps below:\n\n1. Stop Presto on all the nodes where it is running using the command: ::\n\n     ./presto-admin server stop\n\n2. Edit the ``presto-admin`` topology file and replace the old coordinator\n   with the new one.  By default, the topology file is located at\n   ``~/.prestoadmin/config.json``.\n\n3. To install Presto on the new node, run the following two ``presto-admin``\n   commands. 
The first command is needed only if Java 8 is not already installed\n   on the new coordinator: ::\n\n     ./presto-admin package install -H new_coordinator /path/to/jdk8.rpm\n     ./presto-admin server install -H new_coordinator /path/to/presto-server.rpm\n\n4. Update the coordinator and worker configuration files controlled by\n   ``presto-admin``. By default, these files are available at ``~/.prestoadmin/``.\n\n5. Run the following commands to deploy the new configurations to all nodes,\n   including the new coordinator, and start the server: ::\n\n     ./presto-admin configuration deploy\n     ./presto-admin server start\n"
  },
  {
    "path": "docs/installation/java-installation.rst",
    "content": ".. _java-installation-label:\n\n=================\nInstalling Java 8\n=================\nPrerequisites: :ref:`presto-admin-installation-label` and :ref:`presto-admin-configuration-label`\n\nThe Oracle Java 1.8 JRE (64-bit), update 45 or higher, is a prerequisite for Presto. If a suitable 64-bit version of Oracle Java 8 is already installed on the cluster, you can skip this step.\n\n.. NOTE:: On Amazon EMR (up to EMR 4.4.0), the default version of Java is 1.7. To run Presto on EMR, please install Java 1.8.\n\nThere are two ways to install Java: via RPM and via tarball.  The RPM installation sets the default Java on your machine to be Java 8. If\nit is acceptable to set the default Java to be Java 8, you can use ``presto-admin`` to install Java; otherwise, you will need to install Java 8 manually.\n\nTo install Java via RPM using ``presto-admin``:\n\n1. Download `Oracle Java 8 <http://java.com/en/download/linux_manual.jsp>`_, selecting the Oracle Java 1.8 (64-bit) RPM download for Linux.\n\n2. Copy the RPM to a location accessible by ``presto-admin``.\n\n3. Run the following command to install Java 8 on each node in the Presto cluster: ::\n\n    $ ./presto-admin package install <local_path_to_java_rpm>\n\n\n.. NOTE:: The ``server install`` command will look for your Oracle Java 1.8 installation at locations where Java is normally installed when using the binary or the RPM-based installer. Otherwise, you need to have an environment variable called ``JAVA8_HOME`` set with your Java 1.8 install path. If ``JAVA8_HOME`` is not set or is pointing to an incompatible version of Java, the installer will look for the ``JAVA_HOME`` environment variable for a compatible version of Java. If neither of these environment variables is set with a compatible version, and ``presto-admin`` fails to find Java 8 at any of the normal installation locations, then ``server install`` will fail. 
After successfully running ``server install``, you can find the Java being used by Presto at ``/etc/presto/env.sh``.\n\n.. NOTE:: If you have installed the JDK, ``JAVA8_HOME`` should be set to refer to the ``jre`` subdirectory of the JDK.\n\n.. NOTE:: If installing Java on SLES, you will need to specify the flag ``--nodeps`` for ``presto-admin package install``, so that the RPM is installed without checking or validating dependencies.\n"
  },
  {
    "path": "docs/installation/presto-admin-configuration.rst",
    "content": ".. _presto-admin-configuration-label:\n\n========================\nConfiguring Presto-Admin\n========================\nA Presto cluster consists of a coordinator node and one or more worker nodes.\nA coordinator and worker may be located on the same node, meaning that you can\nhave a single-node installation of Presto, but having a dedicated node for the\ncoordinator is recommended for better performance, especially on larger\nclusters.\n\nIn order to use ``presto-admin`` to manage software on a cluster of nodes,\nyou must specify a configuration for ``presto-admin``. This configuration\nindicates the nodes on which to install Presto as well as the credentials\nused to connect to them.\n\nTo set up a configuration, create a file ``~/.prestoadmin/config.json``\n(or ``$PRESTO_ADMIN_CONFIG_DIR/config.json`` if you have the ``presto-admin``\nconfig directory set using the environment variable) with the content below as\nappropriate for your cluster setup. Replace the variables denoted with\nbrackets <> with actual values enclosed in double quotes. The user\nspecified by ``username`` must have sudo access on all the Presto nodes\n(unless the username is root), and ``presto-admin`` must also be\nable to log in to all of the nodes via SSH as that user (see\n:ref:`ssh-configuration-label` for details on how to set that up). The\nfile should be owned by root with R/W permissions (i.e. 622).\n\n.. NOTE::\n   The sudo setup for a non-root user must have the ability to run /bin/bash as root. This can be a security issue. The IT organization should take the necessary steps to address this security hole and select an appropriate presto-admin user.\n\nConfiguration for Amazon EMR\n----------------------------\n\nUse the following configuration as a template for Amazon EMR:\n::\n\n {\n \"username\": \"hadoop\",\n \"port\": \"<ssh_port>\",\n \"coordinator\": \"<EMR_master_node_host_name>\",\n \"workers\": [\"<host_name_1>\", \"<host_name_2>\", ... 
\"<host_name_n>\"],\n \"java8_home\":\"<path/to/java8/on/presto/nodes>\"\n }\n\nAlso, for running Presto Admin commands on Amazon EMR, do the following:\n\n\t- Copy the ``.pem`` file associated with the Amazon EC2 key pair to the Presto Admin installation node of the EMR cluster.\n\t- Use the ``-i </path/to/your.pem>`` input argument when running presto-admin commands on the node.\n\n\t  ::\n\n\t   </path/to/presto-admin> -i </path/to/your.pem> <presto_admin_command>\n\n\nConfiguration for other clusters\n--------------------------------\nUse the following configuration as a template for other clusters:\n::\n\n {\n \"username\": \"<ssh_user_name>\",\n \"port\": \"<ssh_port>\",\n \"coordinator\": \"<host_name>\",\n \"workers\": [\"<host_name_1>\", \"<host_name_2>\", ... \"<host_name_n>\"],\n \"java8_home\":\"<path/to/java8/on/presto/nodes>\"\n }\n\nDo not use ``localhost`` as a ``host_name`` for a multi-node cluster.\nAll of the properties are optional, and if left out, the following defaults will\nbe used:\n::\n\n {\n \"username\": \"root\",\n \"port\": \"22\",\n \"coordinator\": \"localhost\",\n \"workers\": [\"localhost\"]\n }\n\nNote that ``java8_home`` is not set by default.  It only needs to be set if\nJava 8 is in a non-standard location on the Presto nodes.  The property is used\nto tell the Presto RPM where to find Java 8.\n\n.. NOTE:: If you have installed the JDK, ``java8_home`` should be set to refer to the ``jre`` subdirectory of the JDK.\n\nYou can also specify some but not all of the properties. For example, the\ndefault configuration is for a single-node installation of Presto on the same\nnode that ``presto-admin`` is installed on. For a six-node cluster with default\nusername and port, a sample ``config.json`` would be:\n\n::\n\n {\n \"coordinator\": \"master\",\n \"workers\": [\"slave1\",\"slave2\",\"slave3\",\"slave4\",\"slave5\"]\n }\n\nYou can specify a range of workers by including the number range in brackets in the worker name.  
For example:\n\n::\n\n    \"workers\": [\"worker[01-03]\"]\n\nis the same as\n\n::\n\n    \"workers\": [\"worker01\", \"worker02\", \"worker03\"]\n\n\n.. _sudo-password-spec:\n\nSudo Password Specification\n---------------------------\nPlease note that if the username you specify is not root and that user needs\nto provide a sudo password, you can do so in one of two ways. You can specify it\non the command line:\n::\n\n ./presto-admin <command> -p <password>\n\nAlternatively, you can opt to use an interactive password prompt, which prompts\nyou for the initial value of your password before running any commands:\n::\n\n ./presto-admin <command> -I\n Initial value for env.password: <type your password here>\n\nThe sudo password for the user must be the same as the SSH password.\n"
  },
  {
    "path": "docs/installation/presto-admin-installation.rst",
    "content": ".. _presto-admin-installation-label:\n\n=======================\nInstalling Presto Admin\n=======================\n\nPrerequisites:\n - `Python 2.6 or Python 2.7 <https://www.python.org/downloads>`_.\n - If you are using the online installer, make sure you've installed the\n   Python development package for your system. For RedHat/CentOS that package is\n   ``python2-devel`` and for Debian/Ubuntu it is ``python-dev``.\n\nPresto Admin is packaged as an offline installer --\n``prestoadmin-<version>-offline.tar.gz`` -- and as an online\ninstaller -- ``prestoadmin-<version>-online.tar.gz``.\n\nThe offline installer includes all of the dependencies for\n``presto-admin``, so it can be used on a cluster without an outside\nnetwork connection. The offline installer is currently only supported\non RedHat Linux 6.x or CentOS equivalent.\n\nThe online installer downloads all of the dependencies when you run\n``./install-prestoadmin.sh``. You must use the online installer for\ninstallation of Presto on Amazon EMR and for use on any operating\nsystem not listed above. If you are using presto-admin on an\nunsupported operating system, there may be operating system\ndependencies beyond the installation process, and presto-admin may not\nwork.\n\nTo install ``presto-admin``:\n\n1. Download an offline installer from the\n`releases page <https://github.com/prestodb/presto-admin/releases>`_.\n\n2. Copy the installer ``prestoadmin-<version>-offline.tar.gz`` to the\nlocation where you want ``presto-admin`` to run.\nNote that ``presto-admin`` does not have to be on the same node(s)\nwhere Presto will run, though it does need to have SSH access to all\nof the nodes in the cluster.\n\n.. NOTE::\n     For Amazon EMR, use the online installer instead of the offline installer.\n\n3. 
Extract and run the installation script from within the ``prestoadmin`` directory.\n::\n\n $ tar xvf prestoadmin-<version>-offline.tar.gz\n $ cd prestoadmin\n $ ./install-prestoadmin.sh\n\nThe installation script will create a ``presto-admin-install`` directory and an\nexecutable ``presto-admin`` script. By default, the ``presto-admin`` config and log\ndirectory locations are configured to be ``~/.prestoadmin`` and ``~/.prestoadmin/log``,\nrespectively.  This can be changed by modifying the environment variables,\nPRESTO_ADMIN_CONFIG_DIR and PRESTO_ADMIN_LOG_DIR. The installation script will also create\nthe directories pointed to by PRESTO_ADMIN_CONFIG_DIR and PRESTO_ADMIN_LOG_DIR. If those\ndirectories already exist, the installation script will not erase their contents.\n\n4. Verify that ``presto-admin`` was installed properly by running the following\ncommand:\n::\n\n $ ./presto-admin --help\n\nPlease note that you should only run one ``presto-admin`` command on your\ncluster at a time.\n"
  },
  {
    "path": "docs/installation/presto-admin-upgrade.rst",
    "content": "======================\nUpgrading Presto-Admin\n======================\n\nUpgrading to a newer version of ``presto-admin`` requires deleting the old\ninstallation and then installing the new version.  After you've deleted the\n``prestoadmin`` directory, install the newer version of ``presto-admin``\nby following the instructions in the installation section\n(see :ref:`presto-admin-installation-label`).\n\nFor ``presto-admin`` versions earlier than 2.0, the configuration files are\nlocated at ``/etc/opt/prestoadmin``.  To upgrade to a newer version and\ncontinue to use these configuration files, make sure you copy them to the\nnew configuration directory at ``~/.prestoadmin`` (or\n``$PRESTO_ADMIN_CONFIG_DIR``). The connector configuration directory\nlocated at ``/etc/opt/prestoadmin/connectors`` must be renamed to\n``/etc/opt/prestoadmin/catalog``, before copying to ``~/.prestoadmin``.\n\nFor ``presto-admin`` versions 2.0 and later, the configuration files\nlocated in ``~/.prestoadmin`` will remain intact and continue to be used\nby the newer version of ``presto-admin``.\n"
  },
  {
    "path": "docs/installation/presto-catalog-installation.rst",
    "content": "\n================\nAdding a Catalog\n================\n\nIn Presto, connectors allow you to access different data sources -- e.g.,\nHive, PostgreSQL, or MySQL.\n\nTo add a catalog for the Hive connector:\n\n1. Create a file ``hive.properties`` in ``~/.prestoadmin/catalog`` with the following content: ::\n\n    connector.name=hive-hadoop2\n    hive.metastore.uri=thrift://<metastore-host-or-ip>:<metastore-port>\n\n\n2. Distribute the configuration file to all of the nodes in the cluster: ::\n\n    ./presto-admin catalog add hive\n\n\n3. Restart Presto: ::\n\n    ./presto-admin server restart\n\n\nYou may need to add additional properties for the Hive connector to work properly, such as if your Hadoop cluster\nis set up for high availability. For these and other properties, see the `Hive connector documentation <https://prestodb.io/docs/current/connector/hive.html>`_.\n\nFor detailed documentation on ``catalog add``, see :ref:`catalog-add`.\nFor more on which catalogs Presto supports, see the `Presto connector documentation <https://prestodb.io/docs/current/connector.html>`_.\n"
  },
  {
    "path": "docs/installation/presto-cli-installation.rst",
    "content": ".. _presto-cli-installation-label:\n\n======================\nRunning Presto Queries\n======================\n\nThe Presto CLI provides a terminal-based interactive shell for running queries. The CLI is a self-executing JAR file, which means it acts like a normal UNIX executable.\n\nTo run a query via the Presto CLI:\n\n1. Download the ``presto-cli`` and copy it to the location you want to run it from. This location may be any node that has network access to the coordinator.\n\n2. Rename the artifact to ``presto`` and make it executable, substituting your version of Presto for \"version\": ::\n\n    $ mv presto-cli-<version>-executable.jar presto\n    $ chmod +x presto\n\n.. NOTE:: Presto must run with Java 8, so if Java 7 is the default on your cluster, you will need to explicitly specify the Java 8 executable. For example, ``<path_to_java_8_executable> -jar presto``. It may be helpful to add an alias for the Presto CLI: ``alias presto='<path_to_java_8_executable> -jar <path_to_presto>'``.\n\n3. By default, ``presto-admin`` configures a TPC-H catalog, which generates TPC-H data on-the-fly.\n   Using this catalog, issue the following commands to run your first Presto query: ::\n\n    $ ./presto --catalog tpch --schema tiny\n    presto:tiny> select count(*) from lineitem;\n\n\nThe above command assumes that you installed the Presto CLI on the coordinator, and that the Presto server is on port 8080. If either of these is not the case, then specify the server location in the command: ::\n\n    $ ./presto --server <host_name>:<port_number> --catalog tpch --schema tiny\n\n"
  },
  {
    "path": "docs/installation/presto-configuration.rst",
    "content": ".. _presto-configuration-label:\n\n==================\nConfiguring Presto\n==================\n\nPresto configuration parameters can be modified to\ntweak performance or add/remove features. While Presto is designed to work well out-of-the-box,\nyou still may need to make some changes.\n\n\nMemory configuration\n--------------------\nIt is often necessary to change the default memory configuration based on your cluster's\ncapacity. The default max memory for each Presto server is 16 GB, but if you have a lot of\nmemory (say, 120GB/node), you may want to allocate more memory to Presto for better performance.\n\nIn order to update the max memory value to 60 GB per node:\n\n1. Change the line in ``~/.prestoadmin/coordinator/jvm.config`` and\n   ``~/.prestoadmin/workers/jvm.config`` that says ``-Xmx16G`` to ``-Xmx60G``.\n\n2. Change the following lines in ``~/.prestoadmin/coordinator/config.properties``\n   and ``~/.prestoadmin/workers/config.properties``: ::\n\n        query.max-memory-per-node=8GB\n        query.max-memory=50GB\n\n\n   to ::\n\n        query.max-memory-per-node=30GB\n        query.max-memory=<30GB * number of nodes>\n\n\n   We recommend setting ``query.max-memory-per-node`` to half of the JVM config max memory, though if your workload is highly concurrent, you may want\n   to use a lower value for ``query.max-memory-per-node``. If you have large data skew, ``query.max-memory-per-node`` may need to be set to a higher value.\n   By default in Presto 148t and higher, ``query.max-memory-per-node`` is 10% of the ``Xmx`` value specified in ``jvm.config``.\n\n3. Run the following command to deploy the configuration change to the cluster: ::\n\n        ./presto-admin configuration deploy\n\n\n4. 
Restart the Presto servers so that the changes get picked up: ::\n\n        ./presto-admin server restart\n\n\n   If you are running Presto in a test environment that has less than 16 GB of memory available,\n   you will need to follow similar procedures to set the memory configurations lower.\n\nLog file location configurations\n--------------------------------\n\nFor most production environments, it will be necessary to change the log locations. In order to update these:\n\n1. Stop the Presto server. ::\n\n    ./presto-admin server stop\n\n2. Presto stores logs and other data in ``node.data-dir``, ``node.launcher-log-file``,\n   and ``node.server-log-file``. It is very important that these locations have enough space for the logs on the filesystem of\n   each node where Presto is running. The default location for ``node.data-dir`` is ``/var/lib/presto/data``, the\n   default location for ``node.launcher-log-file`` is ``/var/log/presto/launcher.log``, and the default\n   location for ``node.server-log-file`` is ``/var/log/presto/server.log``.\n   Assuming the chosen locations are ``/data1/presto`` and ``/data2/presto`` for the data directory\n   and server logs respectively, the properties in ``~/.prestoadmin/coordinator/node.properties`` and\n   ``~/.prestoadmin/workers/node.properties`` will be as follows: ::\n\n        node.data-dir=/data1/presto/data\n        node.launcher-log-file=/data2/presto/launcher.log\n        node.server-log-file=/data2/presto/server.log\n\n3. The log directory(ies) (in the above example, ``/data1/presto`` and ``/data2/presto``; the ``data`` directory\n   for ``node.data-dir`` is created by Presto) need to\n   exist on all nodes and be owned by the ``presto`` user. The command ``presto-admin script run``\n   can be used to perform these actions on all of the nodes. 
First, create a script in the same\n   directory as ``presto-admin``, called ``script.sh``: ::\n\n        #!/bin/bash\n        mkdir -p /data1/presto\n        mkdir -p /data2/presto\n        chown presto:presto /data1/presto\n        chown presto:presto /data2/presto\n\n   Then, run the following command: ::\n\n        ./presto-admin script run script.sh\n\n4. Run the following command to deploy the log configuration change to the cluster: ::\n\n    ./presto-admin configuration deploy\n\n5. Restart the Presto servers so that the changes get picked up: ::\n\n    ./presto-admin server restart\n\n\nFor detailed documentation on ``configuration deploy``, see :ref:`configuration-deploy-label`.\nFor more configuration parameters, see the Presto documentation.\n"
  },
  {
    "path": "docs/installation/presto-port-configuration.rst",
    "content": ".. _presto-port-configuration-label:\n\n===========================\nConfiguring the Presto Port\n===========================\n\nBy default, Presto uses 8080 for the HTTP port. If the port is already in use on any given node on your cluster, Presto will not start on those nodes.\n\nTo configure the server to use a different port:\n\n1. Select a port that is free on all of the nodes. You can check if a port is already in use on a node by running the following on that node:\n::\n\n    netstat -an | grep 8081 | grep LISTEN\n\nIt will return nothing if port 8081 is free.\n\n2. Modify the following properties in ``~/.prestoadmin/coordinator/config.properties`` and ``~/.prestoadmin/workers/config.properties``:\n\n::\n\n    http-server.http.port=<port>\n    discovery.uri=http://<coordinator_ip_or_host>:<port>\n\n\n3. Run the following command to deploy the configuration change to the cluster: ::\n\n    ./presto-admin configuration deploy\n\n\n4. Restart the Presto servers so that the changes get picked up: ::\n\n    ./presto-admin server restart\n"
  },
  {
    "path": "docs/installation/presto-server-installation.rst",
    "content": ".. _presto-server-installation-label:\n\n============================\nInstalling the Presto Server\n============================\nPrerequisites: :ref:`presto-admin-installation-label`, :ref:`java-installation-label` and :ref:`presto-admin-configuration-label`\n\nTo install the Presto query engine on a cluster of nodes using ``presto-admin``:\n\n1. Download ``presto-server-rpm-VERSION.ARCH.rpm``\n\n2. Copy the RPM to a location accessible by ``presto-admin``.\n\n3. Run the following command to install Presto: ::\n\n    $ ./presto-admin server install <local_path_to_rpm>\n\n\nPresto! Presto is now installed on the coordinator and workers specified in your ``~/.prestoadmin/config.json`` file.\n\nThe default port for Presto is 8080.  If that port is already in use on your cluster, you will not be able to start Presto.\nIn order to change the port that Presto uses, proceed to :ref:`presto-port-configuration-label`.\n\nThere are additional configuration properties described at :ref:`presto-configuration-label` that\nmust be changed for optimal performance. These configuration changes can be done either\nbefore or after starting the Presto server and running queries for the first time, though\nall configuration changes require a restart of the Presto servers.\n\n4. Now, you are ready to start Presto: ::\n\n    $ ./presto-admin server start\n\nThis may take a few seconds, since the command doesn't exit until ``presto-admin`` verifies that Presto is fully up and ready to receive queries.\n"
  },
  {
    "path": "docs/installation/troubleshooting-installation.rst",
    "content": "===============\nTroubleshooting\n===============\n\n#. To troubleshoot problems with presto-admin or Presto, you can use the\n   incident report gathering commands from presto-admin to gather logs and\n   other system information from your cluster. Relevant commands:\n\n    * :ref:`collect-logs`\n    * :ref:`collect-query-info`\n    * :ref:`collect-system-info`\n\n#. You can find the ``presto-admin`` logs in the ``~/.prestoadmin/log``\n   directory.\n#. You can check the status of Presto on your cluster by using\n   :ref:`server-status`.\n#. If Presto is not running and you try to execute any command from the Presto CLI you might get:\n   ::\n\n    $ Error running command: Server refused connection: http://localhost:8080/v1/statement\n\n   To fix this, start Presto with:\n   ::\n\n     $ ./presto-admin server start\n\n#. If the Presto servers fail to start or crash soon after starting, look at\n   the presto server logs on the Presto cluster ``/var/log/presto`` for an\n   error message.  You can collect the logs locally using :ref:`collect-logs`.\n   The relevant error messages should be at the end of the log with the most\n   recent timestamp.  Below are tips for some common errors:\n\n    * Specifying a port that is already in use: Look at\n      :ref:`presto-port-configuration-label` to learn how to change the port\n      configuration.\n    * An error in a catalog configuration file, such as a syntax error or\n      a missing connector.name property: correct the file and deploy it to the\n      cluster again using :ref:`catalog-add`\n\n#. 
The following error can occur if you do not have passwordless ssh enabled\n   and have not provided a password or if the user requires a sudo password: ::\n\n    Fatal error: Needed to prompt for a connection or sudo password (host: master), but input would be ambiguous in parallel mode\n\n   See :ref:`ssh-configuration-label` for information on setting up\n   passwordless ssh and on providing a password, and :ref:`sudo-password-spec`\n   for information on providing a sudo password.\n\n#. Support for connecting to a cluster with internal HTTPS and/or LDAP communication\n   enabled is experimental. Make sure to check both the Presto server log and the\n   ``presto-admin`` log to troubleshoot problems with your configuration; it may also\n   be helpful to verify that you can connect to the cluster via the Presto CLI using\n   HTTPS/LDAP as appropriate.\n"
  },
  {
    "path": "docs/presto-admin-cli-options.rst",
    "content": "=================================\nPresto-Admin Command-Line Options\n=================================\n\nA quick overview of the possible CLI options for ``presto-admin`` can be found\nvia ``./presto-admin --extended-help``. More details on those options can\nbe found below.\n\n--version\n    Prints out the current ``presto-admin`` version and exits.\n\n-h, --help\n    Prints out a usage string, the basic ``presto-admin`` options and the\n    available commands, then exits.\n\n-d, --display\n    Prints detailed information about a given command.\n\n    e.g., to get detailed information about the ``server install`` command, enter: ::\n\n        ./presto-admin -d server install\n\n--extended-help\n    Prints out a usage string, all the ``presto-admin`` options and the\n    available commands, then exits.\n\n-I, --initial-password-prompt\n    Forces password prompt before running any commands on the cluster.\n\n    Either this option or the ``--password`` option is necessary if the user from\n    ``~/.prestoadmin/config.json`` needs a password for sudo.\n\n    Note that the SSH password and the sudo password must be the same,\n    if passwordless SSH is not used.\n\n-p PASSWORD, --password=PASSWORD\n    Sets password for use with authentication and/or sudo.\n\n    Either this option or the ``--initial-password-prompt`` option is necessary\n    if the user from ``~/.prestoadmin/config.json`` needs a password for sudo.\n\n    Note that the SSH password and the sudo password must be the same,\n    if passwordless SSH is not used.\n\n--abort-on-error\n    Aborts the command, instead of warning, if a command fails on any node. 
The\n    default for ``presto-admin`` is to warn if a command fails on any node.\n\n-a, --no_agent\n    Forces ``presto-admin`` not to seek out running SSH agents when using\n    key-based authentication.\n\n-A, --forward-agent\n    Enables forwarding of a local SSH agent to the remote end.\n\n--colorize-errors\n    Colorizes error output.\n\n-D, --disable-known-hosts\n    Turns off loading of a user's SSH known_hosts file. Disabling known_hosts leaves\n    you vulnerable to man-in-the-middle attacks. However, in some environments such as\n    EC2, a host presenting a different key should not prevent you from connecting\n    to that host via SSH.\n\n-g HOST, --gateway=HOST\n    Routes SSH connections through the SSH daemon on the\n    specified gateway host to their final destination.\n\n-H HOSTS, --hosts=HOSTS\n    Sets the list of hosts where a ``presto-admin`` command should be executed.\n    The values should be comma-separated and exist in your topology.\n\n-i PATH\n    Adds the SSH private key file specified by PATH to the set of keys to\n    try during key-based SSH authentication. May be repeated.\n\n-k, --no-keys\n    Disables loading private key files from ``~/.ssh/``.\n\n--keepalive=N\n    Sends an SSH keepalive every N seconds to keep SSH from timing out.\n\n-n M, --connection-attempts=M\n    Makes M attempts to connect before giving up. The default is 1 attempt.\n\n--port=PORT\n    Sets the SSH connection port. If the SSH port is set both in\n    ``~/.prestoadmin/config.json`` and on the command line, the port\n    specified on the command line will be used.\n\n-r, --reject-unknown-hosts\n    Aborts when a host is not in the user's SSH ``known_hosts`` file.\n\n--system-known-hosts=SYSTEM_KNOWN_HOSTS\n    Loads the given SSH ``known_hosts`` file before reading the user's ``known_hosts``\n    file.\n\n-t N, --timeout=N\n    Sets the network connection timeout to N seconds. 
The default is 10 seconds.\n\n-T N, --command-timeout=N\n    Sets the timeout for the given remote command to N seconds. The default is\n    to have no timeout.\n\n-u USER, --user=USER\n    Sets the user that is used for SSH connections. If the SSH username is set both in\n    ``~/.prestoadmin/config.json`` and on the command line, the username\n    specified on the command line will be used.\n\n-x HOSTS, --exclude-hosts=HOSTS\n    Sets the list of hosts to be excluded when executing a ``presto-admin``\n    command. The values should be comma-separated and exist in your topology.\n\n--serial\n    Switches to run the command in serial. The default is to run in parallel, because\n    parallel mode is usually faster. However, if you want a password prompt while the command\n    is running (without specifying ``-I`` or ``--initial-password-prompt``), the ``--serial`` flag is necessary.\n"
  },
  {
    "path": "docs/presto-admin-commands.rst",
    "content": "=====================\nPresto-Admin Commands\n=====================\n\n.. _catalog-add:\n\n***********\ncatalog add\n***********\n::\n\n    presto-admin catalog add [<name>]\n\nThis command is used to deploy catalog configurations to the Presto cluster.\n`Catalog configurations <https://prestodb.io/docs/current/connector.html>`_ are\nkept in the configuration directory ``~/.prestoadmin/catalog``\n\nTo add a catalog using ``presto-admin``, first create a configuration file in\n``~/.prestoadmin/catalog``. The file should be named ``<name>.properties`` and\ncontain the configuration for that catalog.\n\nUse the optional ``name`` argument to add a particular catalog to your\ncluster. To deploy all catalogs in the catalog configuration directory,\nleave the name argument out.\n\nIn order to query using the newly added catalog, you need to restart the\nPresto server (see `server restart`_): ::\n\n    presto-admin server restart\n\nExample\n-------\nTo add a catalog for the jmx connector, create a file\n``~/.prestoadmin/catalog/jmx.properties`` with the content\n``connector.name=jmx``.\nThen run: ::\n\n    ./presto-admin catalog add jmx\n    ./presto-admin server restart\n\nIf you have two catalog configurations in the catalog directory, for example\n``jmx.properties`` and ``dummy.properties``, and would like to deploy both at\nonce, you could run ::\n\n    ./presto-admin catalog add\n    ./presto-admin server restart\n\nAdding a Custom Connector\n-------------------------\nIn order to install a catalog for a custom connector not included with Presto, the\njar must be added to the Presto plugin location using the ``plugin add_jar`` command\nbefore running the ``catalog add`` command.\n\nExample: ::\n\n   ./presto-admin plugin add_jar my_connector.jar my_connector\n   ./presto-admin catalog add my_connector\n   ./presto-admin server restart\n\nThe ``add_jar`` command assumes the default plugin location of\n``/usr/lib/presto/lib/plugin`` (see `plugin 
add_jar`_).  As with the default\nconnectors, a ``my_connector.properties`` file must be created. Refer to the\ncustom connector's documentation for the properties to specify.\n\nThe ``plugin add_jar`` command works with both jars and directories containing jars.\n\n**************\ncatalog remove\n**************\n::\n\n    presto-admin catalog remove <name>\n\nThe catalog remove command is used to remove a catalog from your presto\ncluster configuration. Running the command will remove the catalog from all\nnodes in the Presto cluster. Additionally, it will remove the local\nconfiguration file for the catalog.\n\nIn order for the change to take effect, you will need to restart services. ::\n\n    presto-admin server restart\n\n\nExample\n-------\nFor example: To remove the catalog for the jmx connector, run ::\n\n    ./presto-admin catalog remove jmx\n    ./presto-admin server restart\n\n.. _collect-logs:\n\n************\ncollect logs\n************\n::\n\n    presto-admin collect logs\n\nThis command gathers Presto server logs and launcher logs from the ``/var/log/presto/`` directory across the cluster along with the\n``~/.prestoadmin/log/presto-admin.log`` and creates a tar file. The final tar output will be saved at ``/tmp/presto-debug-logs.tar.gz``.\n\n\nExample\n-------\n::\n\n    ./presto-admin collect logs\n\n.. _collect-query-info:\n\n******************\ncollect query_info\n******************\n::\n\n    presto-admin collect query_info <query_id>\n\nThis command gathers information about a Presto query identified by the given ``query_id`` and stores that information in a JSON file.\nThe output file will be saved at ``/tmp/presto-debug/query_info_<query_id>.json``.\n\nExample\n-------\n::\n\n    ./presto-admin collect query_info 20150525_234711_00000_7qwaz\n\n.. 
_collect-system-info:\n\n*******************\ncollect system_info\n*******************\n::\n\n    presto-admin collect system_info\n\nThis command gathers various system specific information from the cluster. The information is saved in a tar file at ``/tmp/presto-debug-sysinfo.tar.gz``.\nThe gathered information includes:\n\n * Node specific information from Presto like node uri, last response time, recent failures, recent requests made to the node, etc.\n * List of catalogs configured\n * Catalog configuration files\n * Other system specific information like OS information, Java version, ``presto-admin`` version and Presto server version\n\nExample\n-------\n::\n\n    ./presto-admin collect system_info\n\n\n.. _configuration-deploy-label:\n\n********************\nconfiguration deploy\n********************\n::\n\n    presto-admin configuration deploy [coordinator|workers]\n\nThis command deploys `Presto configuration files <https://prestodb.io/docs/current/installation/deployment.html>`_\nonto the cluster. ``presto-admin`` uses different configuration directories for\nworker and coordinator configurations so that you can easily create different\nconfigurations for your coordinator and worker nodes. Create a\n``~/.prestoadmin/coordinator`` directory for your coordinator\nconfigurations and a ``~/.prestoadmin/workers`` directory for your\nworkers configuration. If you have the ``presto-admin`` configuration\ndirectory path set using the environment variable ``PRESTO_ADMIN_CONFIG_DIR``\nthen the coordinator and worker configuration directories must be created\nunder ``$PRESTO_ADMIN_CONFIG_DIR``.  Place the configuration files for the coordinator\nand workers in their respective directories. The optional ``coordinator`` or ``workers``\nargument tells ``presto-admin`` to only deploy the coordinator or workers\nconfigurations. 
To deploy both configurations at once, don't specify either\noption.\n\nWhen you run configuration deploy, the following files will be deployed to\nthe ``/etc/presto`` directory on your Presto cluster:\n\n* node.properties\n* config.properties\n* jvm.config\n* log.properties (if it exists)\n\n.. NOTE:: This command will not deploy the configurations for catalogs.  To deploy catalog configurations run `catalog add`_\n\nIf the coordinator is also a worker, it will get the coordinator configuration.\nThe deployed configuration files will overwrite the existing configurations on\nthe cluster. However, the node.id from the\nnode.properties file will be preserved. If no ``node.id`` exists, a new id will be\ngenerated. If any required files are absent when you run configuration deploy,\na default configuration will be deployed. Below are the default\nconfigurations:\n\n*node.properties* ::\n\n    node.environment=presto\n    node.data-dir=/var/lib/presto/data\n    node.launcher-log-file=/var/log/presto/launcher.log\n    node.server-log-file=/var/log/presto/server.log\n    catalog.config-dir=/etc/presto/catalog\n    plugin.dir=/usr/lib/presto/lib/plugin\n\n.. 
NOTE:: Do not change the value of catalog.config-dir=/etc/presto/catalog as it is necessary for Presto to be able to find the catalog directory when Presto has been installed by RPM.\n\n*jvm.config* ::\n\n    -server\n    -Xmx16G\n    -XX:-UseBiasedLocking\n    -XX:+UseG1GC\n    -XX:G1HeapRegionSize=32M\n    -XX:+ExplicitGCInvokesConcurrent\n    -XX:+HeapDumpOnOutOfMemoryError\n    -XX:+UseGCOverheadLimit\n    -XX:+ExitOnOutOfMemoryError\n    -XX:ReservedCodeCacheSize=512M\n    -DHADOOP_USER_NAME=hive\n\n*config.properties*\n\nFor workers: ::\n\n    coordinator=false\n    discovery.uri=http://<coordinator>:8080\n    http-server.http.port=8080\n    query.max-memory-per-node=8GB\n    query.max-memory=50GB\n\nFor coordinator: ::\n\n    coordinator=true\n    discovery-server.enabled=true\n    discovery.uri=http://<coordinator>:8080\n    http-server.http.port=8080\n    node-scheduler.include-coordinator=false\n    query.max-memory-per-node=8GB\n    query.max-memory=50GB\n\n    # if the coordinator is also a worker, it will have the following property instead\n    node-scheduler.include-coordinator=true\n\nSee :ref:`presto-port-configuration-label` for details on http port configuration.\n\nExample\n-------\nIf you want to change the jvm configuration on the coordinator and the\n``node.environment`` property from ``node.properties`` on all nodes, add the\nfollowing ``jvm.config`` to ``~/.prestoadmin/coordinator``\n\n.. 
code-block:: none\n\n    -server\n    -Xmx16G\n    -XX:-UseBiasedLocking\n    -XX:+UseG1GC\n    -XX:G1HeapRegionSize=32M\n    -XX:+ExplicitGCInvokesConcurrent\n    -XX:+HeapDumpOnOutOfMemoryError\n    -XX:+UseGCOverheadLimit\n    -XX:+ExitOnOutOfMemoryError\n    -XX:ReservedCodeCacheSize=512M\n\nFurther, add the following ``node.properties`` to\n``~/.prestoadmin/coordinator`` and ``~/.prestoadmin/workers``: ::\n\n    node.environment=test\n    node.data-dir=/var/lib/presto/data\n    node.launcher-log-file=/var/log/presto/launcher.log\n    node.server-log-file=/var/log/presto/server.log\n    catalog.config-dir=/etc/presto/catalog\n    plugin.dir=/usr/lib/presto/lib/plugin\n\nThen run: ::\n\n    ./presto-admin configuration deploy\n\nThis will distribute to the coordinator a default ``config.properties``, the new\n``jvm.config`` and ``node.properties``.  The workers will\nreceive the default ``config.properties`` and ``jvm.config``, and the same\n``node.properties`` as the coordinator.\n\nIf instead you just want to update the coordinator configuration, run: ::\n\n    ./presto-admin configuration deploy coordinator\n\nThis will leave the workers configuration as it was, but update the\ncoordinator's configuration\n\n******************\nconfiguration show\n******************\n::\n\n    presto-admin configuration show [node|jvm|config|log]\n\nThis command prints the contents of the Presto configuration files deployed in the cluster. It takes an optional configuration name argument for the configuration files node.properties, jvm.config, config.properties and log.properties. 
A warning will be printed for each missing configuration file, except for log.properties, since it is an optional configuration file in a Presto cluster.\n\nIf no argument is specified, then all four configurations will be printed.\n\nExample\n-------\n::\n\n    ./presto-admin configuration show node\n\n\n***************\npackage install\n***************\n\n::\n\n    presto-admin package install local_path [--nodeps]\n\nThis command copies any rpm from ``local_path`` to all the nodes in the cluster and installs it. Similar to ``server install``, the cluster topology is obtained from the file ``~/.prestoadmin/config.json``. If this file is missing, then the command prompts for user input to get the topology information.\n\nThis command takes an optional ``--nodeps`` flag, which skips checking package dependencies when installing the rpm.\n\n.. WARNING:: Using ``--nodeps`` can result in installing the rpm even when dependencies are missing, so you may end up with a broken rpm installation.\n\nExample\n-------\n::\n\n    ./presto-admin package install /tmp/jdk-8u45-linux-x64.rpm\n\n\n*****************\npackage uninstall\n*****************\n\n::\n\n    presto-admin package uninstall rpm_package_name [--nodeps]\n\nThis command uninstalls an rpm package from all the nodes in the cluster. Similar to ``server uninstall``, the cluster\ntopology is obtained from the file ``~/.prestoadmin/config.json``. If this file is missing, then the command\nprompts for user input to get the topology information.\n\nThis command takes an optional ``--nodeps`` flag, which skips checking package dependencies when uninstalling the\nrpm.\n\n.. WARNING:: Using ``--nodeps`` can result in uninstalling the rpm even when dependent packages are installed. 
You may end up with a broken rpm installation.\n\nExample\n-------\n::\n\n    ./presto-admin package uninstall jdk\n\n\n**************\nplugin add_jar\n**************\n::\n\n    presto-admin plugin add_jar <local-path> <plugin-name> [<plugin-dir>]\n\nThis command deploys the jar at ``local-path`` to the plugin directory for\n``plugin-name``.  By default ``/usr/lib/presto/lib/plugin`` is used as the\ntop-level plugin directory. To deploy the jar to a different location, use the\noptional ``plugin-dir`` argument.\n\nExample\n-------\n::\n\n    ./presto-admin plugin add_jar connector.jar my_connector\n    ./presto-admin plugin add_jar connector.jar my_connector /my/plugin/dir\n\nThe first example will deploy connector.jar to\n``/usr/lib/presto/lib/plugin/my_connector/connector.jar``.\nThe second example will deploy it to ``/my/plugin/dir/my_connector/connector.jar``.\n\n**********\nscript run\n**********\n::\n\n    presto-admin script run <local-path-to-script> [<remote-dir-to-put-script>]\n\nThis command can be used to run an arbitrary script on a cluster. It copies the\nscript from its local location to the specified remote directory (defaults to\n/tmp), makes the file executable, and runs it.\n\nExample\n-------\n::\n\n    ./presto-admin script run /my/local/script.sh\n    ./presto-admin script run /my/local/script.sh /remote/dir\n\n\n.. _server-install-label:\n\n**************\nserver install\n**************\n::\n\n    presto-admin server install <rpm_specifier> [--rpm-source] [--nodeps]\n\nThis command takes in a parameter ``rpm_specifier``. The parameter can be one of the following forms, listed in order of decreasing precedence:\n'latest' - This downloads the latest version of the presto rpm.\nurl - This downloads the presto rpm found at the given url.\nversion number - This downloads the presto rpm of the specified version.\nlocal path - This uses a previously downloaded rpm. 
The local path should be accessible by ``presto-admin``.\nIf ``rpm_specifier`` matches multiple forms, it is interpreted only as the form with highest precedence.\nFor forms that require the rpm to be downloaded, if a local copy with a version matching the rpm that would be downloaded is found, the local copy is used.\nRpms downloaded using a version number or 'latest' come from Maven Central.\nThis command fails if it cannot find or download the requested presto-server rpm.\n\nAfter successfully finding the rpm, this command copies the presto-server rpm to all the nodes in the cluster,\ninstalls it, and deploys the general Presto configuration along with the tpch connector configuration.\nThe topology used to configure the nodes is obtained from ``~/.prestoadmin/config.json``. See :ref:`presto-admin-configuration-label` on how to configure your cluster using config.json. If this file is missing, then the command prompts for user input to get the topology information.\n\nThe general configurations for Presto's coordinator and workers are taken from the directories ``~/.prestoadmin/coordinator`` and ``~/.prestoadmin/workers`` respectively. If these directories or any required configuration files are absent when you run ``server install``, a default configuration will be deployed. See `configuration deploy`_ for details.\n\nThe catalog directory ``~/.prestoadmin/catalog/`` should contain the configuration files for any catalogs that you would\nlike to connect to in your Presto cluster.\nThe ``server install`` command will configure the cluster with all the catalogs in the directory. If the directory does\nnot exist or is empty prior to ``server install``, then by default the tpch connector is configured. See `catalog add`_\non how to add catalog configuration files after installation.\n\nThis command takes an optional ``--nodeps`` flag, which tells rpm to skip checking package dependencies during installation.\n\n.. 
WARNING:: Using ``--nodeps`` can install the rpm even when its dependencies are missing, so you may end up with a broken rpm installation.\n\nExample\n-------\n::\n\n    ./presto-admin server install /tmp/presto.rpm\n    ./presto-admin server install 0.148\n    ./presto-admin server install http://search.maven.org/remotecontent?filepath=com/facebook/presto/presto-server-rpm/0.150/presto-server-rpm-0.150.rpm\n    ./presto-admin server install latest\n\n**Standalone RPM Install**\n\nIf you want to do a single-node installation where the coordinator and a worker are co-located, you can just use:\n::\n\n    rpm -i presto.rpm\n\nThis will deploy the necessary configurations for the presto-server to operate in single-node mode.\n\n.. _server-restart-label:\n\n**************\nserver restart\n**************\n::\n\n    presto-admin server restart\n\nThis command first stops any running Presto servers and then starts them. A status check is performed on the entire cluster and is reported at the end.\n\nExample\n-------\n::\n\n    ./presto-admin server restart\n\n\n.. _server-start-label:\n\n************\nserver start\n************\n::\n\n    presto-admin server start\n\nThis command starts the Presto servers on the cluster. A status check is performed on the entire cluster and is reported at the end.\n\nExample\n-------\n::\n\n    ./presto-admin server start\n\n\n.. _server-status:\n\n*************\nserver status\n*************\n::\n\n    presto-admin server status\n\nThis command prints the status information of Presto in the cluster. This command will\nfail to report the correct status if the installed Presto is older than version 0.100. 
It will not print any status information if a given node is inaccessible.\n\nThe status output will have the following information:\n    * server status\n    * node uri\n    * Presto version installed\n    * node is active/inactive\n    * catalogs deployed\n\nExample\n-------\n::\n\n    ./presto-admin server status\n\n\n***********\nserver stop\n***********\n::\n\n    presto-admin server stop\n\nThis command stops the Presto servers on the cluster.\n\nExample\n-------\n::\n\n    ./presto-admin server stop\n\n\n****************\nserver uninstall\n****************\n::\n\n    presto-admin server uninstall [--nodeps]\n\nThis command stops the Presto server if it is running on the cluster and uninstalls the Presto rpm. The uninstall command removes any Presto-related\nfiles deployed during ``server install`` but retains the Presto logs at ``/var/log/presto``.\n\nThis command takes an optional ``--nodeps`` flag, which tells rpm to skip checking package dependencies during uninstallation.\n\nExample\n-------\n::\n\n    ./presto-admin server uninstall\n\n\n**************\nserver upgrade\n**************\n::\n\n    presto-admin server upgrade path/to/new/package.rpm [local_config_dir] [--nodeps]\n\nThis command upgrades the Presto RPM on all of the nodes in the cluster to the RPM at\n``path/to/new/package.rpm``, preserving the existing configuration on the cluster. The existing\ncluster configuration is saved locally to local_config_dir (which defaults to a temporary\nfolder if not specified). 
The path can either be absolute or relative to the current\ndirectory.\n\nThis command can also be used to downgrade the Presto installation, if the RPM at\n``path/to/new/package.rpm`` is an earlier version than the Presto installed on the cluster.\n\nNote that if the configuration files on the cluster differ from the presto-admin configuration\nfiles found in ``~/.prestoadmin``, the presto-admin configuration files are not updated.\n\nThis command takes an optional ``--nodeps`` flag, which tells rpm to skip checking package dependencies during the upgrade.\n\n.. WARNING:: Using ``--nodeps`` can install the rpm even when its dependencies are missing, so you may end up with a broken rpm upgrade.\n\nExample\n-------\n::\n\n    ./presto-admin server upgrade path/to/new/package.rpm /tmp/cluster-configuration\n    ./presto-admin server upgrade /path/to/new/package.rpm /tmp/cluster-configuration\n\n\n*************\ntopology show\n*************\n::\n\n    presto-admin topology show\n\nThis command shows the current topology configuration for the cluster (including the coordinators, workers, SSH port, and SSH username).\n\nExample\n-------\n::\n\n    ./presto-admin topology show\n\n\n"
  },
  {
    "path": "docs/quick-start-guide.rst",
    "content": ".. _quick-start-guide-label:\n\n*****************\nQuick Start Guide\n*****************\n\nThe following describes installing Presto on one or more nodes via the ``presto-admin`` software. This is an alternative to the installation steps described at `prestodb.io <https://prestodb.io/docs/current/installation.html>`_. Using the ``presto-admin`` tool is the simplest and preferred method for installing and managing a Presto cluster.\n\nFor a detailed explanation of all of the commands and their options, see :ref:`comprehensive-guide-label`.\n\n.. toctree::\n    :maxdepth: 1\n\n    installation/presto-admin-installation\n    installation/presto-admin-configuration\n    installation/java-installation\n    installation/presto-server-installation\n    installation/presto-cli-installation\n    installation/presto-catalog-installation\n    installation/presto-configuration\n    installation/troubleshooting-installation\n    installation/presto-admin-upgrade\n"
  },
  {
    "path": "docs/release/release-0.1.0.rst",
    "content": "=============\nRelease 0.1.0\n=============\n\nInitial Release!\nThis release works for Presto versions 0.100-0.102\n"
  },
  {
    "path": "docs/release/release-1.1.rst",
    "content": "===========\nRelease 1.1\n===========\n\nThis release works for Presto versions 0.103-0.115\n"
  },
  {
    "path": "docs/release/release-1.2.rst",
    "content": "===========\nRelease 1.2\n===========\n\nThe default values in this release are intended to work with Presto versions\n0.116 through at least 0.130. However, the user can supply non-default\nconfigurations to use this release with other versions of Presto.\n\nGeneral Fixes\n-------------\n* Fix server status to work with later versions of Presto\n* Exit with non-zero code when operations fail\n* Update configuration defaults for Presto versions >0.115\n* Make remote log directory configurable\n* Add support for specifying java8 home in config.json\n* :ref:`collect-logs` will use the log directory specified in\n  Presto's config.properties if configured.\n\n\nConfiguration\n-------------\nBefore this release, :ref:`configuration-deploy-label` would fill in default\nvalues for any required properties that the user did not supply in the\nconfiguration files. However, this created problems when different versions of\nPresto had different configuration requirements.  In particular, it became\nimpossible to remove any required properties from the configuration even if the\nuser's Presto version did not require those properties.\n\nIn the current behavior, when the user needs to override the defaults in any\nconfiguration file, they must write out all the properties for that\nconfiguration file, which will be deployed as-is.\n"
  },
  {
    "path": "docs/release/release-1.3.rst",
    "content": "===========\nRelease 1.3\n===========\n\nThe default values in this release are intended to work with Presto versions\n0.116 through x. However, the user can supply non-default\nconfigurations to use this release with other versions of Presto.\n\nGeneral Fixes\n-------------\n* Change ``make dist`` to build the online installer by default\n"
  },
  {
    "path": "docs/release/release-1.4.rst",
    "content": "===========\nRelease 1.4\n===========\n\nThis release works for Presto versions 0.116-0.148.\n\n* Add package uninstall support\n* Add --nodeps option to indicate if the server install/uninstall should ignore dependencies\n* Fix config files to be owned by the presto user and not accessible to other users\n* Update and add more Presto configuration defaults\n* Use proper Java version for server upgrade\n\n"
  },
  {
    "path": "docs/release/release-1.5.rst",
    "content": "===========\nRelease 1.5\n===========\n\nThis release works for Presto versions 0.116 through at least 0.152.1\n\nNew Features\n------------\n* Add the ability to download the rpm in ``server install`` by specifying ``latest`` or a version number\n* Add a ``file copy`` command to distribute files to all nodes on the cluster\n* Collect connector configurations from each node as part of ``collect system_info``\n\nBug Fixes\n---------\n* Fix a bug where a non-root user in ``config.json`` could not access files\n\nCompatiblity Notes\n------------------\n* The ``script run`` command was renamed to ``file run``\n"
  },
  {
    "path": "docs/release/release-2.0.rst",
    "content": "===========\nRelease 2.0\n===========\n\nNew Features\n------------\n* Make presto-admin log and configuration directories configurable. They can be\n  set using the environment variables ``PRESTO_ADMIN_LOG_DIR`` and\n  ``PRESTO_ADMIN_CONFIG_DIR``.\n* Change the default configuration directory to ``~/.prestoadmin`` and the\n  default log directory to ``~/.prestoadmin/log``.\n* Remove the requirement for running and installing presto-admin with sudo.\n  The user specified in ``config.json`` still needs sudo access on the Presto\n  nodes in order to execute commands like installing the RPM and setting\n  permissions on the configuration files.\n* Rename the ``connectors`` directory to ``catalog`` to match the Presto\n  nomenclature.\n* Rename the ``connector add`` and ``connector remove``. commands to\n  ``catalog add`` and ``catalog remove``.\n* Add experimental support for connecting to a Presto server with internal\n  communication via HTTPS and LDAP, where the HTTP connection is disabled.\n* Allow specifying which python interpreter to use as an argument to the\n  presto-admin installation script.\n* Add ``G1HeapRegionSize=32M`` to the jvm.config defaults as suggested by the\n  Presto documentation.\n\nBug Fixes\n---------\n* Keep the ``node.id`` in Presto's ``node.properties`` file consistent across\n  configuration updates.\n* Change the permissions on the Presto catalog directory to ``755`` and the\n  owner to``presto:presto``.\n* Use ``catalog.config-dir`` instead of ``plugin.config-dir`` in the\n  ``node.properties`` defaults. ``plugin.config-dir`` has been deprecated\n  in Presto since version 0.113.\n\nCompatibility Notes\n-------------------\n* The locations of config and log directories have been changed\n* The ``connectors`` directory has been renamed to ``catalog``.\n* The ``connector`` commands have been renamed to ``catalog``.\n"
  },
  {
    "path": "docs/release/release-2.1.rst",
    "content": "===========\nRelease 2.1\n===========\n\nBug Fixes\n---------\n* Fix bug with ``server start`` when only frontend LDAP in Presto is enabled.\n* Fix intermittent bug with ``server start`` printing out irrelevant error messages.\n"
  },
  {
    "path": "docs/release/release-2.2.rst",
    "content": "===========\nRelease 2.2\n===========\n\nNew Features\n------------\n* Support specifying a range of workers in ``config.json``\n\nBug Fixes and Enhancements\n--------------------------\n* Fix error with getting server status for complex Presto version names\n* Preserve all of ``/etc/presto`` during upgrade\n* Use ``rpm -U`` for ``package upgrade`` and ``server upgrade`` instead of uninstalling and reinstalling fresh\n* Use ``.gz`` instead of ``.bz2`` for the installation tarballs and for the files collected by ``collect logs``\n  and ``collect system_info``\n\n"
  },
  {
    "path": "docs/release/release-2.3.rst",
    "content": "===========\nRelease 2.3\n===========\n\nBug Fixes and Enhancements\n--------------------------\n* Update the default JVM settings to use the new -XX:+ExitOnOutOfMemoryError flag instead of the old -XX:OnOutOfMemoryError=kill -9 %p\n"
  },
  {
    "path": "docs/release.rst",
    "content": "=============\nRelease Notes\n=============\n.. toctree::\n    :maxdepth: 1\n\n    release/release-2.3\n    release/release-2.2\n    release/release-2.1\n    release/release-2.0\n    release/release-1.5\n    release/release-1.4\n    release/release-1.3\n    release/release-1.2\n    release/release-1.1\n    release/release-0.1.0\n"
  },
  {
    "path": "docs/software-requirements.rst",
    "content": "=====================\nSoftware Requirements\n=====================\n\n**Operating Systems**\n* RedHat Linux version 6.x\t\t\n* CentOS (equivalent to above)\n\n**Python**\n\n* Python 2.6.x OR\n* Python 2.7.x\n\n**SSH Configuration**\n\n* Passwordless SSH from the node running ``presto-admin`` to the nodes where Presto will be installed OR\n* Ability to SSH with a password from the node running ``presto-admin`` to the nodes where Presto will be installed\n\nFor more on SSH configuration, see :ref:`ssh-configuration-label`.\n\n**Other Configuration**\n\n* Sudo privileges on both the node running ``presto-admin`` and the nodes where Presto will be installed are required for a non-root presto-admin user.\n"
  },
  {
    "path": "docs/ssh-configuration.rst",
    "content": ".. _ssh-configuration-label:\n\n*****************\nSSH Configuration\n*****************\n\nIn order to run ``presto-admin``, the node that is running ``presto-admin`` must be able to connect to all of the nodes running Presto via SSH. ``presto-admin`` makes the SSH connection with the username and port specified in ``~/.prestoadmin/config.json``. Even if you have a single-node installation, ``ssh username@localhost`` needs to work properly.\n\nThere are two ways to configure SSH: with keys so that you can use passwordless SSH, or with passwords. If your cluster already has passwordless SSH configured for the username ``user``, you can skip this step if the username is root, otherwise the root public key (id_rsa.pub) needs to be appended to the non-root username’s authorized_keys file. If you are intending to use ``presto-admin`` with passwords, take a look at the documentation below, because there are several ways to specify the password.\n\nUsing ``presto-admin`` with passwordless SSH\n--------------------------------------------\nIn order to set up passwordless SSH, you must first login as username on the presto-admin node and generate keys with no passphrase on the node running ``presto-admin``:\n::\n\n ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa\n\nWhile logged in as username, copy the public key to all of the coordinator and worker nodes:\n::\n\n ssh <username>@<ip> \"mkdir -p ~/.ssh && chmod 700 ~/.ssh\"\n scp ~/.ssh/id_rsa.pub <username>@<ip>:~/.ssh/id_rsa.pub\n\nLog into all of those nodes and append the public key to the authorized key file:\n::\n\n ssh <username>@<ip> \"cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys\"\n\nFor non-root username, log into all of those nodes and append the root user public key to the username authorized key file, provided the passwordless ssh has been setup for root user.:\n::\n\n   ssh <username>@<ip> \"sudo cat /root/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys && chmod 600 
~/.ssh/authorized_keys\"\n\nOnce you have passwordless SSH set up, you can run ``presto-admin`` commands as they appear in the documentation. If your private key is not in ``~/.ssh``, it is possible to specify one or several private keys using the -i CLI option:\n\n::\n\n ./presto-admin <command> -i <path_to_private_key> -i <path_to_second_private_key>\n\n\nNote that servers do not commonly allow passwordless SSH for root because of security concerns, so it is preferable for the SSH user not to be root.\n\nUsing ``presto-admin`` with SSH passwords\n-----------------------------------------\nIf you do not want to set up passwordless SSH on your cluster, it is possible to use ``presto-admin`` with SSH passwords. However, you will need to add a password argument to the ``presto-admin`` commands as they appear in the documentation. There are several options. To specify a password on the CLI in plaintext:\n\n::\n\n ./presto-admin <command> -p <password>\n\nHowever, from a security perspective, it is preferable not to type your password in plaintext. Thus, it is also possible to add an interactive password prompt, which asks for the initial value of your password before running any commands:\n\n::\n\n ./presto-admin <command> -I\n Initial value for env.password: <type your password here>\n\nIf you do not specify a password, the command will fail with a parallel execution failure, since, by default, ``presto-admin`` runs in parallel and cannot prompt for a password in that mode. If you specify the ``--serial`` option, ``presto-admin`` will prompt you for a password if it cannot connect.\n\nPlease note that the SSH password for the user specified in ``~/.prestoadmin/config.json`` must match the sudo password for that user.\n\n"
  },
  {
    "path": "docs/user-guide.rst",
    "content": ".. _comprehensive-guide-label:\n\n**********\nUser Guide\n**********\n\nA full explanation of the commands and features of ``presto-admin``.\n\n.. toctree::\n    :maxdepth: 2\n\n    quick-start-guide\n    emr\n    installation/advanced-installation-options\n    installation/presto-port-configuration\n    ssh-configuration\n    presto-admin-commands\n    presto-admin-cli-options\n\n"
  },
  {
    "path": "packaging/__init__.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport os\n\npackage_dir = os.path.abspath(os.path.dirname(__file__))\n"
  },
  {
    "path": "packaging/bdist_prestoadmin.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport os\nimport re\nfrom distutils import log as logger\nfrom distutils.dir_util import remove_tree\n\nimport pip\n\ntry:\n    from setuptools import Command\nexcept ImportError:\n    from distutils.core import Command\n\nfrom packaging import package_dir\n\n\nclass bdist_prestoadmin(Command):\n\n    description = 'create a distribution for prestoadmin'\n\n    user_options = [('bdist-dir=', 'b',\n                     'temporary directory for creating the distribution'),\n                    ('dist-dir=', 'd',\n                     'directory to put final built distributions in'),\n                    ('virtualenv-version=', None,\n                     'version of virtualenv to download'),\n                    ('keep-temp', 'k',\n                     'keep the pseudo-installation tree around after ' +\n                     'creating the distribution archive'),\n                    ('online-install', None, 'boolean flag indicating if ' +\n                     'the installation should pull dependencies from the ' +\n                     'Internet or use the ones supplied in the third party ' +\n                     'directory')\n                    ]\n\n    default_virtualenv_version = '12.0.7'\n\n    NATIVE_WHEELS = ['pycrypto-2.6.1-{0}-none-linux_x86_64.whl', 'twofish-0.3.0-{0}-none-linux_x86_64.whl']\n\n    def build_wheel(self, build_dir):\n     
   cmd = self.reinitialize_command('bdist_wheel')\n        cmd.dist_dir = build_dir\n        self.run_command('bdist_wheel')\n\n        # Ensure that you get the finalized archive name\n        cmd.finalize_options()\n        wheel_name = cmd.get_archive_basename()\n        logger.info('creating %s in %s', wheel_name + '.whl', build_dir)\n\n        return wheel_name\n\n    def generate_install_script(self, wheel_name, build_dir):\n        with open(os.path.join(package_dir, 'install-prestoadmin.template'), 'r') as template:\n            with open(os.path.join(build_dir, 'install-prestoadmin.sh'), 'w') as install_script_file:\n                install_script = self._fill_in_template(template.readlines(), wheel_name)\n                install_script_file.write(install_script)\n                os.chmod(os.path.join(build_dir, 'install-prestoadmin.sh'), 0755)\n\n    def _fill_in_template(self, template_lines, wheel_name):\n        if self.online_install:\n            extra_install_args = ''\n        else:\n            extra_install_args = '--no-index --find-links third-party'\n\n        filled_in = [self._replace_template_values(line, wheel_name, extra_install_args) for line in template_lines]\n        return ''.join(filled_in)\n\n    def _replace_template_values(self, line, wheel_name, extra_install_args):\n        line = re.sub(r'%ONLINE_OR_OFFLINE_INSTALL%', extra_install_args, line)\n        line = re.sub(r'%WHEEL_NAME%', wheel_name, line)\n        line = re.sub(r'%VIRTUALENV_VERSION%', self.virtualenv_version, line)\n        return line\n\n    def package_dependencies(self, build_dir):\n        thirdparty_dir = os.path.join(build_dir, 'third-party')\n\n        requirements = self.distribution.install_requires\n        for requirement in requirements:\n            pip.main(['wheel',\n                      '--wheel-dir={0}'.format(thirdparty_dir),\n                      '--no-cache',\n                      requirement])\n\n        pip.main(['install',\n                
  '-d',\n                  thirdparty_dir,\n                  '--no-cache',\n                  '--no-use-wheel',\n                  'virtualenv=={0}'.format(self.virtualenv_version)])\n\n    def archive_dist(self, build_dir, dist_dir):\n        archive_basename = self.distribution.get_fullname()\n        if self.online_install:\n            archive_basename += '-online'\n        else:\n            archive_basename += '-offline'\n        archive_file = os.path.join(dist_dir, archive_basename)\n        self.mkpath(os.path.dirname(archive_file))\n        self.make_archive(archive_file, 'gztar',\n                          root_dir=os.path.dirname(build_dir),\n                          base_dir=os.path.basename(build_dir))\n        logger.info('created %s.tar.gz', archive_file)\n\n    def run(self):\n        build_dir = self.bdist_dir\n        self.mkpath(build_dir)\n\n        wheel_name = self.build_wheel(build_dir)\n        self.generate_install_script(wheel_name, build_dir)\n        if not self.online_install:\n            self.package_dependencies(build_dir)\n\n        self.archive_dist(build_dir, self.dist_dir)\n\n        if not self.keep_temp:\n            remove_tree(build_dir)\n\n    def initialize_options(self):\n        self.bdist_dir = None\n        self.dist_dir = None\n        self.virtualenv_url_base = None\n        self.virtualenv_version = None\n        self.keep_temp = False\n        self.online_install = False\n\n    def finalize_options(self):\n        if self.bdist_dir is None:\n            bdist_base = self.get_finalized_command('bdist').bdist_base\n            self.bdist_dir = os.path.join(bdist_base,\n                                          self.distribution.get_name())\n\n        if self.dist_dir is None:\n            self.dist_dir = 'dist'\n\n        if self.virtualenv_version is None:\n            self.virtualenv_version = self.default_virtualenv_version\n"
  },
  {
    "path": "packaging/install-prestoadmin.template",
    "content": "#!/bin/bash\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nset -e\n\nPYTHON_BIN=python\n\nwhile getopts \":p:\" c; do\n    case $c in\n        p)\n            PYTHON_BIN=\"$OPTARG\"\n            ;;\n        \\?)\n            echo \"Unrecognized option -$OPTARG\" >&2\n            exit 1\n            ;;\n        :)\n            echo \"Option -$OPTARG requires an argument\" >&2\n            exit 1\n            ;;\n    esac\ndone\n\nif [ -d \"third-party\" ]; then\n    tar xvzf third-party/virtualenv-%VIRTUALENV_VERSION%.tar.gz -C third-party || true\n    \"$PYTHON_BIN\" third-party/virtualenv-%VIRTUALENV_VERSION%/virtualenv.py presto-admin-install\nelse\n    wget --no-check-certificate https://pypi.python.org/packages/source/v/virtualenv/virtualenv-%VIRTUALENV_VERSION%.tar.gz\n    tar xvzf virtualenv-%VIRTUALENV_VERSION%.tar.gz || true\n    \"$PYTHON_BIN\" virtualenv-%VIRTUALENV_VERSION%/virtualenv.py presto-admin-install\nfi\n\nsource presto-admin-install/bin/activate\ncert_file=$1\n# trust pypi.python.org by default, otherwise use cert_file provided\ncert_options='--trusted-host pypi.python.org'\nif [ -n \"$1\" ]; then\n    if [ ! -f  $cert_file ]; then\n        echo \"Adding pypi.python.org as trusted-host. Cannot find certificate file: \"$cert_file\n    else\n        cert_options='--cert '$cert_file\n    fi\nfi\n\npip install $cert_options %WHEEL_NAME%.whl %ONLINE_OR_OFFLINE_INSTALL%\nif ! 
`\"$PYTHON_BIN\" -c \"import paramiko\" > /dev/null 2>&1` ; then\n    printf \"\\nERROR\\n\"\n    echo \"Paramiko could not be imported. This usually means that pycrypto (a dependency of paramiko)\"\n    echo \"has been compiled against a different libc version. Ensure the presto-admin installer is \"\n    echo \"built on the same OS as the target installation OS.\"\n    exit 1\nfi\ndeactivate\n\ncat > `pwd`/presto-admin << EOT\n#!/bin/bash\nexport VIRTUAL_ENV=\"`pwd`/presto-admin-install\"\nexport PATH=\"\\$VIRTUAL_ENV/bin:\\$PATH\"\nunset PYTHON_HOME\n\nexec presto-admin \"\\$@\"\nEOT\nchmod 755 `pwd`/presto-admin\n\nCONF_DIR=${PRESTO_ADMIN_CONFIG_DIR:-~/.prestoadmin}\nmkdir -p \"$CONF_DIR\"\n\nLOG_DIR=${PRESTO_ADMIN_LOG_DIR:-$CONF_DIR/log}\nmkdir -p \"$LOG_DIR\"\n\nCATALOG_DIR=$CONF_DIR/catalog\nmkdir -p \"$CATALOG_DIR\"\n\nCOORDINATOR_DIR=$CONF_DIR/coordinator\nmkdir -p \"$COORDINATOR_DIR\"\n\nWORKERS_DIR=$CONF_DIR/workers\nmkdir -p \"$WORKERS_DIR\"\n"
  },
  {
    "path": "prestoadmin/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"Presto-Admin tool for deploying and managing Presto clusters\"\"\"\n\nimport os\nimport sys\nimport prestoadmin._version\n\nfrom fabric.api import env\n\nmain_dir = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))\n\nimport fabric_patches  # noqa\n\nfrom prestoadmin.mode import get_mode, for_mode, MODE_STANDALONE, \\\n        MODE_SLIDER  # noqa\nfrom prestoadmin.util.exception import ConfigFileNotFoundError, \\\n    ConfigurationError  # noqa\n\n__version__ = prestoadmin._version.__version__\n\n#\n# Subcommands common to all modes. 
If anybody knows why fabric_patches is in\n# the list, I'll make a note for the next person.\n#\n__all__ = ['fabric_patches']\n\ncfg_mode = MODE_STANDALONE\ntry:\n    cfg_mode = get_mode()\nexcept ConfigFileNotFoundError as e:\n    pass\nexcept ConfigurationError as e:\n    print >>sys.stderr, e.message\n\n\nADDITIONAL_TASK_MODULES = {\n    MODE_SLIDER: [('yarn_slider.server', 'server'),\n                  ('yarn_slider.slider', 'slider')],\n    MODE_STANDALONE: ['topology',\n                      ('configure_cmds', 'configuration'),\n                      'server',\n                      'catalog',\n                      'package',\n                      'collect',\n                      'file',\n                      'plugin']}\n\n\nif cfg_mode is not None:\n    atms = for_mode(cfg_mode, ADDITIONAL_TASK_MODULES)\n    for atm in atms:\n        try:\n            module, subcommand_name = atm\n        except ValueError:\n            module = atm\n            subcommand_name = atm\n\n        __all__.append(subcommand_name)\n\n        components = module.split('.')\n\n        if len(components) == 1:\n            # The simple case...\n            # import <module> as <subcommand_name>\n            globals()[subcommand_name] = __import__(module, globals())\n        else:\n            # The complicated case:\n            # import foo.bar doesn't actually import foo.bar; it imports foo.\n            # This is why, for example, you can't to the following:\n\n            # >>> import os.path\n            # >>> path.join('foo', 'bar', 'baz', 'zot')\n            #\n            # Doing the equivalent of import yarn_slider.slider as slider\n            # results in the global slider variable being assigned to the\n            # yarn_slider module, which is NOT what we want.\n            # Instead, we need to recursively traverse the submodules until we\n            # get to the one we're interested in.\n            submodule = __import__(module, globals())\n            for c 
in components[1:]:\n                submodule = submodule.__dict__[c]\n            globals()[subcommand_name] = submodule\n\n\nenv.roledefs = {\n    'coordinator': [],\n    'worker': [],\n    'all': []\n}\n"
  },
  {
    "path": "prestoadmin/_version.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Version information\"\"\"\n\n# This must be the last line in the file and the format must be maintained\n# even when the version is changed\n__version__ = '2.6-SNAPSHOT'\n"
  },
  {
    "path": "prestoadmin/catalog.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for presto catalog configurations\n\"\"\"\nimport errno\nimport logging\n\nimport fabric.utils\nfrom fabric.api import task, env\nfrom fabric.context_managers import hide\nfrom fabric.contrib import files\nfrom fabric.operations import sudo, os, get\n\nfrom prestoadmin.deploy import secure_create_directory\nfrom prestoadmin.standalone.config import StandaloneConfig, \\\n    PRESTO_STANDALONE_USER_GROUP\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.exception import ConfigFileNotFoundError, \\\n    ConfigurationError\nfrom prestoadmin.util.fabricapi import put_secure\nfrom prestoadmin.util.filesystem import ensure_directory_exists\nfrom prestoadmin.util.local_config_util import get_catalog_directory\n\n_LOGGER = logging.getLogger(__name__)\n\n__all__ = ['add', 'remove']\nCOULD_NOT_REMOVE = 'Could not remove catalog'\n\n\n# we deploy catalog files with 0600 permissions because they can contain passwords\n# that should not be world readable\ndef deploy_files(filenames, local_dir, remote_dir, user_group, mode=0600):\n    _LOGGER.info('Deploying configurations for ' + str(filenames))\n    secure_create_directory(remote_dir, PRESTO_STANDALONE_USER_GROUP)\n    for name in filenames:\n        put_secure(user_group, mode, os.path.join(local_dir, name), remote_dir,\n                   
use_sudo=True)\n\n\ndef gather_catalogs(local_config_dir, allow_overwrite=False):\n    local_catalog_dir = os.path.join(local_config_dir, env.host, 'catalog')\n    if not allow_overwrite and os.path.exists(local_catalog_dir):\n        fabric.utils.error(\"Refusing to overwrite %s. Use 'overwrite' \"\n                           \"option to overwrite.\" % local_catalog_dir)\n    ensure_directory_exists(local_catalog_dir)\n    if files.exists(constants.REMOTE_CATALOG_DIR):\n        return get(constants.REMOTE_CATALOG_DIR, local_catalog_dir, use_sudo=True)\n    else:\n        return []\n\n\ndef validate(filenames):\n    for name in filenames:\n        file_path = os.path.join(get_catalog_directory(), name)\n        _LOGGER.info('Validating catalog configuration: ' + str(name))\n        try:\n            with open(file_path) as f:\n                file_content = f.read()\n            if 'connector.name' not in file_content:\n                message = ('Catalog configuration %s does not contain '\n                           'connector.name' % name)\n                raise ConfigurationError(message)\n\n        except IOError as e:\n            fabric.utils.error(message='Error validating ' + file_path,\n                               exception=e)\n            return False\n\n    return True\n\n\n@task\n@requires_config(StandaloneConfig)\ndef add(name=None):\n    \"\"\"\n    Deploy configuration for a catalog onto a cluster.\n\n    E.g.: 'presto-admin catalog add tpch'\n    deploys a configuration file for the tpch connector.  
The configuration is\n    defined by tpch.properties in the local catalog directory, which defaults to\n    ~/.prestoadmin/catalog.\n\n    If no catalog name is specified, then configurations for all catalogs\n    in the catalog directory will be deployed.\n\n    Parameters:\n        name - Name of the catalog to be added\n    \"\"\"\n    catalog_dir = get_catalog_directory()\n    if name:\n        filename = name + '.properties'\n        config_path = os.path.join(catalog_dir, filename)\n        if not os.path.isfile(config_path):\n            raise ConfigFileNotFoundError(\n                config_path=config_path,\n                message='Configuration for catalog ' + name + ' not found')\n        filenames = [filename]\n    elif not os.path.isdir(catalog_dir):\n        message = ('Cannot add catalogs because directory %s does not exist'\n                   % catalog_dir)\n        raise ConfigFileNotFoundError(config_path=catalog_dir,\n                                      message=message)\n    else:\n        try:\n            filenames = os.listdir(catalog_dir)\n        except OSError as e:\n            fabric.utils.error(e.strerror)\n            return\n        if not filenames:\n            fabric.utils.warn(\n                'Directory %s is empty. 
No catalogs will be deployed' %\n                catalog_dir)\n            return\n\n    if not validate(filenames):\n        return\n    filenames.sort()\n    _LOGGER.info('Adding catalog configurations: ' + str(filenames))\n    print('Deploying %s catalog configurations on: %s ' %\n          (', '.join(filenames), env.host))\n\n    deploy_files(filenames, catalog_dir,\n                 constants.REMOTE_CATALOG_DIR, PRESTO_STANDALONE_USER_GROUP)\n\n\n@task\n@requires_config(StandaloneConfig)\ndef remove(name):\n    \"\"\"\n    Remove a catalog from the cluster.\n\n    Parameters:\n        name - Name of the catalog to be removed\n    \"\"\"\n    _LOGGER.info('[' + env.host + '] Removing catalog: ' + name)\n    ret = remove_file(os.path.join(constants.REMOTE_CATALOG_DIR,\n                                   name + '.properties'))\n    if ret.succeeded:\n        if COULD_NOT_REMOVE in ret:\n            fabric.utils.error(ret)\n        else:\n            print('[%s] Catalog removed. Restart the server for the change '\n                  'to take effect' % env.host)\n    else:\n        fabric.utils.error('Failed to remove catalog ' + name + '.\\n\\t' +\n                           ret)\n\n    local_path = os.path.join(get_catalog_directory(), name + '.properties')\n    try:\n        os.remove(local_path)\n    except OSError as e:\n        if e.errno == errno.ENOENT:\n            pass\n        else:\n            raise\n\n\ndef remove_file(path):\n    script = ('if [ -f %(path)s ] ; '\n              'then rm %(path)s ; '\n              'else echo \"%(could_not_remove)s \\'%(name)s\\'. '\n              'No such file \\'%(path)s\\'\"; fi')\n\n    with hide('stderr', 'stdout'):\n        return sudo(script %\n                    {'path': path,\n                     'name': os.path.splitext(os.path.basename(path))[0],\n                     'could_not_remove': COULD_NOT_REMOVE})\n"
  },
  {
    "path": "prestoadmin/collect.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nModule for gathering various debug information for incident reporting\nusing presto-admin\n\"\"\"\n\nimport logging\nimport json\nimport shutil\nimport tarfile\n\nimport requests\nfrom fabric.contrib.files import append\nfrom fabric.context_managers import settings, hide\nfrom fabric.operations import os, get, run\nfrom fabric.tasks import execute\nfrom fabric.api import env, runs_once, task\nfrom fabric.utils import abort, warn\n\nfrom prestoadmin.prestoclient import PrestoClient\nfrom prestoadmin.server import get_presto_version, get_catalog_info_from\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.filesystem import ensure_directory_exists\nfrom prestoadmin.util.local_config_util import get_log_directory\nfrom prestoadmin.util.remote_config_util import lookup_server_log_file,\\\n    lookup_launcher_log_file,  lookup_port, lookup_catalog_directory\nfrom prestoadmin.standalone.config import StandaloneConfig\nimport prestoadmin.util.fabricapi as fabricapi\nimport prestoadmin\n\n\nTMP_PRESTO_DEBUG = '/tmp/presto-debug/'\nTMP_PRESTO_DEBUG_REMOTE = '/tmp/presto-debug-remote'\nOUTPUT_FILENAME_FOR_LOGS = '/tmp/presto-debug-logs.tar.gz'\nOUTPUT_FILENAME_FOR_SYS_INFO = '/tmp/presto-debug-sysinfo.tar.gz'\nPRESTOADMIN_LOG_NAME = 'presto-admin.log'\n_LOGGER = logging.getLogger(__name__)\nQUERY_REQUEST_EXT = 'v1/query/'\nNODES_REQUEST_EXT = 
'v1/node'\n\n__all__ = ['logs', 'query_info', 'system_info']\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\ndef logs():\n    \"\"\"\n    Gather all the server logs and presto-admin log and create a tar file.\n    \"\"\"\n    downloaded_logs_location = os.path.join(TMP_PRESTO_DEBUG, \"logs\")\n    ensure_directory_exists(downloaded_logs_location)\n\n    print 'Downloading logs from all the nodes...'\n    execute(get_remote_log_files, downloaded_logs_location, roles=env.roles)\n\n    copy_admin_log(downloaded_logs_location)\n\n    make_tarfile(OUTPUT_FILENAME_FOR_LOGS, downloaded_logs_location)\n    print 'logs archive created: ' + OUTPUT_FILENAME_FOR_LOGS\n\n\ndef copy_admin_log(log_folder):\n    shutil.copy(os.path.join(get_log_directory(), PRESTOADMIN_LOG_NAME), log_folder)\n\n\ndef make_tarfile(output_filename, source_dir):\n    tar = tarfile.open(output_filename, 'w:gz')\n\n    try:\n        tar.add(source_dir, arcname=os.path.basename(source_dir))\n    finally:\n        tar.close()\n\n\ndef get_remote_log_files(dest_path):\n    remote_server_log = lookup_server_log_file(env.host)\n    _LOGGER.debug('Logs to be archived on host ' + env.host + ': ' + remote_server_log)\n    get_files(remote_server_log + '*', dest_path)\n\n    remote_launcher_log = lookup_launcher_log_file(env.host)\n    _LOGGER.debug('LOG directory to be archived on host ' + env.host + ': ' + remote_launcher_log)\n    get_files(remote_launcher_log + '*', dest_path)\n\n\ndef get_files(remote_path, local_path):\n    path_with_host_name = os.path.join(local_path, env.host)\n\n    try:\n        os.makedirs(path_with_host_name)\n    except OSError:\n        if not os.path.isdir(path_with_host_name):\n            raise\n\n    _LOGGER.debug('local path used ' + path_with_host_name)\n\n    try:\n        get(remote_path, path_with_host_name, use_sudo=True)\n    except SystemExit:\n        warn('remote path ' + remote_path + ' not found on ' + env.host)\n\n\ndef request_url(url_extension):\n   
 host = env.host\n    port = lookup_port(host)\n    return 'http://%(host)s:%(port)i/%(url_ext)s' % {'host': host,\n                                                     'port': port,\n                                                     'url_ext': url_extension}\n\n\n@task\n@requires_config(StandaloneConfig)\ndef query_info(query_id):\n    \"\"\"\n    Gather information about the query identified by the given\n    query_id and store that in a JSON file.\n\n    Parameters:\n        query_id - id of the query for which info has to be gathered\n    \"\"\"\n\n    if env.host not in fabricapi.get_coordinator_role():\n        return\n\n    err_msg = 'Unable to retrieve information. Please check that the ' \\\n              'query_id is correct, or check that server is up with ' \\\n              'command: server status'\n    req = get_request(request_url(QUERY_REQUEST_EXT + query_id), err_msg)\n    query_info_file_name = os.path.join(TMP_PRESTO_DEBUG, 'query_info_' + query_id + '.json')\n\n    try:\n        os.makedirs(TMP_PRESTO_DEBUG)\n    except OSError:\n        if not os.path.isdir(TMP_PRESTO_DEBUG):\n            raise\n\n    with open(query_info_file_name, 'w') as out_file:\n        out_file.write(json.dumps(req.json(), indent=4))\n\n    print('Gathered query information in file: ' + query_info_file_name)\n\n\ndef get_request(url, err_msg):\n    try:\n        req = requests.get(url)\n    except requests.ConnectionError:\n        abort(err_msg)\n\n    if req.status_code != requests.codes.ok:\n        abort(err_msg)\n\n    return req\n\n\n@task\n@requires_config(StandaloneConfig)\ndef system_info():\n    \"\"\"\n    Gather system information like nodes in the system, presto\n    version, presto-admin version, os version etc.\n    \"\"\"\n    if env.host not in fabricapi.get_coordinator_role():\n        return\n    err_msg = 'Unable to access node information. 
' \\\n              'Please check that server is up with command: server status'\n    req = get_request(request_url(NODES_REQUEST_EXT), err_msg)\n\n    downloaded_sys_info_loc = os.path.join(TMP_PRESTO_DEBUG, \"sysinfo\")\n    try:\n        os.makedirs(downloaded_sys_info_loc)\n    except OSError:\n        if not os.path.isdir(downloaded_sys_info_loc):\n            raise\n\n    node_info_file_name = os.path.join(downloaded_sys_info_loc, 'node_info.json')\n    with open(node_info_file_name, 'w') as out_file:\n        out_file.write(json.dumps(req.json(), indent=4))\n\n    _LOGGER.debug('Gathered node information in file: ' + node_info_file_name)\n\n    catalog_file_name = os.path.join(downloaded_sys_info_loc, 'catalog_info.txt')\n    client = PrestoClient(env.host, env.user)\n    catalog_info = get_catalog_info_from(client)\n\n    with open(catalog_file_name, 'w') as out_file:\n        out_file.write(catalog_info + '\\n')\n\n    _LOGGER.debug('Gathered catalog information in file: ' + catalog_file_name)\n\n    execute(get_catalog_configs, downloaded_sys_info_loc, roles=env.roles)\n    execute(get_system_info, downloaded_sys_info_loc, roles=env.roles)\n\n    make_tarfile(OUTPUT_FILENAME_FOR_SYS_INFO, downloaded_sys_info_loc)\n    print 'System info archive created: ' + OUTPUT_FILENAME_FOR_SYS_INFO\n\n\ndef get_system_info(download_location):\n\n    run(\"mkdir -p \" + TMP_PRESTO_DEBUG_REMOTE)\n\n    version_file_name = os.path.join(TMP_PRESTO_DEBUG_REMOTE, 'version_info.txt')\n    run('rm -f ' + version_file_name)\n\n    append(version_file_name, \"platform information : \" +\n           get_platform_information() + '\\n')\n    append(version_file_name, 'Java version: ' + get_java_version() + '\\n')\n    append(version_file_name, 'Presto-admin version: ' +\n           prestoadmin.__version__ + '\\n')\n    append(version_file_name, 'Presto server version: ' +\n           get_presto_version() + '\\n')\n\n    _LOGGER.debug('Gathered version information in file: ' + 
version_file_name)\n\n    get_files(version_file_name, download_location)\n\n\ndef get_catalog_configs(dest_path):\n    remote_catalog_dir = lookup_catalog_directory(env.host)\n    _LOGGER.debug('catalogs to be archived on host ' + env.host + ': ' + remote_catalog_dir)\n    get_files(remote_catalog_dir, dest_path)\n\n\ndef get_platform_information():\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        platform_info = run('uname -a')\n        _LOGGER.debug('platform info: ' + platform_info)\n        return platform_info\n\n\ndef get_java_version():\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        version = run('java -version')\n        _LOGGER.debug('java version: ' + version)\n        return version\n"
  },
  {
    "path": "prestoadmin/config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nModule for reading, writing, and processing configuration files\n\"\"\"\nimport json\nimport os\nimport logging\nimport errno\nimport re\n\nfrom prestoadmin.util.exception import ConfigurationError,\\\n    ConfigFileNotFoundError\n\nCOMMENT_CHARS = ['!', '#']\n_LOGGER = logging.getLogger(__name__)\n\n\ndef get_conf_from_json_file(path):\n    try:\n        with open(path, 'r') as conf_file:\n            if os.path.getsize(conf_file.name) == 0:\n                return {}\n            return json.load(conf_file)\n    except IOError:\n        raise ConfigFileNotFoundError(\n            config_path=path, message=\"Missing configuration file %s.\" %\n            (repr(path)))\n    except ValueError as e:\n        raise ConfigurationError(e)\n\n\ndef get_conf_from_properties_file(path):\n    with open(path, 'r') as conf_file:\n        return get_conf_from_properties_data(conf_file)\n\n\ndef get_conf_from_properties_data(data):\n    props = {}\n    for line in data.read().splitlines():\n        line = line.strip()\n        if len(line) > 0 and line[0] not in COMMENT_CHARS:\n            pair = split_to_pair(line)\n            props[pair[0]] = pair[1]\n    return props\n\n\ndef split_to_pair(line):\n    split_line = re.split(r'\\s*(?<!\\\\):\\s*|\\s*(?<!\\\\)=\\s*|(?<!\\\\)\\s+', line,\n                          maxsplit=1)\n    if len(split_line) != 2:\n        raise 
ConfigurationError(\n            line + \" is not in the expected format: <property>=<value>, \"\n                   \"<property>:<value> or <property> <value>\")\n    return tuple(split_line)\n\n\ndef get_conf_from_config_file(path):\n    with open(path, 'r') as conf_file:\n        settings = conf_file.read().splitlines()\n        return settings\n\n\ndef json_to_string(conf):\n    return json.dumps(conf, indent=4, separators=(',', ':'))\n\n\ndef write_conf_to_file(conf, path):\n    # Note: this function expects conf to be flat\n    # either a dict for .properties file or a list for .config\n    ext = os.path.splitext(path)[1]\n    if ext == \".properties\":\n        write_properties_file(conf, path)\n    elif ext == \".config\":\n        write_config_file(conf, path)\n\n\ndef write_properties_file(conf, path):\n    output = ''\n    for key, value in conf.iteritems():\n        output += '%s=%s\\n' % (key, value)\n    write(output, path)\n\n\ndef write_config_file(conf, path):\n    output = '\\n'.join(conf)\n    write(output, path)\n\n\ndef write(output, path):\n    conf_directory = os.path.dirname(path)\n    try:\n        os.makedirs(conf_directory)\n    except OSError as e:\n        if e.errno == errno.EEXIST:\n            pass\n        else:\n            raise\n\n    with open(path, 'w') as f:\n        f.write(output)\n\n\ndef fill_defaults(conf, defaults):\n    try:\n        default_items = defaults.iteritems()\n    except AttributeError:\n        return\n\n    for k, v in default_items:\n        conf.setdefault(k, v)\n        fill_defaults(conf[k], v)\n"
  },
  {
    "path": "prestoadmin/configure_cmds.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for various configuration management tasks using presto-admin\n\"\"\"\nimport logging\nimport os\nfrom StringIO import StringIO\nfrom contextlib import closing\n\nfrom fabric.contrib import files\nfrom fabric.decorators import task, serial\nfrom fabric.operations import get, sudo\nfrom fabric.state import env\nfrom fabric.utils import abort, warn\n\nimport prestoadmin.deploy\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.constants import CONFIG_PROPERTIES, LOG_PROPERTIES, \\\n    JVM_CONFIG, NODE_PROPERTIES\n\nALL_CONFIG = [CONFIG_PROPERTIES, LOG_PROPERTIES, JVM_CONFIG, NODE_PROPERTIES]\n\n_LOGGER = logging.getLogger(__name__)\n\n__all__ = ['deploy', 'show']\n\n\n@task\n@requires_config(StandaloneConfig)\ndef deploy(rolename=None):\n    \"\"\"\n    Deploy configuration on the remote hosts.\n\n    Possible arguments are -\n        coordinator - Deploy the coordinator configuration to the coordinator\n        node\n        workers - Deploy workers configuration to the worker nodes. This will\n        not deploy configuration for a coordinator that is also a worker\n\n    If no rolename is specified, then configuration for all roles will be\n    deployed.  
If there is no presto configuration file found in the\n    configuration directory, default files will be deployed\n\n    Parameters:\n        rolename - [coordinator|workers]\n    \"\"\"\n    if rolename is None:\n        _LOGGER.info(\"Running configuration deploy\")\n        prestoadmin.deploy.coordinator()\n        prestoadmin.deploy.workers()\n    else:\n        if rolename.lower() == 'coordinator':\n            prestoadmin.deploy.coordinator()\n        elif rolename.lower() == 'workers':\n            prestoadmin.deploy.workers()\n        else:\n            abort(\"Invalid Argument. Possible values: coordinator, workers\")\n\n\n\"\"\"\ngather/deploy_config_directory are used for server upgrade when we want to\npreserve any existing configuration files across the upgrade exactly as they\nwere before the upgrade.\n\nIn order to preserve not just the data, but also the metadata, we tar up the\ncontents of /etc/presto to a temporary tar archive under /tmp. After the\nupgrade, we untar it into /etc/presto and delete the archive.\n\"\"\"\n\n\ndef gather_config_directory():\n    \"\"\"\n    For the benefit of the next person to hack this, a list of some things\n    that didn't work:\n    - passing combine_stderr=False to sudo. Dunno why, still got them\n    combined in the output.\n    - using a StringIO object cfg = StringIO() and passing stdout=cfg. Got\n    the host information at the start of the line.\n    - sucking the tar archive over the network into memory instead of\n    writing it out to a temporary file on the remote host. Since fabric\n    doesn't provide a stdin= kwarg, there's no way to send back a tar\n    archive larger than we can fit in a single bash command (~2MB on a\n    good day), meaning if /etc/presto contains any large files, we'd end\n    up having to send the archive to a temp file anyway.\n    \"\"\"\n    result = sudo(\n        'tarfile=`mktemp /tmp/presto_config-XXXXXXX.tar`; '\n        'tar -c -z -C %s -f \"${tarfile}\" . 
&& echo \"${tarfile}\"' % (\n            constants.REMOTE_CONF_DIR,))\n    return result\n\n\ndef deploy_config_directory(tarfile):\n    sudo('tar -C \"%s\" -x -v -f \"%s\" ; rm \"%s\"' %\n         (constants.REMOTE_CONF_DIR, tarfile, tarfile))\n\n\ndef configuration_fetch(file_name, config_destination, should_warn=True):\n    remote_file_path = os.path.join(constants.REMOTE_CONF_DIR, file_name)\n    if not files.exists(remote_file_path):\n        if should_warn:\n            warn(\"No configuration file found for %s at %s\"\n                 % (env.host, remote_file_path))\n        return None\n    else:\n        get(remote_file_path, config_destination, use_sudo=True)\n        return remote_file_path\n\n\ndef configuration_show(file_name, should_warn=True):\n    with closing(StringIO()) as file_content_buffer:\n        file_path = configuration_fetch(file_name, file_content_buffer,\n                                        should_warn)\n        if file_path is None:\n            return\n        config_values = file_content_buffer.getvalue()\n        file_content_buffer.close()\n        print (\"\\n%s: Configuration file at %s:\" % (env.host, file_path))\n        print config_values\n\n\n@task\n@requires_config(StandaloneConfig)\n@serial\ndef show(config_type=None):\n    \"\"\"\n    Print to the user the contents of the configuration files deployed\n\n    If no config_type is specified, then all four configurations will be\n    printed.  
No warning will be printed for a missing log.properties since\n    it is not a required configuration file.\n\n    Parameters:\n        config_type: [node|jvm|config|log]\n    \"\"\"\n    file_name = ''\n    if config_type is None:\n        configuration_show(NODE_PROPERTIES)\n        configuration_show(JVM_CONFIG)\n        configuration_show(CONFIG_PROPERTIES)\n        configuration_show(LOG_PROPERTIES, should_warn=False)\n    else:\n        if config_type.lower() == 'node':\n            file_name = NODE_PROPERTIES\n        elif config_type.lower() == 'jvm':\n            file_name = JVM_CONFIG\n        elif config_type.lower() == 'config':\n            file_name = CONFIG_PROPERTIES\n        elif config_type.lower() == 'log':\n            file_name = LOG_PROPERTIES\n        else:\n            abort(\"Invalid Argument. Possible values: node, jvm, config, log\")\n\n        configuration_show(file_name)\n"
  },
  {
    "path": "prestoadmin/coordinator.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for the presto coordinator's configuration.\nLoads and validates the coordinator.json file and creates the files needed\nto deploy on the presto cluster\n\"\"\"\nimport copy\nimport logging\n\nfrom fabric.api import env\n\nfrom prestoadmin.node import Node\nfrom prestoadmin.presto_conf import validate_presto_conf\nfrom prestoadmin.util.exception import ConfigurationError\nfrom prestoadmin.util.local_config_util import get_coordinator_directory\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Coordinator(Node):\n    DEFAULT_PROPERTIES = {'node.properties':\n                          {'node.environment': 'presto',\n                           'node.data-dir': '/var/lib/presto/data',\n                           'node.launcher-log-file':\n                               '/var/log/presto/launcher.log',\n                           'node.server-log-file':\n                               '/var/log/presto/server.log',\n                           'catalog.config-dir': '/etc/presto/catalog',\n                           'plugin.dir': '/usr/lib/presto/lib/plugin'},\n                          'jvm.config': ['-server',\n                                         '-Xmx16G',\n                                         '-XX:-UseBiasedLocking',\n                                         '-XX:+UseG1GC',\n                                         '-XX:G1HeapRegionSize=32M',\n            
                             '-XX:+ExplicitGCInvokesConcurrent',\n                                         '-XX:+HeapDumpOnOutOfMemoryError',\n                                         '-XX:+UseGCOverheadLimit',\n                                         '-XX:+ExitOnOutOfMemoryError',\n                                         '-XX:ReservedCodeCacheSize=512M',\n                                         '-DHADOOP_USER_NAME=hive'],\n                          'config.properties': {\n                              'coordinator': 'true',\n                              'discovery-server.enabled': 'true',\n                              'http-server.http.port': '8080',\n                              'node-scheduler.include-coordinator': 'false',\n                              'query.max-memory': '50GB',\n                              'query.max-memory-per-node': '8GB'}\n                          }\n\n    def _get_conf_dir(self):\n        return get_coordinator_directory()\n\n    def default_config(self, filename):\n        try:\n            conf = copy.deepcopy(self.DEFAULT_PROPERTIES[filename])\n        except KeyError:\n            raise ConfigurationError('Invalid configuration file name: %s' %\n                                     filename)\n        if filename == 'config.properties':\n            coordinator = env.roledefs['coordinator'][0]\n            workers = env.roledefs['worker']\n            if coordinator in workers:\n                conf['node-scheduler.include-coordinator'] = 'true'\n            conf['discovery.uri'] = 'http://%s:8080' % coordinator\n        return conf\n\n    @staticmethod\n    def validate(conf):\n        validate_presto_conf(conf)\n        if 'coordinator' not in conf['config.properties']:\n            raise ConfigurationError('Must specify coordinator=true in '\n                                     'coordinator\\'s config.properties')\n        if conf['config.properties']['coordinator'] != 'true':\n            raise 
ConfigurationError('Coordinator cannot be false in the '\n                                     'coordinator\\'s config.properties.')\n        return conf\n"
  },
  {
    "path": "prestoadmin/deploy.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nCommon module for deploying the presto configuration\n\"\"\"\n\nimport logging\nimport os\n\nfrom fabric.contrib import files\nfrom fabric.context_managers import settings\nfrom fabric.contrib.files import exists\nfrom fabric.operations import sudo, abort\nfrom fabric.api import env\n\nfrom prestoadmin.util import constants\nfrom prestoadmin.standalone.config import PRESTO_STANDALONE_USER_GROUP\nimport coordinator as coord\nimport prestoadmin.util.fabricapi as util\nimport workers as w\n\n_LOGGER = logging.getLogger(__name__)\n\n\ndef coordinator():\n    \"\"\"\n    Deploy the coordinator configuration to the coordinator node\n    \"\"\"\n    if env.host in util.get_coordinator_role():\n        _LOGGER.info(\"Setting coordinator configuration for \" + env.host)\n        configure_presto(coord.Coordinator().get_conf(),\n                         constants.REMOTE_CONF_DIR)\n\n\ndef workers():\n    \"\"\"\n    Deploy workers configuration to the worker nodes.\n    This will not deploy configuration for a coordinator that is also a worker\n    \"\"\"\n    if env.host in util.get_worker_role() and env.host \\\n            not in util.get_coordinator_role():\n        _LOGGER.info(\"Setting worker configuration for \" + env.host)\n        configure_presto(w.Worker().get_conf(), constants.REMOTE_CONF_DIR)\n\n\ndef configure_presto(conf, remote_dir):\n    print(\"Deploying 
configuration on: \" + env.host)\n    deploy(dict((name, output_format(content)) for (name, content)\n                in conf.iteritems() if name != \"node.properties\"), remote_dir)\n    deploy_node_properties(output_format(conf['node.properties']), remote_dir)\n\n\ndef output_format(conf):\n    try:\n        return dict_to_equal_format(conf)\n    except AttributeError:\n        pass\n    try:\n        return list_to_line_separated(conf)\n    except TypeError:\n        pass\n    except AssertionError:\n        pass\n    return str(conf)\n\n\ndef dict_to_equal_format(conf):\n    sorted_list = sorted(key_val_to_equal(conf.iteritems()))\n    return list_to_line_separated(sorted_list)\n\n\ndef key_val_to_equal(items):\n    return [\"=\".join(item) for item in items]\n\n\ndef list_to_line_separated(conf):\n    assert not isinstance(conf, basestring)\n    return \"\\n\".join(conf)\n\n\ndef deploy(confs, remote_dir):\n    _LOGGER.info(\"Deploying configurations for \" + str(confs.keys()))\n    sudo(\"mkdir -p \" + remote_dir)\n    for name, content in confs.iteritems():\n        write_to_remote_file(content, os.path.join(remote_dir, name),\n                             owner=PRESTO_STANDALONE_USER_GROUP, mode=600)\n\n\ndef secure_create_file(filepath, user_group, mode=600):\n    user, group = user_group.split(':')\n    missing_owner_code = 42\n    command = \\\n        \"( getent passwd {user} >/dev/null || exit {missing_owner_code} ) &&\" \\\n        \" echo '' > {filepath} && \" \\\n        \"chown {user_group} {filepath} && \" \\\n        \"chmod {mode} {filepath} \".format(\n            filepath=filepath, user=user, user_group=user_group, mode=mode,\n            missing_owner_code=missing_owner_code)\n\n    with settings(warn_only=True):\n        result = sudo(command)\n        if result.return_code == missing_owner_code:\n            abort(\"User %s does not exist. 
Make sure the Presto server RPM \"\n                  \"is installed and try again\" % user)\n        elif result.failed:\n            abort(\"Failed to securely create file %s\" % (filepath))\n\n\ndef secure_create_directory(filepath, user_group, mode=755):\n    user, group = user_group.split(':')\n    missing_owner_code = 42\n    command = \\\n        \"( getent passwd {user} >/dev/null || exit {missing_owner_code} ) && \" \\\n        \"mkdir -p {filepath} && \" \\\n        \"chown {user_group} {filepath} && \" \\\n        \"chmod {mode} {filepath} \".format(\n            filepath=filepath, user=user, user_group=user_group, mode=mode,\n            missing_owner_code=missing_owner_code)\n\n    with settings(warn_only=True):\n        result = sudo(command)\n        if result.return_code == missing_owner_code:\n            abort(\"User %s does not exist. Make sure the Presto server RPM \"\n                  \"is installed and try again\" % user)\n        elif result.failed:\n            abort(\"Failed to securely create directory %s\" % (filepath))\n\n\ndef deploy_node_properties(content, remote_dir):\n    _LOGGER.info(\"Deploying node.properties configuration\")\n    name = \"node.properties\"\n    node_file_path = (os.path.join(remote_dir, name))\n    if not exists(node_file_path, use_sudo=True):\n        secure_create_file(node_file_path, PRESTO_STANDALONE_USER_GROUP, mode=600)\n    else:\n        sudo('chown %(owner)s %(filepath)s && chmod %(mode)s %(filepath)s'\n             % {'owner': PRESTO_STANDALONE_USER_GROUP, 'mode': 600, 'filepath': node_file_path})\n    node_id_command = (\n        \"if ! 
( grep -q -s 'node.id' \" + node_file_path + \" ); then \"\n        \"uuid=$(uuidgen); \"\n        \"echo node.id=$uuid >> \" + node_file_path + \";\"\n        \"fi; \"\n        \"sed -i '/node.id/!d' \" + node_file_path + \"; \"\n        )\n    sudo(node_id_command)\n    files.append(os.path.join(remote_dir, name), content, True, shell=True)\n\n\ndef write_to_remote_file(text, filepath, owner, mode=600):\n    secure_create_file(filepath, owner, mode)\n    command = \"echo '{text}' > {filepath}\".format(\n        text=escape_single_quotes(text), filepath=filepath)\n    sudo(command)\n\n\ndef escape_single_quotes(text):\n    # replace a single quote with a (closing) single quote followed by\n    # an escaped quote followed by an (opening) single quote\n    return text.replace(r\"'\", r\"'\\''\")\n"
  },
  {
    "path": "prestoadmin/fabric_patches.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"Monkey patches needed to change logging and error handling in Fabric\"\"\"\nimport traceback\nimport sys\nimport logging\nfrom traceback import format_exc\n\nfrom fabric import state\nfrom fabric.context_managers import settings\nfrom fabric.exceptions import NetworkError\nfrom fabric.job_queue import JobQueue\nfrom fabric.tasks import _is_task, WrappedCallableTask, requires_parallel\nfrom fabric.task_utils import crawl, parse_kwargs\nfrom fabric.utils import error\nimport fabric.api\nimport fabric.operations\nimport fabric.tasks\nfrom fabric.network import needs_host, to_dict, disconnect_all\n\nfrom prestoadmin.util import exception\n\n\n_LOGGER = logging.getLogger(__name__)\nold_warn = fabric.utils.warn\nold_abort = fabric.utils.abort\nold_run = fabric.operations.run\nold_sudo = fabric.operations.sudo\n\n\n# Need to monkey patch Fabric's warn method in order to print out\n# all exceptions seen to the logs.\ndef warn(msg):\n    if fabric.api.env.host:\n        msg = '[' + fabric.api.env.host + '] ' + msg\n    old_warn(msg)\n    _LOGGER.warn(msg + '\\n\\n' + format_exc())\n\nfabric.utils.warn = warn\nfabric.api.warn = warn\n\n\ndef abort(msg):\n    if fabric.api.env.host:\n        msg = '[' + fabric.api.env.host + '] ' + msg\n    old_abort(msg)\n\nfabric.utils.abort = abort\nfabric.api.abort = abort\n\n\n# Monkey patch run and sudo so that the stdout and stderr\n# 
also go to the logs.\n@needs_host\ndef run(command, shell=True, pty=True, combine_stderr=None, quiet=False,\n        warn_only=False, stdout=None, stderr=None, timeout=None,\n        shell_escape=None):\n    out = old_run(command, shell=shell, pty=pty,\n                  combine_stderr=combine_stderr, quiet=quiet,\n                  warn_only=warn_only, stdout=stdout, stderr=stderr,\n                  timeout=timeout, shell_escape=shell_escape)\n    log_output(out)\n    return out\n\n\nfabric.operations.run = run\nfabric.api.run = run\n\n\n@needs_host\ndef sudo(command, shell=True, pty=True, combine_stderr=None, user=None,\n         quiet=False, warn_only=False, stdout=None, stderr=None, group=None,\n         timeout=None, shell_escape=None):\n    out = old_sudo(command, shell=shell, pty=pty,\n                   combine_stderr=combine_stderr, user=user, quiet=quiet,\n                   warn_only=warn_only, stdout=stdout, stderr=stderr,\n                   group=group, timeout=timeout, shell_escape=shell_escape)\n    log_output(out)\n    return out\n\n\nfabric.operations.sudo = sudo\nfabric.api.sudo = sudo\n\n\ndef log_output(out):\n    _LOGGER.info('\\nCOMMAND: ' + out.command + '\\nFULL COMMAND: ' +\n                 out.real_command + '\\nSTDOUT: ' + out + '\\nSTDERR: ' +\n                 out.stderr)\n\n\n# Monkey patch _execute and execute so that we can handle errors differently\ndef _execute(task, host, my_env, args, kwargs, jobs, queue, multiprocessing):\n    \"\"\"\n    Primary single-host work body of execute().\n    \"\"\"\n    # Log to stdout\n    if state.output.running and not hasattr(task, 'return_value'):\n        print(\"[%s] Executing task '%s'\" % (host, my_env['command']))\n    # Create per-run env with connection settings\n    local_env = to_dict(host)\n    local_env.update(my_env)\n    # Set a few more env flags for parallelism\n    if queue is not None:\n        local_env.update({'parallel': True, 'linewise': True})\n    # Handle parallel 
execution\n    if queue is not None:  # Since queue is only set for parallel\n        name = local_env['host_string']\n\n        # Wrap in another callable that:\n        # * expands the env it's given to ensure parallel, linewise, etc are\n        # all set correctly and explicitly. Such changes are naturally\n        # insulated from the parent process.\n        # * nukes the connection cache to prevent shared-access problems\n        # * knows how to send the tasks' return value back over a Queue\n        # * captures exceptions raised by the task\n        def inner(args, kwargs, queue, name, env):\n            state.env.update(env)\n\n            def submit(result):\n                queue.put({'name': name, 'result': result})\n\n            try:\n                state.connections.clear()\n                submit(task.run(*args, **kwargs))\n            except BaseException, e:\n                _LOGGER.error(traceback.format_exc())\n                submit(e)\n                sys.exit(1)\n\n        # Stuff into Process wrapper\n        kwarg_dict = {\n            'args': args,\n            'kwargs': kwargs,\n            'queue': queue,\n            'name': name,\n            'env': local_env,\n        }\n        p = multiprocessing.Process(target=inner, kwargs=kwarg_dict)\n        # Name/id is host string\n        p.name = name\n        # Add to queue\n        jobs.append(p)\n    # Handle serial execution\n    else:\n        with settings(**local_env):\n            return task.run(*args, **kwargs)\n\n\ndef execute(task, *args, **kwargs):\n    \"\"\"\n    Patched version of fabric's execute task with alternative error handling\n    \"\"\"\n    my_env = {'clean_revert': True}\n    results = {}\n    # Obtain task\n    is_callable = callable(task)\n    if not (is_callable or _is_task(task)):\n        # Assume string, set env.command to it\n        my_env['command'] = task\n        task = crawl(task, state.commands)\n        if task is None:\n            msg = \"%r is 
not callable or a valid task name\" % (\n                my_env['command'],)\n            if state.env.get('skip_unknown_tasks', False):\n                warn(msg)\n                return\n            else:\n                abort(msg)\n    # Set env.command if we were given a real function or callable task obj\n    else:\n        dunder_name = getattr(task, '__name__', None)\n        my_env['command'] = getattr(task, 'name', dunder_name)\n    # Normalize to Task instance if we ended up with a regular callable\n    if not _is_task(task):\n        task = WrappedCallableTask(task)\n    # Filter out hosts/roles kwargs\n    new_kwargs, hosts, roles, exclude_hosts = parse_kwargs(kwargs)\n    # Set up host list\n    my_env['all_hosts'], my_env[\n        'effective_roles'] = task.get_hosts_and_effective_roles(hosts, roles,\n                                                                exclude_hosts,\n                                                                state.env)\n\n    parallel = requires_parallel(task)\n    if parallel:\n        # Import multiprocessing if needed, erroring out usefully\n        # if it can't.\n        try:\n            import multiprocessing\n        except ImportError:\n            import traceback\n\n            tb = traceback.format_exc()\n            abort(tb + \"\"\"\n    At least one task needs to be run in parallel, but the\n    multiprocessing module cannot be imported (see above\n    traceback.) 
Please make sure the module is installed\n    or that the above ImportError is fixed.\"\"\")\n    else:\n        multiprocessing = None\n\n    # Get pool size for this task\n    pool_size = task.get_pool_size(my_env['all_hosts'], state.env.pool_size)\n    # Set up job queue in case parallel is needed\n    queue = multiprocessing.Queue() if parallel else None\n    jobs = JobQueue(pool_size, queue)\n    if state.output.debug:\n        jobs._debug = True\n\n    # Call on host list\n    if my_env['all_hosts']:\n        # Attempt to cycle on hosts, skipping if needed\n        for host in my_env['all_hosts']:\n            try:\n                results[host] = _execute(\n                    task, host, my_env, args, new_kwargs, jobs, queue,\n                    multiprocessing\n                )\n            except NetworkError, e:\n                results[host] = e\n                # Backwards compat test re: whether to use an exception or\n                # abort\n                if state.env.skip_bad_hosts or state.env.warn_only:\n                    func = warn\n                else:\n                    func = abort\n                error(e.message, func=func, exception=e.wrapped)\n            except SystemExit, e:\n                results[host] = e\n\n            # If requested, clear out connections here and not just at the end.\n            if state.env.eagerly_disconnect:\n                disconnect_all()\n\n        # If running in parallel, block until job queue is emptied\n        if jobs:\n            jobs.close()\n            # Abort if any children did not exit cleanly (fail-fast).\n            # This prevents Fabric from continuing on to any other tasks.\n            # Otherwise, pull in results from the child run.\n            ran_jobs = jobs.run()\n            for name, d in ran_jobs.iteritems():\n                if d['exit_code'] != 0:\n                    if isinstance(d['results'], NetworkError):\n                        func = warn if 
state.env.skip_bad_hosts \\\n                            or state.env.warn_only else abort\n                        error(d['results'].message,\n                              exception=d['results'].wrapped, func=func)\n                    elif exception.is_arguments_error(d['results']):\n                        raise d['results']\n                    elif isinstance(d['results'], SystemExit):\n                        # System exit indicates abort\n                        pass\n                    elif isinstance(d['results'], BaseException):\n                        error(d['results'].message, exception=d['results'])\n                    else:\n                        error('One or more hosts failed while executing task.')\n                results[name] = d['results']\n\n    # Or just run once for local-only\n    else:\n        with settings(**my_env):\n            results['<local-only>'] = task.run(*args, **new_kwargs)\n    # Return what we can from the inner task executions\n\n    return results\n\n\nfabric.tasks._execute = _execute\nfabric.tasks.execute = execute\n"
  },
  {
    "path": "prestoadmin/file.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nCommands for running scripts on a cluster\n\"\"\"\nimport logging\nfrom fabric.operations import put, sudo\nfrom fabric.decorators import task\nfrom fabric.api import env\nfrom os import path\n\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.constants import REMOTE_COPY_DIR\nfrom prestoadmin.plugin import write\n\n_LOGGER = logging.getLogger(__name__)\n__all__ = ['run', 'copy']\n\n\n@task\n@requires_config(StandaloneConfig)\ndef run(script, remote_dir='/tmp'):\n    \"\"\"\n    Run an arbitrary script on all nodes in the cluster.\n\n    Parameters:\n        script - The path to the script\n        remote_dir - Where to put the script on the cluster.  Default is /tmp.\n    \"\"\"\n    script_name = path.basename(script)\n    remote_path = path.join(remote_dir, script_name)\n    put(script, remote_path)\n    sudo('chmod u+x %s' % remote_path)\n    sudo(remote_path)\n    sudo('rm %s' % remote_path)\n\n\n@task\n@requires_config(StandaloneConfig)\ndef copy(local_file, remote_dir=REMOTE_COPY_DIR):\n    \"\"\"\n    Copy a file to all nodes in the cluster.\n\n    Parameters:\n        local_file - The path to the file\n        remote_dir - Where to put the file on the cluster.  
Default is /tmp.\n    \"\"\"\n    _LOGGER.info('copying file to %s' % env.host)\n    write(local_file, remote_dir)\n"
  },
  {
    "path": "prestoadmin/main.py",
    "content": "# -*- coding: utf-8 -*-\n\n##\n# This file was copied from Fabric-1.8.0 with some modifications.\n#\n# This distribution of fabric is distributed under the following BSD license:\n#\n#  Copyright (c) 2009, Christian Vest Hansen and Jeffrey E. Forcier\n#  All rights reserved.\n#\n#  Redistribution and use in source and binary forms, with or without\n#  modification, are permitted provided that the following conditions are met:\n#\n#      * Redistributions of source code must retain the above copyright notice,\n#        this list of conditions and the following disclaimer.\n#      * Redistributions in binary form must reproduce the above copyright\n#        notice, this list of conditions and the following disclaimer in the\n#        documentation and/or other materials provided with the distribution.\n#\n#  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n#  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n#  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n#  ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\n#  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n#  CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n#  SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n#  INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n#  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n#  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n#  POSSIBILITY OF SUCH DAMAGE.\n#\n##\n\n\n\"\"\"\nThis module contains Fab's `main` method plus related subroutines.\n\n`main` is executed as the command line ``fab`` program and takes care of\nparsing options and commands, loading the user settings file, loading a\nfabfile, and executing the commands given.\n\nThe other callables defined in this module are internal only. 
Anything useful\nto individuals leveraging Fabric as a library, should be kept elsewhere.\n\"\"\"\nimport copy\nimport getpass\nimport logging\nfrom operator import isMappingType\nfrom optparse import Values, SUPPRESS_HELP\nimport os\nimport sys\nimport textwrap\nimport types\n\n# For checking callables against the API, & easy mocking\nfrom fabric import api, state\nfrom fabric.contrib import console, files, project\n\nfrom fabric.state import env_options\nfrom fabric.tasks import Task, execute\nfrom fabric.task_utils import _Dict, crawl\nfrom fabric.utils import abort, indent, warn, _pty_size\n\nfrom prestoadmin.util.exception import ConfigurationError, is_arguments_error\nfrom prestoadmin import __version__\nfrom prestoadmin.util.application import entry_point\nfrom prestoadmin.util.fabric_application import FabricApplication\nfrom prestoadmin.util.hiddenoptgroup import HiddenOptionGroup\nfrom prestoadmin.util.parser import LoggingOptionParser\n\n# One-time calculation of \"all internal callables\" to avoid doing this on every\n# check of a given fabfile callable (in is_classic_task()).\n_modules = [api, project, files, console]\n_internals = reduce(lambda x, y: x + filter(callable, vars(y).values()),\n                    _modules, [])\n_LOGGER = logging.getLogger(__name__)\n\n\ndef _get_presto_env_options():\n    new_env_options = copy.deepcopy(env_options)\n    commands_to_remove = ['fabfile', 'parallel', 'rcfile', 'skip_bad_hosts',\n                          'warn_only', 'always_use_pty', 'skip_unknown_tasks',\n                          'abort_on_prompts', 'pool_size',\n                          'eagerly_disconnect', 'ssh_config_path']\n    commands_to_hide = ['--roles', '--shell', '--linewise', '--show', '--hide']\n    new_env_options = \\\n        [x for x in new_env_options if x.dest not in commands_to_remove]\n    for env_option in new_env_options:\n        if env_option.get_opt_string() in commands_to_hide:\n            env_option.help = SUPPRESS_HELP\n   
 return new_env_options\n\n\npresto_env_options = _get_presto_env_options()\n\n\n# Module recursion cache\nclass _ModuleCache(object):\n    \"\"\"\n    Set-like object operating on modules and storing __name__s internally.\n    \"\"\"\n\n    def __init__(self):\n        self.cache = set()\n\n    def __contains__(self, value):\n        return value.__name__ in self.cache\n\n    def add(self, value):\n        return self.cache.add(value.__name__)\n\n    def clear(self):\n        return self.cache.clear()\n\n\n_seen = _ModuleCache()\n\n\ndef is_classic_task(tup):\n    \"\"\"\n    Takes (name, object) tuple, returns True if it's a non-Fab public callable.\n    \"\"\"\n    name, func = tup\n    try:\n        is_classic = (\n            callable(func) and (func not in _internals) and not\n            name.startswith('_')\n        )\n    # Handle poorly behaved __eq__ implementations\n    except (ValueError, TypeError):\n        is_classic = False\n    return is_classic\n\n\ndef load_fabfile(path, importer=None):\n    \"\"\"\n    Import given fabfile path and return (docstring, callables).\n\n    Specifically, the fabfile's ``__doc__`` attribute (a string) and a\n    dictionary of ``{'name': callable}`` containing all callables which pass\n    the \"is a Fabric task\" test.\n    \"\"\"\n    if importer is None:\n        importer = __import__\n    # Get directory and fabfile name\n    directory, fabfile = os.path.split(path)\n    # If the directory isn't in the PYTHONPATH, add it so our import will work\n    added_to_path = False\n    index = None\n    if directory not in sys.path:\n        sys.path.insert(0, directory)\n        added_to_path = True\n    # If the directory IS in the PYTHONPATH, move it to the front temporarily,\n    # otherwise other fabfiles -- like Fabric's own -- may scoop the intended\n    # one.\n    else:\n        i = sys.path.index(directory)\n        if i != 0:\n            # Store index for later restoration\n            index = i\n            # 
Add to front, then remove from original position\n            sys.path.insert(0, directory)\n            del sys.path[i + 1]\n    # Perform the import (trimming off the .py)\n    imported = importer(os.path.splitext(fabfile)[0])\n    # Remove directory from path if we added it ourselves (just to be neat)\n    if added_to_path:\n        del sys.path[0]\n    # Put back in original index if we moved it\n    if index is not None:\n        sys.path.insert(index + 1, directory)\n        del sys.path[0]\n\n    # Actually load tasks\n    docstring, new_style, classic, default = load_tasks_from_module(imported)\n    tasks = new_style if state.env.new_style_tasks else classic\n    # Clean up after ourselves\n    _seen.clear()\n    return docstring, tasks\n\n\ndef load_tasks_from_module(imported):\n    \"\"\"\n    Handles loading all of the tasks for a given `imported` module\n    \"\"\"\n    # Obey the use of <module>.__all__ if it is present\n    imported_vars = vars(imported)\n    if \"__all__\" in imported_vars:\n        imported_vars = [(name, imported_vars[name]) for name in\n                         imported_vars if name in imported_vars[\"__all__\"]]\n    else:\n        imported_vars = imported_vars.items()\n    # Return a two-tuple value.  
First is the documentation, second is a\n    # dictionary of callables only (and don't include Fab operations or\n    # underscored callables)\n    new_style, classic, default = extract_tasks(imported_vars)\n    return imported.__doc__, new_style, classic, default\n\n\ndef extract_tasks(imported_vars):\n    \"\"\"\n    Handle extracting tasks from a given list of variables\n    \"\"\"\n    new_style_tasks = _Dict()\n    classic_tasks = {}\n    default_task = None\n    if 'new_style_tasks' not in state.env:\n        state.env.new_style_tasks = False\n    for tup in imported_vars:\n        name, obj = tup\n        if is_task_object(obj):\n            state.env.new_style_tasks = True\n            # Use instance.name if defined\n            if obj.name and obj.name != 'undefined':\n                new_style_tasks[obj.name] = obj\n            else:\n                obj.name = name\n                new_style_tasks[name] = obj\n            # Handle aliasing\n            if obj.aliases is not None:\n                for alias in obj.aliases:\n                    new_style_tasks[alias] = obj\n            # Handle defaults\n            if obj.is_default:\n                default_task = obj\n        elif is_classic_task(tup):\n            classic_tasks[name] = obj\n        elif is_task_module(obj):\n            docs, newstyle, classic, default = load_tasks_from_module(obj)\n            for task_name, task in newstyle.items():\n                if name not in new_style_tasks:\n                    new_style_tasks[name] = _Dict()\n                new_style_tasks[name][task_name] = task\n            if default is not None:\n                new_style_tasks[name].default = default\n    return new_style_tasks, classic_tasks, default_task\n\n\ndef is_task_module(a):\n    \"\"\"\n    Determine if the provided value is a task module\n    \"\"\"\n    # return (type(a) is types.ModuleType and\n    #        any(map(is_task_object, vars(a).values())))\n    if isinstance(a, types.ModuleType) 
and a not in _seen:\n        # Flag module as seen\n        _seen.add(a)\n        # Signal that we need to check it out\n        return True\n\n\ndef is_task_object(a):\n    \"\"\"\n    Determine if the provided value is a ``Task`` object.\n\n    This returning True signals that all tasks within the fabfile\n    module must be Task objects.\n    \"\"\"\n    return isinstance(a, Task) and a.use_task_objects\n\n\ndef parser_for_options():\n    \"\"\"\n    Handle command-line options with LoggingOptionParser.\n\n    Return parser, largely for use in `parse_arguments`.\n\n    On this parser, you must call parser.parse_args()\n    \"\"\"\n    #\n    # Initialize\n    #\n    parser = LoggingOptionParser(\n        usage='presto-admin [options] <command> [arg]',\n        version='presto-admin %s' % __version__,\n        epilog='\\n' + '\\n'.join(list_commands(None, 'normal')))\n\n    #\n    # Define options that don't become `env` vars (typically ones which cause\n    # Fabric to do something other than its normal execution, such as\n    # --version)\n    #\n\n    # Display info about a specific command\n    parser.add_option(\n        '-d',\n        '--display',\n        dest='display',\n        action='store_true',\n        default=False,\n        help='print detailed information about command'\n    )\n\n    parser.add_option(\n        '--extended-help',\n        action='store_true',\n        dest='extended_help',\n        default=False,\n        help='print out all options, including advanced ones'\n    )\n\n    parser.add_option(\n        '-I',\n        '--initial-password-prompt',\n        action='store_true',\n        default=False,\n        help=\"Force password prompt up-front\"\n    )\n\n    parser.add_option(\n        '--nodeps',\n        action='store_true',\n        dest='nodeps',\n        default=False,\n        help=SUPPRESS_HELP\n    )\n\n    parser.add_option(\n        '--force',\n        action='store_true',\n        dest='force',\n        default=False,\n 
       help=SUPPRESS_HELP\n    )\n\n    #\n    # Add in options which are also destined to show up as `env` vars.\n    #\n\n    advanced_options = HiddenOptionGroup(parser, \"Advanced Options\",\n                                         suppress_help=True)\n\n    # Hide most of the options from the help text so it's simpler. Need to\n    # document the other options, however.\n    commands_to_show = ['password']\n\n    for option in presto_env_options:\n        if option.dest in commands_to_show:\n            parser.add_option(option)\n        else:\n            advanced_options.add_option(option)\n\n    advanced_options.add_option(\n        '--serial',\n        action='store_true',\n        dest='serial',\n        default=False,\n        help=\"default to serial execution method\"\n    )\n\n    # Allow setting of arbitrary env vars at runtime.\n    advanced_options.add_option(\n        '--set',\n        metavar=\"KEY=VALUE,...\",\n        dest='env_settings',\n        default=\"\",\n        help=SUPPRESS_HELP\n    )\n\n    parser.add_option_group(advanced_options)\n\n    # Return parser\n    return parser\n\n\ndef _is_task(name, value):\n    \"\"\"\n    Is the object a task as opposed to e.g. 
a dict or int?\n    \"\"\"\n    return is_classic_task((name, value)) or is_task_object(value)\n\n\ndef _sift_tasks(mapping):\n    tasks, collections = [], []\n    for name, value in mapping.iteritems():\n        if _is_task(name, value):\n            tasks.append(name)\n        elif isMappingType(value):\n            collections.append(name)\n    tasks = sorted(tasks)\n    collections = sorted(collections)\n    return tasks, collections\n\n\ndef _task_names(mapping):\n    \"\"\"\n    Flatten & sort task names in a breadth-first fashion.\n\n    Tasks are always listed before submodules at the same level, but within\n    those two groups, sorting is alphabetical.\n    \"\"\"\n    tasks, collections = _sift_tasks(mapping)\n    for collection in collections:\n        module = mapping[collection]\n        if hasattr(module, 'default'):\n            tasks.append(collection)\n        tasks.extend(map(lambda x: \" \".join((collection, x)),\n                     _task_names(module)))\n    return tasks\n\n\ndef _print_docstring(docstrings, name):\n    if not docstrings:\n        return False\n    docstring = crawl(name, state.commands).__doc__\n    if isinstance(docstring, basestring):\n        return docstring\n\n\ndef _normal_list(docstrings=True):\n    result = []\n    task_names = _task_names(state.commands)\n    # Want separator between name, description to be straight col\n    max_len = reduce(lambda a, b: max(a, len(b)), task_names, 0)\n    sep = '  '\n    trail = '...'\n    max_width = _pty_size()[1] - 1 - len(trail)\n    for name in task_names:\n        docstring = _print_docstring(docstrings, name)\n        if docstring:\n            lines = filter(None, docstring.splitlines())\n            first_line = lines[0].strip()\n            # Truncate it if it's longer than N chars\n            size = max_width - (max_len + len(sep) + len(trail))\n            if len(first_line) > size:\n                first_line = first_line[:size] + trail\n            output = 
name.ljust(max_len) + sep + first_line\n        # Or nothing (so just the name)\n        else:\n            output = name\n        result.append(indent(output))\n    return result\n\n\nCOMMANDS_HEADER = 'Commands:'\n\n\ndef list_commands(docstring, format_):\n    \"\"\"\n    Print all found commands/tasks, then exit. Invoked with ``-l/--list.``\n\n    If ``docstring`` is non-empty, it will be printed before the task list.\n\n    ``format_`` should conform to the options specified in\n    ``LIST_FORMAT_OPTIONS``, e.g. ``\"short\"``, ``\"normal\"``.\n    \"\"\"\n    # Short-circuit with simple short output\n    if format_ == \"short\":\n        return _task_names(state.commands)\n    # Otherwise, handle more verbose modes\n    result = []\n    # Docstring at top, if applicable\n    if docstring:\n        trailer = \"\\n\" if not docstring.endswith(\"\\n\") else \"\"\n        result.append(docstring + trailer)\n    header = COMMANDS_HEADER\n    result.append(header)\n    c = _normal_list()\n    result.extend(c)\n    result.extend(\"\\n\")\n    return result\n\n\ndef get_task_docstring(task):\n    details = [\n        textwrap.dedent(task.__doc__)\n        if task.__doc__\n        else 'No docstring provided']\n\n    return '\\n'.join(details)\n\n\ndef display_command(name, code=0):\n    \"\"\"\n    Print command function's docstring, then exit. Invoked with -d/--display.\n    \"\"\"\n    # Sanity check\n    command = crawl(name, state.commands)\n    name = name.replace(\".\", \" \")\n    if command is None:\n        msg = \"Task '%s' does not appear to exist. 
Valid task names:\n%s\"\n        abort(msg % (name, \"\\n\".join(_normal_list(False))))\n    # get the presented docstring if found\n    task_details = get_task_docstring(command)\n\n    if task_details:\n        print(\"Displaying detailed information for task '%s':\" % name)\n        print('')\n        print(indent(task_details, strip=True))\n        print('')\n    # Or print notice if not\n    else:\n        print(\"No detailed information available for task '%s':\" % name)\n    sys.exit(code)\n\n\ndef parse_arguments(arguments, commands):\n    \"\"\"\n    Parse string list into list of tuples: command, args.\n\n    commands is formatted like {'install' : {'server' : WrappedCallable,\n    'cli' : WrappedCallable}, 'topology': {'show' : WrappedCallable}}\n\n    Thus, since our arguments are separated by spaces, and are of the form\n    ['install', 'server'], we iterate through the commands, progressively\n    going deeper into the dict.  If we run out of elements in the dict,\n    the rest of the tokens are arguments to the function. If we don't\n    get down to the bottom-most level, the command is not valid. 
If\n    at any point the next token is not in the possible_cmds map, the\n    command is invalid.\n    \"\"\"\n\n    possible_cmds = commands.copy()\n    pos = 0\n\n    while pos < len(arguments):\n        if not isinstance(possible_cmds, dict):\n            # the rest are all arguments to the cmd\n            break\n        if arguments[pos] not in possible_cmds:\n            invalid_command_error(arguments)\n        possible_cmds = possible_cmds[arguments[pos]]\n        pos += 1\n\n    if isinstance(possible_cmds, dict):\n        invalid_command_error(arguments)\n\n    cmds = [(\".\".join(arguments[:pos]), arguments[pos:], {}, [], [], [])]\n    return cmds\n\n\ndef invalid_command_error(arguments):\n    raise NameError(\"Command not found:\\n%s\" % indent(\" \".join(arguments)))\n\n\ndef update_output_levels(show, hide):\n    \"\"\"\n    Update state.output values as per given comma-separated list of key names.\n\n    For example, ``update_output_levels(show='debug,warnings')`` is\n    functionally equivalent to ``state.output['debug'] = True ;\n    state.output['warnings'] = True``. 
Conversely, anything given to ``hide``\n    sets the values to ``False``.\n    \"\"\"\n    if show:\n        for key in show.split(','):\n            state.output[key] = True\n    if hide:\n        for key in hide.split(','):\n            state.output[key] = False\n\n\ndef show_commands(docstring, format, code=0):\n    print(\"\\n\".join(list_commands(docstring, format)))\n    sys.exit(code)\n\n\ndef run_tasks(task_list):\n    for name, args, kwargs, arg_hosts, arg_roles, arg_excl_hosts in task_list:\n        try:\n            nodeps_tasks = ['package.install', 'server.uninstall',\n                            'server.install', 'server.upgrade']\n            if state.env.nodeps and name.strip() not in nodeps_tasks:\n                sys.stderr.write('Invalid argument --nodeps to task: %s\\n'\n                                 % name)\n                display_command(name, 2)\n\n            return execute(\n                name,\n                hosts=state.env.hosts,\n                roles=arg_roles,\n                exclude_hosts=state.env.exclude_hosts,\n                *args, **kwargs\n            )\n        except TypeError as e:\n            if is_arguments_error(e):\n                print(\"Incorrect number of arguments to task.\\n\")\n                _LOGGER.error('Incorrect number of arguments to task',\n                              exc_info=True)\n                display_command(name, 2)\n            else:\n                raise\n        except BaseException as e:\n            raise\n\n\ndef _escape_split(sep, argstr):\n    \"\"\"\n    Allows for escaping of the separator: e.g. task:arg='foo\\, bar'\n\n    It should be noted that the way bash et al. 
do command line parsing, those\n    single quotes are required.\n    \"\"\"\n    escaped_sep = r'\\%s' % sep\n\n    if escaped_sep not in argstr:\n        return argstr.split(sep)\n\n    before, _, after = argstr.partition(escaped_sep)\n    startlist = before.split(sep)  # a regular split is fine here\n    unfinished = startlist[-1]\n    startlist = startlist[:-1]\n\n    # recurse because there may be more escaped separators\n    endlist = _escape_split(sep, after)\n\n    # finish building the escaped value. we use endlist[0] because the first\n    # part of the string sent in recursion is the rest of the escaped value.\n    unfinished += sep + endlist[0]\n\n    return startlist + [unfinished] + endlist[1:]  # put together all the parts\n\n\ndef _to_boolean(string):\n    \"\"\"\n    Parses the given string into a boolean.  If it's already a boolean, it's\n    returned unchanged.\n\n    This method does strict parsing; only the string \"True\" returns the boolean\n    True, and only the string \"False\" returns the boolean False.  All other\n    values throw a ValueError.\n\n    Args:\n        string: the string to parse\n    \"\"\"\n    if string is True or string == 'True':\n        return True\n    elif string is False or string == 'False':\n        return False\n\n    raise ValueError(\"invalid boolean string: %s\" % string)\n\n\ndef _handle_generic_set_env_vars(non_default_options):\n    if not hasattr(non_default_options, 'env_settings'):\n        return non_default_options\n\n    # Allow setting of arbitrary env keys.\n    # This comes *before* the \"specific\" env_options so that those may\n    # override these ones. Specific should override generic, if somebody\n    # was silly enough to specify the same key in both places.\n    # E.g. 
\"fab --set shell=foo --shell=bar\" should have env.shell set to\n    # 'bar', not 'foo'.\n    for pair in _escape_split(',', non_default_options.env_settings):\n        pair = _escape_split('=', pair)\n        # \"--set x\" => set env.x to True\n        # \"--set x=\" => set env.x to \"\"\n        key = pair[0]\n        value = True\n        if len(pair) == 2:\n            try:\n                value = _to_boolean(pair[1])\n            except ValueError:\n                value = pair[1]\n        state.env[key] = value\n\n    non_default_options_dict = vars(non_default_options)\n    del non_default_options_dict['env_settings']\n    return Values(non_default_options_dict)\n\n\ndef validate_hosts(cli_hosts, config_path):\n    # If there's no config file to validate against, don't. This would happen\n    # in the case of a task that doesn't define a callback that loads config.\n    if config_path is None:\n        return\n\n    # At this point, state.env.conf_hosts contains the hosts that we loaded\n    # from the configuration, if any.\n    cli_host_set = set(cli_hosts.split(','))\n    if 'conf_hosts' in state.env:\n        conf_hosts = set(state.env.conf_hosts)\n        if not cli_host_set.issubset(conf_hosts):\n            raise ConfigurationError('Hosts defined in --hosts/-H must be '\n                                     'present in %s' % (config_path))\n    else:\n        raise ConfigurationError(\n            'Hosts cannot be defined with --hosts/-H when no hosts are listed '\n            'in the configuration file %s. Correct the configuration file or '\n            'run the command again without the --hosts or -H option.' 
%\n            config_path)\n\n\ndef _update_env(default_options, non_default_options, load_config_callback):\n    # Fill in the state with the default values\n    for opt, value in default_options.__dict__.items():\n        state.env[opt] = value\n\n    if load_config_callback:\n        config_path = load_config(load_config_callback)\n    else:\n        config_path = None\n\n    # Save env.hosts from the config into another env variable for validation.\n    # _handle_generic_set_env_vars will overwrite it if --set hosts=...\n    # is present.\n    if state.env.hosts:\n        state.env.conf_hosts = state.env.hosts\n\n    non_default_options = _handle_generic_set_env_vars(non_default_options)\n\n    if isinstance(state.env.hosts, basestring):\n        # Take advantage of the fact that if there was a generic --set option\n        # for hosts, it's still an unsplit, comma separated string rather than\n        # a list, which is what it would be after loading hosts from a config\n        # file.\n        validate_hosts(state.env.hosts, config_path)\n\n    # Go back through and add the non-default values (e.g. the values that\n    # were set on the CLI)\n    for opt, value in non_default_options.__dict__.items():\n        # raise error if hosts not in topology\n        if opt == 'hosts':\n            validate_hosts(value, config_path)\n\n        state.env[opt] = value\n\n    # Handle --hosts, --roles, --exclude-hosts (comma separated string =>\n    # list)\n    for key in ['hosts', 'roles', 'exclude_hosts']:\n        if key in state.env and isinstance(state.env[key], basestring):\n            state.env[key] = state.env[key].split(',')\n\n    state.output['running'] = False\n    state.output['status'] = False\n    update_output_levels(show=state.env.show, hide=state.env.hide)\n    state.env.skip_bad_hosts = True\n\n    # env.conf_hosts is an implementation detail of the option parsing and\n    # validation. 
Hide it from the world.\n    if 'conf_hosts' in state.env:\n        del state.env['conf_hosts']\n\n\ndef get_default_options(options, non_default_options):\n    \"\"\"\n    Given a dictionary of options containing the defaults optparse has filled\n    in, and a dictionary of options containing only options parsed from the\n    command line, returns a dictionary containing the default options that\n    remain after removing the default options that were overridden by the\n    options passed on the command line.\n\n    Mathematically, this returns a dictionary with\n    default_options.keys = options.keys() \\ non_default_options.keys()\n    where \\ is the set difference operator.\n    The value of a key present in default_options is the value of the same key\n    in options.\n    \"\"\"\n    options_dict = vars(options)\n    non_default_options_dict = vars(non_default_options)\n    default_options = Values(dict((k, options_dict[k]) for k in options_dict\n                                  if k not in non_default_options_dict))\n    return default_options\n\n\ndef _get_config_callback(commands_to_run):\n    config_callback = None\n    if len(commands_to_run) != 1:\n        raise Exception('Multiple commands are not supported')\n\n    c = commands_to_run[0][0]\n    module, command = c.split('.')\n\n    module_dict = state.commands[module]\n    command_callable = module_dict[command]\n\n    try:\n        config_callback = command_callable.pa_config_callback\n    except AttributeError:\n        pass\n\n    return config_callback\n\n\ndef parse_and_validate_commands(args=sys.argv[1:]):\n    # Find local fabfile path or abort\n    fabfile = \"prestoadmin\"\n\n    # Store absolute path to fabfile in case anyone needs it\n    state.env.real_fabfile = fabfile\n\n    # Load fabfile (which calls its module-level code, including\n    # tweaks to env values) and put its commands in the shared commands\n    # dict\n    docstring, callables = load_fabfile(fabfile)\n    
state.commands.update(callables)\n\n    # Parse command line options\n    parser = parser_for_options()\n\n    # Unless you pass in values, optparse fills in the default values for all\n    # of the options. We want to save the version of the options without\n    # default values, because that takes precedence over all other env vars.\n    non_default_options, arguments = parser.parse_args(args, values=Values())\n    options, arguments = parser.parse_args(args)\n    default_options = get_default_options(options, non_default_options)\n\n    # Handle regular args vs -- args\n    arguments = parser.largs\n\n    if len(parser.rargs) > 0:\n        warn(\"Arbitrary remote shell commands not supported.\")\n        show_commands(None, 'normal', 2)\n\n    if options.extended_help:\n        parser.print_extended_help()\n        sys.exit(0)\n\n    # If user didn't specify any commands to run, show help\n    if not arguments:\n        parser.print_help()\n        sys.exit(0)  # don't consider this an error\n\n    # Parse arguments into commands to run (plus args/kwargs/hosts)\n    commands_to_run = None\n    try:\n        commands_to_run = parse_arguments(arguments, state.commands)\n    except NameError as e:\n        warn(e.message)\n        _LOGGER.warn(\"Unable to parse arguments\", exc_info=True)\n        parser.print_help()\n        sys.exit(2)\n\n    # Handle show (command-specific help) option\n    if options.display:\n        display_command(commands_to_run[0][0])\n\n    load_config_callback = _get_config_callback(commands_to_run)\n    _update_env(default_options, non_default_options, load_config_callback)\n\n    if not options.serial:\n        state.env.parallel = True\n\n    state.env.warn_only = False\n\n    # Initial password prompt, if requested\n    if options.initial_password_prompt:\n        prompt = \"Initial value for env.password: \"\n        state.env.password = getpass.getpass(prompt)\n\n    state.env['tasks'] = [x[0] for x in commands_to_run]\n\n    
return commands_to_run\n\n\ndef load_config(load_config_callback):\n    \"\"\"\n    This provides a patch point for the unit tests so that individual test\n    cases don't need to know the internal details of what happens in\n    _load_topology. See test_main.py for examples.\n    \"\"\"\n    return load_config_callback()\n\n\ndef _exit_code(results):\n    \"\"\"\n    results from run_tasks take the form of a dict with one or more entries\n    hostname: Exception | None\n\n    If every entry in the dict has a value of None, the exit code is 0.\n    If any entry has a value that is not None, something failed, and we should\n    exit with a non-zero exit code.\n\n    That isn't really the whole story: Any task that calls tasks.execute() and\n    returns that as the result will have an item in the dictionary of the form\n    hostname: {hostname: Exception | None}. This means that we need to\n    recursively check any values in the map that are of type dict following the\n    above scheme.\n    \"\"\"\n    for v in results.values():\n        # No exception, inspect the next value.\n        if v is None:\n            continue\n\n        # The value is a dict resulting from calling fabric.tasks.execute.\n        # Check the results recursively\n        if type(v) is dict:\n            exit_code = _exit_code(v)\n            if exit_code != 0:\n                return exit_code\n            continue\n\n        # In any case where things were OK above, we've continued the loop. 
At\n        # this point, we know something failed.\n        return 1\n    return 0\n\n\n@entry_point('presto-admin', version=__version__,\n             log_file_path=\"presto-admin.log\",\n             application_class=FabricApplication)\ndef main(args=sys.argv[1:]):\n    \"\"\"\n    Main command-line execution loop.\n    \"\"\"\n    commands_to_run = parse_and_validate_commands(args)\n\n    names = \", \".join(x[0] for x in commands_to_run)\n    _LOGGER.debug(\"Commands to run: %s\" % names)\n\n    # At this point all commands must exist, so execute them in order.\n    return _exit_code(run_tasks(commands_to_run))\n\n\nif __name__ == \"__main__\":\n    sys.exit(main())\n"
  },
  {
    "path": "prestoadmin/mode.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for handling presto-admin mode-related functionality.\n\"\"\"\n\nimport os\n\nfrom fabric.api import abort, task\nfrom fabric.decorators import runs_once\n\nfrom prestoadmin import config\nfrom prestoadmin.util.exception import ConfigurationError, \\\n    ConfigFileNotFoundError\nfrom prestoadmin.util.local_config_util import get_config_directory\n\nMODE_CONF_PATH = os.path.join(get_config_directory(), 'mode.json')\nMODE_KEY = 'mode'\n\nMODE_SLIDER = 'yarn_slider'\nMODE_STANDALONE = 'standalone'\n\nVALID_MODES = [MODE_SLIDER, MODE_STANDALONE]\n\n\ndef _load_mode_config():\n    return config.get_conf_from_json_file(MODE_CONF_PATH)\n\n\ndef _store_mode_config(mode_config):\n    config.write(config.json_to_string(mode_config), MODE_CONF_PATH)\n\n\ndef get_mode(validate=True):\n    mode_config = _load_mode_config()\n    mode = mode_config.get(MODE_KEY)\n\n    if validate and mode is None:\n        raise ConfigurationError(\n            'Required key %s not found in configuration file %s' % (\n                MODE_KEY, MODE_CONF_PATH))\n\n    if validate and not validate_mode(mode):\n        raise ConfigurationError(\n            'Invalid mode %s in configuration file %s. 
Valid modes are %s' % (\n                mode, MODE_CONF_PATH, ' '.join(VALID_MODES)))\n\n    return mode\n\n\ndef validate_mode(mode):\n    return mode in VALID_MODES\n\n\ndef for_mode(mode, mode_map):\n    if sorted(mode_map.keys()) != sorted(VALID_MODES):\n        raise Exception(\n            'keys in for_mode\\n%s\\ndo not match VALID_MODES\\n%s' % (\n                mode_map.keys(), VALID_MODES))\n    return mode_map[mode]\n\n\n@task\n@runs_once\ndef select(new_mode):\n    \"\"\"\n    Change the mode.\n    \"\"\"\n    if not validate_mode(new_mode):\n        abort('Invalid mode selection %s. Valid modes are %s' % (\n            new_mode, ' '.join(VALID_MODES)))\n\n    mode_config = {}\n    try:\n        mode_config = _load_mode_config()\n    except ConfigFileNotFoundError:\n        pass\n\n    mode_config[MODE_KEY] = new_mode\n    _store_mode_config(mode_config)\n\n\n@task\n@runs_once\ndef get():\n    \"\"\"\n    Display the current mode.\n    \"\"\"\n    mode = None\n    try:\n        mode = get_mode(validate=False)\n        print mode\n    except ConfigFileNotFoundError:\n        abort(\"Select a mode using the subcommand 'mode select <mode>'\")\n\n\n@task\n@runs_once\ndef list():\n    \"\"\"\n    List the supported modes.\n    \"\"\"\n    print ' '.join(VALID_MODES)\n"
  },
  {
    "path": "prestoadmin/node.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for the presto coordinator's configuration.\nLoads and validates the coordinator.json file and creates the files needed\nto deploy on the presto cluster\n\"\"\"\nfrom abc import abstractmethod, ABCMeta\nimport logging\nimport os\n\nimport config\nimport presto_conf\nfrom prestoadmin.presto_conf import get_presto_conf\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Node():\n    __metaclass__ = ABCMeta\n\n    def __init__(self):\n        pass\n\n    def get_conf(self):\n        conf = get_presto_conf(self._get_conf_dir())\n        for name in presto_conf.REQUIRED_FILES:\n            if name not in conf:\n                _LOGGER.debug('%s configuration for %s not found.  
'\n                              'Default configuration will be deployed',\n                              type(self).__name__, name)\n                conf_value = self.default_config(name)\n                conf[name] = conf_value\n                file_path = os.path.join(self._get_conf_dir(), name)\n                config.write_conf_to_file(conf_value, file_path)\n\n        self.validate(conf)\n        return conf\n\n    @abstractmethod\n    def _get_conf_dir(self):\n        pass\n\n    @abstractmethod\n    def default_config(self, filename):\n        pass\n\n    @staticmethod\n    @abstractmethod\n    def validate(conf):\n        pass\n\n    def build_all_defaults(self):\n        conf = {}\n        for name in presto_conf.REQUIRED_FILES:\n            conf[name] = self.default_config(name)\n        return conf\n"
  },
  {
    "path": "prestoadmin/package.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for rpm package deploy and install using presto-admin\n\"\"\"\nimport logging\n\nfrom fabric.context_managers import settings, hide, shell_env\nfrom fabric.decorators import task, runs_once\nfrom fabric.operations import sudo, put, os, local\nfrom fabric.state import env\nfrom fabric.tasks import execute\nfrom fabric.utils import abort\n\nfrom prestoadmin.util import constants\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.fabricapi import get_host_list\n\n_LOGGER = logging.getLogger(__name__)\n__all__ = ['install', 'uninstall']\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\ndef install(local_path):\n    \"\"\"\n    Install the rpm package on the cluster\n\n    Args:\n        local_path: Absolute path to the rpm to be installed\n        --nodeps (optional): Flag to indicate if rpm install\n            should ignore checking package dependencies. 
Equivalent\n            to adding --nodeps flag to rpm -i.\n    \"\"\"\n    check_if_valid_rpm(local_path)\n    return execute(deploy_install, local_path, hosts=get_host_list())\n\n\ndef check_if_valid_rpm(local_path):\n    _LOGGER.info(\"Checking rpm checksum to see if it is corrupted\")\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        result = local('rpm -K --nosignature ' + local_path, capture=True)\n    if 'MD5 NOT OK' in result.stdout:\n        abort(\"Corrupted RPM. Try downloading the RPM again.\")\n    elif result.stderr:\n        abort(result.stderr)\n\n\ndef deploy_install(local_path):\n    deploy_action(local_path, rpm_install)\n\n\ndef deploy_upgrade(local_path):\n    deploy_action(local_path, rpm_upgrade)\n\n\ndef deploy_action(local_path, rpm_action):\n    deploy(local_path)\n    rpm_action(os.path.basename(local_path))\n\n\ndef deploy(local_path=None):\n    if not os.path.isfile(local_path):\n        abort('RPM file not found at %s.' % local_path)\n\n    _LOGGER.info(\"Deploying rpm on %s...\" % env.host)\n    print(\"Deploying rpm on %s...\" % env.host)\n    sudo('mkdir -p ' + constants.REMOTE_PACKAGES_PATH)\n    ret_list = put(local_path, constants.REMOTE_PACKAGES_PATH, use_sudo=True)\n    if not ret_list.succeeded:\n        _LOGGER.warn(\"Failure during put. 
Now using /tmp as temp dir...\")\n        ret_list = put(local_path, constants.REMOTE_PACKAGES_PATH,\n                       use_sudo=True, temp_dir='/tmp')\n    if ret_list.succeeded:\n        print(\"Package deployed successfully on: \" + env.host)\n\n\ndef _rpm_install(package_path):\n    nodeps = _nodeps_rpm_option()\n\n    if 'java8_home' not in env or env.java8_home is None:\n        return sudo('rpm -i %s%s' % (nodeps, package_path))\n    else:\n        with shell_env(JAVA8_HOME='%s' % env.java8_home):\n            return sudo('rpm -i %s%s' % (nodeps, package_path))\n\n\ndef _nodeps_rpm_option():\n    nodeps = ''\n    if env.nodeps:\n        nodeps = '--nodeps '\n    return nodeps\n\n\ndef rpm_install(rpm_name):\n    _LOGGER.info(\"Installing the rpm\")\n    if _rpm_install(_rpm_path(rpm_name)).succeeded:\n        print(\"Package installed successfully on: \" + env.host)\n\n\ndef _rpm_path(rpm_filename):\n    return os.path.join(constants.REMOTE_PACKAGES_PATH, rpm_filename)\n\n\ndef rpm_upgrade(rpm_name):\n    _LOGGER.info(\"Upgrading the rpm\")\n    rpm_path = _rpm_path(rpm_name)\n    package_name = sudo('rpm -qp --queryformat \\'%%{NAME}\\' %s' % rpm_path,\n                        quiet=True)\n\n    if not package_name.succeeded:\n        abort(\"Corrupted RPM file: %s\" % rpm_path)\n\n    if _rpm_upgrade(rpm_path).succeeded:\n        print(\"Package upgraded successfully on: \" + env.host)\n\n\ndef _rpm_upgrade(package_name):\n    return sudo('rpm -U %s%s' % (_nodeps_rpm_option(), package_name))\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\ndef uninstall(rpm_name):\n    \"\"\"\n    Uninstall the rpm package from the cluster\n\n    Args:\n        rpm_name: Name of the rpm to be uninstalled\n        --nodeps (optional): Flag to indicate if rpm uninstall should ignore checking package dependencies. 
Equivalent\n            to adding --nodeps flag to rpm -e.\n        --force (optional): Flag to indicate that rpm uninstall should not fail if package is not installed.\n    \"\"\"\n    return execute(rpm_uninstall, rpm_name, hosts=get_host_list())\n\n\ndef rpm_uninstall(package_name):\n    _LOGGER.info(\"Uninstalling the rpm\")\n\n    if not is_rpm_installed(package_name):\n        if not env.force:\n            abort('Package is not installed: ' + package_name)\n    elif _rpm_uninstall(package_name).succeeded:\n        print(\"Package uninstalled successfully on: \" + env.host)\n\n\ndef is_rpm_installed(package_name):\n    return sudo('rpm -qi %s' % package_name, quiet=True).succeeded\n\n\ndef _rpm_uninstall(package_name):\n    return sudo('rpm -e %s%s' % (_nodeps_rpm_option(), package_name))\n"
  },
  {
    "path": "prestoadmin/plugin.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nmodule for tasks relating to presto plugins\n\"\"\"\nimport logging\nfrom fabric.decorators import task\nfrom fabric.operations import sudo, put\nimport os\nfrom fabric.api import env\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.constants import REMOTE_PLUGIN_DIR\n\n__all__ = ['add_jar']\n_LOGGER = logging.getLogger(__name__)\n\n\ndef write(local_path, remote_dir):\n    sudo(\"mkdir -p \" + remote_dir)\n    put(local_path, remote_dir, use_sudo=True)\n\n\n@task\n@requires_config(StandaloneConfig)\ndef add_jar(local_path, plugin_name, plugin_dir=REMOTE_PLUGIN_DIR):\n    \"\"\"\n    Deploy jar for the specified plugin to the plugin directory.\n\n    Parameters:\n        local_path - Local path to the jar to be deployed\n        plugin_name - Name of the plugin subdirectory to deploy jars to\n        plugin_dir - (Optional) The plugin directory.  If no directory is\n                     given, '/usr/lib/presto/lib/plugin' is used by default.\n    \"\"\"\n    _LOGGER.info('deploying jars on %s' % env.host)\n    write(local_path, os.path.join(plugin_dir, plugin_name))\n"
  },
  {
    "path": "prestoadmin/presto-admin-logging.ini",
    "content": "[loggers]\nkeys=root\n\n[logger_root]\nlevel=DEBUG\nhandlers=file\n\n[handlers]\nkeys=file\n\n[handler_file]\nclass=prestoadmin.util.all_write_handler.AllWriteTimedRotatingFileHandler\nformatter=verbose\nargs=('%(log_file_path)s', 'D', 7)\n\n[formatters]\nkeys=verbose\n\n[formatter_verbose]\nformat=%(asctime)s|%(process)d|%(thread)d|%(name)s|%(levelname)s|%(message)s\n"
  },
  {
    "path": "prestoadmin/presto_conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nModule for processing presto configuration files\n\"\"\"\nimport logging\nimport os\n\nfrom prestoadmin.config import get_conf_from_properties_file, \\\n    get_conf_from_config_file\nfrom prestoadmin.util.exception import ConfigurationError\n\n\nREQUIRED_FILES = [\"node.properties\", \"jvm.config\", \"config.properties\"]\nPRESTO_FILES = [\"node.properties\", \"jvm.config\", \"config.properties\",\n                \"log.properties\"]\n_LOGGER = logging.getLogger(__name__)\n\n\ndef get_presto_conf(conf_dir):\n    if os.path.isdir(conf_dir):\n        file_list = [name for name in os.listdir(conf_dir) if\n                     name in PRESTO_FILES]\n    else:\n        _LOGGER.debug(\"No directory \" + conf_dir)\n        file_list = []\n\n    conf = {}\n    for filename in file_list:\n        ext = os.path.splitext(filename)[1]\n        file_path = os.path.join(conf_dir, filename)\n        if ext == \".properties\":\n            conf[filename] = get_conf_from_properties_file(file_path)\n        elif ext == \".config\":\n            conf[filename] = get_conf_from_config_file(file_path)\n    return conf\n\n\ndef validate_presto_conf(conf):\n    for required in REQUIRED_FILES:\n        if required not in conf:\n            raise ConfigurationError(\"Missing configuration for required \"\n                                     \"file: \" + required)\n\n    expect_object_msg = 
\"%s must be an object with key-value property pairs\"\n    if not isinstance(conf[\"node.properties\"], dict):\n        raise ConfigurationError(expect_object_msg % \"node.properties\")\n\n    if not isinstance(conf[\"jvm.config\"], list):\n        raise ConfigurationError(\"jvm.config must contain a json array of jvm \"\n                                 \"arguments ([arg1, arg2, arg3])\")\n\n    if not isinstance(conf[\"config.properties\"], dict):\n        raise ConfigurationError(expect_object_msg % \"config.properties\")\n\n    return conf\n"
  },
  {
    "path": "prestoadmin/prestoclient.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSimple client to communicate with a Presto server.\n\"\"\"\nimport json\nimport logging\nimport os\nimport socket\nimport urlparse\nfrom httplib import HTTPConnection, HTTPException\nfrom tempfile import mkstemp\n\nfrom StringIO import StringIO\nfrom fabric.operations import get\nfrom fabric.state import env\nfrom fabric.utils import error\nfrom jks import jks, base64, textwrap\nfrom prestoadmin.util.constants import REMOTE_CONF_DIR, CONFIG_PROPERTIES\nfrom prestoadmin.util.exception import InvalidArgumentError\nfrom prestoadmin.util.httpscacertconnection import HTTPSCaCertConnection\nfrom prestoadmin.util.local_config_util import get_coordinator_directory, get_topology_path\nfrom prestoadmin.util.presto_config import PrestoConfig, LDAP_CLIENT_USER_KEY, LDAP_CLIENT_PASSWORD_KEY\n\n_LOGGER = logging.getLogger(__name__)\nURL_TIMEOUT_MS = 5000\nNUM_ROWS = 1000\nDATA_RESP = \"data\"\nNEXT_URI_RESP = \"nextUri\"\n\nCERTIFICATE_ALIAS = 'certificate_alias'\n\n\nclass PrestoClient:\n    def __init__(self, server, user, coordinator_config=None):\n        # immutable stuff\n        self.server = server\n        self.user = user\n        if (coordinator_config is None):\n            coordinator_config = PrestoConfig.coordinator_config()\n        self.coordinator_config = coordinator_config\n        self.port = PrestoClient._get_configured_port(self.coordinator_config)\n\n        
# mutable stuff\n        self.ca_file_path = \"\"\n        self.keystore_data = \"\"\n        self.rows = []\n        self.next_uri = ''\n        self.response_from_server = {}\n\n    @staticmethod\n    def _remove_silently(path):\n        try:\n            os.remove(path)\n        except OSError:\n            pass\n\n    def close(self):\n        PrestoClient._remove_silently(self.ca_file_path)\n\n    def _clear_old_results(self):\n        if self.rows:\n            self.rows = []\n\n        if self.next_uri:\n            self.next_uri = ''\n\n        if self.response_from_server:\n            self.response_from_server = {}\n\n    def run_sql(self, sql, schema=\"default\", catalog=\"hive\"):\n        \"\"\"\n        Execute a query connecting to the Presto server using the passed parameters.\n\n        Args:\n            sql: SQL query to be executed\n            schema: Presto schema to be used while executing query\n                (default=default)\n            catalog: Catalog to be used by the server\n\n        Returns:\n            list of rows, or None if the client was unable to connect to Presto\n        \"\"\"\n        status = self._execute_query(sql, schema, catalog)\n        if status:\n            return self._get_rows()\n        else:\n            return None\n\n    def _execute_query(self, sql, schema, catalog):\n        if not sql:\n            raise InvalidArgumentError(\"SQL query missing\")\n\n        if not self.server:\n            raise InvalidArgumentError(\"Server IP missing\")\n\n        if not self.user:\n            raise InvalidArgumentError(\"Username missing\")\n\n        self._clear_old_results()\n\n        headers = {\"X-Presto-Catalog\": catalog,\n                   \"X-Presto-Schema\": schema,\n                   \"X-Presto-User\": self.user,\n                   \"X-Presto-Source\": \"presto-admin\"}\n        answer = ''\n        try:\n            _LOGGER.info(\"Connecting to server at: \" + self.server +\n                         \":\" + 
str(self.port) + \" as user \" + self.user +\n                         \" to execute query \" + sql)\n            conn = self._get_connection()\n            self._add_auth_headers(headers)\n            conn.request(\"POST\", \"/v1/statement\", sql, headers)\n            response = conn.getresponse()\n\n            if response.status != 200:\n                conn.close()\n                _LOGGER.error(\"Connection error: \" +\n                              str(response.status) + \" \" + response.reason)\n                return False\n\n            answer = response.read()\n            conn.close()\n\n            self.response_from_server = json.loads(answer)\n            _LOGGER.info(\"Query executed successfully: %s\" % (sql))\n            return True\n        except (HTTPException, socket.error) as e:\n            _LOGGER.error(\"Error connecting to presto server at: \" +\n                          self.server + \":\" + str(self.port) + ' ' + e.message)\n            return False\n        except ValueError as e:\n            _LOGGER.error('Error connecting to Presto server: ' + e.message +\n                          ' error from server: ' + answer)\n            raise e\n\n    def _get_response_from(self, uri):\n        \"\"\"\n        Sends a GET request to the Presto server at the specified next_uri\n        and updates the response\n\n        Remove the scheme and host/port from the uri; the connection itself\n        has that information.\n        \"\"\"\n        parts = list(urlparse.urlsplit(uri))\n        parts[0] = None\n        parts[1] = None\n        location = urlparse.urlunsplit(parts)\n        conn = self._get_connection()\n        headers = {\"X-Presto-User\": self.user}\n        self._add_auth_headers(headers)\n        conn.request(\"GET\", location, headers=headers)\n        response = conn.getresponse()\n\n        if response.status != 200:\n            conn.close()\n            _LOGGER.error(\"Error making GET request to %s: %s %s\" %\n            
              (uri, response.status, response.reason))\n            return False\n\n        answer = response.read()\n        conn.close()\n\n        self.response_from_server = json.loads(answer)\n        _LOGGER.info(\"GET request successful for uri: \" + uri)\n        return True\n\n    def _build_results_from_response(self):\n        \"\"\"\n        Build the result from the response.\n\n        The response_from_server may contain up to 3 URIs:\n        1. link to fetch the next packet of data ('nextUri')\n        2. TODO: information about the query execution ('infoUri')\n        3. TODO: cancel the query ('partialCancelUri').\n        \"\"\"\n        if NEXT_URI_RESP in self.response_from_server:\n            self.next_uri = self.response_from_server[NEXT_URI_RESP]\n        else:\n            self.next_uri = \"\"\n\n        if DATA_RESP in self.response_from_server:\n            if self.rows:\n                self.rows.extend(self.response_from_server[DATA_RESP])\n            else:\n                self.rows = self.response_from_server[DATA_RESP]\n\n    def _get_rows(self, num_of_rows=NUM_ROWS):\n        \"\"\"\n        Get the rows returned from the query.\n\n        The client sends GET requests to the server using the 'nextUri'\n        from the previous response until the server's response no longer\n        contains a 'nextUri'. When there is no 'nextUri', the query is\n        finished.\n\n        Note that this can only be called once and does not page through\n        the results.\n\n        Parameters:\n            num_of_rows: maximum number of rows to be retrieved. 
1000 by default\n        \"\"\"\n        if num_of_rows == 0:\n            return []\n\n        self._build_results_from_response()\n\n        if not self._get_next_uri():\n            return []\n\n        while self._get_next_uri():\n            if not self._get_response_from(self._get_next_uri()):\n                return []\n            if (len(self.rows) <= num_of_rows):\n                self._build_results_from_response()\n        return self.rows\n\n    def _get_next_uri(self):\n        return self.next_uri\n\n    def _get_connection(self):\n        if self.coordinator_config.use_https():\n            return self._get_https_connection()\n        else:\n            return HTTPConnection(self.server, self.port, False, URL_TIMEOUT_MS)\n\n    @staticmethod\n    def _get_configured_port(coordinator_config):\n        if coordinator_config.use_https():\n            return coordinator_config.get_https_port()\n        else:\n            return coordinator_config.get_http_port()\n\n    def _get_https_connection(self):\n        ca_file_path = self._get_pem()\n        result = HTTPSCaCertConnection(\n                self.server, self.port, None, None, ca_file_path, False, URL_TIMEOUT_MS)\n        return result\n\n    def _fetch_keystore_data(self):\n        if not self.keystore_data:\n            remote_keystore_path = self.coordinator_config.get_client_keystore_path()\n            keystore_data = StringIO()\n            get(remote_keystore_path, keystore_data, use_sudo=True)\n            keystore_data.seek(0)\n            self.keystore_data = keystore_data.getvalue()\n        return self.keystore_data\n\n    def _pem_string(self, der_bytes, type):\n        result = \"-----BEGIN %s-----\\n\" % type\n        result += \"\\r\\n\".join(\n            textwrap.wrap(base64.b64encode(der_bytes).decode('ascii'), 64))\n        result += \"\\n-----END %s-----\\n\" % type\n        return result\n\n    def _write_pem_file(self, directory, der_bytes_list, type):\n        prefix = 
os.path.join(directory,\n                              '%s-' % type.lower().replace(' ', '-'))\n        fd, pem_path = mkstemp('.pem', prefix)\n        # https://www.digicert.com/ssl-support/pem-ssl-creation.htm\n        with open(pem_path, 'w') as pem_file:\n            for der_bytes in der_bytes_list:\n                pem_file.write(self._pem_string(der_bytes, type))\n        os.close(fd)\n        return pem_path\n\n    def _get_pem(self):\n        keystore_data = self._fetch_keystore_data()\n\n        keystore = jks.KeyStore.loads(\n                keystore_data,\n                self.coordinator_config.get_client_keystore_password())\n\n        if len(keystore.private_keys.items()) == 1:\n            _, private_key = keystore.private_keys.items()[0]\n        else:\n            private_key = self._get_private_key(keystore)\n        if not self.ca_file_path:\n            \"\"\"\n            Each member of the cert chain is a tuple (cert_type, cert_data)\n            We only need to write the data out to the .PEM file.\n\n            This usage is shown in the example in the README.md on github:\n            https://github.com/kurtbrose/pyjks\n            \"\"\"\n            self.ca_file_path = self._write_pem_file(\n                    get_coordinator_directory(),\n                    [cert[1] for cert in private_key.cert_chain], 'CERTIFICATE')\n\n        return self.ca_file_path\n\n    def _get_private_key(self, keystore):\n        all_keys = \", \".join(keystore.private_keys.keys())\n        try:\n            alias = env.conf[CERTIFICATE_ALIAS]\n        except KeyError:\n            error('Multiple keys found in %s. Set %s in %s. Available aliases are %s' %\n                  (self.coordinator_config.get_client_keystore_path(),\n                   CERTIFICATE_ALIAS, get_topology_path(), all_keys))\n\n        try:\n            return keystore.private_keys[alias]\n        except KeyError:\n            error('No alias %s found in %s. 
Available aliases are %s' %\n                  (alias, self.coordinator_config.get_client_keystore_path(),\n                   all_keys))\n\n    def _add_auth_headers(self, headers):\n        if self.coordinator_config.use_ldap():\n            auth_headers = self._create_auth_headers(\n                self.coordinator_config.get_ldap_user(),\n                self.coordinator_config.get_ldap_password())\n            headers.update(auth_headers)\n            _LOGGER.info(\"Using LDAP = %s\" % self.coordinator_config.use_ldap())\n\n    @staticmethod\n    def _create_auth_headers(user, password):\n        if not user:\n            error('LDAP user (taken from %s in %s on the coordinator) cannot be null or empty' %\n                  (LDAP_CLIENT_USER_KEY, os.path.join(REMOTE_CONF_DIR, CONFIG_PROPERTIES)))\n            return {}\n        if not password:\n            error('LDAP password (taken from %s in %s on the coordinator) cannot be null or empty' %\n                  (LDAP_CLIENT_PASSWORD_KEY, os.path.join(REMOTE_CONF_DIR, CONFIG_PROPERTIES)))\n            return {}\n\n        if ':' in user:\n            error(\"LDAP user cannot contain ':': %s\" % user)\n\n        # base64 encode the username and password\n        auth = base64.encodestring('%s:%s' % (user, password)).replace('\\n', '')\n        return {'Authorization': 'Basic %s' % auth}\n"
  },
  {
    "path": "prestoadmin/server.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for installing, monitoring, and controlling presto server\nusing presto-admin\n\"\"\"\nimport cgi\nimport logging\nimport re\nimport sys\nimport urllib2\nimport urlparse\nfrom contextlib import closing\n\nfrom fabric.api import task, sudo, env\nfrom fabric.context_managers import settings, hide\nfrom fabric.decorators import runs_once, with_settings, parallel\nfrom fabric.operations import run, os\nfrom fabric.tasks import execute\nfrom fabric.utils import warn, error, abort\nfrom retrying import retry, RetryError\n\nimport util.filesystem\nfrom prestoadmin import catalog\nfrom prestoadmin import configure_cmds\nfrom prestoadmin import package\nfrom prestoadmin.prestoclient import PrestoClient\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.exception import ConfigFileNotFoundError, ConfigurationError\nfrom prestoadmin.util.fabricapi import get_host_list, get_coordinator_role\nfrom prestoadmin.util.local_config_util import get_catalog_directory\nfrom prestoadmin.util.remote_config_util import lookup_port, \\\n    lookup_server_log_file, lookup_launcher_log_file, lookup_string_config\nfrom prestoadmin.util.version_util import VersionRange, VersionRangeList, \\\n    split_version, strip_tag\n\n__all__ = ['install', 'uninstall', 
'upgrade', 'start', 'stop', 'restart',\n           'status']\n\nINIT_SCRIPTS = '/etc/init.d/presto'\nRETRY_TIMEOUT = 120\nSYSTEM_RUNTIME_NODES = 'select * from system.runtime.nodes'\n\n\ndef old_sysnode_processor(node_info_rows):\n    def old_transform(node_is_active):\n        return 'active' if node_is_active else 'inactive'\n\n    return get_sysnode_info_from(node_info_rows, old_transform)\n\n\ndef new_sysnode_processor(node_info_rows):\n    return get_sysnode_info_from(node_info_rows, lambda x: x)\n\n\nNODE_INFO_PER_URI_SQL = VersionRangeList(\n    VersionRange((0, 0), (0, 128),\n                 ('select http_uri, node_version, active from '\n                  'system.runtime.nodes where '\n                  'url_extract_host(http_uri) = \\'%s\\'',\n                  old_sysnode_processor)),\n    VersionRange((0, 128), (sys.maxsize,),\n                 ('select http_uri, node_version, state from '\n                  'system.runtime.nodes where '\n                  'url_extract_host(http_uri) = \\'%s\\'',\n                  new_sysnode_processor))\n)\n\nEXTERNAL_IP_SQL = 'select url_extract_host(http_uri) from ' \\\n                  'system.runtime.nodes WHERE node_id = \\'%s\\''\nCATALOG_INFO_SQL = 'select catalog_name from system.metadata.catalogs'\n_LOGGER = logging.getLogger(__name__)\n\nDOWNLOAD_DIRECTORY = '/tmp'\nDEFAULT_RPM_NAME = 'presto-server-rpm.rpm'\nLATEST_RPM_URL = 'https://repository.sonatype.org/service/local/artifact/maven' \\\n                 '/content?r=central-proxy&g=com.facebook.presto' \\\n                 '&a=presto-server-rpm&e=rpm&v=RELEASE'\n\n\nclass LocalPrestoRpmFinder:\n    def __init__(self, local_path):\n        self.local_path = local_path\n\n    @staticmethod\n    def _check_rpm_uncorrupted(rpm_path):\n        # package.check_if_valid_rpm() outputs information that is not applicable\n        # to this function\n        # stderr is redirected to not be displayed and should be restored at the\n        # end of the function to 
behave as expected later\n        old_stderr = sys.stderr\n        sys.stderr = open(os.devnull, 'w')\n        try:\n            package.check_if_valid_rpm(rpm_path)\n        except SystemExit:\n            try:\n                os.remove(rpm_path)\n                warn('Removed corrupted rpm at: %s' % rpm_path)\n            except OSError:\n                pass\n            return False\n        finally:\n            sys.stderr = old_stderr\n\n        return True\n\n    def _check_if_absolute_path(self):\n        if os.path.isfile(self.local_path) and self._check_rpm_uncorrupted(self.local_path):\n            print('Found existing rpm at: %s' % self.local_path)\n            return self.local_path\n        else:\n            return None\n\n    def _check_if_relative_path(self, directory_path):\n        path_relative_to_download_dir = os.path.join(directory_path, self.local_path)\n        if os.path.isfile(path_relative_to_download_dir) and self._check_rpm_uncorrupted(path_relative_to_download_dir):\n            print('Found existing rpm at: %s' % path_relative_to_download_dir)\n            return path_relative_to_download_dir\n        else:\n            return None\n\n    def find_local_presto_rpm(self):\n        rpm_at_absolute_path = self._check_if_absolute_path()\n        if rpm_at_absolute_path:\n            return rpm_at_absolute_path\n\n        rpm_at_relative_path = self._check_if_relative_path(DOWNLOAD_DIRECTORY)\n        if rpm_at_relative_path:\n            return rpm_at_relative_path\n\n        return None\n\n\nclass UrlHandler:\n    def __init__(self, url):\n        self.url = url\n        self.url_response = None\n        try:\n            self.url_response = urllib2.urlopen(self.url)\n        except urllib2.HTTPError as e:\n            _LOGGER.error('Url %s responded with code %s' % (url, e.code))\n            raise\n\n    def __enter__(self):\n        return self\n\n    def __exit__(self, exc_type, exc_value, traceback):\n        self.close_url()\n\n 
   def get_url(self):\n        return self.url_response.geturl()\n\n    def get_content_length(self):\n        try:\n            headers = self.url_response.info()\n            return int(headers['Content-Length'])\n        except (KeyError, ValueError):\n            # Handle the case when the server does not include\n            # the 'Content-Length' header\n            return None\n\n    def get_download_file_name(self, version=None):\n        try:\n            headers = self.url_response.info()\n            content_disposition = headers['Content-Disposition']\n            values, params = cgi.parse_header(content_disposition)\n            return params['filename']\n        except KeyError:\n            # Handle the case when the server does not include\n            # the 'Content-Disposition' header\n            if not version:\n                return DEFAULT_RPM_NAME\n            else:\n                return 'presto-server-rpm-' + version + '.rpm'\n\n    def read_block(self, block_size):\n        return self.url_response.read(block_size)\n\n    def close_url(self):\n        if self.url_response:\n            self.url_response.close()\n\n\nclass PrestoRpmDownloader:\n    def __init__(self, url_handler):\n        self.url_handler = url_handler\n\n    def download_rpm(self, version=None):\n        content_length = self.url_handler.get_content_length()\n        download_file_path = self.get_download_file_path(version)\n\n        with open(download_file_path, 'wb') as local_file:\n            bytes_read = 0\n            block_size = 16 * 1024 * 1024\n            while True:\n                download_buffer = self.url_handler.read_block(block_size)\n                if not download_buffer:\n                    break\n                bytes_read += len(download_buffer)\n                local_file.write(download_buffer)\n                self.print_download_status(bytes_read, content_length)\n            print(\"Downloaded %d bytes\" % bytes_read)\n\n        print('Rpm 
downloaded to: %s' % download_file_path)\n        return download_file_path\n\n    def get_download_file_path(self, version=None):\n        return os.path.join(DOWNLOAD_DIRECTORY, self.url_handler.get_download_file_name(version))\n\n    @staticmethod\n    def print_download_status(bytes_read, content_length):\n        if content_length:\n            percent = float(bytes_read) / content_length\n            percent = round(percent * 100, 2)\n            print('Downloaded %d of %d bytes (%0.2f%%)' %\n                  (bytes_read, content_length, percent))\n\n\nclass PrestoRpmFetcher:\n    def __init__(self, rpm_specifier):\n        self.rpm_specifier = rpm_specifier\n\n    def check_valid_version(self):\n        return re.match('^[0-9]+(\\.[0-9]+){0,2}$', self.rpm_specifier)\n\n    @staticmethod\n    def _find_or_download_latest_presto_rpm():\n        return PrestoRpmFetcher.find_or_download_rpm_by_url(LATEST_RPM_URL)\n\n    def use_rpm_specifier_as_latest(self):\n        print('Using rpm_specifier as \"latest\"\\n'\n              'Fetching the latest presto rpm')\n        return self._find_or_download_latest_presto_rpm()\n\n    def _find_or_download_rpm_by_version(self, rpm_version):\n        # See here for more information: http://search.maven.org/#api\n        download_url = 'http://search.maven.org/remotecontent?filepath=com/facebook/presto/' \\\n                       'presto-server-rpm/' + rpm_version + '/presto-server-rpm-' + \\\n                       rpm_version + '.rpm'\n        return self.find_or_download_rpm_by_url(download_url, rpm_version)\n\n    def use_rpm_specifier_as_version(self):\n        print('Using rpm_specifier as a version\\n'\n              'Fetching presto rpm version %s' % self.rpm_specifier)\n        return self._find_or_download_rpm_by_version(self.rpm_specifier)\n\n    def use_rpm_specifier_as_url(self):\n        print('Using rpm_specifier as a url\\n'\n              'Fetching presto rpm at url: %s' % self.rpm_specifier)\n        
return self.find_or_download_rpm_by_url(self.rpm_specifier)\n\n    def use_rpm_specifier_as_local_path(self):\n        print('Using rpm_specifier as a local path\\n'\n              'Fetching local presto rpm at path: %s' % self.rpm_specifier)\n        local_finder = LocalPrestoRpmFinder(self.rpm_specifier)\n        return local_finder.find_local_presto_rpm()\n\n    @staticmethod\n    def find_or_download_rpm_by_url(url, version=None):\n        \"\"\"\n        Args:\n            url:      The url of the presto rpm to be downloaded.\n            version:  An optional version number.\n                      If the server doesn't respond with the file name that is being\n                      requested, this allows the downloaded file to have the correct\n                      version attached to its name (presto-server-rpm-'version'.rpm)\n                      rather than the default name\n\n        If downloading the presto rpm at the given url would overwrite an existing rpm,\n        this function returns the path to the existing rpm. However, if the rpm that\n        would be downloaded takes the default rpm name, it will overwrite the existing\n        rpm because there is no way to know if the default rpm name is of the same version\n        as the requested rpm. 
If the rpm is corrupted, this function will remove the corrupted\n        rpm and attempt to download it.\n\n        Returns:\n            The path to the downloaded or found presto rpm\n        \"\"\"\n        with UrlHandler(url) as url_handler:\n            downloader = PrestoRpmDownloader(url_handler)\n            download_file_path = downloader.get_download_file_path(version)\n            local_finder = LocalPrestoRpmFinder(download_file_path)\n            local_rpm_path = local_finder.find_local_presto_rpm()\n            if local_rpm_path and os.path.basename(local_rpm_path) != DEFAULT_RPM_NAME:\n                print('Found and using local presto rpm at path: %s\\n'\n                      'Delete the existing rpm to force a new download' % local_rpm_path)\n                return local_rpm_path\n            elif local_rpm_path:\n                print('Found local presto rpm at path: %s\\n'\n                      'The rpm has the default name, so it will not be used' % local_rpm_path)\n            print('Downloading rpm from %s\\n'\n                  'to %s\\n'\n                  'This can take a few minutes' % (url_handler.get_url(), download_file_path))\n            return downloader.download_rpm(version)\n\n    def get_path_to_presto_rpm(self):\n        \"\"\"\n        This function finds and downloads (if necessary) the requested presto rpm, which can be\n        figured out from the rpm_specifier. The rpm_specifier can take many forms:\n        'latest', url, version, and local path (from highest to lowest precedence). 
This function\n        interprets the rpm_specifier only as the highest precedence form.\n        \"\"\"\n        scheme, netloc, path, parameters, query, fragment = urlparse.urlparse(self.rpm_specifier)\n        if self.rpm_specifier == \"latest\":\n            rpm_path = self.use_rpm_specifier_as_latest()\n        elif scheme != '' and scheme != 'file':\n            rpm_path = self.use_rpm_specifier_as_url()\n        elif self.check_valid_version():\n            rpm_path = self.use_rpm_specifier_as_version()\n        else:\n            rpm_path = self.use_rpm_specifier_as_local_path()\n\n        if not rpm_path:\n            abort('Unable to find or download presto rpm with specifier %s' % self.rpm_specifier)\n        else:\n            return rpm_path\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\ndef install(rpm_specifier):\n    \"\"\"\n    Copy and install the presto-server rpm to all the nodes in the cluster and\n    configure the nodes.\n\n    The topology information will be read from the config.json file. If this\n    file is missing, then the coordinator and workers will be obtained\n    interactively. Install will fail for invalid json configuration.\n\n    The catalog configurations will be read from the local catalog directory\n    which defaults to ~/.prestoadmin/catalog. If this directory is missing or empty\n    then no catalog configuration is deployed.\n\n    Install will fail for incorrectly formatted configuration files. Expected\n    format is key=value for .properties files and one option per line for\n    jvm.config\n\n    Parameters:\n        rpm_specifier - String specifying location of presto rpm to copy and install\n                        to nodes in the cluster. The string can specify a presto rpm\n                        in the following ways:\n\n                        1.  'latest' to download the latest release\n                        2.  Url to download\n                        3.  
Version number to download\n                        4.  Path to a local copy\n\n                        If rpm_specifier matches multiple forms, it is interpreted as the form with\n                        highest precedence. The forms are listed from highest to lowest precedence\n                        (going top to bottom) For example, if the rpm_specifier matches the criteria\n                        to be a url to download, it will be interpreted as such and will never be\n                        interpreted as a version number or a local path.\n\n                        Before downloading an rpm, install will attempt to find a local\n                        copy with a matching version number to the requested rpm. If such\n                        a match is found, it will use the local copy instead of downloading\n                        the rpm again.\n\n        --nodeps -      (optional) Flag to indicate if server install\n                        should ignore checking Presto rpm package\n                        dependencies. 
Equivalent to adding --nodeps\n                        flag to rpm -i.\n    \"\"\"\n    rpm_fetcher = PrestoRpmFetcher(rpm_specifier)\n    path_to_rpm = rpm_fetcher.get_path_to_presto_rpm()\n    package.check_if_valid_rpm(path_to_rpm)\n    return execute(deploy_install_configure, path_to_rpm, hosts=get_host_list())\n\n\ndef deploy_install_configure(local_path):\n    package.deploy_install(local_path)\n    update_configs()\n    wait_for_presto_user()\n\n\ndef add_tpch_catalog():\n    tpch_catalog_config = os.path.join(get_catalog_directory(), 'tpch.properties')\n    util.filesystem.write_to_file_if_not_exists('connector.name=tpch', tpch_catalog_config)\n\n\ndef update_configs():\n    configure_cmds.deploy()\n\n    add_tpch_catalog()\n    try:\n        catalog.add()\n    except ConfigFileNotFoundError:\n        _LOGGER.info('No catalog directory found, not adding catalogs.')\n\n\n@retry(stop_max_delay=3000, wait_fixed=250)\ndef wait_for_presto_user():\n    ret = sudo('getent passwd presto', quiet=True)\n    if not ret.succeeded:\n        raise Exception('Presto package was not installed successfully. '\n                        'Presto user was not created.')\n\n\n@task\n@requires_config(StandaloneConfig)\ndef uninstall():\n    \"\"\"\n    Uninstall Presto after stopping the services on all nodes\n\n    Parameters:\n        --nodeps -              (optional) Flag to indicate if server uninstall\n                                should ignore checking Presto rpm package\n                                dependencies. 
Equivalent to adding --nodeps\n                                flag to rpm -e.\n    \"\"\"\n    stop()\n\n    if package.is_rpm_installed('presto'):\n        package.rpm_uninstall('presto')\n    elif package.is_rpm_installed('presto-server'):\n        package.rpm_uninstall('presto-server')\n    elif package.is_rpm_installed('presto-server-rpm'):\n        package.rpm_uninstall('presto-server-rpm')\n    else:\n        abort('Unable to uninstall package on: ' + env.host)\n\n\n@task\n@requires_config(StandaloneConfig)\ndef upgrade(new_rpm_path, local_config_dir=None, overwrite=False):\n    \"\"\"\n    Copy and upgrade a new presto-server rpm to all of the nodes in the\n    cluster. Retains existing node configuration.\n\n    The existing topology information is read from the config.json file.\n    Unlike install, there is no provision to supply topology information\n    interactively.\n\n    The existing cluster configuration is collected from the nodes on the\n    cluster and stored on the host running presto-admin. After the\n    presto-server packages have been upgraded, presto-admin pushes the\n    collected configuration back out to the hosts on the cluster.\n\n    Note that the configuration files in the presto-admin configuration\n    directory are not updated during upgrade.\n\n    :param new_rpm_path -       The path to the new Presto RPM to\n                                install\n    :param local_config_dir -   (optional) Directory to store the cluster\n                                configuration in. If not specified, a temp\n                                directory is used.\n    :param overwrite -          (optional) If set to True, the existing\n                                configuration will be overwritten.\n\n    :param --nodeps -           (optional) Flag to indicate if server upgrade\n                                should ignore checking Presto rpm package\n                                dependencies. 
Equivalent to adding --nodeps\n                                flag to rpm -U.\n    \"\"\"\n    stop()\n\n    temp_config_tar = configure_cmds.gather_config_directory()\n\n    package.deploy_upgrade(new_rpm_path)\n\n    configure_cmds.deploy_config_directory(temp_config_tar)\n\n\ndef service(control=None):\n    if check_presto_version() != '':\n        return False\n    if control == 'start' and is_port_in_use(env.host):\n        return False\n    _LOGGER.info('Executing %s on presto server' % control)\n    ret = sudo('set -m; ' + INIT_SCRIPTS + ' ' + control)\n    return ret.succeeded\n\n\ndef check_status_for_control_commands():\n    print('Waiting to make sure we can connect to the Presto server on %s, '\n          'please wait. This check will time out after %d minutes if the '\n          'server does not respond.'\n          % (env.host, (RETRY_TIMEOUT / 60)))\n    if check_server_status():\n        print('Server started successfully on: ' + env.host)\n    else:\n        warn('Could not verify server status for: ' + env.host +\n             '\\nThis could mean that the server failed to start or that there was no coordinator or worker up. '\n             'Please check ' + lookup_server_log_file(env.host) + ' and ' +\n             lookup_launcher_log_file(env.host))\n\n\ndef is_port_in_use(host):\n    _LOGGER.info(\"Checking if the port used by the Presto server is already in use...\")\n    try:\n        portnum = lookup_port(host)\n    except Exception:\n        _LOGGER.info(\"Cannot find port from config.properties. \"\n                     \"Skipping check for port already being used\")\n        return 0\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        output = run('netstat -ln |grep -E \"\\<%s\\>\" |grep LISTEN' % str(portnum))\n    if output:\n        _LOGGER.info(\"Presto server port already in use. Skipping \"\n                     \"server start...\")\n        error('Server failed to start on %s. 
Port %s already in use'\n              % (env.host, str(portnum)))\n    return output\n\n\n@task\n@requires_config(StandaloneConfig)\ndef start():\n    \"\"\"\n    Start the Presto server on all nodes\n\n    A status check is performed on the entire cluster and a list of\n    servers that did not start, if any, are reported at the end.\n    \"\"\"\n    if service('start'):\n        check_status_for_control_commands()\n\n\n@task\n@requires_config(StandaloneConfig)\ndef stop():\n    \"\"\"\n    Stop the Presto server on all nodes\n    \"\"\"\n    service('stop')\n\n\ndef stop_and_start():\n    if check_presto_version() != '':\n        return False\n    sudo('set -m; ' + INIT_SCRIPTS + ' stop')\n    if is_port_in_use(env.host):\n        return False\n    _LOGGER.info('Executing start on presto server')\n    ret = sudo('set -m; ' + INIT_SCRIPTS + ' start')\n    return ret.succeeded\n\n\n@task\n@requires_config(StandaloneConfig)\ndef restart():\n    \"\"\"\n    Restart the Presto server on all nodes.\n\n    A status check is performed on the entire cluster and a list of\n    servers that did not start, if any, are reported at the end.\n    \"\"\"\n    if stop_and_start():\n        check_status_for_control_commands()\n\n\ndef check_presto_version():\n    \"\"\"\n    Checks that the Presto version is suitable.\n\n    Returns:\n        Error string if applicable\n    \"\"\"\n    if not presto_installed():\n        not_installed_str = 'Presto is not installed.'\n        warn(not_installed_str)\n        return not_installed_str\n\n    return ''\n\n\ndef presto_installed():\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        package_search = run('rpm -q presto')\n        if not package_search.succeeded:\n            package_search = run('rpm -q presto-server-rpm')\n        return package_search.succeeded\n\n\ndef get_presto_version():\n    with settings(hide('warnings', 'stdout'), warn_only=True):\n        version = run('rpm -q --qf 
\\\"%{VERSION}\\\\n\\\" presto')\n        # currently we have two rpm names out so we need this retry\n        if not version.succeeded:\n            version = run('rpm -q --qf \\\"%{VERSION}\\\\n\\\" presto-server-rpm')\n        version = version.strip()\n        _LOGGER.debug('Presto rpm version: ' + version)\n        return version\n\n\ndef check_server_status():\n    \"\"\"\n    Checks if server is running for env.host. Retries connecting to server\n    until server is up or till RETRY_TIMEOUT is reached\n\n    Parameters:\n        client - client that executes the query\n\n    Returns:\n        True or False\n    \"\"\"\n    if len(get_coordinator_role()) < 1:\n        warn('No coordinator defined.  Cannot verify server status.')\n    with closing(PrestoClient(get_coordinator_role()[0], env.user)) as client:\n        node_id = lookup_string_config('node.id', os.path.join(constants.REMOTE_CONF_DIR, 'node.properties'), env.host)\n\n        try:\n            return query_server_for_status(client, node_id)\n        except RetryError:\n            return False\n\n\n@retry(stop_max_delay=RETRY_TIMEOUT * 1000, wait_fixed=5000, retry_on_result=lambda result: result is False)\ndef query_server_for_status(client, node_id):\n    try:\n        rows = client.run_sql(SYSTEM_RUNTIME_NODES)\n        if rows is not None:\n            return _is_in_rows(node_id, rows)\n    except ConfigurationError as e:\n        _LOGGER.warn(e)\n    return False\n\n\ndef _is_in_rows(value, rows):\n    for row in rows:\n        if value in row:\n            return True\n    return False\n\n\ndef execute_catalog_info_sql(client):\n    \"\"\"\n    Returns [[catalog_name], [catalog_2]..] 
from the catalogs system table\n\n    Parameters:\n        client - client that executes the query\n    \"\"\"\n    return client.run_sql(CATALOG_INFO_SQL)\n\n\ndef execute_external_ip_sql(client, uuid):\n    \"\"\"\n    Returns the external IP of the host with the given uuid, parsed from the\n    http_uri column of the nodes system table\n\n    Parameters:\n        client - client that executes the query\n        uuid - node_id of the node\n    \"\"\"\n    return client.run_sql(EXTERNAL_IP_SQL % uuid)\n\n\ndef get_sysnode_info_from(node_info_row, state_transform):\n    \"\"\"\n    Returns a system node info dict from the node info row for a node\n\n    Parameters:\n        node_info_row - row from the nodes system table containing the node\n                        URI, version, and state\n        state_transform - function applied to the state column to compute\n                          the reported node status\n\n    Returns:\n        Node info dict in format:\n        {'http://node1/statement': [presto-main:0.97-SNAPSHOT, True]}\n    \"\"\"\n    output = {}\n    for row in node_info_row:\n        if row:\n            output[row[0]] = [row[1], state_transform(row[2])]\n\n    _LOGGER.info('Node info: %s ', output)\n    return output\n\n\ndef get_catalog_info_from(client):\n    \"\"\"\n    Returns installed catalogs\n\n    Parameters:\n        client - client that executes the query\n\n    Returns:\n        comma-delimited catalogs, e.g.: tpch, hive, system\n    \"\"\"\n    syscatalog = []\n    catalog_info = execute_catalog_info_sql(client)\n    for conn_info in catalog_info:\n        if conn_info:\n            syscatalog.append(conn_info[0])\n    return ', '.join(syscatalog)\n\n\ndef is_server_up(status):\n    if status:\n        return 'Running'\n    else:\n        return 'Not Running'\n\n\ndef get_roles_for(host):\n    roles = []\n    for role in ['coordinator', 'worker']:\n        if host in env.roledefs[role]:\n            roles.append(role)\n    return roles\n\n\ndef print_node_info(node_status, catalog_status):\n    for k in node_status:\n        print('\\tNode URI(http): ' + str(k) +\n              '\\n\\tPresto Version: ' + str(node_status[k][0]) +\n              '\\n\\tNode status:    ' + 
str(node_status[k][1]))\n        if catalog_status:\n            print('\\tCatalogs:     ' + catalog_status)\n\n\ndef get_ext_ip_of_node(client):\n    node_properties_file = os.path.join(constants.REMOTE_CONF_DIR,\n                                        'node.properties')\n    with settings(hide('stdout')):\n        node_uuid = sudo('sed -n s/^node.id=//p ' + node_properties_file)\n    external_ip_row = execute_external_ip_sql(client, node_uuid)\n    external_ip = ''\n    if len(external_ip_row) > 1:\n        warn_more_than_one_ip = 'More than one external ip found for ' + env.host + \\\n                                '. There could be multiple nodes associated with the same node.id'\n        _LOGGER.debug(warn_more_than_one_ip)\n        warn(warn_more_than_one_ip)\n        return external_ip\n    for row in external_ip_row:\n        if row:\n            external_ip = row[0]\n    if not external_ip:\n        _LOGGER.debug('Cannot get external IP for ' + env.host)\n        external_ip = 'Unknown'\n    return external_ip\n\n\ndef print_status_header(external_ip, server_status, host):\n    print('Server Status:')\n    print('\\t%s(IP: %s, Roles: %s): %s' % (host, external_ip,\n                                           ', '.join(get_roles_for(host)),\n                                           is_server_up(server_status)))\n\n\n@parallel\ndef collect_node_information():\n    with closing(PrestoClient(get_coordinator_role()[0], env.user)) as client:\n        with settings(hide('warnings')):\n            error_message = check_presto_version()\n        if error_message:\n            external_ip = 'Unknown'\n            is_running = False\n        else:\n            with settings(hide('warnings', 'aborts', 'stdout')):\n                try:\n                    external_ip = get_ext_ip_of_node(client)\n                except:\n                    external_ip = 'Unknown'\n                try:\n                    is_running = service('status')\n                except:\n   
                 is_running = False\n        return external_ip, is_running, error_message\n\n\ndef get_status_from_coordinator():\n    with closing(PrestoClient(get_coordinator_role()[0], env.user)) as client:\n        try:\n            coordinator_status = client.run_sql(SYSTEM_RUNTIME_NODES)\n            catalog_status = get_catalog_info_from(client)\n        except BaseException as e:\n            # Just log errors that come from a missing port or anything else; if\n            # we can't connect to the coordinator, we just want to print out a\n            # minimal status anyway.\n            _LOGGER.warn(e.message)\n            coordinator_status = []\n            catalog_status = []\n\n        with settings(hide('running')):\n            node_information = execute(collect_node_information,\n                                       hosts=get_host_list())\n\n        for host in get_host_list():\n            if isinstance(node_information[host], Exception):\n                external_ip = 'Unknown'\n                is_running = False\n                error_message = node_information[host].message\n            else:\n                (external_ip, is_running, error_message) = node_information[host]\n\n            print_status_header(external_ip, is_running, host)\n            if error_message:\n                print('\\t' + error_message)\n            elif not coordinator_status:\n                print('\\tNo information available: unable to query coordinator')\n            elif not is_running:\n                print('\\tNo information available')\n            else:\n                version_string = get_presto_version()\n                version = strip_tag(split_version(version_string))\n                query, processor = NODE_INFO_PER_URI_SQL.for_version(version)\n                # just get the node_info row for the host if server is up\n                node_info_row = client.run_sql(query % external_ip)\n                node_status = processor(node_info_row)\n     
           if node_status:\n                    print_node_info(node_status, catalog_status)\n                else:\n                    print('\\tNo information available: the coordinator has not yet'\n                          ' discovered this node')\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\n@with_settings(hide('warnings'))\ndef status():\n    \"\"\"\n    Print the status of presto in the cluster\n    \"\"\"\n    get_status_from_coordinator()\n"
  },
  {
    "path": "prestoadmin/standalone/__init__.py",
    "content": ""
  },
  {
    "path": "prestoadmin/standalone/config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for setting and validating the presto-admin config\n\"\"\"\nimport re\n\nfrom fabric.api import env\nfrom overrides import overrides\n\nimport prestoadmin.util.fabricapi as util\nfrom prestoadmin import config\nfrom prestoadmin.prestoclient import CERTIFICATE_ALIAS\nfrom prestoadmin.util.base_config import BaseConfig, SingleConfigItem\nfrom prestoadmin.util.exception import ConfigurationError\nfrom prestoadmin.util.local_config_util import get_topology_path\nfrom prestoadmin.util.validators import validate_username, validate_port, \\\n    validate_host\n\n# Created by the presto-server RPM package.\nPRESTO_STANDALONE_USER = 'presto'\nPRESTO_STANDALONE_GROUP = 'presto'\nPRESTO_STANDALONE_USER_GROUP = \"%s:%s\" % (PRESTO_STANDALONE_USER,\n                                          PRESTO_STANDALONE_GROUP)\n\n# CONFIG KEYS\nUSERNAME = 'username'\nPORT = 'port'\nCOORDINATOR = 'coordinator'\nWORKERS = 'workers'\n\nSTANDALONE_CONFIG_LOADED = 'standalone_config_loaded'\n\nPRESTO_ADMIN_PROPERTIES = ['username', 'port', 'coordinator', 'workers',\n                           'java8_home', CERTIFICATE_ALIAS]\n\nDEFAULT_PROPERTIES = {USERNAME: 'root',\n                      PORT: 22,\n                      COORDINATOR: 'localhost',\n                      WORKERS: ['localhost']}\n\n\ndef validate_coordinator(coordinator):\n    validate_host(coordinator)\n    return 
coordinator\n\n\ndef validate_workers_for_prompt(workers):\n    return validate_workers(workers.split())\n\n\n_TOPOLOGY_CONFIG = [\n    SingleConfigItem(\n        USERNAME, 'Enter user name for SSH connection to all nodes:',\n        default=DEFAULT_PROPERTIES[USERNAME], validate=validate_username),\n    SingleConfigItem(\n        PORT, 'Enter port number for SSH connections to all nodes:',\n        default=DEFAULT_PROPERTIES['port'], validate=validate_port),\n    SingleConfigItem(\n        COORDINATOR,\n        'Enter host name or IP address for coordinator node. '\n        'Enter an external host name or ip address if this is a multi-node '\n        'cluster:',\n        default=DEFAULT_PROPERTIES['coordinator'],\n        validate=validate_coordinator),\n    SingleConfigItem(\n        WORKERS,\n        'Enter host names or IP addresses for worker nodes separated by spaces:',\n        default=' '.join(DEFAULT_PROPERTIES['workers']),\n        validate=validate_workers_for_prompt)\n]\n\n\ndef validate_java8_home(java8_home):\n    return java8_home\n\n\ndef validate(conf):\n    for key in conf.keys():\n        if key not in PRESTO_ADMIN_PROPERTIES:\n            raise ConfigurationError('Invalid property: ' + key)\n\n    try:\n        username = conf['username']\n    except KeyError:\n        pass\n    else:\n        conf['username'] = validate_username(username)\n\n    try:\n        java8_home = conf['java8_home']\n    except KeyError:\n        pass\n    else:\n        conf['java8_home'] = validate_java8_home(java8_home)\n\n    try:\n        coordinator = conf['coordinator']\n    except KeyError:\n        pass\n    else:\n        conf['coordinator'] = validate_coordinator(coordinator)\n\n    try:\n        workers = conf['workers']\n    except KeyError:\n        pass\n    else:\n        workers = [h for host in workers for h in _expand_host(host)]\n        conf['workers'] = validate_workers(workers)\n\n    try:\n        port = conf['port']\n    except KeyError:\n       
 pass\n    else:\n        conf['port'] = validate_port(port)\n    return conf\n\n\ndef validate_workers(workers):\n    if not isinstance(workers, list):\n        raise ConfigurationError('Workers must be of type list.  Found ' +\n                                 str(type(workers)) + '.')\n\n    if len(workers) < 1:\n        raise ConfigurationError('Must specify at least one worker')\n\n    for worker in workers:\n        validate_host(worker)\n    return workers\n\n\ndef _expand_host(host):\n    match = re.match(\"(.*)\\[(\\d{1,})-(\\d{1,})\\](.*)\", host)\n    if match is not None and len(match.groups()) == 4:\n        prefix, start, end, suffix = match.groups()\n        if int(start) > int(end):\n            raise ValueError(\"the range must be in ascending order\")\n        if len(start) == len(end) and len(start) > 1:\n            number_format = \"{0:0\" + str(len(start)) + \"d}\"\n            host_list = [number_format.format(i) for i in range(int(start), int(end) + 1)]\n            return [_format_hostname(prefix, i, suffix) for i in host_list]\n        else:\n            return [_format_hostname(prefix, i, suffix) for i in range(int(start), int(end) + 1)]\n    else:\n        return [host]\n\n\ndef _format_hostname(prefix, number, suffix):\n    return \"{prefix}{num}{suffix}\".format(prefix=prefix, num=number, suffix=suffix)\n\n\nclass StandaloneConfig(BaseConfig):\n    def __init__(self):\n        super(StandaloneConfig, self).__init__(get_topology_path(), _TOPOLOGY_CONFIG)\n\n    @overrides\n    def read_conf(self):\n        conf = self._get_conf_from_file()\n        config.fill_defaults(conf, DEFAULT_PROPERTIES)\n        validate(conf)\n        return conf\n\n    def _get_conf_from_file(self):\n        return config.get_conf_from_json_file(self.config_path)\n\n    @overrides\n    def is_config_loaded(self):\n        return STANDALONE_CONFIG_LOADED in env and env[STANDALONE_CONFIG_LOADED]\n\n    @overrides\n    def set_config_loaded(self):\n        
env[STANDALONE_CONFIG_LOADED] = True\n\n    @overrides\n    def set_env_from_conf(self, conf):\n        self.config = conf\n        env.user = conf['username']\n        env.port = conf['port']\n        try:\n            env.java8_home = conf['java8_home']\n        except KeyError:\n            env.java8_home = None\n        env.roledefs['coordinator'] = [conf['coordinator']]\n        env.roledefs['worker'] = conf['workers']\n        env.roledefs['all'] = self._dedup_list(util.get_coordinator_role() +\n                                               util.get_worker_role())\n\n        env.hosts = env.roledefs['all'][:]\n        env.conf = conf\n\n    @staticmethod\n    def _dedup_list(host_list):\n        deduped_list = []\n        for item in host_list:\n            if item not in deduped_list:\n                deduped_list.append(item)\n        return deduped_list\n"
  },
  {
    "path": "prestoadmin/topology.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for setting and validating the presto-admin config\n\"\"\"\nimport pprint\n\nfrom fabric.api import env, runs_once, task\n\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.base_config import requires_config\n\nimport prestoadmin.util.fabricapi as util\n\n\n@task\n@runs_once\n@requires_config(StandaloneConfig)\ndef show():\n    \"\"\"\n    Shows the current topology configuration for the cluster (including the\n    coordinators, workers, SSH port, and SSH username)\n    \"\"\"\n    pprint.pprint(get_conf_from_fabric(), width=1)\n\n\ndef get_conf_from_fabric():\n    return {'coordinator': util.get_coordinator_role()[0],\n            'workers': util.get_worker_role(),\n            'port': env.port,\n            'username': env.user}\n"
  },
  {
    "path": "prestoadmin/util/__init__.py",
    "content": ""
  },
  {
    "path": "prestoadmin/util/all_write_handler.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom logging import handlers\nimport os\n\n\nclass AllWriteTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):\n    def _open(self):\n        prev_umask = os.umask(000)\n        rotating_file_handler = handlers.TimedRotatingFileHandler._open(self)\n        os.umask(prev_umask)\n        return rotating_file_handler\n"
  },
  {
    "path": "prestoadmin/util/application.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Logic at the application level for logging and exception handling\"\"\"\n\nimport functools\nimport logging\nimport logging.config\nimport os\nimport sys\nimport traceback\n\nimport __main__ as main\n\nfrom prestoadmin import __version__\nfrom prestoadmin.util import constants\nfrom prestoadmin.util import filesystem\nfrom prestoadmin.util.exception import ExceptionWithCause\nfrom prestoadmin.util.local_config_util import get_log_directory\n\n# Normally this would use the logger for __name__, however, this is\n# effectively the \"root\" logger for the application.  If this code\n# were running directly in the executable script __name__ would be\n# set to '__main__', so we emulate that same behavior here.  This should\n# resolve to the same logger that will be used by the entry point script.\nlogger = logging.getLogger('__main__')\n\n\nclass Application(object):\n    \"\"\"\n    A generic application entry point.  Provides logging and exception handling\n    features.  This class is expected to be used as a base class for various\n    applications.\n\n    Parameters:\n        name - human readable name for the application\n        version - the version of the application, as a string\n        log_file_path - optional name of the log file including whatever\n        extension you may want to use.  
For example, 'foo.log' would create\n        a file called 'foo.log' in the default presto-admin logging directory\n        tree.\n    \"\"\"\n\n    def __init__(self, name, version=None, log_file_path=None):\n        self.name = str(name)\n        self.__log_file_path = log_file_path or (self.name + '.log')\n        if not os.path.isabs(self.__log_file_path):\n            self.__log_file_path = os.path.join(\n                get_log_directory(),\n                self.__log_file_path\n            )\n\n        self.version = version or __version__\n\n    def __enter__(self):\n        self.__configure_logging()\n        return self\n\n    def __configure_logging(self):\n        try:\n            for maybe_file_path in self.__logging_configuration_file_paths():\n                if not os.path.exists(maybe_file_path):\n                    continue\n                else:\n                    config_file_path = maybe_file_path\n\n                filesystem.ensure_parent_directories_exist(\n                    self.__log_file_path\n                )\n                logging.config.fileConfig(\n                    config_file_path,\n                    defaults={'log_file_path': self.__log_file_path},\n                    disable_existing_loggers=False\n                )\n\n                self.__log_application_start()\n                logger.debug(\n                    'Loaded logging configuration from %s',\n                    config_file_path\n                )\n                break\n        except Exception as e:\n            sys.stderr.write(\n                'Please run %s with sudo.\\n' % self.name\n            )\n            sys.stderr.flush()\n            sys.exit(str(e))\n\n    def __logging_configuration_file_paths(self):\n        # Current working directory\n        yield constants.LOGGING_CONFIG_FILE_NAME\n        # Application specific\n        yield (self.__log_file_path + '.ini')\n        yield (self.__main_module_path() + '.ini')\n        # Global 
locations\n        for dir_path in constants.LOGGING_CONFIG_FILE_DIRECTORIES:\n            yield os.path.join(dir_path, constants.LOGGING_CONFIG_FILE_NAME)\n\n    def __main_module_path(self):\n        return os.path.abspath(main.__file__)\n\n    def __log_application_start(self):\n        LOG_SEPARATOR = '**************************************************'\n\n        logger.debug(LOG_SEPARATOR)\n        logger.debug(\n            'Starting {name} {version}'.format(\n                name=self.name,\n                version=self.version\n            )\n        )\n        logger.debug(LOG_SEPARATOR)\n        logger.debug('raw arguments = {0}'.format(sys.argv))\n\n    def __exit__(self, exc_type, exception, trace):\n        self.exc_type = exc_type\n        self.exception = exception\n        self.trace = trace\n\n        try:\n            if exc_type is None:\n                self.__handle_no_exception()\n            elif exc_type == SystemExit:\n                self.__handle_system_exit()\n            else:\n                self._handle_error()\n                sys.exit(1)\n        finally:\n            self._exit_cleanup_hook()\n\n    def _exit_cleanup_hook(self):\n        logging.shutdown()\n\n    def __handle_no_exception(self):\n        logger.debug('Exiting normally')\n\n    def __handle_system_exit(self):\n        # Unfortunately a SystemExit can be raised with all kinds of\n        # wonky values.  
This code attempts to determine the actual\n        # exit status.\n        code = None\n        try:\n            # according to the docs a None value for this is equivalent\n            # to a 0 value.\n            if self.exception is None or self.exception.code is None:\n                code = 0\n            else:\n                code = int(self.exception.code)\n        except ValueError:\n            code = 1\n        except AttributeError:\n            # In Python 2.6, the exceptions are passed as strings sometimes.\n            # Thus exception.code gets an AttributeError.\n            try:\n                code = int(self.exception)\n            except ValueError:\n                code = 1\n        except Exception:\n            logger.exception(\"Unknown exception: %s\" % str(self.exception))\n\n        if code is not None:\n            if code != 0:\n                self._log_exception()\n            logger.debug('Application exiting with status %d', code)\n        else:\n            self._log_exception()\n        sys.exit(code)\n\n    def _handle_error(self):\n        self._log_exception()\n        self.__display_error_message(str(self.exception))\n\n    def __display_error_message(self, message):\n        log_file_path = self.__get_root_log_file_path()\n        error_message = ''\n        if log_file_path:\n            error_message += '  More detailed information can be found in '\n            error_message += log_file_path\n        print >> sys.stderr, message + error_message\n\n    def __get_root_log_file_path(self):\n        for handler in logging.root.handlers:\n            if isinstance(handler, logging.FileHandler):\n                return handler.baseFilename\n        return None\n\n    def _log_exception(self):\n        formatted_stack_trace = ''.join(\n            traceback.format_exception(\n                self.exc_type,\n                self.exception,\n                self.trace\n            ) + 
[ExceptionWithCause.get_cause_if_supported(self.exception)]\n        )\n\n        logger.error(\n            'Handling uncaught exception: {t}, \"{ex}\"\\n{tb}'.format(\n                t=self.exc_type,\n                ex=str(self.exception),\n                tb=formatted_stack_trace\n            )\n        )\n\n\ndef entry_point(name, version=None, log_file_path=None,\n                application_class=Application):\n    \"\"\"\n    A decorator for application entry points.  The decorated function will\n    be wrapped in an Application object and executed in that safe environment.\n    Note that decorating a function with this decorator will not actually\n    cause it to be invoked.  You must explicitly call the function in the\n    script.\n\n    Parameters:\n        name - human readable name for the application\n        version - the version of the application, as a string\n        log_file_path - optional name of the log file including whatever\n        extension you may want to use.  For example, 'foo.log' would create\n        a file called 'foo.log' in the default prestoadmin logging directory\n        tree.\n        application_class - Type of application to run. The default is\n        Application but there can be subclasses of that class.\n    \"\"\"\n\n    def application_decorator(method):\n        @functools.wraps(method)\n        def wrapped_application(*args, **kwargs):\n            with application_class(\n                    name,\n                    version=version,\n                    log_file_path=log_file_path\n            ):\n                return method(*args, **kwargs)\n\n        return wrapped_application\n\n    return application_decorator\n"
  },
  {
    "path": "prestoadmin/util/base_config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nModule for common configuration stuff.\n\"\"\"\n\nimport abc\n\nfrom functools import wraps\n\nfrom fabric.context_managers import settings\nfrom fabric.operations import prompt\n\nfrom prestoadmin import config\nfrom prestoadmin.config import ConfigFileNotFoundError\nfrom prestoadmin.util.exception import ConfigurationError\n\n\nclass SingleConfigItem(object):\n    def __init__(self, key, prompt, default=None, validate=None):\n        self.key = key\n        self.prompt = prompt\n        self.default = default\n        self.validate = validate\n\n    def prompt_user(self, conf):\n        conf[self.key] = prompt(self.prompt,\n                                default=conf.get(self.key, self.default),\n                                validate=self.validate)\n\n    def collect_prompts(self, l):\n        l.append((self.prompt, self.key))\n\n\nclass MultiConfigItem(object):\n    def __init__(self, items, validate, validate_keys,\n                 validate_failed_text):\n        self.items = items\n        self.validate = validate\n        self.validate_keys = validate_keys\n        self.validate_failed_text = validate_failed_text\n\n    def prompt_user(self, conf):\n        while True:\n            for item in self.items:\n                item.prompt_user(conf)\n\n            validate_args = [conf[k] for k in self.validate_keys]\n            if 
self.validate(*validate_args):\n                break\n            print (self.validate_failed_text % self.validate_keys) % conf\n\n    def collect_prompts(self, l):\n        for item in self.items:\n            item.collect_prompts(l)\n\n\ndef requires_config(config_class):\n    def wrap(func):\n        config_instance = config_class()\n        func.pa_config_callback = config_instance.get_config\n\n        @wraps(func)\n        def wrapper(*args, **kwargs):\n            if not config_instance.is_config_loaded():\n                raise ConfigurationError('Required config not loaded at task '\n                                         'execution time.')\n            return func(*args, **kwargs)\n        return wrapper\n    return wrap\n\n\nclass BaseConfig(object):\n    '''\n    BaseConfig provides the common config functionality for loading\n    configuration files for presto-admin and going through the interactive\n    config process if a config file isn't present.\n\n    Instances of classes that subclass BaseConfig are intended to be used with\n    the @requires_config decorator, which is responsible for adding an\n    attribute to the task that tells main() how to load the configuration\n    and subsequently for enforcing that the configuration has been loaded at\n    the time the task is actually run.\n\n    In order to be compatible with @requires_config, subclasses must define\n    a no-arguments constructor.\n    '''\n    __metaclass__ = abc.ABCMeta\n\n    def __init__(self, config_path, config_items):\n        self.config_path = config_path\n        self.config_items = config_items\n        self.config = {}\n\n    def __getitem__(self, key):\n        return self.config[key]\n\n    def __setitem__(self, key, value):\n        self.config[key] = value\n\n    def __delitem__(self, key):\n        del self.config[key]\n\n    def read_conf(self):\n        return config.get_conf_from_json_file(self.config_path)\n\n    def write_conf(self, conf):\n        
config.write(config.json_to_string(conf), self.config_path)\n        return self.config_path\n\n    def get_conf_interactive(self):\n        conf = {}\n        for item in self.config_items:\n            item.prompt_user(conf)\n        return conf\n\n    def get_config(self):\n        with settings(parallel=False):\n            if not self.is_config_loaded():\n                conf = {}\n                try:\n                    conf = self.read_conf()\n                except ConfigFileNotFoundError:\n                    conf = self.get_conf_interactive()\n                    self.write_conf(conf)\n\n                self.set_env_from_conf(conf)\n                self.set_config_loaded()\n            return self.config_path\n\n    @abc.abstractmethod\n    def is_config_loaded(self):\n        pass\n\n    @abc.abstractmethod\n    def set_config_loaded(self):\n        pass\n\n    @abc.abstractmethod\n    def set_env_from_conf(self, conf):\n        pass\n"
  },
  {
    "path": "prestoadmin/util/constants.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis modules contains read-only constants used throughout\nthe presto admin project.\n\"\"\"\n\nimport os\n\nimport prestoadmin\n\n# Logging Config File Locations\nLOGGING_CONFIG_FILE_NAME = 'presto-admin-logging.ini'\nLOGGING_CONFIG_FILE_DIRECTORIES = [\n    os.path.join(prestoadmin.main_dir, 'prestoadmin')\n]\n\n# local configuration\nLOG_DIR_ENV_VARIABLE = 'PRESTO_ADMIN_LOG_DIR'\nCONFIG_DIR_ENV_VARIABLE = 'PRESTO_ADMIN_CONFIG_DIR'\nLOCAL_CONF_DIR = '.prestoadmin'\nDEFAULT_LOCAL_CONF_DIR = os.path.join(os.path.expanduser('~'), LOCAL_CONF_DIR)\nTOPOLOGY_CONFIG_FILE = 'config.json'\nCOORDINATOR_DIR_NAME = 'coordinator'\nWORKERS_DIR_NAME = 'workers'\nCATALOG_DIR_NAME = 'catalog'\n\n# remote configuration\nREMOTE_CONF_DIR = '/etc/presto'\nREMOTE_CATALOG_DIR = os.path.join(REMOTE_CONF_DIR, 'catalog')\nREMOTE_PACKAGES_PATH = '/opt/prestoadmin/packages'\nDEFAULT_PRESTO_SERVER_LOG_FILE = '/var/log/presto/server.log'\nDEFAULT_PRESTO_LAUNCHER_LOG_FILE = '/var/log/presto/launcher.log'\nREMOTE_PLUGIN_DIR = '/usr/lib/presto/lib/plugin'\nREMOTE_COPY_DIR = '/tmp'\n\n# Presto configuration files\nCONFIG_PROPERTIES = \"config.properties\"\nLOG_PROPERTIES = \"log.properties\"\nJVM_CONFIG = \"jvm.config\"\nNODE_PROPERTIES = \"node.properties\"\n"
  },
  {
    "path": "prestoadmin/util/exception.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis module defines error types relevant to the Presto\nadministrative suite.\n\"\"\"\nimport re\n\nimport sys\nimport traceback\n\n\n# Beware the nuances of pickling Exceptions:\n# http://bugs.python.org/issue1692335\nclass ExceptionWithCause(Exception):\n\n    def __init__(self, message=''):\n        self.inner_exception = None\n\n        causing_exception = sys.exc_info()[1]\n        if causing_exception:\n            self.inner_exception = traceback.format_exc() + \\\n                ExceptionWithCause.get_cause_if_supported(causing_exception)\n\n        super(ExceptionWithCause, self).__init__(message)\n\n    @staticmethod\n    def get_cause_if_supported(exception):\n        try:\n            inner = exception.inner_exception\n        except AttributeError:\n            inner = None\n\n        if inner:\n            return '\\nCaused by:\\n{tb}'.format(\n                tb=inner\n            )\n        else:\n            return ''\n\n\nclass InvalidArgumentError(ExceptionWithCause):\n    pass\n\n\nclass ConfigurationError(ExceptionWithCause):\n    pass\n\n\nclass ConfigFileNotFoundError(ConfigurationError):\n    def __init__(self, message='', config_path=''):\n        super(ConfigFileNotFoundError, self).__init__(message)\n        self.config_path = config_path\n\n\ndef is_arguments_error(exception):\n    return isinstance(exception, TypeError) and \\\n        
re.match(r'.+\\(\\) takes (at most \\d+|no|exactly \\d+|at least \\d+) '\n                 r'arguments? \\(\\d+ given\\)', exception.message)\n"
  },
  {
    "path": "prestoadmin/util/fabric_application.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nLogic for starting and stopping Fabric applications.\n\"\"\"\n\nfrom fabric.network import disconnect_all\nfrom prestoadmin.util.application import Application\n\nimport logging\nimport sys\n\n\n# Normally this would use the logger for __name__, however, this is\n# effectively the \"root\" logger for the application.  If this code\n# were running directly in the executable script __name__ would be\n# set to '__main__', so we emulate that same behavior here.  This should\n# resolve to the same logger that will be used by the entry point script.\nlogger = logging.getLogger('__main__')\n\n\nclass FabricApplication(Application):\n    \"\"\"\n    A Presto Fabric application entry point.  Provides logging and exception\n    handling features.  
Additionally cleans up Fabric network connections\n    before exiting.\n    \"\"\"\n\n    def _exit_cleanup_hook(self):\n        \"\"\"\n        Disconnect all Fabric connections in addition to shutting down the\n        logging.\n        \"\"\"\n        disconnect_all()\n        Application._exit_cleanup_hook(self)\n\n    def _handle_error(self):\n        \"\"\"\n        Handle KeyboardInterrupt in a special way: don't indicate\n        that it's an error.\n\n        Returns:\n            Nothing\n        \"\"\"\n        self._log_exception()\n        if isinstance(self.exception, KeyboardInterrupt):\n            print >> sys.stderr, \"Stopped.\"\n            sys.exit(0)\n        else:\n            Application._handle_error(self)\n"
  },
  {
    "path": "prestoadmin/util/fabricapi.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule to add extensions and helpers for fabric api methods\n\"\"\"\n\nfrom functools import wraps\n\nfrom fabric.api import env, put, settings, sudo\nfrom fabric.utils import abort\n\n\ndef get_host_list():\n    return [host for host in env.hosts if host not in env.exclude_hosts]\n\n\ndef get_coordinator_role():\n    return env.roledefs['coordinator']\n\n\ndef get_worker_role():\n    return env.roledefs['worker']\n\n\ndef task_by_rolename(rolename):\n    def inner_decorator(f):\n        @wraps(f)\n        def wrapper(*args, **kwargs):\n            return by_rolename(env.host, rolename, f, *args, **kwargs)\n        return wrapper\n    return inner_decorator\n\n\ndef by_rolename(host, rolename, f, *args, **kwargs):\n    if rolename is None:\n        f(*args, **kwargs)\n    else:\n        if rolename not in env.roledefs.keys():\n            abort(\"Invalid role name %s. 
Valid rolenames are %s\" %\n                  (rolename, env.roledefs.keys()))\n        if host in env.roledefs[rolename]:\n            return f(*args, **kwargs)\n\n\ndef by_role_coordinator(host, f, *args, **kwargs):\n    if host in get_coordinator_role():\n        return f(*args, **kwargs)\n\n\ndef by_role_worker(host, f, *args, **kwargs):\n    if host in get_worker_role() and host not in get_coordinator_role():\n        return f(*args, **kwargs)\n\n\ndef put_secure(user_group, mode, *args, **kwargs):\n    missing_owner_code = 42\n    user, group = user_group.split(\":\")\n\n    files = put(*args, mode=mode, **kwargs)\n\n    for file in files:\n        with settings(warn_only=True):\n            command = \\\n                \"( getent passwd {user} >/dev/null || ( rm -f {file} ; \" \\\n                \"exit {missing_owner_code} ) ) && \" \\\n                \"chown {user_group} {file}\".format(\n                    user=user, file=file, user_group=user_group,\n                    missing_owner_code=missing_owner_code)\n\n            result = sudo(command)\n\n            if result.return_code == missing_owner_code:\n                abort(\"User %s does not exist. Make sure the Presto \"\n                      \"server RPM is installed and try again\" % (user,))\n            elif result.failed:\n                abort(\"Failed to chown file %s\" % (file,))\n"
  },
  {
    "path": "prestoadmin/util/filesystem.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\" Filesystem tools.\"\"\"\n\nimport errno\nimport logging\nimport os\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_parent_directories_exist(path):\n    try:\n        os.makedirs(os.path.dirname(path))\n    except OSError as e:\n        if e.errno != errno.EEXIST:\n            raise e\n\n\ndef ensure_directory_exists(path):\n    try:\n        os.makedirs(path)\n    except OSError as e:\n        if e.errno != errno.EEXIST:\n            raise e\n\n\ndef write_to_file_if_not_exists(content, path):\n    flags = os.O_CREAT | os.O_EXCL | os.O_WRONLY\n\n    try:\n        os.makedirs(os.path.dirname(path))\n    except OSError as e:\n        if e.errno == errno.EEXIST:\n            pass\n        else:\n            raise\n\n    try:\n        file_handle = os.open(path, flags)\n    except OSError as e:\n        if e.errno == errno.EEXIST:\n            pass\n        else:\n            raise\n    else:\n        with os.fdopen(file_handle, 'w') as f:\n            f.write(content)\n"
  },
  {
    "path": "prestoadmin/util/hiddenoptgroup.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAn option group for which you can hide the help text.\n\"\"\"\n\nimport logging\nfrom optparse import OptionGroup\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass HiddenOptionGroup(OptionGroup):\n    \"\"\"\n    Optparse allows you to suppress Options from the help text, but not\n    groups. This class allows you to suppress the help of groups.\n    \"\"\"\n\n    def __init__(self, parser, title, description=None, suppress_help=False):\n        OptionGroup.__init__(self, parser, title, description)\n        self.suppress_help = suppress_help\n\n    def format_help(self, formatter):\n        if not self.suppress_help:\n            return OptionGroup.format_help(self, formatter)\n        else:\n            return \"\"\n"
  },
  {
    "path": "prestoadmin/util/httpscacertconnection.py",
    "content": "import socket\nimport ssl\nimport httplib\n\n# Adapted from http://code.activestate.com/recipes/577548-https-httplib-client-connection-with-certificate-v/\n# BSD-licensed.\n\n\nclass HTTPSCaCertConnection(httplib.HTTPSConnection):\n    \"\"\" Class to make a HTTPS connection, with support for full client-based SSL Authentication\"\"\"\n\n    def __init__(self, host, port, key_file, cert_file, ca_file, strict, timeout=None):\n        httplib.HTTPSConnection.__init__(self, host, port, key_file, cert_file, strict, timeout)\n        self.key_file = key_file\n        self.cert_file = cert_file\n        self.ca_file = ca_file\n        self.timeout = timeout\n\n    def connect(self):\n        \"\"\" Connect to a host on a given (SSL) port.\n            If ca_file is pointing somewhere, use it to check Server Certificate.\n\n            Redefined/copied and extended from httplib.py:1105 (Python 2.6.x).\n            This is needed to pass cert_reqs=ssl.CERT_REQUIRED as parameter to ssl.wrap_socket(),\n            which forces SSL to check server certificate against our client certificate.\n        \"\"\"\n        sock = socket.create_connection((self.host, self.port), self.timeout)\n        if self._tunnel_host:\n            self.sock = sock\n            self._tunnel()\n        # If there's no CA File, don't force Server Certificate Check\n        if self.ca_file:\n            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ca_certs=self.ca_file,\n                                        cert_reqs=ssl.CERT_REQUIRED)\n        else:\n            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, cert_reqs=ssl.CERT_NONE)\n"
  },
  {
    "path": "prestoadmin/util/local_config_util.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\n\nfrom prestoadmin.util.constants import LOG_DIR_ENV_VARIABLE, CONFIG_DIR_ENV_VARIABLE, DEFAULT_LOCAL_CONF_DIR, \\\n    TOPOLOGY_CONFIG_FILE, COORDINATOR_DIR_NAME, WORKERS_DIR_NAME, CATALOG_DIR_NAME\n\n\ndef get_config_directory():\n    config_directory = os.environ.get(CONFIG_DIR_ENV_VARIABLE)\n    if not config_directory:\n        config_directory = DEFAULT_LOCAL_CONF_DIR\n    return config_directory\n\n\ndef get_log_directory():\n    config_directory = os.environ.get(LOG_DIR_ENV_VARIABLE)\n    if not config_directory:\n        config_directory = os.path.join(get_config_directory(), 'log')\n    return config_directory\n\n\ndef get_topology_path():\n    return os.path.join(get_config_directory(), TOPOLOGY_CONFIG_FILE)\n\n\ndef get_coordinator_directory():\n    return os.path.join(get_config_directory(), COORDINATOR_DIR_NAME)\n\n\ndef get_workers_directory():\n    return os.path.join(get_config_directory(), WORKERS_DIR_NAME)\n\n\ndef get_catalog_directory():\n    return os.path.join(get_config_directory(), CATALOG_DIR_NAME)\n"
  },
  {
    "path": "prestoadmin/util/parser.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAn extension to optparse for presto-admin which logs user parsing errors.\n\"\"\"\n\nimport logging\nfrom optparse import OptionParser\nimport sys\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass LoggingOptionParser(OptionParser):\n    \"\"\"\n    An extension to optparse which logs exceptions via the logging\n    module in addition to writing the out to stderr.\n\n    If used with HiddenOptionGroup, print_extended_help disables the\n    suppress_help attribute of HiddenOptionGroup so as to print out\n    extended helptext.\n    \"\"\"\n\n    def exit(self, status=0, msg=None):\n        _LOGGER.debug(\"Exiting option parser!\")\n        if msg:\n            sys.stderr.write(msg)\n            _LOGGER.error(msg)\n        sys.exit(status)\n\n    def print_extended_help(self, filename=None):\n        old_suppress_help = {}\n        for group in self.option_groups:\n            try:\n                old_suppress_help[group] = group.suppress_help\n                group.suppress_help = False\n            except AttributeError as e:\n                old_suppress_help[group] = None\n                _LOGGER.debug(\"Option group does not have option to \"\n                              \"suppress help; exception is \" + e.message)\n        self.print_help(file=filename)\n\n        for group in self.option_groups:\n            # Restore the suppressed help when applicable\n  
          if old_suppress_help[group]:\n                group.suppress_help = True\n\n    def format_epilog(self, formatter):\n        \"\"\"\n        The default format_epilog strips the newlines (using textwrap),\n        so we override format_epilog here to use its own epilog\n        \"\"\"\n        if not self.epilog:\n            self.epilog = \"\"\n        return self.epilog\n"
  },
  {
    "path": "prestoadmin/util/presto_config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nimport os\nfrom StringIO import StringIO\n\nfrom fabric.context_managers import settings, hide\nfrom fabric.operations import get, run\nfrom fabric.state import env\nfrom fabric.utils import error\n\nfrom prestoadmin.config import get_conf_from_properties_data\nfrom prestoadmin.util.constants import REMOTE_CONF_DIR, CONFIG_PROPERTIES\n\nHTTP_ENABLED_KEY = 'http-server.http.enabled'\nHTTPS_ENABLED_KEY = 'http-server.https.enabled'\nHTTP_PORT_KEY = 'http-server.http.port'\nHTTPS_PORT_KEY = 'http-server.https.port'\nCLIENT_KEYSTORE_PATH_KEY = 'internal-communication.https.keystore.path'\nCLIENT_KEYSTORE_PASSWORD_KEY = 'internal-communication.https.keystore.key'\nAUTHENTICATION_KEY = 'http-server.authentication.type'\nLDAP_CLIENT_USER_KEY = 'internal-communication.authentication.ldap.user'\nLDAP_CLIENT_PASSWORD_KEY = 'internal-communication.authentication.ldap.password'\n\n_LOGGER = logging.getLogger(__name__)\n# properties file literals\nPROPERTIES_TRUE = 'true'\nPROPERTIES_FALSE = 'false'\n\n\nclass PrestoConfig:\n    # Defaults from Presto\n    default_config = {\n        HTTP_ENABLED_KEY: PROPERTIES_TRUE,\n        HTTPS_ENABLED_KEY: PROPERTIES_FALSE,\n        HTTP_PORT_KEY: '8080',\n        HTTPS_PORT_KEY: '8443',\n        CLIENT_KEYSTORE_PATH_KEY: None,\n        CLIENT_KEYSTORE_PASSWORD_KEY: None,\n        LDAP_CLIENT_USER_KEY: None,\n        
LDAP_CLIENT_PASSWORD_KEY: None\n    }\n\n    def __init__(self, config_properties, config_path, config_host):\n        self.config_path = config_path\n        self.config_host = config_host\n        if not config_properties:\n            self.config_properties = self.default_config\n        else:\n            self.config_properties = config_properties\n\n    @staticmethod\n    def from_file(config, config_path=None, config_host=None):\n        presto_config_dict = get_conf_from_properties_data(config)\n        return PrestoConfig(presto_config_dict, config_path, config_host)\n\n    @staticmethod\n    def coordinator_config():\n        config_path = os.path.join(REMOTE_CONF_DIR, CONFIG_PROPERTIES)\n        config_host = env.roledefs['coordinator'][0]\n        try:\n            data = StringIO()\n            with settings(host_string='%s@%s' % (env.user, config_host)):\n                with hide('stderr', 'stdout'):\n                    temp_dir = run('mktemp -d /tmp/prestoadmin.XXXXXXXXXXXXXX')\n                try:\n                    get(config_path, data, use_sudo=True, temp_dir=temp_dir)\n                finally:\n                    run('rm -rf %s' % temp_dir)\n\n            data.seek(0)\n            return PrestoConfig.from_file(data, config_path, config_host)\n        except:\n            _LOGGER.info('Could not find Presto config.')\n            return PrestoConfig(None, config_path, config_host)\n\n    def _lookup(self, key):\n        result = self.config_properties.get(key, self.default_config[key])\n        if not result:\n            error(\n                    \"Key %s is not configured in coordinator configuration \"\n                    \"%s on host %s and has no default\" %\n                    (key, self.config_path, self.config_host))\n        return result\n\n    def use_https(self):\n        http_enabled = self._lookup(HTTP_ENABLED_KEY) == PROPERTIES_TRUE\n        https_enabled = self._lookup(HTTPS_ENABLED_KEY) == PROPERTIES_TRUE\n\n        return https_enabled and not http_enabled\n\n    def get_client_keystore_path(self):\n        return self._lookup(CLIENT_KEYSTORE_PATH_KEY)\n\n    def get_client_keystore_password(self):\n        return self._lookup(CLIENT_KEYSTORE_PASSWORD_KEY)\n\n    def get_https_port(self):\n        return int(self._lookup(HTTPS_PORT_KEY))\n\n    def get_http_port(self):\n        return int(self._lookup(HTTP_PORT_KEY))\n\n    def use_ldap(self):\n        if not self.use_https():\n            return False\n\n        if AUTHENTICATION_KEY in self.config_properties:\n            return self.config_properties[AUTHENTICATION_KEY] == 'LDAP'\n        return False\n\n    def get_ldap_user(self):\n        return self._lookup(LDAP_CLIENT_USER_KEY)\n\n    def get_ldap_password(self):\n        return self._lookup(LDAP_CLIENT_PASSWORD_KEY)\n"
  },
  {
    "path": "prestoadmin/util/remote_config_util.py",
    "content": "# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nfrom fabric.context_managers import settings, hide\nfrom fabric.operations import sudo\nfrom fabric.tasks import execute\nfrom prestoadmin.util.exception import ConfigurationError\nfrom prestoadmin.util.constants import DEFAULT_PRESTO_LAUNCHER_LOG_FILE,\\\n    DEFAULT_PRESTO_SERVER_LOG_FILE, REMOTE_CONF_DIR, REMOTE_CATALOG_DIR\nimport prestoadmin.util.validators\n\n_LOGGER = logging.getLogger(__name__)\n\nNODE_CONFIG_FILE = REMOTE_CONF_DIR + '/node.properties'\nGENERAL_CONFIG_FILE = REMOTE_CONF_DIR + '/config.properties'\n\n\ndef lookup_port(host):\n    \"\"\"\n    Get the http port from config.properties http-server.http.port property\n    if available.\n    If the property is missing return default port 8080.\n    If the file is missing or cannot parse the port number,\n    throw ConfigurationError\n    :param host:\n    :return:\n    \"\"\"\n    port = lookup_in_config('http-server.http.port', GENERAL_CONFIG_FILE, host)\n    if not port:\n        _LOGGER.info('Could not find property http-server.http.port.'\n                     'Defaulting to 8080.')\n        return 8080\n    try:\n        port = port.split('=', 1)[1]\n        port = prestoadmin.util.validators.validate_port(port)\n        _LOGGER.info('Looked up port ' + str(port) + ' on host ' +\n                     host)\n        return port\n    except ConfigurationError as e:\n        raise 
ConfigurationError(e.message +\n                                 ' for property '\n                                 'http-server.http.port on host ' +\n                                 host + '.')\n\n\ndef lookup_server_log_file(host):\n    try:\n        return lookup_string_config('node.server-log-file', NODE_CONFIG_FILE,\n                                    host, DEFAULT_PRESTO_SERVER_LOG_FILE)\n    except:\n        return DEFAULT_PRESTO_SERVER_LOG_FILE\n\n\ndef lookup_launcher_log_file(host):\n    try:\n        return lookup_string_config('node.launcher-log-file', NODE_CONFIG_FILE,\n                                    host, DEFAULT_PRESTO_LAUNCHER_LOG_FILE)\n    except:\n        return DEFAULT_PRESTO_LAUNCHER_LOG_FILE\n\n\ndef lookup_catalog_directory(host):\n    try:\n        return lookup_string_config('catalog.config-dir', NODE_CONFIG_FILE,\n                                    host, REMOTE_CATALOG_DIR)\n    except:\n        return REMOTE_CATALOG_DIR\n\n\ndef lookup_string_config(config_value, config_file, host, default=''):\n    value = lookup_in_config(config_value, config_file, host)\n    if value:\n        return value.split('=', 1)[1]\n    else:\n        return default\n\n\ndef lookup_in_config(config_key, config_file, host):\n    with settings(hide('stdout', 'warnings', 'aborts')):\n        config_value = execute(sudo, 'grep %s= %s' % (config_key, config_file),\n                               user='presto',\n                               warn_only=True, host=host)[host]\n\n    if isinstance(config_value, Exception) or config_value.return_code == 2:\n        raise ConfigurationError('Could not access config file %s on '\n                                 'host %s' % (config_file, host))\n\n    return config_value\n"
  },
  {
    "path": "prestoadmin/util/validators.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for validating configuration information supplied by the user.\n\"\"\"\nimport re\nimport socket\n\nfrom fabric.context_managers import settings\nfrom fabric.operations import run, sudo\n\nfrom prestoadmin.util.exception import ConfigurationError\n\n\ndef validate_username(username):\n    if not isinstance(username, basestring):\n        raise ConfigurationError('Username must be of type string.')\n    return username\n\n\ndef validate_port(port):\n    try:\n        port_int = int(port)\n    except TypeError:\n        raise ConfigurationError('Port must be of type string, but '\n                                 'found ' + str(type(port)) + '.')\n    except ValueError:\n        raise ConfigurationError('Invalid port number ' + port +\n                                 ': port must be a number between 1 and 65535')\n    if not port_int > 0 or not port_int < 65535:\n        raise ConfigurationError('Invalid port number ' + port +\n                                 ': port must be a number between 1 and 65535')\n    return port_int\n\n\ndef validate_host(host):\n    try:\n        socket.inet_pton(socket.AF_INET, host)\n        return host\n    except TypeError:\n        raise ConfigurationError('Host must be of type string.  
Found ' +\n                                 str(type(host)) + '.')\n    except socket.error:\n        pass\n\n    try:\n        socket.inet_pton(socket.AF_INET6, host)\n        return host\n    except socket.error:\n        pass\n\n    if not is_valid_hostname(host):\n        raise ConfigurationError(repr(host) + ' is not a valid '\n                                 'ip address or host name.')\n    return host\n\n\ndef is_valid_hostname(hostname):\n    valid_name = '^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)*' \\\n                 '([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\\-]*[A-Za-z0-9])$'\n    return re.match(valid_name, hostname)\n\n\ndef validate_can_connect(user, host, port):\n    with settings(host_string='%s@%s:%d' % (user, host, port), user=user):\n        return run('exit 0').succeeded\n\n\ndef validate_can_sudo(sudo_user, conn_user, host, port):\n    with settings(host_string='%s@%s:%d' % (conn_user, host, port),\n                  warn_only=True):\n        return sudo('exit 0', user=sudo_user).succeeded\n"
  },
  {
    "path": "prestoadmin/util/version_util.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStuff to handle version ranges.\n\"\"\"\n\nimport re\n\nTD_VERSION = re.compile(r'^\\d+t$')\n\n\ndef split_version(version_string):\n    # We split on '.' and '-' because ancient tagged versions had the tag\n    # delimited by a '-'\n    return re.split('\\.|-', version_string.strip())\n\n\ndef get_int_or_t(x):\n    try:\n        return int(x)\n    except ValueError as e:\n        if x is 't':\n            return x\n        if x[-1] is 't':\n            int(x[:-1])\n            return x\n        raise e\n\n\ndef is_int_or_t(x):\n    try:\n        get_int_or_t(x)\n        return True\n    except ValueError:\n        return False\n\n\ndef strip_tag(version):\n    \"\"\"\n    Strip any parts of the version that are not numeric components or t's\n    We leave the 't' on numeric components if it's present.\n    ['1', '2', 'THREE'] -> (1, 2)\n    ['1', 'TWO', '3'] -> (1, 3)\n    ['0', '115t', 'SNAPSHOT'] -> (0, '115t')\n    ['ZERO', '123t'] -> (123t)\n    ['0', '148', 't'] => (0, 148, 't')\n    ['0', '148', 't', 0, 1] => (0, 148, 't', 0, 1)\n    ['0', '148', 't', 0, 1, 'SNAPSHOT'] => (0, 148, 't', 0, 1)\n    ['0', '162', 'SNAPSHOT', 't', 'SNAPSHOT'] => (0, 162, 't')\n\n    This checks the components of the version from least to most significant.\n\n    :param version: something that can be sliced\n    :return: a tuple containing only integer components or the letter t\n    
\"\"\"\n\n    result = list(version[:])\n    result = [get_int_or_t(x) for x in result if is_int_or_t(x)]\n    return tuple(result)\n\n\nclass VersionRange(object):\n    \"\"\"\n    Represents a range of version numbers [min_version, max_version).\n    The interval is right-open so that you can construct a numerically\n    continuous list of versions like so:\n    l = [VersionRange((0, 0), (0, 5)), VersionRange((0, 5), (1, 0))]\n    and every version v with 0.0 <= v < 1.0 is contained in exactly one\n    VersionRange in l.\n\n    Continuity between version ranges can be checked using is_continuous.\n\n    VersionRanges understand how to check if a Teradata version is contained\n    in a VersionRange, but do no special handling to accommodate Teradata\n    versions in their internal min_version and max_version members. I.e.,\n    creating a VersionRange with a Teradata version will work, but __contains__\n    will not work correctly. We don't currently need this, and hope not to.\n\n    Note that the right-open interval representation of a version range does\n    NOT allow the creation of a VersionRange that contains exactly one version.\n\n    Note that empty intervals cannot be constructed, as they serve no useful\n    purpose. Specifically, we assert that min_version < max_version in the\n    constructor.\n    \"\"\"\n\n    def __init__(self, min_version, max_version, versioned_thing=None):\n        # not pythonic, but bare ints screw things up.\n        assert isinstance(min_version, tuple)\n        assert isinstance(max_version, tuple)\n        l = max(len(min_version), len(max_version))\n        min_pad = VersionRange.pad_tuple(min_version, l, 0)\n        max_pad = VersionRange.pad_tuple(max_version, l, 0)\n        assert min_pad < max_pad\n        self.min_version = min_version\n        self.max_version = max_version\n        self.versioned_thing = versioned_thing\n\n    def __str__(self):\n        return '[%s, %s) -> %s' % (\n            '.'.join([str(c) for c in self.min_version]),\n            '.'.join([str(c) for c in self.max_version]),\n            self.versioned_thing)\n\n    @staticmethod\n    def strip_td_suffix(version):\n        new_version = ()\n        for component in version:\n            if TD_VERSION.match(str(component)):\n                new_last = component[:-1]\n                new_version += (int(new_last),)\n            elif component != 't':\n                new_version += (int(component),)\n\n        return new_version\n\n    @staticmethod\n    def pad_tuple(tup, length, pad):\n        assert len(tup) <= length\n        result = list(tup)\n        while len(result) < length:\n            result.append(pad)\n        return tuple(result)\n\n    def zero_pad(self, other):\n        \"\"\"\n        Pad out min_version, max_version, and other with zeroes to the length\n        of the longest of the three. 
This allows subsequent comparisons to work\n        as expected when tuples are of unequal length.\n        Returns a tuple of tuples padded out to the same length\n        \"\"\"\n        l = max(len(self.min_version), len(self.max_version), len(other))\n        return (self.pad_tuple(self.min_version, l, 0),\n                self.pad_tuple(self.max_version, l, 0),\n                self.pad_tuple(other, l, 0))\n\n    def __contains__(self, other):\n        other = self.strip_td_suffix(other)\n        other = tuple([int(component) for component in other])\n\n        min_pad, max_pad, o_pad = self.zero_pad(other)\n        return min_pad <= o_pad and o_pad < max_pad\n\n    def is_continuous(self, next):\n        min_pad, max_pad, next_min_pad = self.zero_pad(next.min_version)\n        return max_pad == next_min_pad\n\n\nclass VersionRangeList(object):\n    \"\"\"\n    A VersionRangeList is a list of continuous, non-overlapping VersionRanges.\n    This is guaranteed by calling VersionRange.is_continuous on all pairs of\n    VersionRanges vr[i], vr[i + 1] in the list, which ensures that the list is\n    both sorted in order of ascending version and that the interval\n    [vr[0].min_version, vr[n].max_version) has no discontinuities.\n    \"\"\"\n\n    def __init__(self, *range_list):\n        if len(range_list) >= 2:\n            for i in range(0, len(range_list) - 1):\n                assert range_list[i].is_continuous(range_list[i + 1])\n\n        self.range_list = range_list\n\n    def __str__(self):\n        return '\\n'.join([str(vr) for vr in self.range_list])\n\n    def for_version(self, version):\n        for range in self.range_list:\n            if version in range:\n                return range.versioned_thing\n        raise KeyError(version)\n"
  },
  {
    "path": "prestoadmin/workers.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for the presto worker's configuration.\nLoads and validates the workers.json file and creates the files needed\nto deploy on the presto cluster.\n\"\"\"\n\nimport copy\nimport logging\nimport urlparse\n\nfrom fabric.api import env\n\nimport prestoadmin.util.fabricapi as util\nfrom prestoadmin.node import Node\nfrom prestoadmin.presto_conf import validate_presto_conf\nfrom prestoadmin.util.exception import ConfigurationError\nfrom prestoadmin.util.local_config_util import get_workers_directory\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Worker(Node):\n    DEFAULT_PROPERTIES = {'node.properties':\n                          {'node.environment': 'presto',\n                           'node.data-dir': '/var/lib/presto/data',\n                           'node.launcher-log-file':\n                               '/var/log/presto/launcher.log',\n                           'node.server-log-file':\n                               '/var/log/presto/server.log',\n                           'catalog.config-dir': '/etc/presto/catalog',\n                           'plugin.dir': '/usr/lib/presto/lib/plugin'},\n                          'jvm.config': ['-server',\n                                         '-Xmx16G',\n                                         '-XX:-UseBiasedLocking',\n                                         '-XX:+UseG1GC',\n                                     
    '-XX:G1HeapRegionSize=32M',\n                                         '-XX:+ExplicitGCInvokesConcurrent',\n                                         '-XX:+HeapDumpOnOutOfMemoryError',\n                                         '-XX:+UseGCOverheadLimit',\n                                         '-XX:+ExitOnOutOfMemoryError',\n                                         '-XX:ReservedCodeCacheSize=512M',\n                                         '-DHADOOP_USER_NAME=hive'],\n                          'config.properties': {'coordinator': 'false',\n                                                'http-server.http.port':\n                                                    '8080',\n                                                'query.max-memory': '50GB',\n                                                'query.max-memory-per-node':\n                                                    '8GB'}\n                          }\n\n    def _get_conf_dir(self):\n        return get_workers_directory()\n\n    def default_config(self, filename):\n        try:\n            conf = copy.deepcopy(self.DEFAULT_PROPERTIES[filename])\n        except KeyError:\n            raise ConfigurationError('Invalid configuration file name: %s' %\n                                     filename)\n        if filename == 'config.properties':\n            coordinator = util.get_coordinator_role()[0]\n            conf['discovery.uri'] = 'http://%s:8080' % coordinator\n        return conf\n\n    @staticmethod\n    def is_localhost(hostname):\n        return hostname in ['localhost', '127.0.0.1', '::1']\n\n    @staticmethod\n    def validate(conf):\n        validate_presto_conf(conf)\n        if 'coordinator' not in conf['config.properties']:\n            raise ConfigurationError('Must specify coordinator=false in '\n                                     'worker\\'s config.properties')\n        if conf['config.properties']['coordinator'] != 'false':\n            raise ConfigurationError('Coordinator must be 
false in the '\n                                     'worker\\'s config.properties')\n        uri = urlparse.urlparse(conf['config.properties']['discovery.uri'])\n        if Worker.is_localhost(uri.hostname) and len(env.roledefs['all']) > 1:\n            raise ConfigurationError(\n                'discovery.uri should not be localhost in a '\n                'multi-node cluster, but found ' + urlparse.urlunparse(uri) +\n                '.  You may have encountered this error by '\n                'choosing a coordinator that is localhost and a worker that '\n                'is not.  The default discovery-uri is '\n                'http://<coordinator>:8080')\n        return conf\n"
  },
  {
    "path": "prestoadmin/yarn_slider/__init__.py",
    "content": ""
  },
  {
    "path": "prestoadmin/yarn_slider/config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nModule for setting and validating the presto-admin Apache Slider config\n\"\"\"\n\nimport os\n\nfrom overrides import overrides\n\nfrom fabric.state import env\n\nfrom prestoadmin.util.base_config import BaseConfig, SingleConfigItem, \\\n    MultiConfigItem\nfrom prestoadmin.util.local_config_util import get_config_directory\nfrom prestoadmin.util.validators import validate_host, validate_port, \\\n    validate_username, validate_can_connect, validate_can_sudo\n\nSLIDER_CONFIG_LOADED = 'slider_config_loaded'\nSLIDER_CONFIG_DIR = os.path.join(get_config_directory(), 'slider')\nSLIDER_CONFIG_PATH = os.path.join(SLIDER_CONFIG_DIR, 'config.json')\nSLIDER_MASTER = 'slider_master'\n\nHOST = 'slider_master'\nADMIN_USER = 'admin'\nSSH_PORT = 'ssh_port'\n\nDIR = 'slider_directory'\nAPPNAME = 'slider_appname'\nINSTANCE_NAME = 'slider_instname'\nSLIDER_USER = 'slider_user'\nJAVA_HOME = 'JAVA_HOME'\nHADOOP_CONF = 'HADOOP_CONF'\n\n# This key comes from the server install step, NOT a user prompt. 
Accordingly,\n# there is no SliderConfigItem for it in _SLIDER_CONFIG\nPRESTO_PACKAGE = 'presto_slider_package'\n\n\n_SLIDER_CONFIG = [\n    MultiConfigItem([\n        SingleConfigItem(HOST, 'Enter the hostname for the slider master:',\n                         'localhost', validate_host),\n        SingleConfigItem(ADMIN_USER, 'Enter the user name to use when ' +\n                         'installing slider on the slider master:',\n                         'root', validate_username),\n        SingleConfigItem(SSH_PORT, 'Enter the port number for SSH ' +\n                         'connections to the slider master:', 22,\n                         validate_port)],\n                    validate_can_connect, (ADMIN_USER, HOST, SSH_PORT),\n                    'Connection failed for %%(%s)s@%%(%s)s:%%(%s)d. ' +\n                    'Re-enter connection information.'),\n\n    SingleConfigItem(DIR, 'Enter the directory to install slider into on '\n                     'the slider master:', '/opt/slider', None),\n\n    MultiConfigItem([\n        SingleConfigItem(SLIDER_USER, 'Enter a user name for running slider '\n                         'on the slider master:', 'yarn',\n                         validate_username)],\n                    validate_can_sudo,\n                    (SLIDER_USER, ADMIN_USER, HOST, SSH_PORT),\n                    'Failed to sudo to user %%(%s)s while connecting as ' +\n
                    '%%(%s)s@%%(%s)s:%%(%s)d. Enter a new username and try ' +\n                    'again.'),\n\n    SingleConfigItem(JAVA_HOME, 'Enter the value of JAVA_HOME to use when ' +\n                     'running slider on the slider master:',\n                     '/usr/lib/jvm/java', None),\n    SingleConfigItem(HADOOP_CONF, 'Enter the location of the Hadoop ' +\n                     'configuration on the slider master:',\n                     '/etc/hadoop/conf', None),\n    SingleConfigItem(APPNAME, 'Enter a name for the presto slider application:',\n                     'PRESTO', None)]\n\n\nclass SliderConfig(BaseConfig):\n    '''\n    presto-admin needs to update the slider config outside of the\n    interactive config process because it needs to keep track of the name\n    of the presto-yarn-package we install.\n\n    As a result, SliderConfig acts a little funny; it acts like enough of a\n    dict to allow env.conf[NAME] lookups and modifications, and it also exposes\n    the ability to store the config after it's been modified.\n    '''\n\n    def __init__(self):\n        super(SliderConfig, self).__init__(SLIDER_CONFIG_PATH, _SLIDER_CONFIG)\n\n    @overrides\n    def is_config_loaded(self):\n        return SLIDER_CONFIG_LOADED in env and env[SLIDER_CONFIG_LOADED]\n\n    @overrides\n    def set_config_loaded(self):\n        env[SLIDER_CONFIG_LOADED] = True\n\n    @overrides\n    def set_env_from_conf(self, conf):\n        self.config.update(conf)\n        env.user = conf[ADMIN_USER]\n        env.port = conf[SSH_PORT]\n        env.roledefs[SLIDER_MASTER] = [conf[HOST]]\n        env.roledefs['all'] = env.roledefs[SLIDER_MASTER]\n\n        env.conf = self\n\n        env.hosts = env.roledefs['all'][:]\n\n    def store_conf(self):\n        super(SliderConfig, self).write_conf(self.config)\n"
  },
  {
    "path": "prestoadmin/yarn_slider/server.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for managing presto/YARN integration.\n\"\"\"\n\nimport os.path\n\nfrom fabric.api import env, task, abort\nfrom fabric.context_managers import shell_env\nfrom fabric.operations import put, sudo, local\n\nfrom prestoadmin.yarn_slider.config import SliderConfig, \\\n    DIR, SLIDER_USER, APPNAME, JAVA_HOME, HADOOP_CONF, SLIDER_MASTER, \\\n    PRESTO_PACKAGE, SLIDER_CONFIG_DIR\nfrom prestoadmin.util.base_config import requires_config\n\nfrom prestoadmin.util.fabricapi import task_by_rolename\n\n__all__ = ['install', 'uninstall']\n\n\nSLIDER_PKG_DEFAULT_FILES = ['appConfig-default.json', 'resources-default.json']\n\n\ndef get_slider_bin(conf):\n    return os.path.join(conf[DIR], 'bin', 'slider')\n\n\ndef run_slider(slider_command, conf):\n    with shell_env(JAVA_HOME=conf[JAVA_HOME],\n                   HADOOP_CONF_DIR=conf[HADOOP_CONF]):\n        return sudo(slider_command, user=conf[SLIDER_USER])\n\n\n@task\n@requires_config(SliderConfig)\n@task_by_rolename(SLIDER_MASTER)\ndef install(presto_yarn_package):\n    \"\"\"\n    Install the presto-yarn package on the cluster using Apache Slider. The\n    presto-yarn package takes the form of a zip file that conforms to Slider's\n    packaging requirements. 
After installing the presto-yarn package, the presto\n    application is registered with Slider.\n\n    Before Slider can install the presto-yarn package, the slider user's hdfs\n    home directory needs to be created. This needs to be done by a user that\n    has write access to the hdfs /user directory, typically the user hdfs or a\n    member of the superuser group.\n\n    The name of the presto application is arbitrary and set in the slider\n    configuration file. The default is PRESTO.\n\n    :param presto_yarn_package: The zip file containing the presto-yarn\n                                package as structured for Slider.\n    \"\"\"\n    conf = env.conf\n    package_filename = os.path.basename(presto_yarn_package)\n    package_file = os.path.join('/tmp', package_filename)\n\n    result = put(presto_yarn_package, package_file)\n    if result.failed:\n        abort('Failed to send slider application package to %s on host %s' %\n              (package_file, env.host))\n\n    package_install_command = \\\n        '%s package --install --package %s --name %s' % \\\n        (get_slider_bin(conf), package_file, conf[APPNAME])\n\n    try:\n        run_slider(package_install_command, conf)\n\n        conf[PRESTO_PACKAGE] = package_filename\n        conf.store_conf()\n\n        local('unzip %s %s -d %s' %\n              (presto_yarn_package, ' '.join(SLIDER_PKG_DEFAULT_FILES),\n               SLIDER_CONFIG_DIR))\n    finally:\n        sudo('rm -f %s' % (package_file))\n\n\n@task\n@requires_config(SliderConfig)\n@task_by_rolename(SLIDER_MASTER)\ndef uninstall():\n    \"\"\"\n    Unregisters the presto application from Slider and removes the\n    installed package.\n    \"\"\"\n    conf = env.conf\n    package_delete_command = '%s package --delete --name %s' % \\\n                             (get_slider_bin(conf), conf[APPNAME])\n    run_slider(package_delete_command, conf)\n\n    try:\n        del conf[PRESTO_PACKAGE]\n        conf.store_conf()\n    except KeyError:\n        pass\n\n
    local('rm %s' % (' '.join([os.path.join(SLIDER_CONFIG_DIR, f)\n                     for f in SLIDER_PKG_DEFAULT_FILES])))\n"
  },
  {
    "path": "prestoadmin/yarn_slider/slider.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for installing and uninstalling slider.\n\"\"\"\n\nimport os\n\nfrom fabric.api import env, task, abort\nfrom fabric.operations import put, sudo\n\nfrom prestoadmin.yarn_slider.config import SliderConfig, \\\n    DIR, SLIDER_MASTER\nfrom prestoadmin.util.base_config import requires_config\n\nfrom prestoadmin.util.fabricapi import task_by_rolename\n\n__all__ = ['install', 'uninstall']\n\n\n@task\n@requires_config(SliderConfig)\n@task_by_rolename(SLIDER_MASTER)\ndef install(slider_tarball):\n    \"\"\"\n    Install slider on the slider master. 
You must provide a tar file on the\n    local machine that contains the slider distribution.\n\n    :param slider_tarball: The gzipped tar file containing the Apache Slider\n                           distribution.\n    \"\"\"\n    deploy_install(slider_tarball)\n\n\ndef deploy_install(slider_tarball):\n    slider_dir = env.conf[DIR]\n    slider_parent = os.path.dirname(slider_dir)\n    slider_file = os.path.join(slider_parent, os.path.basename(slider_tarball))\n\n    sudo('mkdir -p %s' % (slider_dir))\n\n    # slider_file is already a full path under slider_parent.\n    result = put(slider_tarball, slider_file)\n    if result.failed:\n        abort('Failed to send slider tarball %s to directory %s on host %s' %\n              (slider_tarball, slider_parent, env.host))\n\n    sudo('gunzip -c %s | tar -x -C %s --strip-components=1 && rm -f %s' %\n         (slider_file, slider_dir, slider_file))\n\n\n@task\n@requires_config(SliderConfig)\n@task_by_rolename(SLIDER_MASTER)\ndef uninstall():\n    \"\"\"\n    Uninstall slider from the slider master.\n    \"\"\"\n    sudo('rm -r \"%s\"' % (env.conf[DIR]))\n"
  },
  {
    "path": "release.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport base64\nimport getpass\nimport json\nimport os\nimport re\nimport subprocess\n\nfrom util import __version__\nfrom util.http import send_get_request, send_authorized_post_request\nfrom util.semantic_version import SemanticVersion\n\ntry:\n    from setuptools import Command\nexcept ImportError:\n    from distutils.core import Command\n\nGITHUB_REPOSITORY_API_PATH = 'https://api.github.com/repos/prestodb/presto-admin'\nCURRENT_DIRECTORY = os.path.dirname(os.path.realpath(__file__))\n\n\nclass ReleaseFetcher:\n    def __init__(self, directory, github_api_path):\n        self.directory = directory\n        self.github_api_path = github_api_path\n        self.release_validator = ReleaseValidator(directory)\n\n    def get_latest_release(self):\n        headers, contents = send_get_request(self.github_api_path + '/releases/latest')\n        return json.loads(contents)\n\n    def _get_remote_branches(self):\n        headers, contents = send_get_request(self.github_api_path + '/branches')\n        return json.loads(contents)\n\n    def _get_current_branch(self):\n        return subprocess.check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD'], cwd=self.directory).strip()\n\n    def _get_last_remote_commit(self, branch):\n        headers, contents = send_get_request(self.github_api_path + '/commits/' + branch)\n        return json.loads(contents)\n\n    def 
_get_last_local_commit(self):\n        return subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=self.directory).strip()\n\n    def _get_latest_tag(self):\n        latest_release = self.get_latest_release()\n        return latest_release['tag_name']\n\n    def get_requested_release_tag(self):\n        release_note_docs = self._get_all_release_note_docs()\n        release_note_names = [os.path.splitext(release_note_doc)[0] for release_note_doc in release_note_docs]\n        versions = [SemanticVersion(release_note_name.split('-')[1]) for release_note_name in release_note_names]\n        latest_version_number = sorted(versions, reverse=True)[0]\n        return str(latest_version_number)\n\n    @staticmethod\n    def _is_valid_release_doc_name(release_doc_name):\n        return re.match('^release-[0-9]+(\\.[0-9]+){0,2}\\.rst$', release_doc_name)\n\n    def _get_all_release_note_docs(self):\n        release_docs_directory = os.path.join(self.directory, 'docs/release/')\n        return [release_doc_name for release_doc_name in os.listdir(release_docs_directory)\n                if (os.path.isfile(os.path.join(release_docs_directory, release_doc_name)) and\n                    ReleaseFetcher._is_valid_release_doc_name(release_doc_name))]\n\n    @staticmethod\n    def _find_nth(haystack, needle, n):\n        start = haystack.find(needle)\n        while start >= 0 and n > 1:\n            start = haystack.find(needle, start+1)\n            n -= 1\n        return start\n\n    def get_body_from_release_notes(self, tag_name):\n        release_notes_file_path = os.path.join(self.directory, 'docs/release/release-%s.rst' % tag_name)\n        with open(release_notes_file_path, 'r') as release_notes_file:\n            release_notes = release_notes_file.read()\n            release_notes_without_header = release_notes.strip()[ReleaseFetcher._find_nth(release_notes, '\\n', 3):]\n            return release_notes_without_header.strip()\n\n    def _get_and_check_branch(self):\n    
    current_local_branch = self._get_current_branch()\n        ReleaseValidator.check_branch_remote_exists(current_local_branch, self._get_remote_branches())\n        return current_local_branch\n\n    def get_and_check_target_commitish(self):\n        self.release_validator.check_repo()\n        branch = self._get_and_check_branch()\n        last_remote_commit = self._get_last_remote_commit(branch)['sha']\n        last_local_commit = self._get_last_local_commit()\n        ReleaseValidator.check_commit(last_local_commit, last_remote_commit)\n        return last_remote_commit\n\n    def get_and_check_tag(self):\n        \"\"\"\n        This function finds the requested release tag by looking at the names of the\n        release documents. It checks that the requested release tag is an acceptable bump\n        from the latest release tag.\n        \"\"\"\n        latest_tag = self._get_latest_tag()\n        requested_release_tag = self.get_requested_release_tag()\n        ReleaseValidator.check_tag(latest_tag, requested_release_tag)\n        return requested_release_tag\n\n\nclass ReleaseValidator:\n    def __init__(self, directory):\n        self.directory = directory\n\n    def check_repo(self):\n        if subprocess.check_output(['git', 'status', '--porcelain'], cwd=self.directory).strip():\n            exit('Repository is not clean. Commit or stash all changes.')\n        else:\n            print 'Repository is clean'\n\n
    @staticmethod\n    def check_branch_remote_exists(local_branch_name, remote_branches):\n        for remote_branch in remote_branches:\n            if local_branch_name == remote_branch['name']:\n                print 'Local branch %s exists remotely' % local_branch_name\n                return\n        exit('Local branch %s does not exist remotely' % local_branch_name)\n\n    @staticmethod\n    def check_tag(latest_tag, requested_release_tag):\n        print 'The latest release tag is %s.\\n' \\\n              'Detected requested release tag: %s' \\\n              % (latest_tag, requested_release_tag)\n\n        latest_version = SemanticVersion(latest_tag)\n        acceptable_tags = latest_version.get_acceptable_version_bumps()\n        if requested_release_tag not in acceptable_tags:\n            exit('Detected release tag %s is not part of the acceptable release tags: %s'\n                 % (requested_release_tag, acceptable_tags))\n\n    @staticmethod\n    def check_commit(last_local_commit, last_remote_commit):\n        if last_remote_commit != last_local_commit:\n            exit('Last local and remote commits do not match')\n        else:\n            print 'Last local and remote commits match'\n\n    @staticmethod\n    def _get_and_check_release_file(file_path, string_contained=None, string_begins=None):\n        with open(file_path, 'r') as release_file:\n            file_contents = release_file.read()\n            if string_contained:\n                if string_contained not in file_contents:\n                    exit('Expected \"%s\" to be in %s' % (string_contained, file_path))\n            if string_begins:\n                if not file_contents.startswith(string_begins):\n                    print file_contents\n                    exit('Expected %s to begin with \"%s\"' % (file_path, string_begins))\n\n            return file_contents\n\n    @staticmethod\n 
   def _confirm_version_changed(tag_name):\n        if __version__ != tag_name:\n            exit('Version in prestoadmin/_version is %s, but expected %s' % (__version__, tag_name))\n\n    def _confirm_release_docs_format(self, tag_name):\n        \"\"\"\n        This function checks the format of the release documents.\n        It checks the release document to make sure it has a header and that the\n        release document name has been added to the file with the list of releases.\n        \"\"\"\n        release_doc_name = 'release-' + tag_name + '.rst'\n        release_doc_path = os.path.join(self.directory, 'docs/release', release_doc_name)\n        release_doc_header = 'Release ' + tag_name\n        release_doc_header = ('=' * len(release_doc_header)) + '\\n' + release_doc_header + '\\n' + \\\n                             ('=' * len(release_doc_header)) + '\\n'\n        ReleaseValidator._get_and_check_release_file(release_doc_path,\n                                                     string_begins=release_doc_header)\n\n        string_contained = 'release/release-' + tag_name\n        release_list_doc_path = os.path.join(self.directory, 'docs/release.rst')\n        ReleaseValidator._get_and_check_release_file(release_list_doc_path,\n                                                     string_contained=string_contained)\n\n        print 'Release docs confirmed for tag %s' % tag_name\n\n    def confirm_all_release_file_changes(self, tag_name):\n        ReleaseValidator._confirm_version_changed(tag_name)\n        self._confirm_release_docs_format(tag_name)\n\n\nclass GithubReleaser:\n    def __init__(self, directory, github_api_path):\n        self.directory = directory\n        self.github_api_path = github_api_path\n        self.release_fetcher = ReleaseFetcher(directory, github_api_path)\n        self.release_validator = ReleaseValidator(directory)\n        self.username = None\n        self.password = None\n        self.tag_name = None\n        
self.release_name = None\n        self.target_commitish = None\n        self.name = None\n        self.body = None\n        self.is_draft = 'false'\n        self.is_prerelease = 'false'\n\n    def _prompt_username(self):\n        self.username = raw_input('Please input your Github username: ')\n\n    def _prompt_password(self):\n        self.password = getpass.getpass(\"Enter password for '%s': \" % self.username)\n\n    def _get_authorization_string(self):\n        self._prompt_username()\n        self._prompt_password()\n        return base64.standard_b64encode('%s:%s' % (self.username, self.password))\n\n    def _check_and_set_release_fields(self):\n        \"\"\"\n        This function checks that files have been added and/or modified for the release.\n        It sets the fields necessary to release to Github.\n        \"\"\"\n        self.target_commitish = self.release_fetcher.get_and_check_target_commitish()\n        self.tag_name = self.release_fetcher.get_and_check_tag()\n        self.release_validator.confirm_all_release_file_changes(self.tag_name)\n        self.body = self.release_fetcher.get_body_from_release_notes(self.tag_name)\n        self.body = GithubReleaser._escape_newlines(self.body)\n        self.release_name = 'Release ' + self.tag_name\n\n    @staticmethod\n    def _escape_newlines(multiline_string):\n        return multiline_string.replace('\\n', '\\\\n')\n\n    def _build_json_post_contents(self):\n        return '{\"tag_name\": \"%s\", \"target_commitish\": \"%s\", \"name\": \"%s\", \"body\": \"%s\",' \\\n               ' \"draft\": %s, \"prerelease\": %s}' \\\n               % (self.tag_name, self.target_commitish, self.release_name,\n                  self.body, self.is_draft, self.is_prerelease)\n\n    @staticmethod\n    def _send_github_create_release_post_request(url, json_data, authorization_string):\n        send_authorized_post_request(url, json_data, authorization_string, 'application/json', len(json_data))\n        print 
'Successfully created Github release'\n\n    @staticmethod\n    def _send_bztar_post_request(url, bztar_data, authorization_string, content_length):\n        send_authorized_post_request(url, bztar_data, authorization_string, 'application/octet-stream', content_length)\n\n    def _send_installer_post_request(self, release_url, installer_name, authorization_string, command_args):\n        installer_path = os.path.join(self.directory, 'dist/', installer_name)\n        with open(os.devnull, 'w') as dev_null:\n            subprocess.check_call(command_args, stdout=dev_null, stderr=dev_null)\n        with open(installer_path, mode='rb') as online_installer:\n            GithubReleaser._send_bztar_post_request('%s?name=%s' % (release_url, installer_name),\n                                                    online_installer,\n                                                    authorization_string,\n                                                    os.path.getsize(installer_path))\n        print 'Successfully posted %s' % installer_name\n\n    def _send_online_installer_post_request(self, release_url, online_install_name, authorization_string):\n        self._send_installer_post_request(release_url, online_install_name, authorization_string,\n                                          ['make', 'dist-online'])\n\n    def _send_offline_installer_post_request(self, release_url, offline_install_name, authorization_string):\n        self._send_installer_post_request(release_url, offline_install_name, authorization_string,\n                                          ['make', 'dist-offline'])\n\n    def _send_github_release_posts(self, json_data):\n        # Creating a release:\n        # https://developer.github.com/v3/repos/releases/#create-a-release\n        authorization_string = self._get_authorization_string()\n        GithubReleaser._send_github_create_release_post_request(self.github_api_path + '/releases',\n                                                               
 json_data, authorization_string)\n\n        latest_release = self.release_fetcher.get_latest_release()\n        release_tag = latest_release['tag_name']\n        # Each release has an associated upload url that allows it to link to other resources:\n        # https://developer.github.com/v3/#hypermedia\n        release_url = latest_release['upload_url'].split('{')[0]\n\n        # The expected names of the online and offline installers\n        online_install_name = 'prestoadmin-%s-online.tar.gz' % release_tag\n        offline_install_name = 'prestoadmin-%s-offline.tar.gz' % release_tag\n\n        # Upload release assets:\n        # https://developer.github.com/v3/repos/releases/#upload-a-release-asset\n        self._send_online_installer_post_request(release_url, online_install_name, authorization_string)\n        self._send_offline_installer_post_request(release_url, offline_install_name, authorization_string)\n        print 'Successfully created release and uploaded assets to Github'\n\n    def check_and_create_new_github_release(self):\n        print '\\nCreating a new Github release'\n        self._check_and_set_release_fields()\n\n        json_post_contents = self._build_json_post_contents()\n        self._send_github_release_posts(json_post_contents)\n\n\nclass PypiReleaser:\n    def __init__(self, directory, github_api_directory):\n        self.directory = directory\n        self.github_api_directory = github_api_directory\n        self.release_fetcher = ReleaseFetcher(directory, github_api_directory)\n        self.release_validator = ReleaseValidator(directory)\n\n    def _confirm_pypi_release_state(self):\n        self.release_fetcher.get_and_check_target_commitish()\n        requested_release_tag = self.release_fetcher.get_requested_release_tag()\n        self.release_validator.confirm_all_release_file_changes(requested_release_tag)\n\n    @staticmethod\n    def _check_pypi_success(output):\n        if 'Server response (200): OK' in output:\n            
return True\n        else:\n            return False\n\n    def _run_pypi_command(self, command):\n        try:\n            output = subprocess.check_output(command, stderr=subprocess.STDOUT)\n        except subprocess.CalledProcessError as e:\n            print e.output\n            raise\n        if PypiReleaser._check_pypi_success(output):\n            return True\n        else:\n            print output\n            return False\n\n    def _check_pypi_setup(self):\n        command = ['python', 'setup.py', 'register', '-r', 'pypi']\n        if self._run_pypi_command(command):\n            print 'Setup correctly for Pypi release'\n            return\n        else:\n            exit('Not setup correctly for Pypi release')\n\n    def _submit_pypi_release(self):\n        command = ['python', 'setup.py', 'bdist_wheel', 'upload', '-r', 'pypi']\n        if self._run_pypi_command(command):\n            print 'Released successfully to Pypi'\n            return\n        else:\n            exit('Failed to release to Pypi')\n\n    def create_new_pypi_release(self):\n        print '\\nCreating a new Pypi release'\n        self._confirm_pypi_release_state()\n        self._check_pypi_setup()\n        self._submit_pypi_release()\n\n\nclass release(Command):\n    description = 'create release to github and/or pypi'\n\n    user_options = [('github', None,\n                     'boolean flag indicating if a release should be created for github'),\n                    ('pypi', None,\n                     'boolean flag indicating if a release should be created for pypi'),\n                    ('all', None,\n                     'boolean flag indicating if a release should be created for github and pypi')]\n\n    def initialize_options(self):\n        self.github = False\n        self.pypi = False\n        self.all = True\n\n    def finalize_options(self):\n        if self.github or self.pypi:\n            self.all = False\n\n    def run(self):\n        
github_releaser = GithubReleaser(CURRENT_DIRECTORY, GITHUB_REPOSITORY_API_PATH)\n        pypi_releaser = PypiReleaser(CURRENT_DIRECTORY, GITHUB_REPOSITORY_API_PATH)\n        if self.all:\n            github_releaser.check_and_create_new_github_release()\n            pypi_releaser.create_new_pypi_release()\n        else:\n            if self.github:\n                github_releaser.check_and_create_new_github_release()\n            if self.pypi:\n                pypi_releaser.create_new_pypi_release()\n        print 'Now might be a good time to update the version to SNAPSHOT'\n"
  },
  {
    "path": "requirements.txt",
    "content": "pycparser==2.18 # BSD\nargparse==1.4 # Python\nparamiko==1.15.3  # LGPL\nflake8==2.5.4  # MIT\nmock==1.0.1  # License :: OSI Approved :: BSD License\npy==1.4.26  # MIT license\nSphinx==1.3.1  # BSD\ntox==1.9.2  # http://opensource.org/licenses/MIT\nvirtualenv==12.0.7  # MIT\nwheel==0.23.0  # MIT\nFabric==1.10.1  # License :: OSI Approved :: BSD License\nrequests==2.7.0  # Apache 2.0\ndocker==2.5.1  # Apache License 2.0\ncertifi==2015.4.28 # Mozilla Public License\nnose==1.3.7  # GNU LGPL\nnose-timer==0.6 # MIT\nfudge==1.1.0  # The MIT License\nPyYAML==3.11  # MIT\noverrides==0.5  # Apache License, Version 2.0\nsetuptools==20.1.1  # License :: OSI Approved :: MIT License\npip==8.1.2  # MIT\nretrying==1.3.3  # Apache 2.0\npyjks==0.5.1  # MIT\n"
  },
  {
    "path": "setup.cfg",
    "content": "[wheel]\nuniversal = 0\n[nosetests]\nverbosity=3\n[flake8]\nmax-line-length = 120\n"
  },
  {
    "path": "setup.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# This is necessary for nose to handle multiprocessing correctly\nfrom multiprocessing import util  # noqa\n\ntry:\n    from setuptools import setup, find_packages\nexcept ImportError:\n    from distutils.core import setup\n\nfrom packaging.bdist_prestoadmin import bdist_prestoadmin\nfrom release import release\n\n# Import this from util instead of prestoadmin because prestoadmin has third\n# party dependencies that can't be resolved by setup.py. Util should not.\nfrom util import __version__\n\nwith open('README.md') as readme_file:\n    readme = readme_file.read()\n\n# Requirements for both development and testing are duplicated here\n# and in the requirements.txt. Unfortunately this is required by\n# tox which relies on the existence of both.\n\n# Note that argparse is special. We don't actually depend on argparse, but\n# wheel does. 
If argparse exists in the system libraries, pip wheel won't\n# package it up into the third-party directory, and the resulting dist-offline\n# will fail to install if argparse isn't in the system python libraries.\nrequirements = [\n    'pycparser==2.18',\n    'argparse==1.4',\n    'paramiko==1.15.3',\n    'Fabric==1.10.1',\n    'requests==2.7.0',\n    'overrides==0.5',\n    'pip==8.1.2',\n    'setuptools==20.1.1',\n    'wheel==0.23.0',\n    'flake8==2.5.4',\n    'tox==1.9.2',\n    'retrying==1.3.3',\n    'pyjks==0.5.1'\n]\n\ntest_requirements = [\n    'tox==1.9.2',\n    'nose==1.3.7',\n    'nose-timer==0.6',\n    'mock==1.0.1',\n    'wheel==0.23.0',\n    'docker-py==1.5.0',\n    'certifi==2015.4.28',\n    'fudge==1.1.0',\n    'PyYAML==3.11'\n]\n\n# =====================================================\n# Welcome to HackLand! We monkey patch the _get_rc_file\n# method of PyPIRCCommand so that we can read a .pypirc\n# that is located in the current directory. This enables\n# us to check it in with the code and not require\n# developers to create files in their home directory.\nfrom distutils.config import PyPIRCCommand  # noqa\n\n\ndef get_custom_rc_file(self):\n    home_pypi = os.path.join(os.path.expanduser('~'),\n                             '.pypirc')\n    local_pypi = os.path.join(\n        os.path.dirname(os.path.realpath(__file__)),\n        '.pypirc')\n    return local_pypi if os.path.exists(local_pypi) \\\n        else home_pypi\n\nPyPIRCCommand._get_rc_file = get_custom_rc_file\n# Thank you for visiting HackLand!\n# =====================================================\n\nsetup(\n    name='prestoadmin',\n    version=__version__,\n    description=\"Presto-admin installs, configures, and manages Presto \" + \\\n                \"installations.\",\n    long_description=readme,\n    author=\"PrestoDB Team\",\n    url='https://github.com/prestodb/presto-admin',\n    packages=find_packages(exclude=['*tests*']),\n    package_dir={'prestoadmin':\n                 
'prestoadmin'},\n    package_data={'prestoadmin': ['presto-admin-logging.ini']},\n    include_package_data=True,\n    install_requires=requirements,\n    license=\"APLv2\",\n    zip_safe=False,\n    keywords='prestoadmin',\n    classifiers=[\n        'Development Status :: 2 - Pre-Alpha',\n        'Intended Audience :: Developers',\n        'License :: OSI Approved :: Apache Software License',\n        'Natural Language :: English',\n        \"Programming Language :: Python :: 2\",\n        'Programming Language :: Python :: 2.6',\n        'Programming Language :: Python :: 2.7'\n    ],\n    test_suite='tests',\n    tests_require=test_requirements,\n    cmdclass={'bdist_prestoadmin': bdist_prestoadmin,\n              'release': release},\n    entry_points={'console_scripts': ['presto-admin = prestoadmin.main:main']}\n)\n"
  },
  {
    "path": "tests/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n"
  },
  {
    "path": "tests/bare_image_provider.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAbstract base class for bare image providers.\n\nBare image providers know how to bring bare docker images into existence for\nthe product tests.\n\"\"\"\n\nimport abc\n\nfrom docker import DockerClient\n\n\nclass BareImageProvider(object):\n    __metaclass__ = abc.ABCMeta\n\n    def __init__(self, tag_decoration):\n        super(BareImageProvider, self).__init__()\n        self.tag_decoration = tag_decoration\n\n    @abc.abstractmethod\n    def create_bare_images(self, cluster, master_name, slave_name):\n        \"\"\"Create master and slave images to be tagged with master_name and\n        slave_name, respectively.\"\"\"\n        pass\n\n    def get_tag_decoration(self):\n        \"\"\"Returns a string that's prepended to docker image tags for images\n        based off of the bare image created by the provider.\"\"\"\n        return self.tag_decoration\n\n\n\"\"\"\nProvides bare images from existing tags in Docker. For some of the heftier\nimages, we don't want to go through a long and drawn-out Docker build on a\nregular basis. For these, we count on having an image in Docker that we can\ntag appropriately into the teradatalabs/pa_tests namespace. 
Test cleanup can\ncontinue to obliterate that namespace without disrupting the actual heavyweight\nimages.\n\nAs an additional benefit, this means we can have tests depend on images that\nthe test code doesn't know how to build. That seems like a liability, but it\nmeans that the build process for complex images can be versioned outside of the\npresto-admin codebase.\n\"\"\"\n\n\nclass TagBareImageProvider(BareImageProvider):\n    def __init__(\n            self, base_master_name, base_slave_name, base_tag, tag_decoration):\n        super(TagBareImageProvider, self).__init__(tag_decoration)\n        self.base_master_name = base_master_name\n        self.base_slave_name = base_slave_name\n        self.base_tag = base_tag\n        self.client = DockerClient()\n\n    def create_bare_images(self, cluster, master_name, slave_name):\n        self.client.images.pull(self.base_master_name, self.base_tag)\n        self.client.images.pull(self.base_slave_name, self.base_tag)\n        self.client.api.tag(self.base_master_name + \":\" + self.base_tag, master_name)\n        self.client.api.tag(self.base_slave_name + \":\" + self.base_tag, slave_name)\n"
  },
  {
    "path": "tests/base_cluster.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAbstract base class for clusters\n\nBaseCluster defines the minimum set of methods that a cluster needs to\nimplement in order to be useful.\n\"\"\"\n\nimport abc\nimport sys\n\nfrom tests.product import determine_jdk_directory\n\n\nclass BaseCluster(object):\n    \"\"\"\n    Besides the instance methods defined here, clusters typically have a static\n    factory method that hides some of the complexity of bringing a bare cluster\n    into existence. The parameters to this method vary greatly depending on the\n    nature of the implementation, and so it doesn't make sense to try to\n    include this method in BaseCluster.\n    \"\"\"\n    __metaclass__ = abc.ABCMeta\n\n    @abc.abstractmethod\n    def tear_down(self):\n        \"\"\" Tear down the cluster.\n\n        For ephemeral clusters, this should include destroying the cluster and\n        freeing the associated resources.\n\n        For long-lived clusters, this would mean returning the cluster to a\n        state in which future tests will run successfully. Unfortunately, this\n        means that the tear-down method of a long-lived cluster necessarily\n        knows stuff about how tests mutate the cluster. 
Opportunity for\n        improvement?\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def all_hosts(self):\n        \"\"\"The difference between the all_hosts() method and\n        all_internal_hosts() is that all_hosts() returns the unique, \"outside\n        facing\" hostnames that docker uses. On the other hand\n        all_internal_hosts() returns the more human readable host aliases for\n        the containers used internally between containers. For example the\n        unique master host will look something like\n        'master-07d1774e-72d7-45da-bf84-081cfaa5da9a', whereas the internal\n        master host will be 'master'.\n\n        :return: List of all hosts with the random suffix.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def all_internal_hosts(self):\n        \"\"\"See the docstring for all_hosts() for an explanation of the\n        differences between this and all_hosts().\n\n        Returns a list of all hosts with the random suffix removed.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def get_ip_address_dict(self):\n        \"\"\"Returns a dict containing entries mapping both internal and external\n        hostnames to the IP address of the node. I.e. the resulting dict will\n        contain two entries per host with the same IP address as follows:\n        'master-07d1774e-72d7-45da-bf84-081cfaa5da9a': '192.168.21.79'\n        'master': '192.168.21.79'\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def stop_host(self, host_name):\n        \"\"\"Stops a host. Paradoxically, start_host doesn't seem to be required\n        for the product tests to run successfully.\"\"\"\n        pass\n\n    @abc.abstractmethod\n    def get_down_hostname(self, host_name):\n        \"\"\"This is part of the magic involved in stopping a host. 
If you're\n        rolling a new implementation, you should dig more deeply into the\n        existing implementations, figure out how it all works, and update this\n        comment.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def postinstall(self, installer):\n        \"\"\"Some installers need the cluster to do some work after they're run\n        so as to get some cluster-specific knowledge into the files created by\n        the installer. In particular, clusters that support persisting the\n        state of the hosts and bringing up a new cluster from that state may\n        need to update host information on the new cluster.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def exec_cmd_on_host(self, host, cmd, user=None, raise_error=True,\n                         tty=False, invoke_sudo=False):\n        pass\n\n    @abc.abstractmethod\n    def run_script_on_host(self, script_contents, host, tty=True):\n        \"\"\"Create a script on the remote host with the given content and execute it.\n\n        NOTE: if tty is set to True then the results of the execution on stdout will\n        have ^M (carriage return) at the end of every line. 
If doing string\n        comparison of the output, turn off tty.\n\n        :param script_contents: a string with the script contents\n        :param host: the host where to execute the script\n        :param tty: whether to execute the script with tty enabled\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def write_content_to_host(self, content, remote_path, host):\n        pass\n\n    @abc.abstractmethod\n    def copy_to_host(self, source_path, host, **kwargs):\n        pass\n\n    @abc.abstractproperty\n    def master(self):\n        \"\"\"               +++++ WARNING +++++\n        When overriding this property make sure the child class uses the @property decorator.\n        The declaration of the property in the child should look like this:\n\n        @property\n        def master(self):\n            return self._master\n\n        Returns the hostname of the master node of the cluster\"\"\"\n        pass\n\n    @abc.abstractproperty\n    def user(self):\n        \"\"\"               +++++ WARNING +++++\n        When overriding this property make sure the child class uses the @property decorator.\n        The declaration of the property in the child should look like this:\n\n        @property\n        def user(self):\n            return self._user\n\n        Returns the user with which to execute commands on the cluster\"\"\"\n        pass\n\n    @abc.abstractproperty\n    def rpm_cache_dir(self):\n        \"\"\"               +++++ WARNING +++++\n        When overriding this property make sure the child class uses the @property decorator.\n        The declaration of the property in the child should look like this:\n\n        @property\n        def rpm_cache_dir(self):\n            return self._rpm_cache_dir\n\n        Return directory where to cache the presto RPM. 
For DockerCluster this can be the\n        mount directory but for ConfigurableCluster where uploading the RPM involves a large\n        latency, the RPM cache has to be different so it doesn't get deleted before every test.\"\"\"\n        pass\n\n    @abc.abstractproperty\n    def mount_dir(self):\n        \"\"\"               +++++ WARNING +++++\n        When overriding this property make sure the child class uses the @property decorator.\n        The declaration of the property in the child should look like this:\n\n        @property\n        def mount_dir(self):\n            return self._mount_dir\n\n        Return the mount directory of the cluster. The mount directory is the place where files, scripts\n        and other resources needed by a test are uploaded. The mount directory may or may not be\n        ephemeral; see the implementation of the tear_down() method to confirm.\"\"\"\n        pass\n\n    def ensure_correct_execution_environment(self):\n        \"\"\"Make sure the cluster environment we're executing on conforms to our\n        expectations.\n\n        For now just check that the cluster has a single JDK installed.\n\n        :return: without error if only a single JDK is installed, otherwise exit\n        \"\"\"\n        try:\n            determine_jdk_directory(self)\n        except Exception as e:\n            sys.stderr.write(e.message)\n            sys.stderr.flush()\n            sys.exit(1)\n"
  },
  {
    "path": "tests/base_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAbstract base class for installers.\n\"\"\"\n\nimport abc\n\n\nclass BaseInstaller(object):\n    __metaclass__ = abc.ABCMeta\n\n    @staticmethod\n    @abc.abstractmethod\n    def get_dependencies():\n        \"\"\"Returns a list of installers that need to be run prior to running\n        this one. Dependencies are considered satisfied if their\n        assert_installed() returns without asserting.\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    def install(self):\n        \"\"\"Run the installer on the cluster.\n\n        Installers may install something on one or more hosts of a cluster.\n        After calling install(), the installer's assert_installed method should\n        pass.\n        \"\"\"\n        pass\n\n    @abc.abstractmethod\n    def get_keywords(self, *args, **kwargs):\n        \"\"\"Get a map of keyword: value mappings.\n\n        We do a bunch of string formatting in the product tests when comparing\n        actual command output to expected output. Installers can use this\n        method to return additional keywords to be used in string formatting.\n        \"\"\"\n        pass\n\n    @staticmethod\n    @abc.abstractmethod\n    def assert_installed(testcase):\n        \"\"\"Check the cluster and assert if the installer hasn't been run. 
This\n        should return without asserting if install() has been run.\n        \"\"\"\n        raise NotImplementedError()\n"
  },
  {
    "path": "tests/base_test_case.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nGeneral utilities for running tests.  To be able to use the methods in\nBaseTestCase, your test cases should extend BaseTestCase rather than\nunittest.TestCase\n\"\"\"\n\nimport StringIO\nimport copy\nimport logging\nimport os\nimport re\nimport sys\nimport tempfile\nimport unittest\n\nfrom fabric.state import env\n\nfrom prestoadmin.util.constants import LOG_DIR_ENV_VARIABLE\n\n\nclass BaseTestCase(unittest.TestCase):\n    test_stdout = None\n    test_stderr = None\n    old_stdout = sys.__stdout__\n    old_stderr = sys.__stderr__\n    env_vars = None\n\n    def setUp(self, capture_output=False):\n        if capture_output:\n            self.capture_stdout_stderr()\n        self.env_vars = copy.deepcopy(env)\n        logging.disable(logging.CRITICAL)\n        self.redirect_log_to_tmp()\n\n    def capture_stdout_stderr(self):\n        sys.stdout = self.test_stdout = StringIO.StringIO()\n        sys.stderr = self.test_stderr = StringIO.StringIO()\n\n    def redirect_log_to_tmp(self):\n        # put log files in a temporary dir\n        self.__old_dir = os.environ.get(LOG_DIR_ENV_VARIABLE)\n        self.__temporary_dir_path = tempfile.mkdtemp(prefix='app-int-test-')\n        os.environ[LOG_DIR_ENV_VARIABLE] = self.__temporary_dir_path\n\n    def restore_log_and_delete_temp_dir(self):\n        # restore the log location\n        if self.__old_dir:\n            
os.environ.update({LOG_DIR_ENV_VARIABLE: self.__old_dir})\n        else:\n            os.environ.pop(LOG_DIR_ENV_VARIABLE)\n\n        # clean up the temporary directory\n        os.system('rm -rf ' + self.__temporary_dir_path)\n\n    def restore_stdout_stderr(self):\n        if self.test_stdout:\n            self.test_stdout.close()\n        sys.stdout = self.old_stdout\n\n        if self.test_stderr:\n            self.test_stderr.close()\n        sys.stderr = self.old_stderr\n\n    def restore_stdout_stderr_keep_open(self):\n        sys.stdout = self.old_stdout\n        sys.stderr = self.old_stderr\n\n    # This method is equivalent to Python 2.7's unittest.assertIsNone()\n    def assertIsNone(self, foo, msg=None):\n        self.assertTrue(foo is None, msg=msg)\n\n    # This method is equivalent to Python 2.7's unittest.assertIn()\n    def assertIn(self, member, container, msg=None):\n        self.assertTrue(member in container, msg=msg)\n\n    # This method is equivalent to Python 2.7's unittest.assertNotIn()\n    def assertNotIn(self, member, container, msg=None):\n        self.assertTrue(member not in container, msg=msg)\n\n    # This method is equivalent to Python 2.7's unittest.assertRaisesRegexp()\n    def assertRaisesRegexp(self, expected_exception, expected_regexp,\n                           callable_object, *args, **kwargs):\n        # Copy kwargs so we remove msg from the copy before passing it into\n        # callable_object. 
This lets us use this assertion with callables that\n        # don't expect to get an msg parameter.\n        callable_kwargs = kwargs.copy()\n        msg = ''\n\n        if 'msg' in kwargs:\n            del callable_kwargs['msg']\n            if kwargs['msg']:\n                msg = '\\n' + kwargs['msg']\n\n        try:\n            callable_object(*args, **callable_kwargs)\n        except expected_exception as e:\n            self.assertRegexpMatches(str(e), expected_regexp, msg)\n        else:\n            self.fail(\"Expected exception \" + str(expected_exception) +\n                      \" not raised\" + msg)\n\n    def assertRaisesMessageIgnoringOrder(self, expected_exception,\n                                         expected_msg, callable_object,\n                                         *args, **kwargs):\n        try:\n            callable_object(*args, **kwargs)\n        except expected_exception as e:\n            self.assertEqualIgnoringOrder(expected_msg, str(e))\n        else:\n            self.fail(\"Expected exception \" + str(expected_exception) +\n                      \" not raised\")\n\n    def assertLazyMessage(self, msg_func, assert_function, *args, **kwargs):\n        try:\n            assert_function(*args, **kwargs)\n        except AssertionError:\n            self.fail(msg=msg_func())\n\n    def _format_regexp_not_found(self, msg, regexp, text):\n        return '%s:\\n' \\\n               '\\t\\t======== vv REGEXP vv ========\\n%s\\n' \\\n               '\\t\\t========   not found in   ========\\n%s\\n' \\\n               '\\t\\t======== ^^  TEXT  ^^ ========\\n' % (msg, regexp, text)\n\n    # equivalent to python 2.7's unittest.assertRegexpMatches()\n    def assertRegexpMatches(\n            self, text, expected_regexp, msg=\"Regexp didn't match\"):\n        msg = self._format_regexp_not_found(msg, expected_regexp, text)\n        self.assertTrue(re.search(expected_regexp, text), msg)\n\n    def assertRegexpMatchesLineByLine(self, 
actual_lines,\n                                      expected_regexp_lines, msg=None):\n        for expected_regexp, actual_line in zip(sorted(expected_regexp_lines),\n                                                sorted(actual_lines)):\n            try:\n                self.assertRegexpMatches(actual_line, expected_regexp, msg=msg)\n            except AssertionError:\n                self.assertEqualIgnoringOrder('\\n'.join(actual_lines),\n                                              '\\n'.join(expected_regexp_lines))\n\n    def remove_runs_once_flag(self, callable_obj):\n        # since we annotated show with @runs_once, we need to delete the\n        # attribute the Fabric decorator gives it to indicate that it has\n        # already run once in this session\n        if hasattr(callable_obj, 'return_value'):\n            delattr(callable_obj.wrapped, 'return_value')\n\n    def assertEqualIgnoringOrder(self, one, two):\n        self.assertEqual([line.rstrip() for line in sorted(one.splitlines())],\n                         [line.rstrip() for line in sorted(two.splitlines())])\n\n    def tearDown(self):\n        self.restore_stdout_stderr()\n        env.clear()\n        env.update(self.env_vars)\n        logging.disable(logging.NOTSET)\n        self.restore_log_and_delete_temp_dir()\n"
  },
  {
    "path": "tests/configurable_cluster.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\" Cluster object used to control a cluster that can be configured with\na yaml file.\n\nTest writers should use this module for all of their cluster related needs.\n\"\"\"\n\nimport fnmatch\nimport os\nimport tempfile\nimport uuid\nfrom subprocess import check_call\n\nimport paramiko\nimport yaml\nfrom prestoadmin import main_dir\nfrom tests.base_cluster import BaseCluster\nfrom tests.product.config_dir_utils import get_config_file_path, get_install_directory, get_config_directory\n\nCONFIG_FILE_GLOB = r'*.yaml'\nDIST_DIR = os.path.join(main_dir, 'tmp/installer')\n\n\nclass ConfigurableCluster(BaseCluster):\n    \"\"\"Start/stop/control/query a cluster defined by a configuration file.\n\n    This class allows you to run the presto-admin product tests on a real\n    cluster.\n\n    The configuration file must specify one master and three slaves, and a\n    user. That user must have sudo access on the cluster. If you want to\n    teardown a cluster that already has presto installed, specify\n    teardown_existing_cluster: true. 
An example config on vagrant:\n\n    master: '172.16.1.10'\n    slaves: ['172.16.1.11', '172.16.1.12', '172.16.1.13']\n    user: root\n    teardown_existing_cluster: true\n    key_path: /path/to/cluster-key.pem\n    mount_point: /home/ec2-user/presto-admin\n    rpm_cache_dir: /home/ec2-user/presto-rpm-cache\n    \"\"\"\n\n    def __init__(self, config_filename):\n        with open(os.path.join(main_dir, config_filename)) as config_file:\n            config = yaml.safe_load(config_file)\n\n        self._master = config['master']\n        if not isinstance(self.master, str):\n            raise Exception('Must have just one master with type string.')\n\n        self.slaves = config['slaves']\n        if not isinstance(self.slaves, list) or len(self.slaves) != 3:\n            raise Exception('Must specify three slaves in the config file.')\n\n        self.internal_master = 'master'\n        self.internal_slaves = ['slave1', 'slave2', 'slave3']\n        self._user = config['user']\n\n        self.key_path = config['key_path']\n        if not os.path.exists(self.key_path):\n            raise Exception('Key path specified {path} does not exist.'.format(\n                path=self.key_path))\n\n        self.config = config\n        self._mount_dir = config['mount_point']\n        self._rpm_cache_dir = config['rpm_cache_dir']\n\n    @staticmethod\n    def check_for_cluster_config():\n        config_name = fnmatch.filter(os.listdir(main_dir), CONFIG_FILE_GLOB)\n        if config_name:\n            return config_name[0]\n        else:\n            return None\n\n    def all_hosts(self):\n        return self.slaves + [self.master]\n\n    def all_internal_hosts(self, stopped_host=None):\n        internal_hosts = self.internal_slaves + [self.internal_master]\n        return internal_hosts\n\n    def get_dist_dir(self, unique):\n        if unique:\n            return os.path.join(DIST_DIR, self.master)\n        else:\n            return DIST_DIR\n\n    def tear_down(self):\n        for 
host in self.all_hosts():\n            # Remove the rm -rf /var/log/presto when the following issue\n            # is resolved https://github.com/prestodb/presto-admin/issues/226\n            script = \"\"\"\n            sudo service presto stop\n            sudo rpm -e presto-server-rpm\n            rm -rf {install_dir}\n            rm -rf ~/prestoadmin*.tar.gz\n            rm -rf {config_dir}\n            sudo rm -rf /etc/presto/\n            sudo rm -rf /usr/lib/presto/\n            sudo rm -rf /tmp/presto-debug\n            sudo rm -rf /tmp/presto-debug-remote\n            sudo rm -rf /var/log/presto\n            rm -rf {mount_dir}\n            \"\"\".format(install_dir=get_install_directory(),\n                       config_dir=get_config_directory(),\n                       mount_dir=self.mount_dir)\n            self.run_script_on_host(script, host)\n\n    def stop_host(self, host_name):\n        if host_name not in self.all_hosts():\n            raise Exception('Must specify external hostname to stop_host')\n\n        # Change the topology to something that doesn't exist\n        ips = self.get_ip_address_dict()\n        down_hostname = self.get_down_hostname(host_name)\n        self.exec_cmd_on_host(\n            self.master,\n            'sed -i s/%s/%s/g %s' % (host_name, down_hostname, get_config_file_path())\n        )\n        self.exec_cmd_on_host(\n            self.master,\n            'sed -i s/%s/%s/g %s' % (ips[host_name], down_hostname, get_config_file_path())\n        )\n        index = self.all_hosts().index(host_name)\n        self.exec_cmd_on_host(\n            self.master,\n            'sed -i s/%s/%s/g %s' % (self.all_internal_hosts()[index], down_hostname, get_config_file_path())\n        )\n\n        if index >= len(self.internal_slaves):\n            self.internal_master = down_hostname\n        else:\n            self.internal_slaves[index] = down_hostname\n\n    def get_down_hostname(self, host_name):\n        return '1.0.0.0'\n\n    
def exec_cmd_on_host(self, host, cmd, user=None, raise_error=True,\n                         tty=False, invoke_sudo=False):\n        # If the corresponding variable is set, invoke command with sudo since EMR's login\n        # user is ec2-user. If sudo is already present in the command then no error will occur\n        # as arbitrary nesting of sudo is allowed.\n        if invoke_sudo:\n            cmd = 'sudo ' + cmd\n\n        if user is None:\n            user = self.user\n        # We need to execute the commands on the external, not internal, host.\n        if host not in self.all_hosts():\n            index = self.all_internal_hosts().index(host)\n            host = self.all_hosts()[index]\n        ssh = paramiko.SSHClient()\n        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n        ssh.connect(host, username=user, key_filename=self.key_path,\n                    timeout=180)\n        stdin, stdout, stderr = ssh.exec_command(cmd, get_pty=True)\n        stdin.close()\n        output = ''.join(stdout.readlines()).replace('\\r', '') \\\n            .encode('ascii', 'ignore')\n        exit_status = stdout.channel.recv_exit_status()\n        ssh.close()\n        if exit_status and raise_error:\n            raise OSError(exit_status, output)\n        return output\n\n    @staticmethod\n    def start_bare_cluster(config_filename, testcase, assert_installed):\n        cluster = ConfigurableCluster(config_filename)\n        if 'teardown_existing_cluster' in cluster.config \\\n                and cluster.config['teardown_existing_cluster']:\n            cluster.tear_down()\n        elif cluster._presto_is_installed(testcase, assert_installed):\n            raise Exception('Cluster already has Presto installed, '\n                            'either uninstall Presto or specify '\n                            '\\'teardown_existing_cluster: true\\' in the '\n                            'cluster.yaml file.')\n        return cluster\n\n    def 
run_script_on_host(self, script_contents, host, tty=True):\n        temp_script = '~/tmp.sh'\n        self.write_content_to_host('#!/bin/bash\\n%s' % script_contents,\n                                   temp_script, host)\n        self.exec_cmd_on_host(host, 'chmod +x %s' % temp_script)\n        return self.exec_cmd_on_host(host, temp_script, tty=tty)\n\n    def write_content_to_host(self, content, remote_path, host):\n        with tempfile.NamedTemporaryFile('w', dir='/tmp', delete=False) \\\n                as temp_config_file:\n            temp_config_file.write(content)\n            temp_config_file.close()\n            self.copy_to_host(temp_config_file.name, host,\n                              dest_path=remote_path)\n            check_call(['rm', temp_config_file.name])\n\n    def copy_to_host(self, source_path, host, dest_path=None):\n        if not dest_path:\n            dest_path = os.path.join(self.mount_dir,\n                                     os.path.basename(source_path))\n        self.exec_cmd_on_host(host, 'mkdir -p {dir}'.format(dir=os.path.dirname(dest_path)))\n\n        ssh = paramiko.SSHClient()\n        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n        ssh.connect(host, username=self.user, key_filename=self.key_path,\n                    timeout=180)\n\n        # Upload to dummy location because paramiko doesn't allow SFTP using\n        # sudo when logged in as a non root user. 
Due to this limitation, Fabric\n        # uses the same methodology to upload files.\n        dummy_path = '/tmp/{random_dir}/{dest_dir}'.format(\n            random_dir=str(uuid.uuid1()), dest_dir=os.path.basename(dest_path))\n        self.exec_cmd_on_host(host, 'mkdir -p {dir}'.format(dir=os.path.dirname(dummy_path)))\n        sftp = ssh.open_sftp()\n        sftp.put(source_path, dummy_path)\n        sftp.close()\n\n        # Move to final location using sudo\n        self.exec_cmd_on_host(host, 'mv {source} {dest}'.format(source=dummy_path, dest=dest_path), invoke_sudo=True)\n\n        # Remove dummy path directory\n        self.exec_cmd_on_host(host, 'rm -rf {dir}'.format(dir=os.path.dirname(dummy_path)))\n\n        ssh.close()\n\n    # Since ConfigurableCluster is configured using external IPs, those act as\n    # hosts and so the dict returned contains an identity mapping from external IPs\n    # to external IPs in addition to internal host to internal IP mappings\n    def get_ip_address_dict(self):\n        ip_addresses = {}\n        for ip in self.all_hosts():\n            ip_addresses[ip] = ip\n\n        hosts_file = self.exec_cmd_on_host(self.master, 'cat /etc/hosts').splitlines()\n        for internal_host in self.all_internal_hosts():\n            ip_addresses[internal_host] = self._get_ip_from_hosts_file(\n                hosts_file, internal_host)\n        return ip_addresses\n\n    @staticmethod\n    def _get_ip_from_hosts_file(hosts_file, host):\n        for line in hosts_file:\n            if host in line:\n                return line.split(' ')[0]\n        return None\n\n    def _presto_is_installed(self, testcase, assert_installed):\n        for host in self.all_hosts():\n            try:\n                assert_installed(testcase, host, cluster=self)\n            except AssertionError:\n                return False\n        return True\n\n    def postinstall(self, installer):\n        pass\n\n    @property\n    def rpm_cache_dir(self):\n        
return self._rpm_cache_dir\n\n    @property\n    def mount_dir(self):\n        return self._mount_dir\n\n    @property\n    def user(self):\n        return self._user\n\n    @property\n    def master(self):\n        return self._master\n"
  },
  {
    "path": "tests/docker_cluster.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Docker related functions, constants and objects needed by product tests.\n\nTest writers should use this module for all of their docker related needs\nand not directly call into the docker-py API.\n\"\"\"\n\nimport errno\nimport os\nimport shutil\nimport subprocess\nimport sys\nimport uuid\n\nfrom docker import DockerClient\nfrom docker.errors import APIError\nfrom docker.utils.utils import kwargs_from_env\nfrom retrying import retry\n\nfrom prestoadmin import main_dir\nfrom tests.base_cluster import BaseCluster\nfrom tests.product.constants import \\\n    DEFAULT_DOCKER_MOUNT_POINT, DEFAULT_LOCAL_MOUNT_POINT\n\nDIST_DIR = os.path.join(main_dir, 'tmp/installer')\n\n_DOCKER_START_TIMEOUT = 60000\n_DOCKER_START_WAIT = 1000\n\n\nclass NotStartedException(Exception):\n    def __init__(self, hosts):\n        super(NotStartedException, self).__init__(\"Hosts not yet started %s\" %\n                                                  \", \".join(hosts))\n\n\nclass DockerCluster(BaseCluster):\n    IMAGE_NAME_BASE = os.path.join('teradatalabs', 'pa_test')\n    BARE_CLUSTER_TYPE = 'bare'\n\n    \"\"\"Start/stop/control/query arbitrary clusters of docker containers.\n\n    This class is aimed at product test writers to create docker containers\n    for testing purposes.\n\n    \"\"\"\n    def __init__(self, master_host, slave_hosts,\n                 local_mount_dir, 
docker_mount_dir):\n        # see PyDoc for all_internal_hosts() for an explanation on the\n        # difference between an internal and regular host\n        self.internal_master = master_host\n        self.internal_slaves = slave_hosts\n        self._master = master_host + '-' + str(uuid.uuid4())\n        self.slaves = [slave + '-' + str(uuid.uuid4())\n                       for slave in slave_hosts]\n        # the root path for all local mount points; to get a particular\n        # container mount point call get_local_mount_dir()\n        self.local_mount_dir = local_mount_dir\n        self._mount_dir = docker_mount_dir\n\n        kwargs = kwargs_from_env()\n        if 'tls' in kwargs:\n            kwargs['tls'].assert_hostname = False\n        kwargs['timeout'] = 300\n        self.client = DockerClient(**kwargs)\n        self._user = 'root'\n        self._network_name = 'presto-admin-test-' + str(uuid.uuid4())\n\n        DockerCluster.__check_if_docker_exists()\n\n    def all_hosts(self):\n        return self.slaves + [self.master]\n\n    def all_internal_hosts(self):\n        return [host.split('-')[0] for host in self.all_hosts()]\n\n    def get_local_mount_dir(self, host):\n        return os.path.join(self.local_mount_dir,\n                            self.__get_unique_host(host))\n\n    def get_dist_dir(self, unique):\n        if unique:\n            return os.path.join(DIST_DIR, self.master)\n        else:\n            return DIST_DIR\n\n    def __get_unique_host(self, host):\n        matches = [unique_host for unique_host in self.all_hosts()\n                   if unique_host.startswith(host)]\n        if matches:\n            return matches[0]\n        elif host in self.all_hosts():\n            return host\n        else:\n            raise DockerClusterException(\n                'Specified host: {0} does not exist.'.format(host))\n\n    @staticmethod\n    def __check_if_docker_exists():\n        try:\n            subprocess.call(['docker', 
'--version'])\n        except OSError:\n            sys.exit('Docker is not installed. Try installing it with '\n                     'presto-admin/bin/install-docker.sh.')\n\n    def start_containers(self, master_image, slave_image=None, cmd=None, **kwargs):\n        self._create_host_mount_dirs()\n        self._create_network()\n\n        self._create_and_start_containers(master_image, slave_image, cmd, **kwargs)\n        self._ensure_docker_containers_started()\n\n    def tear_down(self):\n        for container_name in self.all_hosts():\n            self._tear_down_container(container_name)\n        self._remove_host_mount_dirs()\n        self._remove_network()\n\n    def _tear_down_container(self, container_name):\n        try:\n            shutil.rmtree(self.get_dist_dir(unique=True))\n        except OSError as e:\n            # no such file or directory\n            if e.errno != errno.ENOENT:\n                raise\n\n        try:\n            self.stop_host(container_name)\n            container = self.client.containers.get(container_name)\n            container.remove(v=True, force=True)\n        except APIError as e:\n            # container does not exist\n            if e.response.status_code != 404:\n                raise\n\n    def stop_host(self, container_name):\n        container = self.client.containers.get(container_name)\n        container.stop()\n        container.wait()\n\n    def start_host(self, container_name):\n        container = self.client.containers.get(container_name)\n        container.start()\n\n    def get_down_hostname(self, host_name):\n        return host_name\n\n    def _remove_host_mount_dirs(self):\n        for container_name in self.all_hosts():\n            try:\n                shutil.rmtree(\n                    self.get_local_mount_dir(container_name))\n            except OSError as e:\n                # no such file or directory\n                if e.errno != errno.ENOENT:\n                    raise\n\n    def 
_create_host_mount_dirs(self):\n        for container_name in self.all_hosts():\n            try:\n                os.makedirs(\n                    self.get_local_mount_dir(container_name))\n            except OSError as e:\n                # file exists\n                if e.errno != errno.EEXIST:\n                    raise\n\n    def _create_network(self):\n        self.client.networks.create(self._network_name)\n\n    def _get_network(self):\n        return self.client.networks.get(self._network_name)\n\n    def _remove_network(self):\n        self._get_network().remove()\n\n    def _create_and_start_containers(self, master_image, slave_image=None, cmd=None, **kwargs):\n        if slave_image:\n            for container_name in self.slaves:\n                self._create_container(slave_image, container_name, container_name.split('-')[0], cmd, **kwargs)\n                container = self.client.containers.get(container_name)\n                container.start()\n\n        self._create_container(\n            master_image,\n            self.master,\n            hostname=self.internal_master,\n            cmd=cmd,\n            **kwargs)\n        container = self.client.containers.get(self.master)\n        container.start()\n\n    def _create_container(self, image, container_name, hostname, cmd, **kwargs):\n        master_mount_dir = self.get_local_mount_dir(container_name)\n        self.client.containers.create(\n                               image,\n                               detach=True,\n                               name=container_name,\n                               hostname=hostname,\n                               volumes={master_mount_dir: {'bind': self.mount_dir, 'mode': 'rw'}},\n                               command=cmd,\n                               mem_limit='2g',\n                               network=None,\n                               **kwargs)\n\n        self._get_network().connect(\n            container_name,\n            
aliases=[hostname.split('-')[0]])\n\n    @retry(stop_max_delay=_DOCKER_START_TIMEOUT, wait_fixed=_DOCKER_START_WAIT)\n    def _ensure_docker_containers_started(self):\n        # @retry re-runs this check until every container is running and its\n        # essential services are up, or until the timeout expires\n        not_started = []\n        for host in self.all_hosts():\n            is_started = self.client.containers.get(host).status == 'running'\n            if is_started:\n                is_started = self._are_centos_container_services_up(host)\n            if not is_started:\n                not_started.append(host)\n        if not_started:\n            raise NotStartedException(not_started)\n\n    def _are_centos_container_services_up(self, host):\n        \"\"\"Some essential services in our CentOS containers take some time\n        to start after the container itself is up. 
This function checks\n        whether those services are up and returns a boolean accordingly.\n        Specifically, we check that the app-admin user has been created\n        and that the ssh daemon is up, as well as that the SSH keys are\n        in the right place.\n\n        Args:\n          host: the host to check.\n\n        Returns:\n          True if the specified services have started, False otherwise.\n\n        \"\"\"\n        ps_output = self.exec_cmd_on_host(host, 'ps')\n        # also ensure that the app-admin user exists\n        try:\n            user_output = self.exec_cmd_on_host(\n                host, 'grep app-admin /etc/passwd'\n            )\n            user_output += self.exec_cmd_on_host(host, 'stat /home/app-admin')\n        except OSError:\n            user_output = ''\n        if 'sshd_bootstrap' in ps_output or 'sshd\\n' not in ps_output\\\n                or not user_output:\n            return False\n        # check for .ssh being in the right place\n        try:\n            ssh_output = self.exec_cmd_on_host(host, 'ls /home/app-admin/.ssh')\n            if 'id_rsa' not in ssh_output:\n                return False\n        except OSError:\n            return False\n        return True\n\n    def exec_cmd_on_host(self, host, cmd, user=None, raise_error=True,\n                         tty=False, invoke_sudo=False):\n        ex = self.client.api.exec_create(\n            self.__get_unique_host(host),\n            ['sh', '-c', cmd],\n            tty=tty,\n            user=user)\n        output = self.client.api.exec_start(ex['Id'], tty=tty)\n        exit_code = self.client.api.exec_inspect(ex['Id'])['ExitCode']\n        if raise_error and exit_code:\n            raise OSError(exit_code, output)\n        return output\n\n    @staticmethod\n    def _get_tag_basename(bare_image_provider, cluster_type, ms):\n        return '_'.join(\n            [bare_image_provider.get_tag_decoration(), cluster_type, ms])\n\n    @staticmethod\n    def 
_get_master_image_name(bare_image_provider, cluster_type):\n        return os.path.join(DockerCluster.IMAGE_NAME_BASE,\n                            DockerCluster._get_tag_basename(\n                                bare_image_provider, cluster_type, 'master'))\n\n    @staticmethod\n    def _get_slave_image_name(bare_image_provider, cluster_type):\n        return os.path.join(DockerCluster.IMAGE_NAME_BASE,\n                            DockerCluster._get_tag_basename(\n                                bare_image_provider, cluster_type, 'slave'))\n\n    @staticmethod\n    def _get_image_names(bare_image_provider, cluster_type):\n        dc = DockerCluster\n        return (dc._get_master_image_name(bare_image_provider, cluster_type),\n                dc._get_slave_image_name(bare_image_provider, cluster_type))\n\n    @staticmethod\n    def start_cluster(bare_image_provider, cluster_type, master_host='master',\n                      slave_hosts=None, **kwargs):\n        if slave_hosts is None:\n            slave_hosts = ['slave1', 'slave2', 'slave3']\n        created_bare = False\n        dc = DockerCluster\n\n        centos_cluster = DockerCluster(master_host, slave_hosts,\n                                       DEFAULT_LOCAL_MOUNT_POINT,\n                                       DEFAULT_DOCKER_MOUNT_POINT)\n\n        master_name, slave_name = dc._get_image_names(\n            bare_image_provider, cluster_type)\n\n        if not dc._check_for_images(master_name, slave_name):\n            master_name, slave_name = dc._get_image_names(\n                bare_image_provider, dc.BARE_CLUSTER_TYPE)\n            if not dc._check_for_images(master_name, slave_name):\n                bare_image_provider.create_bare_images(\n                    centos_cluster, master_name, slave_name)\n            created_bare = True\n\n        centos_cluster.start_containers(master_name, slave_name, **kwargs)\n\n        return centos_cluster, created_bare\n\n    @staticmethod\n    def 
_check_for_images(master_image_name, slave_image_name, tag='latest'):\n        master_repotag = '%s:%s' % (master_image_name, tag)\n        slave_repotag = '%s:%s' % (slave_image_name, tag)\n        client = DockerClient(timeout=180)\n        images = client.images.list()\n        has_master_image = False\n        has_slave_image = False\n        for image in images:\n            if master_repotag in image.tags:\n                has_master_image = True\n            if slave_repotag in image.tags:\n                has_slave_image = True\n        return has_master_image and has_slave_image\n\n    def commit_images(self, bare_image_provider, cluster_type):\n        container = self.client.containers.get(self.master)\n        container.commit(self._get_master_image_name(bare_image_provider, cluster_type))\n        if self.slaves:\n            container = self.client.containers.get(self.slaves[0])\n            container.commit(self._get_slave_image_name(bare_image_provider, cluster_type))\n\n    def run_script_on_host(self, script_contents, host, tty=True):\n        temp_script = '/tmp/tmp.sh'\n        self.write_content_to_host('#!/bin/bash\\n%s' % script_contents,\n                                   temp_script, host)\n        self.exec_cmd_on_host(host, 'chmod +x %s' % temp_script)\n        return self.exec_cmd_on_host(host, temp_script, tty=tty)\n\n    def write_content_to_host(self, content, path, host):\n        filename = os.path.basename(path)\n        dest_dir = os.path.dirname(path)\n        host_local_mount_point = self.get_local_mount_dir(host)\n        local_path = os.path.join(host_local_mount_point, filename)\n\n        with open(local_path, 'w') as config_file:\n            config_file.write(content)\n\n        self.exec_cmd_on_host(host, 'mkdir -p ' + dest_dir)\n        self.exec_cmd_on_host(\n            host, 'cp %s %s' % (os.path.join(self.mount_dir, filename),\n                                dest_dir))\n\n    def copy_to_host(self, source_path, 
dest_host, **kwargs):\n        shutil.copy(source_path, self.get_local_mount_dir(dest_host))\n\n    def get_ip_address_dict(self):\n        ip_addresses = {}\n        for host, internal_host in zip(self.all_hosts(),\n                                       self.all_internal_hosts()):\n            inspect = self.client.api.inspect_container(host)\n            ip_addresses[host] = inspect['NetworkSettings']['IPAddress']\n            ip_addresses[internal_host] = \\\n                inspect['NetworkSettings']['IPAddress']\n        return ip_addresses\n\n    def _post_presto_install(self):\n        for worker in self.slaves:\n            self.run_script_on_host(\n                'sed -i /node.id/d /etc/presto/node.properties; '\n                'uuid=$(uuidgen); '\n                'echo node.id=$uuid >> /etc/presto/node.properties',\n                worker\n            )\n\n    def postinstall(self, installer):\n        from tests.product.standalone.presto_installer \\\n            import StandalonePrestoInstaller\n\n        _post_install_hooks = {\n            StandalonePrestoInstaller: DockerCluster._post_presto_install\n        }\n\n        hook = _post_install_hooks.get(installer, None)\n        if hook:\n            hook(self)\n\n    @property\n    def rpm_cache_dir(self):\n        return self._mount_dir\n\n    @property\n    def mount_dir(self):\n        return self._mount_dir\n\n    @property\n    def user(self):\n        return self._user\n\n    @property\n    def master(self):\n        return self._master\n\n\nclass DockerClusterException(Exception):\n    def __init__(self, msg):\n        self.msg = msg\n"
  },
  {
    "path": "tests/integration/__init__.py",
    "content": ""
  },
  {
    "path": "tests/integration/util/__init__.py",
    "content": ""
  },
  {
    "path": "tests/integration/util/data/presto-admin-logging.ini",
    "content": "[loggers]\nkeys=root\n\n[logger_root]\nlevel=DEBUG\nhandlers=file\n\n[handlers]\nkeys=file\n\n[handler_file]\nclass=handlers.TimedRotatingFileHandler\nformatter=verbose\nargs=('%(log_file_path)s', 'D', 7)\n\n[formatters]\nkeys=verbose\n\n[formatter_verbose]\nformat=%(asctime)s|%(process)d|%(thread)d|%(name)s|%(levelname)s|%(message)s\n"
  },
  {
    "path": "tests/integration/util/test_application.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport os\nimport tempfile\nfrom unittest import TestCase\n\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.application import Application\nfrom prestoadmin.util.constants import LOG_DIR_ENV_VARIABLE\nfrom prestoadmin.util.local_config_util import get_log_directory\n\nEXECUTABLE_NAME = 'foo.py'\nAPPLICATION_NAME = 'foo'\n\n\nclass ApplicationTest(TestCase):\n    def setUp(self):\n        # put log files in a temporary dir\n        self.__old_prestoadmin_log = get_log_directory()\n        self.__temporary_dir_path = tempfile.mkdtemp(prefix='app-int-test-')\n        os.environ[LOG_DIR_ENV_VARIABLE] = self.__temporary_dir_path\n\n        # monkey patch in a fake logging config file\n        self.__old_log_dirs = list(constants.LOGGING_CONFIG_FILE_DIRECTORIES)\n        constants.LOGGING_CONFIG_FILE_DIRECTORIES.append(\n            os.path.join(os.path.dirname(__file__), 'data')\n        )\n\n        # basicConfig is a noop if there are already handlers\n        # present on the root logger, remove them all here\n        self.__old_log_handlers = []\n        for handler in logging.root.handlers:\n            self.__old_log_handlers.append(handler)\n            logging.root.removeHandler(handler)\n\n    def tearDown(self):\n        constants.LOGGING_CONFIG_FILE_DIRECTORIES = self.__old_log_dirs\n\n        # restore the log location\n        if 
self.__old_prestoadmin_log:\n            os.environ[LOG_DIR_ENV_VARIABLE] = self.__old_prestoadmin_log\n        else:\n            os.environ.pop(LOG_DIR_ENV_VARIABLE)\n\n        # clean up the temporary directory\n        os.system('rm -rf ' + self.__temporary_dir_path)\n\n        # restore the old log handlers\n        for handler in logging.root.handlers:\n            logging.root.removeHandler(handler)\n        for handler in self.__old_log_handlers:\n            logging.root.addHandler(handler)\n\n    def test_log_file_is_created(self):\n        with Application(APPLICATION_NAME):\n            pass\n\n        log_file_path = os.path.join(\n            get_log_directory(),\n            APPLICATION_NAME + '.log'\n        )\n        self.assertTrue(\n            os.path.exists(log_file_path),\n            'Expected log file does not exist'\n        )\n        self.assertTrue(\n            os.path.getsize(log_file_path) > 0,\n            'Log file is empty'\n        )\n"
  },
  {
    "path": "tests/no_hadoop_bare_image_provider.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProvides bare images for standalone clusters.\n\"\"\"\n\nimport re\n\nfrom tests.bare_image_provider import TagBareImageProvider\nfrom tests.product.constants import BASE_IMAGE_TAG\nfrom tests.product.constants import BASE_IMAGE_NAME_BUILD\nfrom tests.product.constants import BASE_IMAGE_NAME_RUNTIME\n\n\nclass NoHadoopBareImageProvider(TagBareImageProvider):\n    def __init__(self, build_or_runtime=\"runtime\"):\n        if build_or_runtime == \"runtime\":\n            base_image_name = BASE_IMAGE_NAME_RUNTIME\n        elif build_or_runtime == \"build\":\n            base_image_name = BASE_IMAGE_NAME_BUILD\n        else:\n            raise Exception(\"build_or_runtime must be one of \\\"build\\\" or \\\"runtime\\\"\")\n\n        # encode base image name into name of created test image, to prevent image name clash.\n        decoration = 'nohadoop_' + re.sub(r\"[^A-Za-z0-9]\", \"_\", base_image_name)\n\n        super(NoHadoopBareImageProvider, self).__init__(\n            base_image_name, base_image_name,\n            BASE_IMAGE_TAG, decoration)\n"
  },
  {
    "path": "tests/product/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport contextlib\nimport os\n\nfrom exceptions import Exception\n\n\ndef determine_jdk_directory(cluster):\n    \"\"\"\n    Return the directory where the JDK is installed. For example if the JDK is\n    located in /usr/java/jdk1.8_91, then this method will return the string\n    'jdk1.8_91'.\n\n    This method will throw an Exception if the number of JDKs matching the\n    /usr/java/jdk* pattern is not equal to 1.\n\n    :param cluster: cluster on which to search for the JDK directory\n    \"\"\"\n    number_of_jdks = cluster.exec_cmd_on_host(cluster.master, 'bash -c \"ls -ld /usr/java/j*| wc -l\"')\n    if int(number_of_jdks) != 1:\n        raise Exception('The number of JDK directories matching /usr/java/jdk* is not 1')\n    output = cluster.exec_cmd_on_host(cluster.master, 'ls -d /usr/java/j*')\n    return output.split(os.path.sep)[-1].strip('\\n')\n\n\n@contextlib.contextmanager\ndef relocate_jdk_directory(cluster, destination):\n    \"\"\"\n    Temporarily move the JDK to the destination directory\n\n    :param cluster: cluster object on which to relocate the JDK directory\n    :param destination: destination parent JDK directory, e.g. /tmp/\n    :returns the new full JDK directory, e.g. 
/tmp/jdk1.8_91\n    \"\"\"\n    # assume that Java is installed in the same folder on all nodes\n    jdk_directory = determine_jdk_directory(cluster)\n    source_jdk = os.path.join('/usr/java', jdk_directory)\n    destination_jdk = os.path.join(destination, jdk_directory)\n    for host in cluster.all_hosts():\n        cluster.exec_cmd_on_host(\n            host, \"mv %s %s\" % (source_jdk, destination_jdk), invoke_sudo=True)\n\n    yield destination_jdk\n\n    for host in cluster.all_hosts():\n        cluster.exec_cmd_on_host(\n            host, \"mv %s %s\" % (destination_jdk, source_jdk), invoke_sudo=True)\n"
  },
  {
    "path": "tests/product/base_product_case.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nBase class for product tests.  Handles setting up a docker cluster and has\nother utilities\n\"\"\"\n\nimport json\nimport os\nimport re\nfrom StringIO import StringIO\n\nfrom nose.tools import nottest\nfrom retrying import Retrying\n\nfrom prestoadmin.prestoclient import PrestoClient\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.constants import CONFIG_PROPERTIES, COORDINATOR_DIR_NAME, LOCAL_CONF_DIR\nfrom prestoadmin.util.presto_config import PrestoConfig\nfrom tests.base_test_case import BaseTestCase\nfrom tests.configurable_cluster import ConfigurableCluster\nfrom tests.docker_cluster import DockerCluster\nfrom tests.product.cluster_types import cluster_types\nfrom tests.product.config_dir_utils import get_coordinator_directory, get_workers_directory, get_config_file_path, \\\n    get_log_directory, get_install_directory, get_presto_admin_path\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\nPRESTO_VERSION = r'.+'\nRETRY_TIMEOUT = 120\nRETRY_INTERVAL = 5\n\n\nclass BaseProductTestCase(BaseTestCase):\n    default_workers_test_config_ = \"\"\"coordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    default_node_properties_ = 
\"\"\"catalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\\n\"\"\"\n\n    default_jvm_config_ = \"\"\"-server\n-Xmx16G\n-XX:-UseBiasedLocking\n-XX:+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:+ExplicitGCInvokesConcurrent\n-XX:+HeapDumpOnOutOfMemoryError\n-XX:+UseGCOverheadLimit\n-XX:+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\\n\"\"\"\n\n    default_coordinator_config_ = \"\"\"coordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nnode-scheduler.include-coordinator=false\nquery.max-memory-per-node=8GB\nquery.max-memory=50GB\\n\"\"\"\n\n    default_coordinator_test_config_ = \"\"\"coordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nnode-scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    # The two strings below (down_node_connection_string and status_down_node_string) aggregate\n    # all possible error messages one might encounter when trying to perform an action when a\n    # node is not accessible. 
The variety in error messages comes from differences in the OS.\n    down_node_connection_string = r'\\nWarning: (\\[%(host)s\\] )?Name lookup failed for %(host)s'\n\n    status_down_node_string = r'\\tName lookup failed for %(host)s'\n\n    len_down_node_error = 6\n\n    def setUp(self):\n        super(BaseProductTestCase, self).setUp()\n        self.maxDiff = None\n        self.cluster = None\n        self.default_keywords = {}\n\n    def tearDown(self):\n        self.restore_stdout_stderr_keep_open()\n        if self.cluster:\n            self.cluster.tear_down()\n        super(BaseProductTestCase, self).tearDown()\n\n    def _apply_post_install_hooks(self, installers):\n        for installer in installers:\n            self.cluster.postinstall(installer)\n\n    def _update_replacement_keywords(self, installers):\n        for installer in installers:\n            installer_instance = installer(self)\n            self.default_keywords.update(installer_instance.get_keywords())\n\n    def setup_cluster(self, bare_image_provider, cluster_type):\n        installers = cluster_types[cluster_type]\n\n        config_filename = ConfigurableCluster.check_for_cluster_config()\n\n        if config_filename:\n            self.cluster = ConfigurableCluster.start_bare_cluster(\n                config_filename, self,\n                StandalonePrestoInstaller.assert_installed)\n            self.cluster.ensure_correct_execution_environment()\n            BaseProductTestCase.run_installers(self.cluster, installers, self)\n        else:\n            self.cluster, bare_cluster = DockerCluster.start_cluster(\n                bare_image_provider, cluster_type)\n            self.cluster.ensure_correct_execution_environment()\n\n            # If we've found images and started a non-bare cluster, the\n            # containers have already had the installers applied to them.\n            # We do need to get the test environment in sync with the\n            # containers by calling the 
following two functions.\n            #\n            # We do this to save the cost of running the installers on the\n            # docker containers every time we run a test. In practice,\n            # that turns out to be a fairly expensive thing to do.\n            if not bare_cluster:\n                self._apply_post_install_hooks(installers)\n                self._update_replacement_keywords(installers)\n            else:\n                raise RuntimeError(\"Docker images have not been created\")\n\n    # Do not call this method directly from tests or anywhere other than the BaseInstaller\n    # implementation classes.\n    @staticmethod\n    def run_installers(cluster, installers, testcase):\n        for installer in installers:\n            dependencies = installer.get_dependencies()\n\n            for dependency in dependencies:\n                dependency.assert_installed(testcase)\n\n            installer_instance = installer(testcase)\n            installer_instance.install()\n\n            testcase.default_keywords.update(installer_instance.get_keywords())\n            cluster.postinstall(installer)\n\n    def dump_and_cp_topology(self, topology, cluster=None):\n        if not cluster:\n            cluster = self.cluster\n        cluster.write_content_to_host(\n            json.dumps(topology),\n            get_config_file_path(),\n            cluster.master\n        )\n\n    def upload_topology(self, topology=None, cluster=None):\n        if not cluster:\n            cluster = self.cluster\n        if not topology:\n            topology = {\"coordinator\": \"master\",\n                        \"workers\": [\"slave1\", \"slave2\", \"slave3\"]}\n        self.dump_and_cp_topology(topology, cluster)\n\n    @nottest\n    def write_test_configs(self, cluster, extra_configs=None,\n                           coordinator=None):\n        if not coordinator:\n            coordinator = self.cluster.internal_master\n        config = 
'http-server.http.port=7070\\n' \\\n                 'query.max-memory=50GB\\n' \\\n                 'query.max-memory-per-node=512MB\\n' \\\n                 'discovery.uri=http://%s:7070' % coordinator\n        if extra_configs:\n            config += '\\n' + extra_configs\n        coordinator_config = '%s\\n' \\\n                             'coordinator=true\\n' \\\n                             'node-scheduler.include-coordinator=false\\n' \\\n                             'discovery-server.enabled=true' % config\n        workers_config = '%s\\ncoordinator=false' % config\n        cluster.write_content_to_host(\n            coordinator_config,\n            os.path.join(get_coordinator_directory(), 'config.properties'),\n            cluster.master\n        )\n        cluster.write_content_to_host(\n            workers_config,\n            os.path.join(get_workers_directory(), 'config.properties'),\n            cluster.master\n        )\n\n    def fetch_log_tail(self, lines=50):\n        return self.cluster.exec_cmd_on_host(\n            self.cluster.master,\n            'tail -%d %s' % (lines, os.path.join(get_log_directory(), 'presto-admin.log')),\n            raise_error=False)\n\n    def run_prestoadmin(self, command, raise_error=True, cluster=None,\n                        **kwargs):\n        if not cluster:\n            cluster = self.cluster\n        command = self.replace_keywords(command, cluster=cluster, **kwargs)\n        return cluster.exec_cmd_on_host(\n            cluster.master,\n            \"{path} --user {user} {cmd}\".format(path=get_presto_admin_path(), user=cluster.user, cmd=command),\n            raise_error=raise_error,\n            invoke_sudo=False\n        )\n\n    def run_script_from_prestoadmin_dir(self, script_contents, host='',\n                                        raise_error=True, **kwargs):\n        if not host:\n            host = self.cluster.master\n\n        script_contents = self.replace_keywords(script_contents,\n          
                                      **kwargs)\n        temp_script = os.path.join(get_install_directory(), 'tmp.sh')\n        self.cluster.write_content_to_host(\n            '#!/bin/bash\\ncd %s\\n%s' % (get_install_directory(), script_contents),\n            temp_script, host)\n        self.cluster.exec_cmd_on_host(\n            host, 'chmod +x %s' % temp_script)\n        return self.cluster.exec_cmd_on_host(\n            host, temp_script, raise_error=raise_error)\n\n    def run_prestoadmin_expect(self, command, expect_statements):\n        temp_script = os.path.join(get_install_directory(), 'tmp.expect')\n        script_content = '#!/usr/bin/expect\\n' + \\\n                         'spawn %s %s\\n%s' % \\\n                         (get_presto_admin_path(), command, expect_statements)\n\n        self.cluster.write_content_to_host(script_content, temp_script,\n                                           self.cluster.master)\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master, 'chmod +x %s' % temp_script)\n        return self.cluster.exec_cmd_on_host(\n            self.cluster.master, temp_script)\n\n    def assert_path_exists(self, host, file_path):\n        self.cluster.exec_cmd_on_host(\n            host, ' [ -e %s ] ' % file_path)\n\n    def get_file_content(self, host, filepath):\n        return self.cluster.exec_cmd_on_host(host, 'cat %s' % (filepath), invoke_sudo=True)\n\n    def assert_config_perms(self, host, filepath):\n        self.assert_file_perm_owner(\n            host, filepath, '-rw-------', 'presto', 'presto')\n\n    def assert_directory_perm_owner(self, host, filepath, permissions, owner, group):\n        self.assertEqual(permissions[0], 'd', 'expected permissions should begin with a d')\n        ls = self.cluster.exec_cmd_on_host(host, \"ls -l -d %s\" % filepath)\n        self.assert_perm_owner(permissions, owner, group, ls)\n\n    def assert_file_perm_owner(self, host, filepath, permissions, owner, group):\n        ls = 
self.cluster.exec_cmd_on_host(host, \"ls -l %s\" % filepath)\n        self.assert_perm_owner(permissions, owner, group, ls)\n\n    def assert_perm_owner(self, permissions, owner, group, actual):\n        fields = actual.split()\n        self.assertEqual(fields[0], permissions)\n        self.assertEqual(fields[2], owner)\n        self.assertEqual(fields[3], group)\n\n    def assert_file_content(self, host, filepath, expected):\n        content = self.get_file_content(host, filepath)\n\n        split_path = os.path.split(filepath)\n        pa_file = None\n        if (split_path[0] == '/etc/presto' and split_path[1] in ['config.properties', 'log.properties', 'jvm.config']):\n            if host in self.cluster.slaves:\n                config_dir = get_workers_directory()\n            else:\n                config_dir = get_coordinator_directory()\n\n            pa_file = os.path.join(config_dir, split_path[1])\n\n        self.assertLazyMessage(\n            lambda: self.file_content_message(content, expected, pa_file),\n            self.assertEqual,\n            content,\n            expected)\n\n    def file_content_message(self, actual, expected, pa_file):\n        msg = '\\t===== vv ACTUAL FILE CONTENT vv =====\\n' \\\n              '%s\\n' \\\n              '\\t=========== DID NOT EQUAL ===========\\n' \\\n              '%s\\n' \\\n              '\\t==== ^^ EXPECTED FILE CONTENT ^^ ====\\n' \\\n              '' % (actual, expected)\n        if pa_file:\n            try:\n                # If the actual file content should have come from a file that\n                # lives on the presto-admin host that we shove over to some\n                # other host, display the content of the file as it is on the\n                # presto-admin host. 
Presumably this will match the actual\n                # file content that we display above.\n                msg += '\\t==== Content for presto-admin file %s ====\\n' % (pa_file,)\n                msg += self.get_file_content(self.cluster.master, pa_file)\n                msg += '\\n\\t==========================================\\n'\n            except OSError as e:\n                msg += e.message\n        return msg\n\n    def assert_file_content_regex(self, host, filepath, expected):\n        config = self.get_file_content(host, filepath)\n        self.assertRegexpMatches(config, expected)\n\n    def assert_has_default_catalog(self, host):\n        catalog_dir = constants.REMOTE_CATALOG_DIR\n        self.assert_directory_perm_owner(host, catalog_dir, 'drwxr-xr-x', 'presto', 'presto')\n\n        filepath = os.path.join(catalog_dir, 'tpch.properties')\n        self.assert_config_perms(host, filepath)\n        self.assert_file_content(host, filepath, 'connector.name=tpch')\n\n    def assert_has_jmx_catalog(self, container):\n        self.assert_file_content(container,\n                                 '/etc/presto/catalog/jmx.properties',\n                                 'connector.name=jmx')\n\n    def assert_path_removed(self, container, directory):\n        self.cluster.exec_cmd_on_host(\n            container, ' [ ! 
-e %s ]' % directory)\n\n    def assert_has_default_config(self, host):\n        jvm_config_path = '/etc/presto/jvm.config'\n        self.assert_config_perms(host, jvm_config_path)\n        self.assert_file_content(\n            host, jvm_config_path, self.default_jvm_config_)\n\n        self.assert_node_config(host, self.default_node_properties_)\n\n        config_properties_path = os.path.join(constants.REMOTE_CONF_DIR,\n                                              'config.properties')\n\n        self.assert_config_perms(host, config_properties_path)\n        if host in self.cluster.slaves:\n            self.assert_file_content(host, config_properties_path,\n                                     self.default_workers_test_config_)\n\n        else:\n            self.assert_file_content(host, config_properties_path,\n                                     self.default_coordinator_test_config_)\n\n    def assert_node_config(self, host, expected, expected_node_id=None):\n        node_properties_path = '/etc/presto/node.properties'\n        self.assert_config_perms(host, node_properties_path)\n        node_properties = self.cluster.exec_cmd_on_host(\n            host, 'cat %s' % (node_properties_path,), invoke_sudo=True)\n        split_properties = node_properties.split('\\n', 1)\n        if expected_node_id:\n            self.assertEqual(expected_node_id, split_properties[0])\n        else:\n            self.assertRegexpMatches(split_properties[0], 'node.id=.*')\n        actual = split_properties[1]\n        if host in self.cluster.slaves:\n            conf_dir = get_workers_directory()\n        else:\n            conf_dir = get_coordinator_directory()\n        self.assertLazyMessage(\n            lambda: self.file_content_message(actual, expected, os.path.join(conf_dir, 'node.properties')),\n            self.assertEqual,\n            actual,\n            expected)\n\n    def expected_stop(self, running=None, not_running=None):\n        if running is None:\n            
running = self.cluster.all_internal_hosts()\n            if not_running:\n                for host in not_running:\n                    running.remove(host)\n\n        expected_output = []\n        for host in running:\n            expected_output += [r'\\[%s\\] out: ' % host,\n                                r'\\[%s\\] out: Stopped .*' % host,\n                                r'\\[%s\\] out: Stopping presto' % host]\n        if not_running:\n            for host in not_running:\n                expected_output += [r'\\[%s\\] out: ' % host,\n                                    r'\\[%s\\] out: Not running' % host,\n                                    r'\\[%s\\] out: Stopping presto' % host]\n\n        return expected_output\n\n    def assert_stopped(self, process_per_host):\n        for host, pid in process_per_host:\n            self.retry(lambda:\n                       self.assertRaisesRegexp(OSError,\n                                               'No such process',\n                                               self.cluster.exec_cmd_on_host,\n                                               host,\n                                               'kill -0 %s' % pid),\n                       retry_timeout=10,\n                       retry_interval=2)\n\n    @staticmethod\n    def get_process_per_host(output_lines):\n        process_per_host = []\n        # We found some places where we were incorrectly passing a string\n        # containing the output rather than an iterable collection of lines.\n        # Since strings don't have an __iter__ attribute, we can catch this\n        # error.\n        if not hasattr(output_lines, '__iter__'):\n            raise Exception('output_lines doesn\\'t have an __iter__ ' +\n                            'attribute. 
Did you pass an unsplit string?')\n        for line in output_lines:\n            match = re.search(r'\\[(?P<host>.*?)\\] out: Started as (?P<pid>.*)',\n                              line)\n            if match:\n                process_per_host.append((match.group('host'),\n                                         match.group('pid')))\n        return process_per_host\n\n    def assert_started(self, process_per_host):\n        for host, pid in process_per_host:\n            self.cluster.exec_cmd_on_host(host, 'kill -0 %s' % pid, invoke_sudo=True)\n        return process_per_host\n\n    def replace_keywords(self, text, cluster=None, **kwargs):\n        if not cluster:\n            cluster = self.cluster\n\n        test_keywords = self.default_keywords.copy()\n        test_keywords.update({\n            'master': cluster.internal_master\n        })\n        if cluster.internal_slaves:\n            test_keywords.update({\n                'slave1': cluster.internal_slaves[0],\n                'slave2': cluster.internal_slaves[1],\n                'slave3': cluster.internal_slaves[2]\n            })\n        test_keywords.update(**kwargs)\n        return text % test_keywords\n\n    @staticmethod\n    def escape_for_regex(expected):\n        expected = expected.replace('[', '\\[')\n        expected = expected.replace(']', '\\]')\n        expected = expected.replace(')', '\\)')\n        expected = expected.replace('(', '\\(')\n        expected = expected.replace('+', '\\+')\n        return expected\n\n    @staticmethod\n    def retry(method_to_check, retry_timeout=RETRY_TIMEOUT,\n              retry_interval=RETRY_INTERVAL):\n        return Retrying(stop_max_delay=retry_timeout * 1000,\n                        wait_fixed=retry_interval * 1000).call(method_to_check)\n\n    def down_node_connection_error(self, host):\n        hostname = self.cluster.get_down_hostname(host)\n        return self.down_node_connection_string % {'host': hostname}\n\n    def 
status_node_connection_error(self, host):\n        hostname = self.cluster.get_down_hostname(host)\n        return self.status_down_node_string % {'host': hostname}\n\n    def create_presto_client(self, host=None):\n        ips = self.cluster.get_ip_address_dict()\n        config_path = os.path.join('~', LOCAL_CONF_DIR, COORDINATOR_DIR_NAME, CONFIG_PROPERTIES)\n        config = self.cluster.exec_cmd_on_host(self.cluster.master, 'cat ' + config_path)\n        user = 'root'\n        if host is None:\n            host = self.cluster.master\n        return PrestoClient(ips[host], user, PrestoConfig.from_file(StringIO(config), config_path, host))\n\n\ndef docker_only(original_function):\n    def test_inner(self, *args, **kwargs):\n        if type(getattr(self, 'cluster')) is DockerCluster:\n            original_function(self, *args, **kwargs)\n        else:\n            print 'Warning: Docker only test, passing with a noop'\n    return test_inner\n\n\nclass PrestoError(Exception):\n    pass\n"
  },
  {
    "path": "tests/product/base_test_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for generating an online and offline installer for presto-admin\n\"\"\"\nimport fnmatch\nimport os\nimport re\nimport subprocess\n\nfrom prestoadmin import main_dir\nfrom tests.docker_cluster import DockerCluster\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_BARE_CLUSTER\nfrom tests.product.prestoadmin_installer import PrestoadminInstaller\n\n\nclass BaseTestInstaller(BaseProductTestCase):\n    def setUp(self, build_or_runtime):\n        super(BaseTestInstaller, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(build_or_runtime), STANDALONE_BARE_CLUSTER)\n        self.centos_container = \\\n            self.__create_and_start_single_centos_container(build_or_runtime)\n        self.pa_installer = PrestoadminInstaller(self)\n\n    def tearDown(self):\n        super(BaseTestInstaller, self).tearDown()\n        self.centos_container.tear_down()\n\n    def __create_and_start_single_centos_container(self, build_or_runtime):\n        cluster_type = 'installer_tester'\n        bare_image_provider = NoHadoopBareImageProvider(build_or_runtime)\n        centos_container, bare_cluster = DockerCluster.start_cluster(\n            bare_image_provider, cluster_type, 'master', [],\n            
cap_add=['NET_ADMIN'])\n\n        if bare_cluster:\n            centos_container.commit_images(bare_image_provider, cluster_type)\n\n        return centos_container\n\n    def _verify_third_party_dir(self, is_third_party_present):\n        matches = fnmatch.filter(\n            os.listdir(self.centos_container.get_dist_dir(unique=True)),\n            'prestoadmin-*.tar.gz')\n        if len(matches) > 1:\n            raise RuntimeError(\n                'More than one archive found in the dist directory ' +\n                ' '.join(matches)\n            )\n        cmd_to_run = ['tar', '-tf',\n                      os.path.join(\n                          self.centos_container.get_dist_dir(unique=True),\n                          matches[0])\n                      ]\n        popen_obj = subprocess.Popen(cmd_to_run,\n                                     cwd=main_dir, stdout=subprocess.PIPE)\n        # communicate() waits for the process to exit; returncode is only\n        # set once the process has finished, so read it afterwards.\n        stdout = popen_obj.communicate()[0]\n        retcode = popen_obj.returncode\n        if retcode:\n            raise RuntimeError('Non zero return code when executing ' +\n                               ' '.join(cmd_to_run))\n        match = re.search('/third-party/', stdout)\n        if is_third_party_present and match is None:\n            raise RuntimeError('Expected to have an offline installer with '\n                               'a third-party directory. Found no '\n                               'third-party directory in the installer '\n                               'archive.')\n        elif not is_third_party_present and match:\n            raise RuntimeError('Expected to have an online installer with no '\n                               'third-party directory. Found a third-party '\n                               'directory in the installer archive.')\n"
  },
  {
    "path": "tests/product/cluster_types.py",
    "content": "from tests.product.mode_installers import StandaloneModeInstaller\nfrom tests.product.prestoadmin_installer import PrestoadminInstaller\nfrom tests.product.topology_installer import TopologyInstaller\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nSTANDALONE_BARE_CLUSTER = 'bare'\nBARE_CLUSTER = 'bare'\nSTANDALONE_PA_CLUSTER = 'pa_only_standalone'\nSTANDALONE_PRESTO_CLUSTER = 'presto'\n\ncluster_types = {\n    BARE_CLUSTER: [],\n    STANDALONE_PA_CLUSTER: [PrestoadminInstaller,\n                            StandaloneModeInstaller],\n    STANDALONE_PRESTO_CLUSTER: [PrestoadminInstaller,\n                                StandaloneModeInstaller,\n                                TopologyInstaller,\n                                StandalonePrestoInstaller],\n}\n"
  },
  {
    "path": "tests/product/config_dir_utils.py",
    "content": "import os\n\nfrom prestoadmin.util.constants import COORDINATOR_DIR_NAME, WORKERS_DIR_NAME, CATALOG_DIR_NAME\n\n\n# gets the information for presto-admin config directories on the cluster\ndef get_config_directory():\n    return os.path.join('~', '.prestoadmin')\n\n\ndef get_config_file_path():\n    return os.path.join(get_config_directory(), 'config.json')\n\n\ndef get_coordinator_directory():\n    return os.path.join(get_config_directory(), COORDINATOR_DIR_NAME)\n\n\ndef get_workers_directory():\n    return os.path.join(get_config_directory(), WORKERS_DIR_NAME)\n\n\ndef get_catalog_directory():\n    return os.path.join(get_config_directory(), CATALOG_DIR_NAME)\n\n\ndef get_log_directory():\n    return os.path.join(get_config_directory(), 'log')\n\n\ndef get_mode_config_path():\n    return os.path.join(get_config_directory(), 'mode.json')\n\n\ndef get_install_directory():\n    return os.path.join('~', 'prestoadmin')\n\n\ndef get_presto_admin_path():\n    return os.path.join(get_install_directory(), 'presto-admin')\n"
  },
  {
    "path": "tests/product/constants.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule defining constants global to the product tests\n\"\"\"\nimport json\nimport os\n\nimport prestoadmin\nfrom prestoadmin import main_dir\n\nBASE_IMAGES_TAG_CONFIG = 'base-images-tag.json'\n\n_BASE_IMAGE_NAME = os.environ.get('BASE_IMAGE_NAME')\nBASE_IMAGE_TAG = os.environ.get('BASE_IMAGE_TAG')\n\nif _BASE_IMAGE_NAME is None:\n    _BASE_IMAGE_NAME = 'prestodb/centos6-presto-admin-tests'\n\nif BASE_IMAGE_TAG is None:\n    try:\n        with open(os.path.join(main_dir, BASE_IMAGES_TAG_CONFIG)) as tag_config:\n            tag_json = json.load(tag_config)\n        BASE_IMAGE_TAG = tag_json['base_images_tag']\n    except KeyError:\n        raise Exception(\"base_images_tag must be set in %s\" % (BASE_IMAGES_TAG_CONFIG,))\n\nBASE_IMAGE_NAME_BUILD = _BASE_IMAGE_NAME + \"-build\"\nBASE_IMAGE_NAME_RUNTIME = _BASE_IMAGE_NAME + \"-runtime\"\n\nprint \"using test build IMAGE %s:%s\" % (BASE_IMAGE_NAME_BUILD, BASE_IMAGE_TAG)\nprint \"using test runtime IMAGE %s:%s\" % (BASE_IMAGE_NAME_RUNTIME, BASE_IMAGE_TAG)\n\nLOCAL_RESOURCES_DIR = os.path.join(prestoadmin.main_dir,\n                                   'tests/product/resources/')\n\nDEFAULT_DOCKER_MOUNT_POINT = '/mnt/presto-admin'\nDEFAULT_LOCAL_MOUNT_POINT = os.path.join(main_dir, 'tmp/docker-pa/')\n"
  },
  {
    "path": "tests/product/image_builder.py",
    "content": "import argparse\n\nfrom tests.docker_cluster import DockerCluster\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_BARE_CLUSTER, STANDALONE_PA_CLUSTER, \\\n    STANDALONE_PRESTO_CLUSTER, cluster_types\n\n\nclass ImageBuilder:\n    def __init__(self, testcase):\n        self.testcase = testcase\n        self.testcase.default_keywords = {}\n        self.testcase.cluster = None\n\n    def _setup_image(self, bare_image_provider, cluster_type):\n        installers = cluster_types[cluster_type]\n\n        self.testcase.cluster, bare_cluster = DockerCluster.start_cluster(\n            bare_image_provider, cluster_type)\n\n        # If we got a bare cluster back, we need to run the installers on it.\n        # Applying the post-install hooks and updating the replacement\n        # keywords is handled internally in run_installers.\n        #\n        # If we got a non-bare cluster back, that means the image already exists\n        # and we created the cluster using that image.\n        if bare_cluster:\n            BaseProductTestCase.run_installers(self.testcase.cluster, installers, self.testcase)\n\n            if isinstance(self.testcase.cluster, DockerCluster):\n                self.testcase.cluster.commit_images(bare_image_provider, cluster_type)\n\n        self.testcase.cluster.tear_down()\n\n    def _setup_image_with_no_hadoop_provider(self, cluster_type):\n        self._setup_image(NoHadoopBareImageProvider(),\n                          cluster_type)\n\n    def setup_standalone_presto_images(self):\n        cluster_type = STANDALONE_PRESTO_CLUSTER\n        self._setup_image_with_no_hadoop_provider(cluster_type)\n\n    def setup_standalone_presto_admin_images(self):\n        cluster_type = STANDALONE_PA_CLUSTER\n        self._setup_image_with_no_hadoop_provider(cluster_type)\n\n    def setup_standalone_bare_images(self):\n        cluster_type = STANDALONE_BARE_CLUSTER\n        self._setup_image_with_no_hadoop_provider(cluster_type)\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n\n    # Update the Makefile to list supported images if more are added\n    parser.add_argument(\n        \"image_type\", metavar=\"image_type\", type=str, nargs=\"+\",\n        choices=[\"standalone_presto\", \"standalone_presto_admin\",\n                 \"standalone_bare\", \"all\"],\n        help=\"Specify the type of image to create. The available choices are: \"\n             \"standalone_presto, standalone_presto_admin, standalone_bare, all\")\n\n    args = parser.parse_args()\n\n    # ImageBuilder needs an input testcase that provides unittest assertions\n    # and the product test helper functions so that the installers can verify\n    # their resulting installations.\n    # This supplies a dummy testcase. BaseProductTestCase inherits from\n    # unittest.TestCase, whose constructor accepts the name of any existing\n    # method of the class, so passing '__init__' creates a usable instance.\n    dummy_testcase = BaseProductTestCase('__init__')\n    image_builder = ImageBuilder(dummy_testcase)\n\n    if \"all\" in args.image_type:\n        image_builder.setup_standalone_presto_images()\n        image_builder.setup_standalone_presto_admin_images()\n        image_builder.setup_standalone_bare_images()\n    else:\n        if \"standalone_presto\" in args.image_type:\n            image_builder.setup_standalone_presto_images()\n        if \"standalone_presto_admin\" in args.image_type:\n            image_builder.setup_standalone_presto_admin_images()\n        if \"standalone_bare\" in args.image_type:\n            image_builder.setup_standalone_bare_images()\n"
  },
  {
    "path": "tests/product/mode_installers.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nInstallers for installing mode.json onto clusters\n\"\"\"\n\nimport json\n\nfrom overrides import overrides\nfrom prestoadmin import config\nfrom prestoadmin.mode import VALID_MODES, MODE_KEY, MODE_STANDALONE\nfrom tests.base_installer import BaseInstaller\nfrom tests.product.config_dir_utils import get_mode_config_path\n\n\nclass BaseModeInstaller(BaseInstaller):\n    def __init__(self, testcase, mode):\n        self.testcase = testcase\n        testcase.assertIn(mode, VALID_MODES)\n        self.mode = mode\n        self.json = config.json_to_string(self._get_mode_cfg(self.mode))\n\n    @staticmethod\n    def _get_mode_cfg(mode):\n        return {MODE_KEY: mode}\n\n    @staticmethod\n    @overrides\n    def get_dependencies():\n        return []\n\n    @overrides\n    def install(self):\n        self.testcase.cluster.write_content_to_host(\n            self.json, get_mode_config_path(), self.testcase.cluster.master)\n\n    @overrides\n    def get_keywords(self, *args, **kwargs):\n        return {}\n\n    @staticmethod\n    def _assert_installed(testcase, expected_mode):\n        json_str = testcase.cluster.exec_cmd_on_host(\n            testcase.cluster.master, 'cat %s' % get_mode_config_path())\n\n        actual_mode_cfg = json.loads(json_str)\n        testcase.assertEqual(\n            BaseModeInstaller._get_mode_cfg(\n                
expected_mode), actual_mode_cfg)\n\n\nclass StandaloneModeInstaller(BaseModeInstaller):\n    def __init__(self, testcase):\n        super(StandaloneModeInstaller, self).__init__(\n            testcase, MODE_STANDALONE)\n\n    @staticmethod\n    def assert_installed(testcase):\n        BaseModeInstaller._assert_installed(testcase, MODE_STANDALONE)\n"
  },
  {
    "path": "tests/product/prestoadmin_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for installing prestoadmin on a cluster.\n\"\"\"\n\nimport errno\nimport fnmatch\nimport os\nimport shutil\n\nimport prestoadmin\nfrom tests.base_installer import BaseInstaller\nfrom tests.configurable_cluster import ConfigurableCluster\nfrom tests.docker_cluster import DockerCluster\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.config_dir_utils import get_install_directory\nfrom tests.product.constants import LOCAL_RESOURCES_DIR\n\n\nclass PrestoadminInstaller(BaseInstaller):\n    def __init__(self, testcase):\n        self.testcase = testcase\n\n    @staticmethod\n    def get_dependencies():\n        return []\n\n    def install(self, cluster=None, dist_dir=None):\n        # Passing in a cluster supports the installation tests. We need to be\n        # able to try an installation against an unsupported OS, and for that\n        # testcase, we create a cluster that is local to the testcase and then\n        # run the install on it. 
We can't replace self.cluster with the local\n        # cluster in the test, because that would prevent the test's \"regular\"\n        # cluster from getting torn down.\n        if not cluster:\n            cluster = self.testcase.cluster\n\n        if not dist_dir:\n            dist_dir = self._build_dist_if_necessary(cluster)\n        self._copy_dist_to_host(cluster, dist_dir, cluster.master)\n        with open(LOCAL_RESOURCES_DIR + \"/install-admin.sh\", 'r') as file_obj:\n            script = file_obj.read()\n\n        script = script.format(mount_dir=cluster.mount_dir)\n        cluster.run_script_on_host(script, cluster.master, tty=False)\n\n    @staticmethod\n    def assert_installed(testcase, msg=None):\n        cluster = testcase.cluster\n        cluster.exec_cmd_on_host(cluster.master, 'test -x %s' % get_install_directory())\n\n    def get_keywords(self):\n        return {}\n\n    def _build_dist_if_necessary(self, cluster, unique=False):\n        if (not os.path.isdir(cluster.get_dist_dir(unique)) or\n                not fnmatch.filter(\n                    os.listdir(cluster.get_dist_dir(unique)),\n                    'prestoadmin-*.tar.gz')):\n            self._build_installer_in_docker(cluster, unique=unique)\n        return cluster.get_dist_dir(unique)\n\n    def _build_installer_in_docker(self, cluster, online_installer=None,\n                                   unique=False):\n        if online_installer is None:\n            pa_test_online_installer = os.environ.get('PA_TEST_ONLINE_INSTALLER')\n            online_installer = pa_test_online_installer is not None\n\n        if isinstance(cluster, ConfigurableCluster):\n            online_installer = True\n\n        container_name = 'installer'\n        cluster_type = 'installer_builder'\n        bare_image_provider = NoHadoopBareImageProvider(\"build\")\n\n        installer_container, created_bare = DockerCluster.start_cluster(\n            bare_image_provider, cluster_type, 'installer', [])\n\n      
  if created_bare:\n            installer_container.commit_images(\n                bare_image_provider, cluster_type)\n\n        try:\n            shutil.copytree(\n                prestoadmin.main_dir,\n                os.path.join(\n                    installer_container.get_local_mount_dir(container_name),\n                    'presto-admin'),\n                ignore=shutil.ignore_patterns('tmp', '.git', 'presto*.rpm')\n            )\n\n            # Pin pip to 7.1.2 because 8.0.0 removed support for distutils\n            # installed projects, of which the system setuptools is one on our\n            # Docker image. pip 8.0.1 or 8.0.2 replaced the error with a\n            # deprecation warning, and also warns that Python 2.6 is\n            # deprecated. While we still need to support Python 2.6, we'll pin\n            # pip to a 7.x version, but we should revisit this once we no\n            # longer need to support 2.6:\n            # https://github.com/pypa/pip/issues/3384\n            installer_container.run_script_on_host(\n                'set -e\\n'\n                # use explicit versions of dependent packages\n                'pip install --upgrade pycparser==2.18 cffi==1.11.5\\n'\n                'pip install --upgrade pycparser==2.18 PyNaCl==1.2.1\\n'\n                'pip install --upgrade pycparser==2.18 cryptography==2.1.1\\n'\n                'pip install --upgrade pip==7.1.2\\n'\n                'pip install --upgrade wheel==0.23.0\\n'\n                'pip install --upgrade setuptools==20.1.1\\n'\n                'mv %s/presto-admin ~/\\n'\n                'cd ~/presto-admin\\n'\n                'make %s\\n'\n                'cp dist/prestoadmin-*.tar.gz %s'\n                % (installer_container.mount_dir,\n                   'dist' if online_installer else 'dist-offline',\n                   installer_container.mount_dir),\n                container_name)\n\n            try:\n                os.makedirs(cluster.get_dist_dir(unique))\n     
       except OSError, e:\n                if e.errno != errno.EEXIST:\n                    raise\n            local_container_dist_dir = os.path.join(\n                prestoadmin.main_dir,\n                installer_container.get_local_mount_dir(container_name)\n            )\n            installer_file = fnmatch.filter(\n                os.listdir(local_container_dist_dir),\n                'prestoadmin-*.tar.gz')[0]\n            shutil.copy(\n                os.path.join(local_container_dist_dir, installer_file),\n                cluster.get_dist_dir(unique))\n        finally:\n            installer_container.tear_down()\n\n    @staticmethod\n    def _copy_dist_to_host(cluster, local_dist_dir, dest_host):\n        for dist_file in os.listdir(local_dist_dir):\n            if fnmatch.fnmatch(dist_file, \"prestoadmin-*.tar.gz\"):\n                cluster.copy_to_host(\n                    os.path.join(local_dist_dir, dist_file),\n                    dest_host)\n"
  },
  {
    "path": "tests/product/resources/configuration_show_config.txt",
    "content": "\nmaster: Configuration file at /etc/presto/config.properties:\ncoordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nnode-scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave1: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave2: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave3: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n"
  },
  {
    "path": "tests/product/resources/configuration_show_default.txt",
    "content": "master: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nmaster: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nmaster: Configuration file at /etc/presto/config.properties:\ncoordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nnode.scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave1: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave1: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave1: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave2: Configuration file at 
/etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave2: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave2: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave3: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave3: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave3: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n"
  },
  {
    "path": "tests/product/resources/configuration_show_default_master_slave1.txt",
    "content": "master: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nmaster: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nmaster: Configuration file at /etc/presto/config.properties:\ncoordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nnode.scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave1: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave1: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave1: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n"
  },
  {
    "path": "tests/product/resources/configuration_show_default_slave2_slave3.txt",
    "content": "slave2: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave2: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave2: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave3: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave3: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:\\-UseBiasedLocking\n-XX:\\+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:\\+ExplicitGCInvokesConcurrent\n-XX:\\+HeapDumpOnOutOfMemoryError\n-XX:\\+UseGCOverheadLimit\n-XX:\\+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave3: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://master:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n"
  },
  {
    "path": "tests/product/resources/configuration_show_down_node.txt",
    "content": "\nmaster: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://.*:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave2: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://.*:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n\n\nslave3: Configuration file at /etc/presto/config.properties:\ncoordinator=false\ndiscovery.uri=http://.*:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\n"
  },
  {
    "path": "tests/product/resources/configuration_show_jvm.txt",
    "content": "\nmaster: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:-UseBiasedLocking\n-XX:+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:+ExplicitGCInvokesConcurrent\n-XX:+HeapDumpOnOutOfMemoryError\n-XX:+UseGCOverheadLimit\n-XX:+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave1: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:-UseBiasedLocking\n-XX:+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:+ExplicitGCInvokesConcurrent\n-XX:+HeapDumpOnOutOfMemoryError\n-XX:+UseGCOverheadLimit\n-XX:+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave2: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:-UseBiasedLocking\n-XX:+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:+ExplicitGCInvokesConcurrent\n-XX:+HeapDumpOnOutOfMemoryError\n-XX:+UseGCOverheadLimit\n-XX:+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n\nslave3: Configuration file at /etc/presto/jvm.config:\n-server\n-Xmx16G\n-XX:-UseBiasedLocking\n-XX:+UseG1GC\n-XX:G1HeapRegionSize=32M\n-XX:+ExplicitGCInvokesConcurrent\n-XX:+HeapDumpOnOutOfMemoryError\n-XX:+UseGCOverheadLimit\n-XX:+ExitOnOutOfMemoryError\n-XX:ReservedCodeCacheSize=512M\n-DHADOOP_USER_NAME=hive\n\n"
  },
  {
    "path": "tests/product/resources/configuration_show_log.txt",
    "content": "\nmaster: Configuration file at /etc/presto/log.properties:\ncom.facebook.presto=WARN\n\n\nslave1: Configuration file at /etc/presto/log.properties:\ncom.facebook.presto=WARN\n\n\nslave2: Configuration file at /etc/presto/log.properties:\ncom.facebook.presto=WARN\n\n\nslave3: Configuration file at /etc/presto/log.properties:\ncom.facebook.presto=WARN\n\n"
  },
  {
    "path": "tests/product/resources/configuration_show_log_none.txt",
    "content": "\nWarning: [master] No configuration file found for master at /etc/presto/log.properties\n\n\nWarning: [slave1] No configuration file found for slave1 at /etc/presto/log.properties\n\n\nWarning: [slave2] No configuration file found for slave2 at /etc/presto/log.properties\n\n\nWarning: [slave3] No configuration file found for slave3 at /etc/presto/log.properties\n\n"
  },
  {
    "path": "tests/product/resources/configuration_show_node.txt",
    "content": "master: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave1: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave2: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n\n\nslave3: Configuration file at /etc/presto/node.properties:\nnode.id=.*\ncatalog.config-dir=/etc/presto/catalog\nnode.data-dir=/var/lib/presto/data\nnode.environment=presto\nnode.launcher-log-file=/var/log/presto/launcher.log\nnode.server-log-file=/var/log/presto/server.log\nplugin.dir=/usr/lib/presto/lib/plugin\n"
  },
  {
    "path": "tests/product/resources/configuration_show_none.txt",
    "content": "\nWarning: [master] No configuration file found for master at /etc/presto/node.properties\n\n\nWarning: [master] No configuration file found for master at /etc/presto/jvm.config\n\n\nWarning: [master] No configuration file found for master at /etc/presto/config.properties\n\n\nWarning: [slave1] No configuration file found for slave1 at /etc/presto/node.properties\n\n\nWarning: [slave1] No configuration file found for slave1 at /etc/presto/jvm.config\n\n\nWarning: [slave1] No configuration file found for slave1 at /etc/presto/config.properties\n\n\nWarning: [slave2] No configuration file found for slave2 at /etc/presto/node.properties\n\n\nWarning: [slave2] No configuration file found for slave2 at /etc/presto/jvm.config\n\n\nWarning: [slave2] No configuration file found for slave2 at /etc/presto/config.properties\n\n\nWarning: [slave3] No configuration file found for slave3 at /etc/presto/node.properties\n\n\nWarning: [slave3] No configuration file found for slave3 at /etc/presto/jvm.config\n\n\nWarning: [slave3] No configuration file found for slave3 at /etc/presto/config.properties\n\n"
  },
  {
    "path": "tests/product/resources/install-admin.sh",
    "content": "#!/bin/bash\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nset -e\n\ncp /{mount_dir}/prestoadmin-*.tar.gz ~\ncd ~\ntar -zxf prestoadmin-*.tar.gz\ncd prestoadmin\n./install-prestoadmin.sh\n"
  },
  {
    "path": "tests/product/resources/install_twice.txt",
    "content": "Using rpm_specifier as a local path\nFetching local presto rpm at path: .*\nFound existing rpm at: .*\n\nFatal error: [%(slave2)s] sudo() received nonzero return code 1 while executing!\n\nRequested: rpm -i /opt/prestoadmin/packages/%(rpm)s\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"rpm -i /opt/prestoadmin/packages/%(rpm)s\"\n\nAborting.\nDeploying rpm on %(slave2)s...\nPackage deployed successfully on: %(slave2)s\n[%(slave2)s] out: \tpackage %(rpm_basename)s is already installed\n[%(slave2)s] out:\n\nFatal error: [%(master)s] sudo() received nonzero return code 1 while executing!\n\nRequested: rpm -i /opt/prestoadmin/packages/%(rpm)s\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"rpm -i /opt/prestoadmin/packages/%(rpm)s\"\n\nAborting.\nDeploying rpm on %(master)s...\nPackage deployed successfully on: %(master)s\n[%(master)s] out: \tpackage %(rpm_basename)s is already installed\n[%(master)s] out:\n\nFatal error: [%(slave3)s] sudo() received nonzero return code 1 while executing!\n\nRequested: rpm -i /opt/prestoadmin/packages/%(rpm)s\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"rpm -i /opt/prestoadmin/packages/%(rpm)s\"\n\nAborting.\nDeploying rpm on %(slave3)s...\nPackage deployed successfully on: %(slave3)s\n[%(slave3)s] out: \tpackage %(rpm_basename)s is already installed\n[%(slave3)s] out:\n\nFatal error: [%(slave1)s] sudo() received nonzero return code 1 while executing!\n\nRequested: rpm -i /opt/prestoadmin/packages/%(rpm)s\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"rpm -i /opt/prestoadmin/packages/%(rpm)s\"\n\nAborting.\nDeploying rpm on %(slave1)s...\nPackage deployed successfully on: %(slave1)s\n[%(slave1)s] out: \tpackage %(rpm_basename)s is already installed\n[%(slave1)s] out:"
  },
  {
    "path": "tests/product/resources/invalid_json.json",
    "content": "{\n  \"user\": \"root\"\n  bad json!!!\n}\n"
  },
  {
    "path": "tests/product/resources/non_root_sudo_warning_text.txt",
    "content": "[master] out: \n[master] out: \n[master] out: \n[master] out: \n[master] out:     #1) Respect the privacy of others.\n[master] out:     #2) Think before you type.\n[master] out:     #3) With great power comes great responsibility.\n[master] out: Administrator. It usually boils down to these three things:\n[master] out: We trust you have received the usual lecture from the local System\n[master] out: sudo password:\n[slave1] out: \n[slave1] out: \n[slave1] out: \n[slave1] out: \n[slave1] out:     #1) Respect the privacy of others.\n[slave1] out:     #2) Think before you type.\n[slave1] out:     #3) With great power comes great responsibility.\n[slave1] out: Administrator. It usually boils down to these three things:\n[slave1] out: We trust you have received the usual lecture from the local System\n[slave1] out: sudo password:\n[slave2] out: \n[slave2] out: \n[slave2] out: \n[slave2] out: \n[slave2] out:     #1) Respect the privacy of others.\n[slave2] out:     #2) Think before you type.\n[slave2] out:     #3) With great power comes great responsibility.\n[slave2] out: Administrator. It usually boils down to these three things:\n[slave2] out: We trust you have received the usual lecture from the local System\n[slave2] out: sudo password:\n[slave3] out: \n[slave3] out: \n[slave3] out: \n[slave3] out: \n[slave3] out:     #1) Respect the privacy of others.\n[slave3] out:     #2) Think before you type.\n[slave3] out:     #3) With great power comes great responsibility.\n[slave3] out: Administrator. It usually boils down to these three things:\n[slave3] out: We trust you have received the usual lecture from the local System\n[slave3] out: sudo password:\n"
  },
  {
    "path": "tests/product/resources/non_sudo_uninstall.txt",
    "content": "\nFatal error: [slave3] sudo() received nonzero return code 1 while executing!\n\nRequested: set -m; /etc/init.d/presto stop\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"set -m; /etc/init.d/presto stop\"\n\nAborting.\n[slave3] out:\n[slave3] out: We trust you have received the usual lecture from the local System\n[slave3] out: Administrator. It usually boils down to these three things:\n[slave3] out:\n[slave3] out:     #1) Respect the privacy of others.\n[slave3] out:     #2) Think before you type.\n[slave3] out:     #3) With great power comes great responsibility.\n[slave3] out:\n[slave3] out: sudo password:\n[slave3] out: testuser is not in the sudoers file.  This incident will be reported.\n[slave3] out:\n\nFatal error: [slave1] sudo() received nonzero return code 1 while executing!\n\nRequested: set -m; /etc/init.d/presto stop\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"set -m; /etc/init.d/presto stop\"\n\nAborting.\n[slave1] out:\n[slave1] out: We trust you have received the usual lecture from the local System\n[slave1] out: Administrator. It usually boils down to these three things:\n[slave1] out:\n[slave1] out:     #1) Respect the privacy of others.\n[slave1] out:     #2) Think before you type.\n[slave1] out:     #3) With great power comes great responsibility.\n[slave1] out:\n[slave1] out: sudo password:\n[slave1] out: testuser is not in the sudoers file.  This incident will be reported.\n[slave1] out:\n\nFatal error: [slave2] sudo() received nonzero return code 1 while executing!\n\nRequested: set -m; /etc/init.d/presto stop\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"set -m; /etc/init.d/presto stop\"\n\nAborting.\n[slave2] out:\n[slave2] out: We trust you have received the usual lecture from the local System\n[slave2] out: Administrator. 
It usually boils down to these three things:\n[slave2] out:\n[slave2] out:     #1) Respect the privacy of others.\n[slave2] out:     #2) Think before you type.\n[slave2] out:     #3) With great power comes great responsibility.\n[slave2] out:\n[slave2] out: sudo password:\n[slave2] out: testuser is not in the sudoers file.  This incident will be reported.\n[slave2] out:\n\nFatal error: [master] sudo() received nonzero return code 1 while executing!\n\nRequested: set -m; /etc/init.d/presto stop\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"set -m; /etc/init.d/presto stop\"\n\nAborting.\n[master] out:\n[master] out: We trust you have received the usual lecture from the local System\n[master] out: Administrator. It usually boils down to these three things:\n[master] out:\n[master] out:     #1) Respect the privacy of others.\n[master] out:     #2) Think before you type.\n[master] out:     #3) With great power comes great responsibility.\n[master] out:\n[master] out: sudo password:\n[master] out: testuser is not in the sudoers file.  This incident will be reported.\n[master] out:\n"
  },
  {
    "path": "tests/product/resources/parallel_password_failure.txt",
    "content": "\nFatal error: [%(slave2)s] Needed to prompt for a connection or sudo password (host: %(slave2)s), but input would be ambiguous in parallel mode\n\nAborting.\nDeploying tpch.properties catalog configurations on: %(slave2)s\n\nFatal error: [%(slave1)s] Needed to prompt for a connection or sudo password (host: %(slave1)s), but input would be ambiguous in parallel mode\n\nAborting.\nDeploying tpch.properties catalog configurations on: %(slave1)s\n\nFatal error: [%(master)s] Needed to prompt for a connection or sudo password (host: %(master)s), but input would be ambiguous in parallel mode\n\nAborting.\nDeploying tpch.properties catalog configurations on: %(master)s\n\nFatal error: [%(slave3)s] Needed to prompt for a connection or sudo password (host: %(slave3)s), but input would be ambiguous in parallel mode\n\nAborting.\nDeploying tpch.properties catalog configurations on: %(slave3)s\n"
  },
  {
    "path": "tests/product/resources/uninstall_twice.txt",
    "content": "\nWarning: [slave2] Presto is not installed.\n\n\nWarning: [master] Presto is not installed.\n\n\nWarning: [slave3] Presto is not installed.\n\n\nWarning: [slave1] Presto is not installed.\n\n\nFatal error: [slave2] Unable to uninstall package on: slave2\n\nAborting.\n\nFatal error: [slave3] Unable to uninstall package on: slave3\n\nAborting.\n\nFatal error: [master] Unable to uninstall package on: master\n\nAborting.\n\nFatal error: [slave1] Unable to uninstall package on: slave1\n\nAborting.\n"
  },
  {
    "path": "tests/product/standalone/__init__.py",
    "content": ""
  },
  {
    "path": "tests/product/standalone/presto_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for installing presto on a cluster.\n\"\"\"\n\nimport fnmatch\nimport os\n\nimport prestoadmin\n\nfrom tests.base_installer import BaseInstaller\nfrom tests.product.mode_installers import StandaloneModeInstaller\nfrom tests.product.prestoadmin_installer import PrestoadminInstaller\nfrom tests.product.topology_installer import TopologyInstaller\n\nRPM_BASENAME = r'presto.*'\nPRESTO_RPM_GLOB = r'presto*.rpm'\n\nPACKAGE_NAME = 'presto-server-rpm'\n\n\nclass StandalonePrestoInstaller(BaseInstaller):\n    def __init__(self, testcase, rpm_location=None):\n        if rpm_location:\n            self.rpm_dir, self.rpm_name = rpm_location\n        else:\n            self.rpm_dir, self.rpm_name = self._detect_presto_rpm()\n\n        self.testcase = testcase\n\n    @staticmethod\n    def get_dependencies():\n        return [PrestoadminInstaller, StandaloneModeInstaller,\n                TopologyInstaller]\n\n    def install(self, extra_configs=None, coordinator=None,\n                pa_raise_error=True):\n        cluster = self.testcase.cluster\n        rpm_name = self.copy_presto_rpm_to_master(cluster=cluster)\n\n        self.testcase.write_test_configs(cluster, extra_configs, coordinator)\n        cmd_output = self.testcase.run_prestoadmin(\n            'server install ' + os.path.join(cluster.rpm_cache_dir, rpm_name),\n            cluster=cluster, 
raise_error=pa_raise_error\n        )\n\n        return cmd_output\n\n    def get_keywords(self):\n        return {\n            'rpm': self.rpm_name,\n            'rpm_basename': RPM_BASENAME,\n        }\n\n    @staticmethod\n    def assert_installed(testcase, container=None, msg=None, cluster=None):\n        # cluster keyword arg supports configurable cluster, which needs to\n        # assert that presto isn't installed before testcase.cluster is set.\n        if not cluster:\n            cluster = testcase.cluster\n\n        # container keyword arg supports test_package_install and a few other\n        # places where we need to check specific members of a cluster.\n        if not container:\n            container = cluster.master\n\n        try:\n            check_rpm = cluster.exec_cmd_on_host(\n                container, 'rpm -q %s' % (PACKAGE_NAME,))\n            testcase.assertRegexpMatches(\n                check_rpm, RPM_BASENAME + '\\n', msg=msg\n            )\n        except OSError as e:\n            if msg:\n                error_message = e.strerror + '\\n' + msg\n            else:\n                error_message = e.strerror\n            testcase.fail(msg=error_message)\n\n    def copy_presto_rpm_to_master(self, cluster=None):\n        if not cluster:\n            cluster = self.testcase.cluster\n\n        rpm_path = os.path.join(self.rpm_dir, self.rpm_name)\n        if not self._check_rpm_already_uploaded(self.rpm_name, cluster):\n            cluster.copy_to_host(rpm_path, cluster.master, dest_path=os.path.join(cluster.rpm_cache_dir,\n                                                                                  self.rpm_name))\n        self._check_if_corrupted_rpm(self.rpm_name, cluster)\n        return self.rpm_name\n\n    @staticmethod\n    def _detect_presto_rpm():\n        \"\"\"\n        Detects the Presto RPM in the main directory of presto-admin.\n        Returns the name of the RPM, if it exists, else raises an OSError.\n        \"\"\"\n  
      rpm_names = fnmatch.filter(os.listdir(prestoadmin.main_dir),\n                                   PRESTO_RPM_GLOB)\n        if rpm_names:\n            # Choose the last RPM name if you sort the list, since if there\n            # are multiple RPMs, the last one is probably the latest\n            rpm_name = sorted(rpm_names)[-1]\n        else:\n            raise OSError(1, 'Presto RPM not detected.')\n\n        return prestoadmin.main_dir, rpm_name\n\n    @staticmethod\n    def _check_if_corrupted_rpm(rpm_name, cluster):\n        cluster.exec_cmd_on_host(\n            cluster.master, 'rpm -K --nosignature ' +\n                            os.path.join(cluster.rpm_cache_dir, rpm_name)\n        )\n\n    def assert_uninstalled(self, container, msg=None):\n        failure_msg = 'package %s is not installed' % (PACKAGE_NAME,)\n        rpm_cmd = 'rpm -q %s' % (PACKAGE_NAME,)\n\n        self.testcase.assertRaisesRegexp(\n            OSError,\n            failure_msg,\n            self.testcase.cluster.exec_cmd_on_host, container,\n            rpm_cmd, msg=msg)\n\n    @staticmethod\n    def _check_rpm_already_uploaded(rpm_name, cluster):\n        rpm_already_exists = True\n        try:\n            cluster.exec_cmd_on_host(\n                cluster.master,\n                'ls ' + os.path.join(cluster.rpm_cache_dir, rpm_name)\n            )\n        except OSError:\n            rpm_already_exists = False\n        return rpm_already_exists\n"
  },
  {
    "path": "tests/product/standalone/test_installation.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for presto-admin installation\n\"\"\"\nimport certifi\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, docker_only\nfrom tests.product.cluster_types import STANDALONE_BARE_CLUSTER\nfrom tests.product.config_dir_utils import get_catalog_directory, get_coordinator_directory, get_workers_directory\nfrom tests.product.prestoadmin_installer import PrestoadminInstaller\n\n\nclass TestInstallation(BaseProductTestCase):\n\n    def setUp(self):\n        super(TestInstallation, self).setUp()\n        self.pa_installer = PrestoadminInstaller(self)\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_BARE_CLUSTER)\n        dist_dir = self.pa_installer._build_dist_if_necessary(self.cluster)\n        self.pa_installer._copy_dist_to_host(self.cluster, dist_dir, self.cluster.master)\n\n    @attr('smoketest')\n    @docker_only\n    def test_install_non_root(self):\n        install_dir = '/home/app-admin'\n        script = \"\"\"\n            set -e\n            cp {mount_dir}/prestoadmin-*.tar.gz {install_dir}\n            chown app-admin {install_dir}/prestoadmin-*.tar.gz\n            cd {install_dir}\n            sudo -u app-admin tar zxf prestoadmin-*.tar.gz\n            cd prestoadmin\n            sudo -u 
app-admin ./install-prestoadmin.sh\n        \"\"\".format(mount_dir=self.cluster.mount_dir, install_dir=install_dir)\n\n        self.cluster.run_script_on_host(script, self.cluster.master)\n\n        pa_config_dir = '/home/app-admin/.prestoadmin'\n        catalog_dir = os.path.join(pa_config_dir, 'catalog')\n        self.assert_path_exists(self.cluster.master, catalog_dir)\n\n        coordinator_dir = os.path.join(pa_config_dir, 'coordinator')\n        self.assert_path_exists(self.cluster.master, coordinator_dir)\n\n        workers_dir = os.path.join(pa_config_dir, 'workers')\n        self.assert_path_exists(self.cluster.master, workers_dir)\n\n    @attr('smoketest')\n    def test_cert_arg_to_installation_nonexistent_file(self):\n        install_dir = '~'\n        script = \"\"\"\n            set -e\n            cp {mount_dir}/prestoadmin-*.tar.gz {install_dir}\n            cd {install_dir}\n            tar zxf prestoadmin-*.tar.gz\n            cd prestoadmin\n             ./install-prestoadmin.sh dummy_cert.cert\n        \"\"\".format(mount_dir=self.cluster.mount_dir,\n                   install_dir=install_dir)\n        output = self.cluster.run_script_on_host(script, self.cluster.master)\n        self.assertRegexpMatches(output, r'Adding pypi.python.org as '\n                                 'trusted\\-host. 
Cannot find certificate '\n                                 'file: dummy_cert.cert')\n\n    @attr('smoketest')\n    def test_cert_arg_to_installation_real_cert(self):\n        self.cluster.copy_to_host(certifi.where(), self.cluster.master)\n        install_dir = '~'\n        cert_file = os.path.basename(certifi.where())\n        script = \"\"\"\n            set -e\n            cp {mount_dir}/prestoadmin-*.tar.gz {install_dir}\n            cd {install_dir}\n            tar zxf prestoadmin-*.tar.gz\n            cd prestoadmin\n             ./install-prestoadmin.sh {mount_dir}/{cacert}\n        \"\"\".format(mount_dir=self.cluster.mount_dir,\n                   install_dir=install_dir,\n                   cacert=cert_file)\n        output = self.cluster.run_script_on_host(script, self.cluster.master)\n        self.assertTrue('Adding pypi.python.org as trusted-host. Cannot find'\n                        ' certificate file: %s' % cert_file not in output,\n                        'Unable to find cert file; output: %s' % output)\n\n    def test_additional_dirs_created(self):\n        install_dir = '~'\n        script = \"\"\"\n            set -e\n            cp {mount_dir}/prestoadmin-*.tar.gz {install_dir}\n            cd {install_dir}\n            tar zxf prestoadmin-*.tar.gz\n            cd prestoadmin\n             ./install-prestoadmin.sh\n        \"\"\".format(mount_dir=self.cluster.mount_dir,\n                   install_dir=install_dir)\n        self.cluster.run_script_on_host(script, self.cluster.master)\n\n        self.assert_path_exists(self.cluster.master, get_catalog_directory())\n        self.assert_path_exists(self.cluster.master, get_coordinator_directory())\n        self.assert_path_exists(self.cluster.master, get_workers_directory())\n"
  },
  {
    "path": "tests/product/test_authentication.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for SSH authentication for presto-admin commands\n\"\"\"\n\nimport os\nimport subprocess\nimport re\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, docker_only\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER\nfrom constants import LOCAL_RESOURCES_DIR\nfrom tests.product.config_dir_utils import get_catalog_directory, get_presto_admin_path\n\n\nclass TestAuthentication(BaseProductTestCase):\n    def setUp(self):\n        super(TestAuthentication, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n    success_output = (\n        'Deploying tpch.properties catalog configurations on: slave1 \\n'\n        'Deploying tpch.properties catalog configurations on: master \\n'\n        'Deploying tpch.properties catalog configurations on: slave2 \\n'\n        'Deploying tpch.properties catalog configurations on: slave3 \\n'\n    )\n\n    interactive_text = (\n        '/usr/lib64/python2.6/getpass.py:83: GetPassWarning: Can not control '\n        'echo on the terminal.\\n'\n        'Initial value for env.password: \\n'\n        'Warning: Password input may be echoed.\\n'\n        '  passwd = fallback_getpass(prompt, stream)\\n'\n    )\n\n    sudo_password_prompt = (\n 
       '[master] out: sudo password:\\n'\n        '[master] out: \\n'\n        '[slave1] out: sudo password:\\n'\n        '[slave1] out: \\n'\n        '[slave2] out: sudo password:\\n'\n        '[slave2] out: \\n'\n        '[slave3] out: sudo password:\\n'\n        '[slave3] out: \\n'\n    )\n\n    def parallel_password_failure_message(self, with_sudo_prompt=True):\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'parallel_password_failure.txt')) as f:\n            parallel_password_failure = f.read()\n        if with_sudo_prompt:\n            parallel_password_failure += (\n                '[%(slave3)s] out: sudo password:\\n'\n                '[%(slave3)s] out: Sorry, try again.\\n'\n                '[%(slave2)s] out: sudo password:\\n'\n                '[%(slave2)s] out: Sorry, try again.\\n'\n                '[%(slave1)s] out: sudo password:\\n'\n                '[%(slave1)s] out: Sorry, try again.\\n'\n                '[%(master)s] out: sudo password:\\n'\n                '[%(master)s] out: Sorry, try again.\\n')\n        parallel_password_failure = parallel_password_failure % {\n            'master': self.cluster.internal_master,\n            'slave1': self.cluster.internal_slaves[0],\n            'slave2': self.cluster.internal_slaves[1],\n            'slave3': self.cluster.internal_slaves[2]}\n        return parallel_password_failure\n\n    def non_root_sudo_warning_message(self):\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'non_root_sudo_warning_text.txt')) as f:\n            non_root_sudo_warning = f.read()\n        return non_root_sudo_warning\n\n    @attr('smoketest')\n    @docker_only\n    def test_passwordless_ssh_authentication(self):\n        self.upload_topology()\n        self.setup_for_catalog_add()\n\n        # Passwordless SSH as root, but specify -I\n        # We need to do it as a script because docker_py doesn't support\n        # redirecting stdin.\n        
command_output = self.run_script_from_prestoadmin_dir(\n            'echo \"password\" | ./presto-admin catalog add -I')\n\n        self.assertEqualIgnoringOrder(\n            self._remove_python_string(self.success_output + self.interactive_text),\n            self._remove_python_string(command_output))\n\n        # Passwordless SSH as root, but specify -p\n        command_output = self.run_prestoadmin('catalog add --password '\n                                              'password')\n        self.assertEqualIgnoringOrder(self.success_output, command_output)\n\n        # Passwordless SSH as app-admin, specify -I\n        non_root_sudo_warning = self.non_root_sudo_warning_message()\n\n        command_output = self.run_script_from_prestoadmin_dir(\n            'echo \"password\" | ./presto-admin catalog add -I -u app-admin')\n        self.assertEqualIgnoringOrder(\n            self._remove_python_string(\n                self.success_output + self.interactive_text +\n                self.sudo_password_prompt + non_root_sudo_warning),\n            self._remove_python_string(command_output))\n\n        # Passwordless SSH as app-admin, but specify -p\n        command_output = self.run_prestoadmin('catalog add --password '\n                                              'password -u app-admin')\n        self.assertEqualIgnoringOrder(\n            self.success_output + self.sudo_password_prompt +\n            self.sudo_password_prompt, command_output)\n\n        # Passwordless SSH as app-admin, but specify wrong password with -I\n        parallel_password_failure = self.parallel_password_failure_message()\n        command_output = self.run_script_from_prestoadmin_dir(\n            'echo \"asdf\" | ./presto-admin catalog add -I -u app-admin',\n            raise_error=False)\n        self.assertEqualIgnoringOrder(\n            self._remove_python_string(parallel_password_failure + self.interactive_text),\n            self._remove_python_string(command_output))\n\n        
# Passwordless SSH as app-admin, but specify wrong password with -p\n        command_output = self.run_prestoadmin(\n            'catalog add --password asdf -u app-admin', raise_error=False)\n        self.assertEqualIgnoringOrder(parallel_password_failure,\n                                      command_output)\n\n        # Passwordless SSH as root, in serial mode\n        command_output = self.run_script_from_prestoadmin_dir(\n            './presto-admin catalog add --serial')\n        self.assertEqualIgnoringOrder(\n            self.success_output, command_output)\n\n    @attr('smoketest')\n    @docker_only\n    def test_no_passwordless_ssh_authentication(self):\n        self.upload_topology()\n        self.setup_for_catalog_add()\n\n        # This is needed because the test for\n        # No passwordless SSH, -I correct -u app-admin,\n        # was giving Device not a stream error in jenkins\n        self.run_script_from_prestoadmin_dir(\n            'echo \"password\" | ./presto-admin catalog add -I')\n\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(\n                host,\n                'mv /root/.ssh/id_rsa /root/.ssh/id_rsa.bak'\n            )\n\n        # No passwordless SSH, no -I or -p\n        parallel_password_failure = self.parallel_password_failure_message(\n            with_sudo_prompt=False)\n        command_output = self.run_prestoadmin(\n            'catalog add', raise_error=False)\n        self.assertEqualIgnoringOrder(parallel_password_failure,\n                                      command_output)\n\n        # No passwordless SSH, -p incorrect -u root\n        command_output = self.run_prestoadmin(\n            'catalog add --password password', raise_error=False)\n        self.assertEqualIgnoringOrder(parallel_password_failure,\n                                      command_output)\n\n        # No passwordless SSH, -I correct -u app-admin\n        non_root_sudo_warning = 
self.non_root_sudo_warning_message()\n        command_output = self.run_script_from_prestoadmin_dir(\n            'echo \"password\" | ./presto-admin catalog add -I -u app-admin')\n        self.assertEqualIgnoringOrder(\n            self._remove_python_string(\n                self.success_output + self.interactive_text +\n                self.sudo_password_prompt + non_root_sudo_warning),\n            self._remove_python_string(command_output))\n\n        # No passwordless SSH, -p correct -u app-admin\n        command_output = self.run_prestoadmin('catalog add -p password '\n                                              '-u app-admin')\n        self.assertEqualIgnoringOrder(\n            self.success_output + self.sudo_password_prompt +\n            self.sudo_password_prompt, command_output)\n\n        # No passwordless SSH, specify keyfile with -i\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master, 'chmod 600 /root/.ssh/id_rsa.bak')\n        command_output = self.run_prestoadmin(\n            'catalog add -i /root/.ssh/id_rsa.bak')\n        self.assertEqualIgnoringOrder(self.success_output, command_output)\n\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(\n                host,\n                'mv /root/.ssh/id_rsa.bak /root/.ssh/id_rsa'\n            )\n\n    @attr('smoketest', 'quarantine')\n    @docker_only\n    def test_prestoadmin_no_sudo_popen(self):\n        self.upload_topology()\n        self.setup_for_catalog_add()\n\n        # We use Popen because docker-py loses the first 8 characters of TTY\n        # output.\n        args = ['docker', 'exec', '-t', self.cluster.master, 'sudo',\n                '-u', 'app-admin', get_presto_admin_path(),\n                'topology show']\n        proc = subprocess.Popen(args, stdout=subprocess.PIPE,\n                                stderr=subprocess.STDOUT)\n        self.assertRegexpMatchesLineByLine(\n            'Please run presto-admin with 
sudo.\\n'\n            '\\\\[Errno 13\\\\] Permission denied: \\'.*/.prestoadmin/log/'\n            'presto-admin.log\\'', proc.stdout.read())\n\n    def setup_for_catalog_add(self):\n        connector_script = 'mkdir -p %(catalogs)s\\n' \\\n                           'echo \\'connector.name=tpch\\' >> %(catalogs)s/tpch.properties\\n' % \\\n                           {'catalogs': get_catalog_directory()}\n        self.run_script_from_prestoadmin_dir(connector_script)\n\n    def _remove_python_string(self, text):\n        return re.sub(r'python2\\.6|python2\\.7', '', text)\n"
  },
  {
    "path": "tests/product/test_catalog.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for presto-admin catalog support.\n\"\"\"\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom prestoadmin.standalone.config import PRESTO_STANDALONE_USER\nfrom prestoadmin.util import constants\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER, STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_catalog_directory\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestCatalog(BaseProductTestCase):\n    def setUp(self):\n        super(TestCatalog, self).setUp()\n\n    def setup_cluster_assert_catalogs(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        for host in self.cluster.all_hosts():\n            self.assert_has_default_catalog(host)\n\n        self._assert_catalogs_loaded([['system'], ['tpch']])\n\n    @attr('smoketest')\n    def test_catalog_add_remove(self):\n        self.setup_cluster_assert_catalogs()\n        self.run_prestoadmin('catalog remove tpch')\n        self.assert_path_removed(self.cluster.master, os.path.join(get_catalog_directory(), 'tpch.properties'))\n        for host in self.cluster.all_hosts():\n            
self.assert_path_removed(host, os.path.join(constants.REMOTE_CATALOG_DIR, 'tpch.properties'))\n\n        # test add catalogs from directory with more than one catalog\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n        self.cluster.write_content_to_host(\n            'connector.name=jmx',\n            os.path.join(get_catalog_directory(), 'jmx.properties'),\n            self.cluster.master\n        )\n        self.run_prestoadmin('catalog add')\n        self.run_prestoadmin('server restart')\n        for host in self.cluster.all_hosts():\n            filepath = '/etc/presto/catalog/jmx.properties'\n            self.assert_has_default_catalog(host)\n            self.assert_config_perms(host, filepath)\n            self.assert_file_content(host, filepath, 'connector.name=jmx')\n        self._assert_catalogs_loaded([['system'], ['jmx'], ['tpch']])\n\n    def test_catalog_add_remove_coord_worker_using_dash_h(self):\n        self.setup_cluster_assert_catalogs()\n\n        self.run_prestoadmin('catalog remove tpch -H %(master)s,%(slave1)s')\n        self.run_prestoadmin('server restart')\n        self.assert_path_removed(self.cluster.master,\n                                 os.path.join(get_catalog_directory(),\n                                              'tpch.properties'))\n        self._assert_catalogs_loaded([['system']])\n        for host in [self.cluster.master, self.cluster.slaves[0]]:\n            self.assert_path_removed(host,\n                                     os.path.join(constants.REMOTE_CATALOG_DIR,\n                                                  'tpch.properties'))\n        self.assert_has_default_catalog(self.cluster.slaves[1])\n        self.assert_has_default_catalog(self.cluster.slaves[2])\n\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            
os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n        self.run_prestoadmin('catalog add tpch -H %(master)s,%(slave1)s')\n        self.run_prestoadmin('server restart')\n        self.assert_has_default_catalog(self.cluster.master)\n        self.assert_has_default_catalog(self.cluster.slaves[1])\n\n    def test_catalog_add_remove_coord_worker_using_dash_x(self):\n        self.setup_cluster_assert_catalogs()\n\n        self.run_prestoadmin('catalog remove tpch -x %(master)s,%(slave1)s')\n        self.run_prestoadmin('server restart')\n        self._assert_catalogs_loaded([['system'], ['tpch']])\n        self.assert_has_default_catalog(self.cluster.master)\n        self.assert_has_default_catalog(self.cluster.slaves[0])\n        for host in [self.cluster.slaves[1], self.cluster.slaves[2]]:\n            self.assert_path_removed(host,\n                                     os.path.join(constants.REMOTE_CATALOG_DIR,\n                                                  'tpch.properties'))\n\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n        self.run_prestoadmin('catalog add tpch -x %(master)s,%(slave1)s')\n        self.run_prestoadmin('server restart')\n        self._assert_catalogs_loaded([['system'], ['tpch']])\n        for slave in [self.cluster.slaves[1], self.cluster.slaves[2]]:\n            self.assert_has_default_catalog(slave)\n\n    def test_catalog_add_by_name(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('catalog remove tpch')\n\n        # test add catalog by name when it exists\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n        
self.run_prestoadmin('catalog add tpch')\n        self.run_prestoadmin('server start')\n        for host in self.cluster.all_hosts():\n            self.assert_has_default_catalog(host)\n        self._assert_catalogs_loaded([['system'], ['tpch']])\n\n    def test_catalog_add_empty_dir(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('catalog remove tpch')\n        output = self.run_prestoadmin('catalog add')\n        expected = [r'',\n                    r'Warning: \\[slave3\\] Directory .*/.prestoadmin/catalog is empty. '\n                    r'No catalogs will be deployed',\n                    r'',\n                    r'',\n                    r'Warning: \\[slave2\\] Directory .*/.prestoadmin/catalog is empty. '\n                    r'No catalogs will be deployed',\n                    r'',\n                    r'',\n                    r'Warning: \\[slave1\\] Directory .*/.prestoadmin/catalog is empty. '\n                    r'No catalogs will be deployed',\n                    r'',\n                    r'',\n                    r'Warning: \\[master\\] Directory .*/.prestoadmin/catalog is empty. 
'\n                    r'No catalogs will be deployed',\n                    r'']\n        self.assertRegexpMatchesLineByLine(output.splitlines(), expected)\n\n    def fatal_error(self, error):\n        message = \"\"\"\nFatal error: %(error)s\n\nUnderlying exception:\n    %(error)s\n\nAborting.\n\"\"\"\n        return message % {'error': error}\n\n    def test_catalog_add_lost_host(self):\n        installer = StandalonePrestoInstaller(self)\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n        installer.install()\n        self.run_prestoadmin('catalog remove tpch')\n\n        self.cluster.stop_host(\n            self.cluster.slaves[0])\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n        output = self.run_prestoadmin('catalog add tpch', raise_error=False)\n        for host in self.cluster.all_internal_hosts():\n            deploying_message = 'Deploying tpch.properties catalog configurations on: %s'\n            self.assertTrue(deploying_message % host in output,\n                            'expected %s \\n actual %s'\n                            % (deploying_message % host, output))\n        self.assertRegexpMatches(\n            output,\n            self.down_node_connection_error(self.cluster.internal_slaves[0])\n        )\n        self.assertEqual(len(output.splitlines()),\n                         len(self.cluster.all_hosts()) +\n                         self.len_down_node_error)\n        self.run_prestoadmin('server start', raise_error=False)\n\n        for host in [self.cluster.master,\n                     self.cluster.slaves[1],\n                     self.cluster.slaves[2]]:\n            self.assert_has_default_catalog(host)\n        self._assert_catalogs_loaded([['system'], ['tpch']])\n\n    def test_catalog_remove(self):\n        
self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        for host in self.cluster.all_hosts():\n            self.assert_has_default_catalog(host)\n\n        missing_catalog_message = \"\"\"[Errno 1]\nFatal error: [master] Could not remove catalog '%(name)s'. No such file \\\n'/etc/presto/catalog/%(name)s.properties'\n\nAborting.\n\nFatal error: [slave1] Could not remove catalog '%(name)s'. No such file \\\n'/etc/presto/catalog/%(name)s.properties'\n\nAborting.\n\nFatal error: [slave2] Could not remove catalog '%(name)s'. No such file \\\n'/etc/presto/catalog/%(name)s.properties'\n\nAborting.\n\nFatal error: [slave3] Could not remove catalog '%(name)s'. No such file \\\n'/etc/presto/catalog/%(name)s.properties'\n\nAborting.\n\"\"\"  # noqa\n\n        success_message = \"\"\"[master] Catalog removed. Restart the server \\\nfor the change to take effect\n[slave1] Catalog removed. Restart the server for the change to take effect\n[slave2] Catalog removed. Restart the server for the change to take effect\n[slave3] Catalog removed. 
Restart the server for the change to take effect\"\"\"\n\n        # test remove catalog does not exist\n        # expect error\n\n        self.assertRaisesMessageIgnoringOrder(\n            OSError,\n            missing_catalog_message % {'name': 'jmx'},\n            self.run_prestoadmin,\n            'catalog remove jmx')\n\n        # test remove catalog not in directory, but in presto\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master,\n            'rm %s' % os.path.join(get_catalog_directory(), 'tpch.properties')\n        )\n\n        output = self.run_prestoadmin('catalog remove tpch')\n        self.assertEqualIgnoringOrder(success_message, output)\n\n        # test remove catalog in directory but not in presto\n        self.cluster.write_content_to_host(\n            'connector.name=tpch',\n            os.path.join(get_catalog_directory(), 'tpch.properties'),\n            self.cluster.master\n        )\n\n        self.assertRaisesMessageIgnoringOrder(\n            OSError,\n            missing_catalog_message % {'name': 'tpch'},\n            self.run_prestoadmin,\n            'catalog remove tpch')\n\n    def test_catalog_add_no_presto_user(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(\n                host, \"userdel %s\" % (PRESTO_STANDALONE_USER,), invoke_sudo=True)\n\n        self.assertRaisesRegexp(\n            OSError, \"User presto does not exist\", self.run_prestoadmin,\n            'catalog add tpch')\n\n    def get_catalog_info(self):\n        client = self.create_presto_client()\n        return client.run_sql('select catalog_name from catalogs')\n\n    # Presto will be 'query-able' before it has loaded all of its\n    # catalogs. When presto-admin restarts presto it returns when it\n    # can query the server but that doesn't mean that all catalogs\n    # have been loaded. 
Thus in order to verify that catalogs get\n    # correctly added we check continuously within a timeout.\n    def _assert_catalogs_loaded(self, expected_catalogs):\n        # list.sort() returns None, so use sorted() to compare actual contents\n        self.retry(lambda: self.assertEqual(sorted(expected_catalogs), sorted(self.get_catalog_info())))\n\n    def test_catalog_add_remove_non_sudo_user(self):\n        self.setup_cluster_assert_catalogs()\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n\n        self.run_prestoadmin('catalog remove tpch -p password')\n        self.assert_path_removed(self.cluster.master,\n                                 os.path.join(get_catalog_directory(),\n                                              'tpch.properties'))\n        for host in self.cluster.all_hosts():\n            self.assert_path_removed(host,\n                                     os.path.join(constants.REMOTE_CATALOG_DIR,\n                                                  'tpch.properties'))\n\n        self.cluster.write_content_to_host(\n            'connector.name=jmx',\n            os.path.join(get_catalog_directory(), 'jmx.properties'),\n            self.cluster.master\n        )\n        self.run_prestoadmin('catalog add -p password')\n        self.run_prestoadmin('server restart -p password')\n        for host in self.cluster.all_hosts():\n            self.assert_has_jmx_catalog(host)\n        self._assert_catalogs_loaded([['system'], ['jmx']])\n"
  },
  {
    "path": "tests/product/test_collect.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for presto-admin collect\n\"\"\"\nimport os\nfrom os import path\n\nfrom fabric.context_managers import settings\nfrom nose.plugins.attrib import attr\nfrom nose.tools import nottest\n\nfrom prestoadmin.collect import OUTPUT_FILENAME_FOR_LOGS, TMP_PRESTO_DEBUG, \\\n    PRESTOADMIN_LOG_NAME, OUTPUT_FILENAME_FOR_SYS_INFO, TMP_PRESTO_DEBUG_REMOTE\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, PrestoError\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER, STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_install_directory\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestCollect(BaseProductTestCase):\n    def setUp(self):\n        super(TestCollect, self).setUp()\n\n    @attr('smoketest')\n    def test_collect_logs_basic(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        actual = self.run_prestoadmin('collect logs')\n\n        expected = 'Downloading logs from all the nodes...\\n' + \\\n                   'logs archive created: ' + OUTPUT_FILENAME_FOR_LOGS + '\\n'\n        self.assertLazyMessage(lambda: self.log_msg(actual, expected),\n                               self.assertEqual, actual, 
expected)\n        self.assert_path_exists(self.cluster.master, OUTPUT_FILENAME_FOR_LOGS)\n        self.assert_path_exists(self.cluster.master, TMP_PRESTO_DEBUG)\n\n        downloaded_logs_location = path.join(TMP_PRESTO_DEBUG, 'logs')\n        self.assert_path_exists(self.cluster.master, downloaded_logs_location)\n\n        for host in self.cluster.all_internal_hosts():\n            host_log_location = path.join(downloaded_logs_location, host)\n            self.assert_path_exists(self.cluster.master, host_log_location)\n\n        admin_log = path.join(downloaded_logs_location, PRESTOADMIN_LOG_NAME)\n        self.assert_path_exists(self.cluster.master, admin_log)\n\n    def log_msg(self, actual, expected):\n        msg = '%s != %s' % (actual, expected)\n        return msg\n\n    @nottest\n    def _test_basic_system_info(self, actual, coordinator=None, hosts=None):\n        if not coordinator:\n            coordinator = self.cluster.internal_master\n        if not hosts:\n            hosts = self.cluster.all_hosts()\n\n        expected = 'System info archive created: ' + OUTPUT_FILENAME_FOR_SYS_INFO + '\\n'\n        self.assertEqual(expected, actual)\n        self.assert_path_exists(self.cluster.master, OUTPUT_FILENAME_FOR_SYS_INFO)\n        self.assert_path_exists(self.cluster.master, TMP_PRESTO_DEBUG)\n\n        downloaded_sys_info_loc = path.join(TMP_PRESTO_DEBUG, 'sysinfo')\n        self.assert_path_exists(self.cluster.master, downloaded_sys_info_loc)\n\n        catalog_file_name = path.join(downloaded_sys_info_loc, 'catalog_info.txt')\n        self.assert_path_exists(self.cluster.master, catalog_file_name)\n\n        version_file_name = path.join(TMP_PRESTO_DEBUG_REMOTE, 'version_info.txt')\n        for host in hosts:\n            self.assert_path_exists(host, version_file_name)\n\n        # collected coordinator info\n        coord_system_info_location = path.join(downloaded_sys_info_loc, coordinator)\n        self.assert_path_exists(self.cluster.master, 
coord_system_info_location)\n\n        coord_catalog_info_location = path.join(coord_system_info_location, 'catalog')\n        self.assert_path_exists(self.cluster.master, coord_catalog_info_location)\n        self.assert_path_exists(self.cluster.master, path.join(coord_catalog_info_location, 'tpch.properties'))\n\n        # collected worker info\n        slave0_system_info_loc = path.join(downloaded_sys_info_loc, self.cluster.internal_slaves[0])\n        self.assert_path_exists(self.cluster.master, slave0_system_info_loc)\n\n        slave0_catalog_info_loc = path.join(slave0_system_info_loc, 'catalog')\n        self.assert_path_exists(self.cluster.master, slave0_catalog_info_loc)\n        self.assert_path_exists(self.cluster.master, path.join(slave0_catalog_info_loc, 'tpch.properties'))\n        self.assert_path_exists(self.cluster.master, OUTPUT_FILENAME_FOR_SYS_INFO)\n\n    def test_collect_system_info_dash_h_coord_worker(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        actual = self.run_prestoadmin('collect system_info -H %(master)s,%(slave1)s')\n        self._test_basic_system_info(actual,\n                                     self.cluster.internal_master,\n                                     [self.cluster.master, self.cluster.slaves[0]])\n\n    def test_collect_system_info_dash_x_two_workers(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        actual = self.run_prestoadmin('collect system_info -x %(slave2)s,%(slave3)s')\n        self._test_basic_system_info(actual,\n                                     self.cluster.internal_master,\n                                     [self.cluster.master, self.cluster.slaves[0]])\n\n    @attr('smoketest')\n    def test_system_info_pa_separate_node(self):\n     
   installer = StandalonePrestoInstaller(self)\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        topology = {\"coordinator\": \"slave1\",\n                    \"workers\": [\"slave2\", \"slave3\"]}\n        self.upload_topology(topology=topology)\n        installer.install(coordinator='slave1')\n        self.run_prestoadmin('server start')\n        actual = self.run_prestoadmin('collect system_info')\n        self._test_basic_system_info(\n            actual,\n            coordinator=self.cluster.internal_slaves[0],\n            hosts=self.cluster.slaves)\n\n    @attr('smoketest')\n    def test_query_info_pa_separate_node(self):\n        installer = StandalonePrestoInstaller(self)\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        topology = {\"coordinator\": \"slave1\",\n                    \"workers\": [\"slave2\", \"slave3\"]}\n        self.upload_topology(topology=topology)\n        installer.install(coordinator='slave1')\n        self.run_prestoadmin('server start')\n        sql_to_run = 'SELECT * FROM system.runtime.nodes WHERE 1234 = 1234'\n        with settings(roledefs={'coordinator': ['slave1']}):\n            query_id = self.retry(\n                lambda: self.get_query_id(sql_to_run, host=self.cluster.slaves[0]))\n\n        actual = self.run_prestoadmin('collect query_info ' + query_id)\n        query_info_file_name = path.join(TMP_PRESTO_DEBUG, 'query_info_' + query_id + '.json')\n\n        expected = 'Gathered query information in file: ' + query_info_file_name + '\\n'\n        self.assert_path_exists(self.cluster.master, query_info_file_name)\n        self.assertEqual(actual, expected)\n\n    def get_query_id(self, sql, host=None):\n        client = self.create_presto_client(host)\n        client.run_sql(sql)\n        query_runtime_info = client.run_sql('SELECT query_id FROM system.runtime.queries WHERE query = \\'%s\\'' % (sql,))\n        if not query_runtime_info:\n       
     raise PrestoError('Presto not started up yet.')\n        for row in query_runtime_info:\n            return row[0]\n\n    def test_query_info_invalid_id(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        invalid_id = '1234_invalid'\n        actual = self.run_prestoadmin('collect query_info ' + invalid_id, raise_error=False)\n        expected = '\\nFatal error: [master] Unable to retrieve information. ' \\\n                   'Please check that the query_id is correct, or check ' \\\n                   'that server is up with command: server status\\n\\n' \\\n                   'Aborting.\\n'\n        self.assertEqual(actual, expected)\n\n    def test_collect_logs_server_stopped(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self._assert_no_logs_downloaded()\n\n    def test_collect_system_info_server_stopped(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        actual = self.run_prestoadmin('collect system_info', raise_error=False)\n        message = '\\nFatal error: [%s] Unable to access node ' \\\n                  'information. 
Please check that server is up with ' \\\n                  'command: server status\\n\\nAborting.\\n'\n        expected = message % self.cluster.internal_master\n        self.assertEqualIgnoringOrder(actual, expected)\n\n    def _add_custom_log_location(self, new_log_location):\n        for host in self.cluster.all_hosts():\n            self.run_script_from_prestoadmin_dir('rm -rf /var/log/presto', host)\n            self.run_script_from_prestoadmin_dir(\n                'mkdir %s; chown -R presto:presto %s'\n                % (new_log_location, new_log_location),\n                host)\n            config_script = 'echo \"node.server-log-file=%s/server.log\\n' \\\n                            'node.launcher-log-file=%s/launcher.log\" >> ' \\\n                            '/etc/presto/node.properties' \\\n                            % (new_log_location, new_log_location)\n            self.run_script_from_prestoadmin_dir(config_script, host=host)\n\n    def _collect_logs_and_unzip(self):\n        self.run_prestoadmin('collect logs')\n        self.assert_path_exists(self.cluster.master, OUTPUT_FILENAME_FOR_LOGS)\n        log_filename = path.basename(OUTPUT_FILENAME_FOR_LOGS)\n        self.run_script_from_prestoadmin_dir('cp %s .; tar xvf %s' % (OUTPUT_FILENAME_FOR_LOGS, log_filename))\n\n    def test_collect_logs_nonstandard_location(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        version = self.cluster.exec_cmd_on_host(self.cluster.master, 'rpm -q --qf \\\"%{VERSION}\\\\n\\\" presto-server-rpm')\n        if '127t' not in version:\n            print 'test_collect_logs_nonstandard_location only valid for 127t'\n            return\n\n        new_log_location = '/var/presto'\n        self._add_custom_log_location(new_log_location)\n\n        self.run_prestoadmin('server start')\n        self._collect_logs_and_unzip()\n        collected_logs_dir = os.path.join(get_install_directory(), 'logs')\n        
self.assert_path_exists(self.cluster.master, os.path.join(collected_logs_dir, 'presto-admin.log'))\n\n        for host in self.cluster.all_internal_hosts():\n            host_directory = os.path.join(collected_logs_dir, host)\n            self.assert_path_exists(self.cluster.master, os.path.join(host_directory, 'server.log'))\n            self.assert_path_exists(self.cluster.master, os.path.join(host_directory, 'launcher.log'))\n\n    def _assert_no_logs_downloaded(self):\n        self._collect_logs_and_unzip()\n        collected_logs_dir = os.path.join(get_install_directory(), 'logs')\n        self.assert_path_exists(self.cluster.master, os.path.join(collected_logs_dir, 'presto-admin.log'))\n        for host in self.cluster.all_internal_hosts():\n            host_directory = os.path.join(collected_logs_dir, host)\n            self.assert_path_exists(self.cluster.master, host_directory)\n            self.assert_path_removed(self.cluster.master, os.path.join(host_directory, '*'))\n\n    def test_collect_logs_server_not_installed(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n        self._assert_no_logs_downloaded()\n\n    def test_collect_logs_multiple_server_logs(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        self.cluster.write_content_to_host('Stuff that I logged!', '/var/log/presto/server.log-2', self.cluster.master)\n        actual = self.run_prestoadmin('collect logs')\n\n        expected = 'Downloading logs from all the nodes...\\nlogs archive created: ' + OUTPUT_FILENAME_FOR_LOGS + '\\n'\n        self.assertLazyMessage(lambda: self.log_msg(actual, expected),\n                               self.assertEqual, actual, expected)\n\n        downloaded_logs_location = path.join(TMP_PRESTO_DEBUG, 'logs')\n        self.assert_path_exists(self.cluster.master, downloaded_logs_location)\n\n        for host in self.cluster.all_internal_hosts():\n            host_log_location = path.join(downloaded_logs_location, host)\n            self.assert_path_exists(self.cluster.master, os.path.join(host_log_location, 'server.log'))\n\n        master_path = os.path.join(downloaded_logs_location, self.cluster.internal_master)\n        self.assert_path_exists(self.cluster.master, os.path.join(master_path, 'server.log-2'))\n\n    def test_collect_non_root_user(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n\n        self.run_script_from_prestoadmin_dir('./presto-admin server start -p password')\n\n        self.run_script_from_prestoadmin_dir('./presto-admin collect logs -p password')\n\n        actual = self.run_script_from_prestoadmin_dir('./presto-admin collect system_info -p password')\n        self._test_basic_system_info(actual)\n"
  },
  {
    "path": "tests/product/test_configuration.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for presto-admin configuration\n\"\"\"\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom prestoadmin.standalone.config import PRESTO_STANDALONE_USER\nfrom prestoadmin.util import constants\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER\nfrom tests.product.config_dir_utils import get_workers_directory, get_coordinator_directory\nfrom tests.product.constants import LOCAL_RESOURCES_DIR\n\n\nclass TestConfiguration(BaseProductTestCase):\n    def setUp(self):\n        super(TestConfiguration, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.write_test_configs(self.cluster)\n\n    def deploy_and_assert_default_config(self):\n        # deploy a default configuration, no files in coordinator or workers\n        output = self.run_prestoadmin('configuration deploy')\n        deploy_template = 'Deploying configuration on: %s\\n'\n        expected = ''\n        for host in self.cluster.all_internal_hosts():\n            expected += deploy_template % host\n\n        for host in self.cluster.all_hosts():\n            self.assert_has_default_config(host)\n\n        self.assertEqualIgnoringOrder(output, expected)\n\n        # redeploy 
configuration to test the default files that we wrote out\n        output = self.run_prestoadmin('configuration deploy')\n\n        for host in self.cluster.all_hosts():\n            self.assert_has_default_config(host)\n\n        self.assertEqualIgnoringOrder(output, expected)\n\n    def __write_dummy_config_file(self):\n        # deploy coordinator configuration only.  Has a non-default file\n        dummy_prop1 = 'a.dummy.property=\\'single-quoted\\''\n        dummy_prop2 = 'another.dummy=value'\n        extra_configs = '%s\\n%s' % (dummy_prop1, dummy_prop2)\n        self.write_test_configs(self.cluster, extra_configs)\n        return dummy_prop1, dummy_prop2\n\n    def _get_node_id(self, host):\n        return self.cluster.exec_cmd_on_host(host, 'grep node.id= /etc/presto/node.properties',\n                                             invoke_sudo=True).strip()\n\n    @attr('smoketest')\n    def test_configuration_deploy_show(self):\n        self.upload_topology()\n\n        self.deploy_and_assert_default_config()\n        node_ids = {}\n        for host in self.cluster.all_hosts():\n            node_ids[host] = self._get_node_id(host)\n\n        # deploy coordinator configuration only.  
Has a non-default file\n        dummy_prop1, dummy_prop2 = self.__write_dummy_config_file()\n\n        output = self.run_prestoadmin('configuration deploy coordinator')\n        deploy_template = 'Deploying configuration on: %s\\n'\n        self.assertEqual(output,\n                         deploy_template % self.cluster.internal_master)\n        for host in self.cluster.slaves:\n            self.assert_has_default_config(host)\n\n        config_properties_path = os.path.join(\n            constants.REMOTE_CONF_DIR, 'config.properties')\n        self.assert_config_perms(self.cluster.master, config_properties_path)\n        self.assert_file_content(self.cluster.master,\n                                 config_properties_path,\n                                 dummy_prop1 + '\\n' +\n                                 dummy_prop2 + '\\n' +\n                                 self.default_coordinator_test_config_)\n\n        # deploy workers configuration only has non-default file\n        filename = 'node.properties'\n        path = os.path.join(get_workers_directory(), filename)\n        self.cluster.write_content_to_host(\n            'node.environment test', path, self.cluster.master)\n        path = os.path.join(get_coordinator_directory(), filename)\n        self.cluster.write_content_to_host(\n            'node.environment test', path, self.cluster.master)\n\n        output = self.run_prestoadmin('configuration deploy workers')\n        expected = ''\n        for host in self.cluster.internal_slaves:\n            expected += deploy_template % host\n        self.assertEqualIgnoringOrder(output, expected)\n\n        for host in self.cluster.slaves:\n            self.assert_config_perms(host, config_properties_path)\n            self.assert_file_content(host,\n                                     config_properties_path,\n                                     dummy_prop1 + '\\n' +\n                                     dummy_prop2 + '\\n' +\n                               
      self.default_workers_test_config_)\n            expected = 'node.environment=test\\n'\n            self.assert_node_config(host, expected, node_ids[host])\n\n        self.assert_node_config(self.cluster.master,\n                                self.default_node_properties_,\n                                node_ids[self.cluster.master])\n\n    def test_configuration_deploy_using_dash_h_coord_worker(self):\n        self.upload_topology()\n\n        self.deploy_and_assert_default_config()\n\n        dummy_prop1, dummy_prop2 = self.__write_dummy_config_file()\n\n        output = self.run_prestoadmin('configuration deploy '\n                                      '-H %(master)s,%(slave1)s')\n        deploy_template = 'Deploying configuration on: %s\\n'\n        expected = ''\n        for host in [self.cluster.internal_master,\n                     self.cluster.internal_slaves[0]]:\n            expected += deploy_template % host\n\n        for host in [self.cluster.slaves[1], self.cluster.slaves[2]]:\n            self.assert_has_default_config(host)\n\n        self.assertEqualIgnoringOrder(output, expected)\n\n        config_properties_path = os.path.join(constants.REMOTE_CONF_DIR,\n                                              'config.properties')\n\n        self.assert_config_perms(\n            self.cluster.master, config_properties_path)\n        self.assert_file_content(self.cluster.master,\n                                 config_properties_path,\n                                 dummy_prop1 + '\\n' +\n                                 dummy_prop2 + '\\n' +\n                                 self.default_coordinator_test_config_)\n\n        self.assert_config_perms(\n            self.cluster.slaves[0], config_properties_path)\n        self.assert_file_content(self.cluster.slaves[0],\n                                 config_properties_path,\n                                 dummy_prop1 + '\\n' +\n                                 dummy_prop2 + '\\n' +\n           
                      self.default_workers_test_config_)\n\n    def test_configuration_deploy_using_dash_x_coord_worker(self):\n        self.upload_topology()\n\n        self.deploy_and_assert_default_config()\n\n        dummy_prop1, dummy_prop2 = self.__write_dummy_config_file()\n\n        output = self.run_prestoadmin('configuration deploy '\n                                      '-x %(master)s,%(slave1)s')\n        self.assert_has_default_config(self.cluster.master)\n        self.assert_has_default_config(self.cluster.slaves[0])\n        deploy_template = 'Deploying configuration on: %s\\n'\n        expected = ''\n        for host in [self.cluster.internal_slaves[1],\n                     self.cluster.internal_slaves[2]]:\n            expected += deploy_template % host\n\n        self.assertEqualIgnoringOrder(output, expected)\n\n        config_properties_path = os.path.join(constants.REMOTE_CONF_DIR,\n                                              'config.properties')\n\n        for slave in [self.cluster.slaves[1], self.cluster.slaves[2]]:\n            self.assert_config_perms(slave, config_properties_path)\n            self.assert_file_content(slave,\n                                     config_properties_path,\n                                     dummy_prop1 + '\\n' +\n                                     dummy_prop2 + '\\n' +\n                                     self.default_workers_test_config_)\n\n    def test_lost_coordinator_connection(self):\n        internal_bad_host = self.cluster.internal_slaves[0]\n        bad_host = self.cluster.slaves[0]\n        good_hosts = [self.cluster.internal_master,\n                      self.cluster.internal_slaves[1],\n                      self.cluster.internal_slaves[2]]\n        topology = {'coordinator': internal_bad_host,\n                    'workers': good_hosts}\n        self.upload_topology(topology)\n        self.cluster.stop_host(bad_host)\n        output = self.run_prestoadmin('configuration deploy',\n      
                                raise_error=False)\n        self.assertRegexpMatches(\n            output,\n            self.down_node_connection_error(internal_bad_host)\n        )\n        for host in self.cluster.all_internal_hosts():\n            self.assertTrue('Deploying configuration on: %s' % host in output)\n        expected_size = self.len_down_node_error + len(self.cluster.all_hosts())\n        self.assertEqual(len(output.splitlines()), expected_size)\n\n        output = self.run_prestoadmin('configuration show config',\n                                      raise_error=False)\n        self.assertRegexpMatches(\n            output,\n            self.down_node_connection_error(internal_bad_host)\n        )\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_down_node.txt'), 'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(str.join('\\n', output.splitlines()[6:]),\n                                 expected)\n\n    def test_configuration_show(self):\n        self.upload_topology()\n\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(host, 'rm -rf /etc/presto', invoke_sudo=True)\n\n        # configuration show no configuration\n        output = self.run_prestoadmin('configuration show')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_none.txt'), 'r') as f:\n            expected = f.read()\n        self.assertEqual(expected, output)\n\n        self.run_prestoadmin('configuration deploy')\n\n        # configuration show default configuration\n        output = self.run_prestoadmin('configuration show')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_default.txt'), 'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(output, expected)\n\n        # configuration show node\n        output = 
self.run_prestoadmin('configuration show node')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_node.txt'), 'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(output, expected)\n\n        # configuration show jvm\n        output = self.run_prestoadmin('configuration show jvm')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_jvm.txt'), 'r') as f:\n            expected = f.read()\n        self.assertEqual(output, expected)\n\n        # configuration show config\n        output = self.run_prestoadmin('configuration show config')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_config.txt'), 'r') as f:\n            expected = f.read()\n        self.assertEqual(output, expected)\n\n        # configuration show log no log.properties\n        output = self.run_prestoadmin('configuration show log')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_log_none.txt'), 'r') as f:\n            expected = f.read()\n        self.assertEqual(output, expected)\n\n        # configuration show log has log.properties\n        log_properties = 'com.facebook.presto=WARN'\n        filename = 'log.properties'\n        self.cluster.write_content_to_host(\n            log_properties,\n            os.path.join(get_workers_directory(), filename),\n            self.cluster.master\n        )\n        self.cluster.write_content_to_host(\n            log_properties,\n            os.path.join(get_coordinator_directory(), filename),\n            self.cluster.master\n        )\n        self.run_prestoadmin('configuration deploy')\n\n        output = self.run_prestoadmin('configuration show log')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_log.txt'), 'r') as f:\n            expected = f.read()\n 
       self.assertEqual(output, expected)\n\n    def test_configuration_show_coord_worker_using_dash_h(self):\n        self.upload_topology()\n\n        self.run_prestoadmin('configuration deploy')\n\n        # show default configuration for master and slave1\n        output = self.run_prestoadmin('configuration show '\n                                      '-H %(master)s,%(slave1)s')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_default_master_slave1.txt'),\n                  'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(output, expected)\n\n    def test_configuration_show_coord_worker_using_dash_x(self):\n        self.upload_topology()\n\n        self.run_prestoadmin('configuration deploy')\n\n        # show default configuration for all except master and slave1\n        output = self.run_prestoadmin('configuration show '\n                                      '-x %(master)s,%(slave1)s')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_default_slave2_slave3.txt'),\n                  'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(output, expected)\n\n    def test_configuration_no_presto_user(self):\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(\n                host, \"userdel %s\" % (PRESTO_STANDALONE_USER,), invoke_sudo=True)\n\n        self.assertRaisesRegexp(\n            OSError, \"User presto does not exist\", self.run_prestoadmin,\n            'configuration deploy')\n\n    def test_configuration_show_non_root_user(self):\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n        for host in self.cluster.all_hosts():\n            self.cluster.exec_cmd_on_host(host, 'rm -rf /etc/presto', 
invoke_sudo=True)\n\n        self.run_prestoadmin('configuration deploy -p password')\n\n        # configuration show default configuration\n        output = self.run_prestoadmin('configuration show -p password')\n        with open(os.path.join(LOCAL_RESOURCES_DIR,\n                               'configuration_show_default.txt'), 'r') as f:\n            expected = f.read()\n        self.assertRegexpMatches(output, expected)\n"
  },
  {
    "path": "tests/product/test_control.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for start/stop/restart of presto-admin server\n\"\"\"\nfrom nose.plugins.attrib import attr\n\nfrom prestoadmin.server import RETRY_TIMEOUT\nfrom prestoadmin.util import constants\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER, STANDALONE_PA_CLUSTER\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestControl(BaseProductTestCase):\n    def setUp(self):\n        super(TestControl, self).setUp()\n\n    @attr('smoketest')\n    def test_server_start_stop_simple(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.assert_simple_start_stop(self.expected_start(),\n                                      self.expected_stop())\n\n    @attr('smoketest')\n    def test_server_restart_simple(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        expected_output = self.expected_stop()[:] + self.expected_start()[:]\n        self.assert_simple_server_restart(expected_output)\n\n    def test_server_start_without_presto(self):\n        self.assert_service_fails_without_presto('start')\n\n    def test_server_stop_without_presto(self):\n        
self.assert_service_fails_without_presto('stop')\n\n    def test_server_restart_without_presto(self):\n        self.assert_service_fails_without_presto('restart')\n\n    def assert_service_fails_without_presto(self, service):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n        # Start without Presto installed\n        start_output = self.run_prestoadmin('server %s' % service,\n                                            raise_error=False).splitlines()\n        presto_not_installed = self.presto_not_installed_message()\n        self.assertEqualIgnoringOrder(presto_not_installed,\n                                      '\\n'.join(start_output))\n\n    def test_server_start_one_host_started(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.assert_start_with_one_host_started(\n            self.cluster.internal_slaves[0])\n\n    def test_server_stop_one_host_started(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.assert_one_host_stopped(self.cluster.internal_master)\n\n    def test_server_restart_nothing_started(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        # Restart when the servers aren't started\n        expected_output = self.expected_stop(\n            not_running=self.cluster.all_internal_hosts())[:] +\\\n            self.expected_start()[:]\n        self.assert_simple_server_restart(expected_output, running_host='')\n\n    def test_start_coordinator_down(self):\n        installer = StandalonePrestoInstaller(self)\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        topology = {\"coordinator\": \"slave1\", \"workers\":\n                    [\"master\", \"slave2\", \"slave3\"]}\n        self.upload_topology(topology=topology)\n        installer.install(coordinator='slave1')\n        
self.assert_start_coordinator_down(\n            self.cluster.slaves[0],\n            self.cluster.internal_slaves[0])\n\n    def test_start_worker_down(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.assert_start_worker_down(\n            self.cluster.slaves[0],\n            self.cluster.internal_slaves[0])\n\n    def assert_start_coordinator_down(self, coordinator, coordinator_internal):\n        self.cluster.stop_host(coordinator)\n        alive_hosts = self.cluster.all_internal_hosts()[:]\n        alive_hosts.remove(self.cluster.get_down_hostname(coordinator_internal))\n\n        # test server start\n        start_output = self.run_prestoadmin('server start', raise_error=False)\n\n        # when the coordinator is down, you can't confirm that the server is started\n        # on any of the nodes\n        expected_start = self.expected_start(failed_hosts=alive_hosts)\n        for host in alive_hosts:\n            expected_start.append(self.expected_no_status_message(host))\n        expected_start.append(self.down_node_connection_error(coordinator_internal))\n        for message in expected_start:\n            self.assertRegexpMatches(start_output, message, 'expected %s \\n '\n                                                            'actual %s' % (message, start_output))\n\n        process_per_host = self.get_process_per_host(start_output.splitlines())\n        self.assert_started(process_per_host)\n\n    def assert_start_worker_down(self, down_node, down_internal_node):\n        self.cluster.stop_host(down_node)\n        alive_hosts = self.cluster.all_internal_hosts()[:]\n        alive_hosts.remove(self.cluster.get_down_hostname(down_internal_node))\n\n        # test server start\n        start_output = self.run_prestoadmin('server start', raise_error=False)\n\n        self.assertRegexpMatches(\n            start_output,\n            self.down_node_connection_error(down_internal_node)\n        )\n\n      
  expected_start = self.expected_start(start_success=alive_hosts)\n        for message in expected_start:\n            self.assertRegexpMatches(start_output, message, 'expected %s \\n '\n                                     'actual %s' % (message, start_output))\n\n        process_per_host = self.get_process_per_host(start_output.splitlines())\n        self.assert_started(process_per_host)\n\n    def expected_down_node_output_size(self, expected_output):\n        return self.len_down_node_error + len(\n            '\\n'.join(expected_output).splitlines())\n\n    def assert_simple_start_stop(self, expected_start, expected_stop,\n                                 pa_raise_error=True):\n        cmd_output = self.run_prestoadmin(\n            'server start', raise_error=pa_raise_error)\n        cmd_output = cmd_output.splitlines()\n        self.assertRegexpMatchesLineByLine(cmd_output, expected_start)\n        process_per_host = self.get_process_per_host(cmd_output)\n        self.assert_started(process_per_host)\n        cmd_output = self.run_prestoadmin('server stop').splitlines()\n        self.assertRegexpMatchesLineByLine(cmd_output, expected_stop)\n        self.assert_stopped(process_per_host)\n\n    def assert_simple_server_restart(self, expected_output, running_host='all',\n                                     pa_raise_error=True):\n        if running_host == 'all':\n            start_output = self.run_prestoadmin(\n                'server start', raise_error=pa_raise_error)\n        elif running_host:\n            start_output = self.run_prestoadmin('server start -H %s'\n                                                % running_host, raise_error=pa_raise_error)\n        else:\n            start_output = ''\n\n        start_output = start_output.splitlines()\n\n        restart_output = self.run_prestoadmin(\n            'server restart', raise_error=pa_raise_error).splitlines()\n        self.assertRegexpMatchesLineByLine(restart_output, expected_output)\n\n        
if start_output:\n            process_per_host = self.get_process_per_host(start_output)\n            self.assert_stopped(process_per_host)\n\n        process_per_host = self.get_process_per_host(restart_output)\n        self.assert_started(process_per_host)\n\n    def assert_start_with_one_host_started(self, host):\n        start_output = self.run_prestoadmin('server start -H %s' % host).splitlines()\n        process_per_host = self.get_process_per_host(start_output)\n        self.assert_started(process_per_host)\n\n        start_output = self.run_prestoadmin(\n            'server start', raise_error=False).splitlines()\n        started_hosts = self.cluster.all_internal_hosts()\n        started_hosts.remove(host)\n        started_expected = self.expected_start(start_success=started_hosts)\n        started_expected.extend(self.expected_port_error([host]))\n        self.assertRegexpMatchesLineByLine(\n            start_output,\n            started_expected\n        )\n        process_per_host = self.get_process_per_host(start_output)\n        self.assert_started(process_per_host)\n\n    def assert_one_host_stopped(self, host):\n        start_output = self.run_prestoadmin('server start -H %s' % host) \\\n            .splitlines()\n        process_per_host = self.get_process_per_host(start_output)\n        self.assert_started(process_per_host)\n        stop_output = self.run_prestoadmin('server stop').splitlines()\n        not_started_hosts = self.cluster.all_internal_hosts()\n        not_started_hosts.remove(host)\n        self.assertRegexpMatchesLineByLine(\n            stop_output,\n            self.expected_stop(not_running=not_started_hosts)\n        )\n        process_per_host = self.get_process_per_host(start_output)\n        self.assert_stopped(process_per_host)\n\n    def expected_port_error(self, hosts=None):\n        return_str = []\n        for host in hosts:\n            return_str += [r'Fatal error: \\[%s\\] Server failed to start on %s.'\n               
            r' Port 7070 already in use' % (host, host), r'',\n                           r'', r'Aborting.']\n        return return_str\n\n    def expected_no_status_message(self, host=None):\n        return ('Could not verify server status for: %s\\n'\n                'This could mean that the server failed to start or that there was no coordinator or worker up.'\n                ' Please check ' + constants.DEFAULT_PRESTO_SERVER_LOG_FILE + ' and ' +\n                constants.DEFAULT_PRESTO_LAUNCHER_LOG_FILE) % host\n\n    def expected_start(self, start_success=None, already_started=None,\n                       failed_hosts=None):\n        return_str = []\n\n        # With no args, return message that all started successfully\n        if not already_started and not start_success and not failed_hosts:\n            start_success = self.cluster.all_internal_hosts()\n\n        if start_success:\n            for host in start_success:\n                return_str += [r'Waiting to make sure we can connect to the '\n                               r'Presto server on %s, please wait. This check'\n                               r' will time out after %d minutes if the server'\n                               r' does not respond.'\n                               % (host, RETRY_TIMEOUT / 60),\n                               r'Server started successfully on: %s' % host,\n                               r'\\[%s\\] out: ' % host,\n                               r'\\[%s\\] out: Started as .*' % host,\n                               r'\\[%s\\] out: Starting presto' % host]\n        if already_started:\n            for host in already_started:\n                return_str += [r'Waiting to make sure we can connect to the '\n                               r'Presto server on %s, please wait. 
This check'\n                               r' will time out after %d minutes if the server'\n                               r' does not respond.'\n                               % (host, RETRY_TIMEOUT / 60),\n                               r'Server started successfully on: %s' % host,\n                               r'\\[%s\\] out: ' % host,\n                               r'\\[%s\\] out: Already running as .*' % host,\n                               r'\\[%s\\] out: Starting presto' % host]\n        if failed_hosts:\n            for host in failed_hosts:\n                return_str += [r'\\[%s\\] out: ' % host,\n                               r'\\[%s\\] out: Starting presto' % host]\n        return return_str\n\n    def presto_not_installed_message(self):\n        return ('Warning: [slave2] Presto is not installed.\\n\\n\\n'\n                'Warning: [slave3] Presto is not installed.\\n\\n\\n'\n                'Warning: [slave1] Presto is not installed.\\n\\n\\n'\n                'Warning: [master] Presto is not installed.\\n\\n')\n"
  },
  {
    "path": "tests/product/test_error_handling.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSystem tests for error handling in presto-admin\n\"\"\"\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\n\n\nclass TestErrorHandling(BaseProductTestCase):\n\n    def setUp(self):\n        super(TestErrorHandling, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n\n    def test_wrong_arguments_parallel(self):\n        actual = self.run_prestoadmin('server start extra_arg',\n                                      raise_error=False)\n        expected = \"Incorrect number of arguments to task.\\n\\n\" \\\n                   \"Displaying detailed information for task \" \\\n                   \"'server start':\\n\\n    Start the Presto server on all \" \\\n                   \"nodes\\n    \\n    A status check is performed on the \" \\\n                   \"entire cluster and a list of\\n    servers that did not \" \\\n                   \"start, if any, are reported at the end.\\n\\n\"\n        self.assertEqual(expected, actual)\n\n    def test_wrong_arguments_serial(self):\n        actual = self.run_prestoadmin('server start extra_arg --serial',\n                                      raise_error=False)\n        expected = \"Incorrect number of 
arguments to task.\\n\\n\" \\\n                   \"Displaying detailed information for task \" \\\n                   \"'server start':\\n\\n    Start the Presto server on all \" \\\n                   \"nodes\\n    \\n    A status check is performed on the \" \\\n                   \"entire cluster and a list of\\n    servers that did not \" \\\n                   \"start, if any, are reported at the end.\\n\\n\"\n        self.assertEqual(expected, actual)\n"
  },
  {
    "path": "tests/product/test_file.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTest file run\n\"\"\"\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_install_directory\n\n\nclass TestFile(BaseProductTestCase):\n    def setUp(self):\n        super(TestFile, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n\n    @attr('smoketest')\n    def test_run_script(self):\n        script_path = os.path.join(get_install_directory(), 'script.sh')\n        # basic run script\n        self.cluster.write_content_to_host('#!/bin/bash\\necho hello',\n                                           script_path,\n                                           self.cluster.master)\n        output = self.run_prestoadmin('file run %s' % script_path)\n        self.assertEqualIgnoringOrder(output, \"\"\"[slave2] out: hello\n[slave2] out:\n[slave1] out: hello\n[slave1] out:\n[master] out: hello\n[master] out:\n[slave3] out: hello\n[slave3] out:\n\"\"\")\n        # specify remote directory\n        self.cluster.write_content_to_host('#!/bin/bash\\necho hello',\n                                           script_path,\n                                       
    self.cluster.master)\n        output = self.run_prestoadmin('file run %s' % script_path)\n        self.assertEqualIgnoringOrder(output, \"\"\"[slave2] out: hello\n[slave2] out:\n[slave1] out: hello\n[slave1] out:\n[master] out: hello\n[master] out:\n[slave3] out: hello\n[slave3] out:\n\"\"\")\n\n        # remote and local are the same\n        self.cluster.write_content_to_host('#!/bin/bash\\necho hello',\n                                           '/tmp/script.sh',\n                                           self.cluster.master)\n        output = self.run_prestoadmin('file run %s' % script_path)\n        self.assertEqualIgnoringOrder(output, \"\"\"[slave2] out: hello\n[slave2] out:\n[slave1] out: hello\n[slave1] out:\n[master] out: hello\n[master] out:\n[slave3] out: hello\n[slave3] out:\n\"\"\")\n        # invalid script\n        self.cluster.write_content_to_host('not a valid script',\n                                           script_path,\n                                           self.cluster.master)\n        output = self.run_prestoadmin('file run %s' % script_path,\n                                      raise_error=False)\n        self.assertEqualIgnoringOrder(output, \"\"\"\nFatal error: [slave2] sudo() received nonzero return code 127 while executing!\n\nRequested: /tmp/script.sh\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"/tmp/script.sh\"\n\nAborting.\n[slave2] out: /tmp/script.sh: line 1: not: command not found\n[slave2] out:\n\nFatal error: [master] sudo() received nonzero return code 127 while executing!\n\nRequested: /tmp/script.sh\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"/tmp/script.sh\"\n\nAborting.\n[master] out: /tmp/script.sh: line 1: not: command not found\n[master] out:\n\nFatal error: [slave3] sudo() received nonzero return code 127 while executing!\n\nRequested: /tmp/script.sh\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"/tmp/script.sh\"\n\nAborting.\n[slave3] out: /tmp/script.sh: line 1: not: 
command not found\n[slave3] out:\n\nFatal error: [slave1] sudo() received nonzero return code 127 while executing!\n\nRequested: /tmp/script.sh\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"/tmp/script.sh\"\n\nAborting.\n[slave1] out: /tmp/script.sh: line 1: not: command not found\n[slave1] out:\n\"\"\")\n"
  },
  {
    "path": "tests/product/test_offline_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for generating an online and offline installer for presto-admin\n\"\"\"\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.product.base_product_case import docker_only\nfrom tests.product.base_test_installer import BaseTestInstaller\n\n\nclass TestOfflineInstaller(BaseTestInstaller):\n    def setUp(self):\n        super(TestOfflineInstaller, self).setUp(\"runtime\")\n\n    @attr('smoketest', 'offline_installer')\n    @docker_only\n    def test_offline_installer(self):\n        self.pa_installer._build_installer_in_docker(\n            self.centos_container, online_installer=False, unique=True)\n        self._verify_third_party_dir(True)\n        self.centos_container.exec_cmd_on_host(\n            # IMPORTANT: ifdown eth0 fails silently without taking the\n            # interface down if the NET_ADMIN capability isn't set for the\n            # container. ifconfig eth0 down accomplishes the same thing, but\n            # results in a failure if it fails.\n            self.centos_container.master, 'ifconfig eth0 down')\n        self.pa_installer.install(\n            dist_dir=self.centos_container.get_dist_dir(unique=True))\n        self.run_prestoadmin('--help', raise_error=True)\n"
  },
  {
    "path": "tests/product/test_online_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for generating an online and offline installer for presto-admin\n\"\"\"\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.product.base_test_installer import BaseTestInstaller\n\n\nclass TestOnlineInstaller(BaseTestInstaller):\n    def setUp(self):\n        # for online installer we need to install on \"build\" cluster\n        # as essentially building presto is part of installation process\n        super(TestOnlineInstaller, self).setUp(\"build\")\n\n    @attr('smoketest')\n    def test_online_installer(self):\n        self.pa_installer._build_installer_in_docker(self.centos_container,\n                                                     online_installer=True,\n                                                     unique=True)\n        self._verify_third_party_dir(False)\n        self.pa_installer.install(\n            dist_dir=self.centos_container.get_dist_dir(unique=True))\n        self.run_prestoadmin('--help', raise_error=True)\n"
  },
  {
    "path": "tests/product/test_package_install.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, \\\n    docker_only\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestPackageInstall(BaseProductTestCase):\n    def setUp(self):\n        super(TestPackageInstall, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n        self.installer = StandalonePrestoInstaller(self)\n\n    def tearDown(self):\n        self._assert_uninstall()\n        super(TestPackageInstall, self).tearDown()\n\n    def _assert_uninstall(self):\n        output = self.run_prestoadmin('package uninstall presto-server-rpm --force')\n        for container in self.cluster.all_hosts():\n            self.installer.assert_uninstalled(container, msg=output)\n\n    @attr('smoketest')\n    def test_package_installer(self):\n        rpm_name = self.installer.copy_presto_rpm_to_master()\n\n        # install\n        output = self.run_prestoadmin('package install %(rpm)s',\n                                      rpm=os.path.join(self.cluster.rpm_cache_dir, rpm_name))\n        for container in self.cluster.all_hosts():\n            
self.installer.assert_installed(self, container, msg=output)\n\n        # uninstall\n        output = self.run_prestoadmin('package uninstall presto-server-rpm')\n        for container in self.cluster.all_hosts():\n            self.installer.assert_uninstalled(container, msg=output)\n\n    def test_install_using_dash_h(self):\n        rpm_name = self.installer.copy_presto_rpm_to_master()\n\n        # install onto master and slave2\n        output = self.run_prestoadmin('package install %(rpm)s -H %(master)s,%(slave2)s',\n                                      rpm=os.path.join(self.cluster.rpm_cache_dir, rpm_name))\n\n        self.installer.assert_installed(self, self.cluster.master, msg=output)\n        self.installer.assert_installed(self, self.cluster.slaves[1], msg=output)\n        self.installer.assert_uninstalled(self.cluster.slaves[0], msg=output)\n        self.installer.assert_uninstalled(self.cluster.slaves[2], msg=output)\n\n        # uninstall on slave2\n        output = self.run_prestoadmin('package uninstall presto-server-rpm -H %(slave2)s')\n        self.installer.assert_installed(self, self.cluster.master, msg=output)\n        for container in self.cluster.slaves:\n            self.installer.assert_uninstalled(container, msg=output)\n\n        # uninstall on rest\n        output = self.run_prestoadmin('package uninstall presto-server-rpm --force')\n        for container in self.cluster.all_hosts():\n            self.installer.assert_uninstalled(container, msg=output)\n\n    def test_install_exclude_nodes(self):\n        rpm_name = self.installer.copy_presto_rpm_to_master()\n        output = self.run_prestoadmin('package install %(rpm)s -x %(master)s,%(slave2)s',\n                                      rpm=os.path.join(self.cluster.rpm_cache_dir, rpm_name))\n\n        # install\n        self.installer.assert_uninstalled(self.cluster.master, msg=output)\n        self.installer.assert_uninstalled(self.cluster.slaves[1], msg=output)\n        
self.installer.assert_installed(self, self.cluster.slaves[0], msg=output)\n        self.installer.assert_installed(self, self.cluster.slaves[2], msg=output)\n\n        # uninstall\n        output = self.run_prestoadmin('package uninstall presto-server-rpm -x %(master)s,%(slave2)s')\n        for container in self.cluster.all_hosts():\n            self.installer.assert_uninstalled(container, msg=output)\n\n    # skip this test as it depends on OS package names\n    @attr('quarantine')\n    @docker_only\n    def test_install_rpm_missing_dependency(self):\n        rpm_name = self.installer.copy_presto_rpm_to_master()\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master, 'rpm -e --nodeps python-2.6.6')\n        self.assertRaisesRegexp(OSError,\n                                'package python-2.6.6 is not installed',\n                                self.cluster.exec_cmd_on_host,\n                                self.cluster.master,\n                                'rpm -q python-2.6.6')\n\n        cmd_output = self.run_prestoadmin(\n            'package install %(rpm)s -H %(master)s',\n            rpm=os.path.join(self.cluster.rpm_cache_dir, rpm_name),\n            raise_error=False)\n        expected = self.replace_keywords(\"\"\"\nFatal error: [%(master)s] sudo() received nonzero return code 1 while \\\nexecuting!\n\nRequested: rpm -i /opt/prestoadmin/packages/%(rpm)s\nExecuted: sudo -S -p 'sudo password:'  /bin/bash -l -c \"rpm -i \\\n/opt/prestoadmin/packages/%(rpm)s\"\n\nAborting.\nDeploying rpm on %(master)s...\nPackage deployed successfully on: %(master)s\n[%(master)s] out: error: Failed dependencies:\n[%(master)s] out: \tpython >= 2.4 is needed by %(rpm_basename)s\n[%(master)s] out: \"\"\", **self.installer.get_keywords())\n        self.assertRegexpMatchesLineByLine(\n            cmd_output.splitlines(),\n            self.escape_for_regex(expected).splitlines()\n        )\n\n    # skip this test as it depends on OS package names
\n    @attr('quarantine')\n    @docker_only\n    def test_install_rpm_with_nodeps(self):\n        rpm_name = self.installer.copy_presto_rpm_to_master()\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master, 'rpm -e --nodeps python-2.6.6')\n        self.assertRaisesRegexp(OSError,\n                                'package python-2.6.6 is not installed',\n                                self.cluster.exec_cmd_on_host,\n                                self.cluster.master,\n                                'rpm -q python-2.6.6')\n\n        cmd_output = self.run_prestoadmin(\n            'package install %(rpm)s -H %(master)s --nodeps',\n            rpm=os.path.join(self.cluster.rpm_cache_dir, rpm_name)\n        )\n        expected = 'Deploying rpm on %(host)s...\\n' \\\n                   'Package deployed successfully on: %(host)s\\n' \\\n                   'Package installed successfully on: %(host)s' \\\n                   % {'host': self.cluster.internal_master}\n\n        self.assertEqualIgnoringOrder(expected, cmd_output)\n"
  },
  {
    "path": "tests/product/test_plugin.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nproduct tests for presto-admin plugin commands\n\"\"\"\nimport os\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_install_directory\n\nTMP_JAR_PATH = os.path.join(get_install_directory(), 'pretend.jar')\nSTD_REMOTE_PATH = '/usr/lib/presto/lib/plugin/hive-cdh5/pretend.jar'\n\n\nclass TestPlugin(BaseProductTestCase):\n    def setUp(self):\n        super(TestPlugin, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n\n    def deploy_jar_to_master(self):\n        self.cluster.write_content_to_host('A PRETEND JAR', TMP_JAR_PATH,\n                                           self.cluster.master)\n\n    def test_basic_add_jars(self):\n        self.upload_topology()\n        self.deploy_jar_to_master()\n        # no plugin dir argument\n        output = self.run_prestoadmin(\n            'plugin add_jar %s hive-cdh5' % TMP_JAR_PATH)\n        self.assertEqualIgnoringOrder(output, '')\n        for host in self.cluster.all_hosts():\n            self.assert_path_exists(host, STD_REMOTE_PATH)\n            self.cluster.exec_cmd_on_host(host, 'rm %s' % STD_REMOTE_PATH,\n                                          raise_error=False)\n\n        
# supply plugin directory\n        output = self.run_prestoadmin(\n            'plugin add_jar %s hive-cdh5 /etc/presto/plugin' % TMP_JAR_PATH)\n        self.assertEqual(output, '')\n        for host in self.cluster.all_hosts():\n            temp_jar_location = '/etc/presto/plugin/hive-cdh5/pretend.jar'\n            self.assert_path_exists(host, temp_jar_location)\n            self.cluster.exec_cmd_on_host(host, 'rm %s' % temp_jar_location, invoke_sudo=True)\n\n    def test_lost_coordinator(self):\n        internal_bad_host = self.cluster.internal_slaves[0]\n        bad_host = self.cluster.slaves[0]\n        good_hosts = [self.cluster.internal_master,\n                      self.cluster.internal_slaves[1],\n                      self.cluster.internal_slaves[2]]\n        topology = {'coordinator': internal_bad_host,\n                    'workers': good_hosts}\n        self.upload_topology(topology)\n        self.cluster.stop_host(bad_host)\n        self.deploy_jar_to_master()\n        output = self.run_prestoadmin(\n            'plugin add_jar %s hive-cdh5' % TMP_JAR_PATH, raise_error=False)\n        self.assertRegexpMatches(output, self.down_node_connection_error(\n            internal_bad_host))\n        self.assertEqual(len(output.splitlines()), self.len_down_node_error)\n        for host in good_hosts:\n            self.assert_path_exists(host, STD_REMOTE_PATH)\n            self.cluster.exec_cmd_on_host(host, 'rm %s' % STD_REMOTE_PATH,\n                                          raise_error=False)\n\n    def test_lost_worker(self):\n        internal_bad_host = self.cluster.internal_slaves[0]\n        bad_host = self.cluster.slaves[0]\n        good_hosts = [self.cluster.internal_master,\n                      self.cluster.internal_slaves[1],\n                      self.cluster.internal_slaves[2]]\n        topology = {'coordinator': self.cluster.internal_master,\n                    'workers': self.cluster.internal_slaves}\n        self.upload_topology(topology)\n  
      self.cluster.stop_host(bad_host)\n        self.deploy_jar_to_master()\n        output = self.run_prestoadmin(\n            'plugin add_jar %s hive-cdh5' % TMP_JAR_PATH, raise_error=False)\n        self.assertRegexpMatches(output, self.down_node_connection_error(\n            internal_bad_host))\n        self.assertEqual(len(output.splitlines()), self.len_down_node_error)\n        for host in good_hosts:\n            self.assert_path_exists(host, STD_REMOTE_PATH)\n            self.cluster.exec_cmd_on_host(host, 'rm %s' % STD_REMOTE_PATH,\n                                          raise_error=False)\n"
  },
  {
    "path": "tests/product/test_server_install.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.product import relocate_jdk_directory\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_catalog_directory\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\nfrom tests.product.constants import LOCAL_RESOURCES_DIR\n\n\ninstall_with_ext_host_pa_master_out = ['Deploying rpm on slave1...',\n                                       'Deploying rpm on slave2...',\n                                       'Deploying rpm on slave3...',\n                                       'Package deployed successfully on: '\n                                       'slave3',\n                                       'Package installed successfully on: '\n                                       'slave3',\n                                       'Package deployed successfully on: '\n                                       'slave1',\n                                       'Package installed successfully on: '\n                                       'slave1',\n                                       'Package deployed successfully on: '\n                                       'slave2',\n                                      
 'Package installed successfully on: '\n                                       'slave2',\n                                       'Deploying configuration on: slave3',\n                                       'Deploying tpch.properties catalog '\n                                       'configurations on: slave3 ',\n                                       'Deploying configuration on: slave1',\n                                       'Deploying tpch.properties catalog '\n                                       'configurations on: slave1 ',\n                                       'Deploying configuration on: slave2',\n                                       'Deploying tpch.properties catalog '\n                                       'configurations on: slave2 ',\n                                       'Using rpm_specifier as a local path',\n                                       'Fetching local presto rpm at path: .*',\n                                       'Found existing rpm at: .*']\n\ninstall_with_worker_pa_master_out = ['Deploying rpm on {master}...',\n                                     'Deploying rpm on {slave1}...',\n                                     'Deploying rpm on {slave2}...',\n                                     'Deploying rpm on {slave3}...',\n                                     'Package deployed successfully on: '\n                                     '{slave3}',\n                                     'Package installed successfully on: '\n                                     '{slave3}',\n                                     'Package deployed successfully on: '\n                                     '{slave1}',\n                                     'Package installed successfully on: '\n                                     '{slave1}',\n                                     'Package deployed successfully on: '\n                                     '{master}',\n                                     'Package installed successfully on: '\n                     
                '{master}',\n                                     'Package deployed successfully on: '\n                                     '{slave2}',\n                                     'Package installed successfully on: '\n                                     '{slave2}',\n                                     'Deploying configuration on: {slave3}',\n                                     'Deploying tpch.properties catalog '\n                                     'configurations on: {slave3} ',\n                                     'Deploying configuration on: {slave1}',\n                                     'Deploying tpch.properties catalog '\n                                     'configurations on: {slave1} ',\n                                     'Deploying configuration on: {slave2}',\n                                     'Deploying tpch.properties catalog '\n                                     'configurations on: {slave2} ',\n                                     'Deploying configuration on: {master}',\n                                     'Deploying tpch.properties catalog '\n                                     'configurations on: {master} ',\n                                     'Using rpm_specifier as a local path',\n                                     'Fetching local presto rpm at path: .*',\n                                     'Found existing rpm at: .*']\n\ninstalled_all_hosts_output = ['Deploying rpm on {master}...',\n                              'Deploying rpm on {slave1}...',\n                              'Deploying rpm on {slave2}...',\n                              'Deploying rpm on {slave3}...',\n                              'Package deployed successfully on: {slave3}',\n                              'Package installed successfully on: {slave3}',\n                              'Package deployed successfully on: {slave1}',\n                              'Package installed successfully on: {slave1}',\n                              'Package 
deployed successfully on: {master}',\n                              'Package installed successfully on: {master}',\n                              'Package deployed successfully on: {slave2}',\n                              'Package installed successfully on: {slave2}',\n                              'Deploying configuration on: {slave3}',\n                              'Deploying tpch.properties catalog '\n                              'configurations on: {slave3} ',\n                              'Deploying configuration on: {slave1}',\n                              'Deploying tpch.properties catalog '\n                              'configurations on: {slave1} ',\n                              'Deploying configuration on: {slave2}',\n                              'Deploying tpch.properties catalog '\n                              'configurations on: {slave2} ',\n                              'Deploying configuration on: {master}',\n                              'Deploying tpch.properties catalog '\n                              'configurations on: {master} ',\n                              'Using rpm_specifier as a local path',\n                              'Fetching local presto rpm at path: .*',\n                              'Found existing rpm at: .*']\n\n\nclass TestServerInstall(BaseProductTestCase):\n    default_workers_config_with_slave1_ = \"\"\"coordinator=false\ndiscovery.uri=http://slave1:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    default_coord_config_with_slave1_ = \"\"\"coordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://slave1:7070\nhttp-server.http.port=7070\nnode-scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    default_workers_config_regex_ = \"\"\"coordinator=false\ndiscovery.uri=http:.*:7070\nhttp-server.http.port=7070\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    
default_coord_config_regex_ = \"\"\"coordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http:.*:7070\nhttp-server.http.port=7070\nnode-scheduler.include-coordinator=false\nquery.max-memory-per-node=512MB\nquery.max-memory=50GB\\n\"\"\"\n\n    def setUp(self):\n        super(TestServerInstall, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n\n    def assert_common_configs(self, host):\n        installer = StandalonePrestoInstaller(self)\n        installer.assert_installed(self, host)\n        self.assert_file_content(host, '/etc/presto/jvm.config',\n                                 self.default_jvm_config_)\n        self.assert_node_config(host, self.default_node_properties_)\n        self.assert_has_default_catalog(host)\n\n    def assert_installed_with_configs(self, master, slaves):\n        self.assert_common_configs(master)\n        self.assert_file_content(master,\n                                 '/etc/presto/config.properties',\n                                 self.default_coord_config_with_slave1_)\n        for container in slaves:\n            self.assert_common_configs(container)\n            self.assert_file_content(container,\n                                     '/etc/presto/config.properties',\n                                     self.default_workers_config_with_slave1_)\n\n    def assert_installed_with_regex_configs(self, master, slaves):\n        self.assert_common_configs(master)\n        self.assert_file_content_regex(master,\n                                       '/etc/presto/config.properties',\n                                       self.default_coord_config_regex_)\n        for container in slaves:\n            self.assert_common_configs(container)\n            self.assert_file_content_regex(container,\n                                           '/etc/presto/config.properties',\n                                           self.default_workers_config_regex_)\n\n    @attr('smoketest')\n  
  def test_install_with_java8_home(self):\n        installer = StandalonePrestoInstaller(self)\n\n        with relocate_jdk_directory(self.cluster, '/usr') as new_java_home:\n            topology = {\"coordinator\": \"master\",\n                        \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n                        \"java8_home\": new_java_home}\n            self.upload_topology(topology)\n\n            cmd_output = installer.install()\n            expected = self.format_err_msgs_with_internal_hosts(installed_all_hosts_output)\n\n            actual = cmd_output.splitlines()\n            self.assertRegexpMatchesLineByLine(actual, expected)\n\n            for host in self.cluster.all_hosts():\n                installer.assert_installed(self, host)\n                self.assert_has_default_config(host)\n                self.assert_has_default_catalog(host)\n\n    def test_install_ext_host_is_pa_master(self):\n        installer = StandalonePrestoInstaller(self)\n        topology = {\"coordinator\": \"slave1\",\n                    \"workers\": [\"slave2\", \"slave3\"]}\n        self.upload_topology(topology)\n\n        cmd_output = installer.install(coordinator='slave1')\n        expected = install_with_ext_host_pa_master_out\n\n        actual = cmd_output.splitlines()\n        self.assertRegexpMatchesLineByLine(actual, expected)\n\n        self.assert_installed_with_configs(\n            self.cluster.slaves[0],\n            [self.cluster.slaves[1],\n             self.cluster.slaves[2]])\n\n    def test_install_when_catalog_json_exists(self):\n        installer = StandalonePrestoInstaller(self)\n        topology = {\"coordinator\": \"master\",\n                    \"workers\": [\"slave1\"]}\n        self.upload_topology(topology)\n        self.cluster.write_content_to_host(\n            'connector.name=jmx',\n            os.path.join(get_catalog_directory(), 'jmx.properties'),\n            self.cluster.master\n        )\n\n        cmd_output = 
installer.install()\n        expected = ['Deploying rpm on master...',\n                    'Deploying rpm on slave1...',\n                    'Package deployed successfully on: slave1',\n                    'Package installed successfully on: slave1',\n                    'Package deployed successfully on: master',\n                    'Package installed successfully on: master',\n                    'Deploying configuration on: master',\n                    'Deploying jmx.properties, tpch.properties '\n                    'catalog configurations on: master ',\n                    'Deploying configuration on: slave1',\n                    'Deploying jmx.properties, tpch.properties '\n                    'catalog configurations on: slave1 ',\n                    'Using rpm_specifier as a local path',\n                    'Fetching local presto rpm at path: .*',\n                    'Found existing rpm at: .*']\n\n        actual = cmd_output.splitlines()\n        self.assertRegexpMatchesLineByLine(actual, expected)\n\n        for container in [self.cluster.master,\n                          self.cluster.slaves[0]]:\n            installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n            self.assert_has_jmx_catalog(container)\n\n    def test_install_when_topology_has_ips(self):\n        installer = StandalonePrestoInstaller(self)\n        ips = self.cluster.get_ip_address_dict()\n        topology = {\"coordinator\": ips[self.cluster.internal_master],\n                    \"workers\": [ips[self.cluster.internal_slaves[0]]]}\n        self.upload_topology(topology)\n        self.cluster.write_content_to_host(\n            'connector.name=jmx',\n            os.path.join(get_catalog_directory(), 'jmx.properties'),\n            self.cluster.master\n        )\n\n        cmd_output = installer.install().splitlines()\n        expected = [\n            r'Deploying rpm on 
%s...' % ips[self.cluster.internal_master],\n            r'Deploying rpm on %s...' % ips[self.cluster.internal_slaves[0]],\n            r'Package deployed successfully on: ' +\n            ips[self.cluster.internal_master],\n            r'Package installed successfully on: ' +\n            ips[self.cluster.internal_master],\n            r'Package deployed successfully on: ' +\n            ips[self.cluster.internal_slaves[0]],\n            r'Package installed successfully on: ' +\n            ips[self.cluster.internal_slaves[0]],\n            r'Deploying configuration on: ' +\n            ips[self.cluster.internal_master],\n            r'Deploying jmx.properties, tpch.properties '\n            r'catalog configurations on: ' +\n            ips[self.cluster.internal_master] + r' ',\n            r'Deploying configuration on: ' +\n            ips[self.cluster.internal_slaves[0]],\n            r'Deploying jmx.properties, tpch.properties '\n            r'catalog configurations on: ' +\n            ips[self.cluster.internal_slaves[0]] + r' ',\n            r'Using rpm_specifier as a local path',\n            r'Fetching local presto rpm at path: .*',\n            r'Found existing rpm at: .*']\n\n        cmd_output.sort()\n        expected.sort()\n        self.assertRegexpMatchesLineByLine(cmd_output, expected)\n\n        self.assert_installed_with_regex_configs(\n            self.cluster.master,\n            [self.cluster.slaves[0]])\n        for host in [self.cluster.master, self.cluster.slaves[0]]:\n            self.assert_has_jmx_catalog(host)\n\n    def test_install_interactive(self):\n        installer = StandalonePrestoInstaller(self)\n        self.cluster.write_content_to_host(\n            'connector.name=jmx',\n            os.path.join(get_catalog_directory(), 'jmx.properties'),\n            self.cluster.master\n        )\n        rpm_name = installer.copy_presto_rpm_to_master()\n        self.write_test_configs(self.cluster)\n\n        additional_keywords = {\n      
      'user': self.cluster.user,\n            'rpm_dir': self.cluster.rpm_cache_dir,\n            'rpm': rpm_name\n        }\n\n        cmd_output = self.run_script_from_prestoadmin_dir(\n            'echo -e \"%(user)s\\n22\\n%(master)s\\n%(slave1)s\\n\" | '\n            './presto-admin server install %(rpm_dir)s/%(rpm)s ',\n            **additional_keywords)\n\n        actual = cmd_output.splitlines()\n        expected = [r'Enter user name for SSH connection to all nodes: '\n                    r'\\[root\\] '\n                    r'Enter port number for SSH connections to all nodes: '\n                    r'\\[22\\] '\n                    r'Enter host name or IP address for coordinator node. '\n                    r'Enter an external host name or ip address if this is a '\n                    r'multi-node cluster: \\[localhost\\] '\n                    r'Enter host names or IP addresses for worker nodes '\n                    r'separated by spaces: '\n                    r'\\[localhost\\] Using rpm_specifier as a local path',\n                    r'Package deployed successfully on: ' +\n                    self.cluster.internal_master,\n                    r'Package installed successfully on: ' +\n                    self.cluster.internal_master,\n                    r'Package deployed successfully on: ' +\n                    self.cluster.internal_slaves[0],\n                    r'Package installed successfully on: ' +\n                    self.cluster.internal_slaves[0],\n                    r'Deploying configuration on: ' +\n                    self.cluster.internal_master,\n                    r'Deploying jmx.properties, tpch.properties catalog '\n                    r'configurations on: ' +\n                    self.cluster.internal_master,\n                    r'Deploying configuration on: ' +\n                    self.cluster.internal_slaves[0],\n                    r'Deploying jmx.properties, tpch.properties catalog '\n                    r'configurations 
on: ' +\n                    self.cluster.internal_slaves[0],\n                    r'Deploying rpm on .*\\.\\.\\.',\n                    r'Deploying rpm on .*\\.\\.\\.',\n                    r'Fetching local presto rpm at path: .*',\n                    r'Found existing rpm at: .*'\n                    ]\n\n        self.assertRegexpMatchesLineByLine(actual, expected)\n        for container in [self.cluster.master,\n                          self.cluster.slaves[0]]:\n            installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n            self.assert_has_jmx_catalog(container)\n\n    def test_connection_to_coord_lost(self):\n        installer = StandalonePrestoInstaller(self)\n        down_node = self.cluster.internal_slaves[0]\n        topology = {\"coordinator\": down_node,\n                    \"workers\": [self.cluster.internal_master,\n                                self.cluster.internal_slaves[1],\n                                self.cluster.internal_slaves[2]]}\n        self.upload_topology(topology=topology)\n        self.cluster.stop_host(\n            self.cluster.slaves[0])\n\n        actual_out = installer.install(\n            coordinator=down_node, pa_raise_error=False)\n\n        self.assertRegexpMatches(\n            actual_out,\n            self.down_node_connection_error(down_node)\n        )\n\n        for host in [self.cluster.master,\n                     self.cluster.slaves[1],\n                     self.cluster.slaves[2]]:\n            self.assert_common_configs(host)\n            self.assert_file_content(\n                host,\n                '/etc/presto/config.properties',\n                self.default_workers_config_with_slave1_\n            )\n\n    def test_install_twice(self):\n        installer = StandalonePrestoInstaller(self)\n        self.upload_topology()\n        cmd_output = installer.install()\n        expected = 
self.format_err_msgs_with_internal_hosts(installed_all_hosts_output)\n\n        actual = cmd_output.splitlines()\n        self.assertRegexpMatchesLineByLine(actual, expected)\n\n        for container in self.cluster.all_hosts():\n            installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n        output = installer.install(pa_raise_error=False)\n\n        self.default_keywords.update(installer.get_keywords())\n\n        with open(os.path.join(LOCAL_RESOURCES_DIR, 'install_twice.txt'),\n                  'r') as f:\n            expected = f.read()\n        expected = self.escape_for_regex(\n            self.replace_keywords(expected))\n\n        self.assertRegexpMatchesLineByLine(output.splitlines(),\n                                           expected.splitlines())\n        for container in self.cluster.all_hosts():\n            installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n    def test_install_non_root_user(self):\n        installer = StandalonePrestoInstaller(self)\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n\n        rpm_name = installer.copy_presto_rpm_to_master(cluster=self.cluster)\n        self.write_test_configs(self.cluster)\n        self.run_prestoadmin(\n            'server install {rpm_dir}/{name} -p password'.format(\n                rpm_dir=self.cluster.rpm_cache_dir, name=rpm_name)\n        )\n\n        for container in self.cluster.all_hosts():\n            installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n    def format_err_msgs_with_internal_hosts(self, msgs):\n        
formatted_msg = []\n        for msg in msgs:\n            formatted_msg.append(msg.format(master=self.cluster.internal_master,\n                                            slave1=self.cluster.internal_slaves[0],\n                                            slave2=self.cluster.internal_slaves[1],\n                                            slave3=self.cluster.internal_slaves[2]))\n        return formatted_msg\n"
  },
  {
    "path": "tests/product/test_server_uninstall.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER\nfrom tests.product.constants import LOCAL_RESOURCES_DIR\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\nuninstall_output = ['Package uninstalled successfully on: slave1',\n                    'Package uninstalled successfully on: slave2',\n                    'Package uninstalled successfully on: slave3',\n                    'Package uninstalled successfully on: master']\n\n\nclass TestServerUninstall(BaseProductTestCase):\n    def setUp(self):\n        super(TestServerUninstall, self).setUp()\n        self.installer = StandalonePrestoInstaller(self)\n\n    @attr('smoketest')\n    def test_uninstall(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        start_output = self.run_prestoadmin('server start')\n        process_per_host = self.get_process_per_host(start_output.splitlines())\n        self.assert_started(process_per_host)\n\n        cmd_output = self.run_prestoadmin(\n            'server uninstall', raise_error=False).splitlines()\n        self.assert_stopped(process_per_host)\n        expected = uninstall_output + 
self.expected_stop()[:]\n        self.assertRegexpMatchesLineByLine(cmd_output, expected)\n\n        for container in self.cluster.all_hosts():\n            self.assert_uninstalled_dirs_removed(container)\n\n    def assert_uninstalled_dirs_removed(self, container):\n        self.installer.assert_uninstalled(container)\n        self.assert_path_removed(container, '/etc/presto')\n        self.assert_path_removed(container, '/usr/lib/presto')\n        self.assert_path_removed(container, '/var/lib/presto')\n        self.assert_path_removed(container, '/usr/shared/doc/presto')\n        self.assert_path_removed(container, '/etc/init.d/presto')\n\n    def test_uninstall_twice(self):\n        self.test_uninstall()\n\n        output = self.run_prestoadmin('server uninstall', raise_error=False)\n        with open(os.path.join(LOCAL_RESOURCES_DIR, 'uninstall_twice.txt'),\n                  'r') as f:\n            expected = f.read()\n\n        self.assertEqualIgnoringOrder(expected, output)\n\n    def test_uninstall_lost_host(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        self.cluster.stop_host(\n            self.cluster.slaves[0])\n\n        expected = self.down_node_connection_error(\n            self.cluster.internal_slaves[0])\n        cmd_output = self.run_prestoadmin('server uninstall',\n                                          raise_error=False)\n        self.assertRegexpMatches(cmd_output, expected)\n\n        for container in [self.cluster.internal_master,\n                          self.cluster.internal_slaves[1],\n                          self.cluster.internal_slaves[2]]:\n            self.assert_uninstalled_dirs_removed(container)\n"
  },
  {
    "path": "tests/product/test_server_upgrade.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nimport prestoadmin\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PRESTO_CLUSTER\nfrom tests.product.config_dir_utils import get_install_directory\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestServerUpgrade(BaseProductTestCase):\n\n    def setUp(self):\n        super(TestServerUpgrade, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.dummy_installer = StandalonePrestoInstaller(\n            self, (os.path.join(prestoadmin.main_dir, 'tests', 'product',\n                                'resources'), 'dummy-rpm.rpm'))\n        self.real_installer = StandalonePrestoInstaller(self)\n\n    def start_and_assert_started(self):\n        cmd_output = self.run_prestoadmin('server start')\n        process_per_host = self.get_process_per_host(cmd_output.splitlines())\n        self.assert_started(process_per_host)\n\n    #\n    # The dummy RPM is not guaranteed to have any functionality beyond not\n    # including any real payload and adding the random README file. 
It's a\n    # hacky one-off that satisfies the requirement of having *something* to\n    # upgrade to without downloading another copy of the real RPM. This is NOT\n    # the place to test functionality that the presto-server-rpm normally\n    # provides, because the dummy rpm probably doesn't provide it, or worse,\n    # provides an old and/or broken version of it.\n    #\n    def assert_upgraded_to_dummy_rpm(self, hosts):\n        for container in hosts:\n            # Still should have the same configs\n            self.dummy_installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n            # However, dummy_rpm.rpm removes /usr/lib/presto/lib and\n            # /usr/lib/presto/lib/plugin\n            self.assert_path_removed(container, '/usr/lib/presto/lib')\n            self.assert_path_removed(container, '/usr/lib/presto/lib/plugin')\n\n            # And adds /usr/lib/presto/README.txt\n            self.assert_path_exists(container, '/usr/lib/presto/README.txt')\n\n            # And modifies the text of the readme in\n            # /usr/shared/doc/presto/README.txt\n            self.assert_file_content_regex(\n                container,\n                '/usr/shared/doc/presto/README.txt',\n                r'.*New line of text here.$'\n            )\n\n    @attr('smoketest')\n    def test_upgrade(self):\n        self.start_and_assert_started()\n\n        self.run_prestoadmin('configuration deploy')\n        for container in self.cluster.all_hosts():\n            self.real_installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n        path_on_cluster = self.copy_upgrade_rpm_to_cluster()\n        self.upgrade_and_assert_success(path_on_cluster)\n\n    def upgrade_and_assert_success(self, path_on_cluster, extra_arguments=''):\n        
self.run_prestoadmin('server upgrade ' + path_on_cluster + extra_arguments)\n        self.assert_upgraded_to_dummy_rpm(self.cluster.all_hosts())\n\n    def copy_upgrade_rpm_to_cluster(self):\n        rpm_name = self.dummy_installer.copy_presto_rpm_to_master()\n        return os.path.join(self.cluster.rpm_cache_dir, rpm_name)\n\n    def test_upgrade_fails_given_directory(self):\n        dir_on_cluster = '/opt/prestoadmin'\n        self.assertRaisesRegexp(\n            OSError,\n            'RPM file not found at %s.' % dir_on_cluster,\n            self.run_prestoadmin,\n            'server upgrade ' + dir_on_cluster\n        )\n\n    def test_upgrade_works_with_symlink(self):\n        self.run_prestoadmin('configuration deploy')\n        for container in self.cluster.all_hosts():\n            self.real_installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n        path_on_cluster = self.copy_upgrade_rpm_to_cluster()\n        symlink = os.path.join(get_install_directory(), 'link.rpm')\n        self.cluster.exec_cmd_on_host(self.cluster.master, 'ln -s %s %s'\n                                      % (path_on_cluster, symlink))\n        self.upgrade_and_assert_success(symlink)\n\n    def test_configuration_preserved_on_upgrade(self):\n        book_content = 'Call me Ishmael ... 
FINIS'\n        book_path = '/etc/presto/moby_dick_abridged'\n        self.run_prestoadmin('configuration deploy')\n        big_files = {}\n        for container in self.cluster.all_hosts():\n            self.real_installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n            big_file = self.cluster.exec_cmd_on_host(\n                container,\n                \"find /usr -size +2M -ls | \"\n                \"sort -nk7 | \"\n                \"tail -1 | \"\n                \"awk '{print $NF}'\").strip()\n\n            self.cluster.exec_cmd_on_host(\n                container, \"cp %s /etc/presto\" % (big_file,), invoke_sudo=True)\n            big_files[container] = os.path.join(\"/etc/presto\", os.path.basename(big_file))\n\n            self.cluster.write_content_to_host(book_content, book_path, host=container)\n            self.cluster.exec_cmd_on_host(container, \"chown presto:games %s\" % (book_path,), invoke_sudo=True)\n            self.cluster.exec_cmd_on_host(container, \"chmod 272 %s\" % (book_path,), invoke_sudo=True)\n            self.assert_file_content(container, book_path, book_content)\n            self.assert_file_perm_owner(container, book_path, '--w-rwx-w-', 'presto', 'games')\n            self.assert_path_exists(container, big_files[container])\n\n        self.add_dummy_properties_to_host(self.cluster.slaves[1])\n        path_on_cluster = self.copy_upgrade_rpm_to_cluster()\n        symlink = os.path.join(get_install_directory(), 'link.rpm')\n        self.cluster.exec_cmd_on_host(self.cluster.master, 'ln -s %s %s'\n                                      % (path_on_cluster, symlink))\n\n        self.run_prestoadmin('server upgrade ' + path_on_cluster)\n        self.assert_dummy_properties(self.cluster.slaves[1])\n\n        for container in self.cluster.all_hosts():\n            self.assert_file_content(container, book_path, book_content)\n        
    self.assert_file_perm_owner(container, book_path, '--w-rwx-w-', 'presto', 'games')\n\n            self.assert_path_exists(container, big_files[container])\n\n    def test_upgrade_non_root_user(self):\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n        self.run_prestoadmin('configuration deploy -p password')\n        for container in self.cluster.all_hosts():\n            self.real_installer.assert_installed(self, container)\n            self.assert_has_default_config(container)\n            self.assert_has_default_catalog(container)\n\n        path_on_cluster = self.copy_upgrade_rpm_to_cluster()\n        self.upgrade_and_assert_success(path_on_cluster, extra_arguments=' -p password')\n\n    def add_dummy_properties_to_host(self, host):\n        self.cluster.write_content_to_host(\n            'com.facebook.presto=INFO',\n            '/etc/presto/log.properties',\n            host\n        )\n        self.cluster.write_content_to_host(\n            'dummy config file',\n            '/etc/presto/jvm.config',\n            host\n        )\n\n    def assert_dummy_properties(self, host):\n        # assert log properties file is there\n        self.assert_file_content(\n            host,\n            '/etc/presto/log.properties',\n            'com.facebook.presto=INFO'\n        )\n\n        # assert dummy jvm config is there too\n        self.assert_file_content(\n            host,\n            '/etc/presto/jvm.config',\n            'dummy config file'\n        )\n"
  },
  {
    "path": "tests/product/test_status.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for presto-admin status commands\n\"\"\"\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, \\\n    PRESTO_VERSION, PrestoError\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER, STANDALONE_PRESTO_CLUSTER\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\n\n\nclass TestStatus(BaseProductTestCase):\n\n    def setUp(self):\n        super(TestStatus, self).setUp()\n        self.installer = StandalonePrestoInstaller(self)\n\n    def test_status_uninstalled(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        self.upload_topology()\n        status_output = self._server_status_with_retries()\n        self.check_status(status_output, self.not_installed_status())\n\n    def test_status_not_started(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        status_output = self._server_status_with_retries()\n        self.check_status(status_output, self.not_started_status())\n\n    @attr('smoketest')\n    def test_status_happy_path(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_prestoadmin('server start')\n        status_output = 
self._server_status_with_retries(check_catalogs=True)\n        self.check_status(status_output, self.base_status())\n\n    def test_status_only_coordinator(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        self.run_prestoadmin('server start -H master')\n        # don't run with retries because it won't be able to query the\n        # coordinator because the coordinator is set to not be a worker\n        status_output = self.run_prestoadmin('server status')\n        self.check_status(\n            status_output,\n            self.single_node_up_status(self.cluster.internal_master)\n        )\n\n    def test_status_only_worker(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n\n        self.run_prestoadmin('server start -H slave1')\n        status_output = self._server_status_with_retries()\n        self.check_status(\n            status_output,\n            self.single_node_up_status(self.cluster.internal_slaves[0])\n        )\n\n        # Check that the slave sees that it's stopped, even though the\n        # discovery server is not up.\n        self.run_prestoadmin('server stop')\n        status_output = self._server_status_with_retries()\n        self.check_status(status_output, self.not_started_status())\n\n    def test_connection_to_coordinator_lost(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        topology = {\"coordinator\": \"slave1\", \"workers\":\n                    [\"master\", \"slave2\", \"slave3\"]}\n        self.upload_topology(topology=topology)\n        self.installer.install(coordinator='slave1')\n        self.run_prestoadmin('server start')\n        self.cluster.stop_host(\n            self.cluster.slaves[0])\n        topology = {\"coordinator\": self.cluster.get_down_hostname(\"slave1\"),\n                    \"workers\": [\"master\", \"slave2\", \"slave3\"]}\n        status_output = 
self._server_status_with_retries()\n        statuses = self.node_not_available_status(\n            topology, self.cluster.internal_slaves[0],\n            coordinator_down=True)\n        self.check_status(status_output, statuses)\n\n    def test_connection_to_worker_lost(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n        topology = {\"coordinator\": \"slave1\", \"workers\":\n                    [\"master\", \"slave2\", \"slave3\"]}\n        self.upload_topology(topology=topology)\n        self.installer.install(coordinator='slave1')\n        self.run_prestoadmin('server start')\n        self.cluster.stop_host(\n            self.cluster.slaves[1])\n        topology = {\"coordinator\": \"slave1\", \"workers\":\n                    [\"master\", self.cluster.get_down_hostname(\"slave2\"),\n                     \"slave3\"]}\n        status_output = self._server_status_with_retries(check_catalogs=True)\n        statuses = self.node_not_available_status(\n            topology, self.cluster.internal_slaves[1])\n        self.check_status(status_output, statuses)\n\n    def test_status_non_root_user(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.upload_topology(\n            {\"coordinator\": \"master\",\n             \"workers\": [\"slave1\", \"slave2\", \"slave3\"],\n             \"username\": \"app-admin\"}\n        )\n        self.run_prestoadmin('server start -p password')\n        status_output = self._server_status_with_retries(check_catalogs=True, extra_arguments=' -p password')\n        self.check_status(status_output, self.base_status())\n\n    def base_status(self, topology=None):\n        ips = self.cluster.get_ip_address_dict()\n        if not topology:\n            topology = {\n                'coordinator': self.cluster.internal_master, 'workers':\n                [self.cluster.internal_slaves[0],\n                 self.cluster.internal_slaves[1],\n         
        self.cluster.internal_slaves[2]]\n            }\n        statuses = []\n        hosts_in_status = [topology['coordinator']] + topology['workers'][:]\n        for host in hosts_in_status:\n            role = 'coordinator' if host == topology['coordinator']\\\n                else 'worker'\n            status = {'host': host, 'role': role, 'ip': ips[host],\n                      'is_running': 'Running'}\n            statuses += [status]\n        return statuses\n\n    def not_started_status(self):\n        statuses = self.base_status()\n        for status in statuses:\n            status['ip'] = 'Unknown'\n            status['is_running'] = 'Not Running'\n            status['error_message'] = '\\tNo information available: ' \\\n                                      'unable to query coordinator'\n        return statuses\n\n    def not_installed_status(self):\n        statuses = self.base_status()\n        for status in statuses:\n            status['ip'] = 'Unknown'\n            status['is_running'] = 'Not Running'\n            status['error_message'] = '\\tPresto is not installed.'\n        return statuses\n\n    def single_node_up_status(self, node):\n        statuses = self.not_started_status()\n        for status in statuses:\n            if status['host'] == node:\n                status['is_running'] = 'Running'\n        return statuses\n\n    def node_not_available_status(self, topology, node,\n                                  coordinator_down=False):\n        statuses = self.base_status(topology)\n        for status in statuses:\n            if status['host'] == node:\n                status['is_running'] = 'Not Running'\n                status['error_message'] = \\\n                    self.status_node_connection_error(node)\n                status['ip'] = 'Unknown'\n                status['host'] = self.cluster.get_down_hostname(node)\n            elif coordinator_down:\n                status['error_message'] = '\\tNo information available: ' \\\n  
                                        'unable to query coordinator'\n                status['ip'] = 'Unknown'\n\n        return statuses\n\n    def status_fail_msg(self, actual_output, expected_regexp):\n        log_tail = self.fetch_log_tail(lines=100)\n\n        return (\n            '=== ACTUAL OUTPUT ===\\n%s\\n=== DID NOT MATCH REGEXP ===\\n%s\\n'\n            '=== LOG FOR DEBUGGING ===\\n%s=== END OF LOG ===' % (\n                actual_output, expected_regexp, log_tail))\n\n    def check_status(self, cmd_output, statuses, port=7070):\n        expected_output = []\n        for status in statuses:\n            expected_output += \\\n                ['Server Status:',\n                 '\\t%s\\(IP: .+, Roles: %s\\): %s' %\n                 (status['host'], status['role'], status['is_running'])]\n            if 'error_message' in status and status['error_message']:\n                expected_output += [status['error_message']]\n            elif status['is_running'] == 'Running':\n                expected_output += \\\n                    ['\\tNode URI\\(http\\): http://.+:%s' % str(port),\n                     '\\tPresto Version: ' + PRESTO_VERSION,\n                     '\\tNode status:    active',\n                     '\\tCatalogs:     system, tpch']\n\n        expected_regex = '\\n'.join(expected_output)\n        # The status command is written such that there are a couple ways that\n        # the presto client can fail that result in partial output from the\n        # command, but errors in the logs. If we fail to match, we include the\n        # log information in the assertion message to make determining exactly\n        # what failed easier. Grab the logs lazily so that we don't incur the\n        # cost of getting them when they aren't needed. 
The status tests are\n        # slow enough already.\n        self.assertLazyMessage(\n            lambda: self.status_fail_msg(cmd_output, expected_regex),\n            self.assertRegexpMatches, cmd_output, expected_regex)\n\n    def _server_status_with_retries(self, check_catalogs=False, extra_arguments=''):\n        try:\n            return self.retry(lambda: self._get_status_until_coordinator_updated(\n                check_catalogs, extra_arguments=extra_arguments), 720, 0)\n        except PrestoError as e:\n            self.assertLazyMessage(\n                lambda: self.status_fail_msg(e.message, \"Ran out of time retrying status\"),\n                self.fail,\n                \"PrestoError: %s\" % e.message)\n\n    def _get_status_until_coordinator_updated(self, check_catalogs=False, extra_arguments=''):\n        status_output = self.run_prestoadmin('server status' + extra_arguments)\n        if 'the coordinator has not yet discovered this node' in status_output:\n            raise PrestoError('Coordinator has not discovered all nodes yet: '\n                              '%s' % status_output)\n        if 'Roles: coordinator): Running\\n\\tNo information available: ' \\\n           'unable to query coordinator' in status_output:\n            raise PrestoError('Coordinator not started up properly yet.'\n                              '\\nOutput: %s' % status_output)\n        if check_catalogs and 'Catalogs:' not in status_output:\n            raise PrestoError('Catalogs not loaded yet: %s' % status_output)\n        return status_output\n"
  },
  {
    "path": "tests/product/test_topology.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom nose.plugins.attrib import attr\n\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER\nfrom tests.product.config_dir_utils import get_config_file_path\nfrom tests.product.constants import LOCAL_RESOURCES_DIR\n\n\ntopology_with_slave1_coord = \"\"\"{{'coordinator': u'slave1',\n 'port': 22,\n 'username': '{user}',\n 'workers': [u'master',\n             u'slave2',\n             u'slave3']}}\n\"\"\"\n\nnormal_topology = \"\"\"{{'coordinator': u'master',\n 'port': 22,\n 'username': '{user}',\n 'workers': [u'slave1',\n             u'slave2',\n             u'slave3']}}\n\"\"\"\n\nlocal_topology = \"\"\"{{'coordinator': 'localhost',\n 'port': 22,\n 'username': '{user}',\n 'workers': ['localhost']}}\n\"\"\"\n\n\nclass TestTopologyShow(BaseProductTestCase):\n\n    def setUp(self):\n        super(TestTopologyShow, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n\n    @attr('smoketest')\n    def test_topology_show(self):\n        self.upload_topology()\n        actual = self.run_prestoadmin('topology show')\n        expected = normal_topology.format(user=self.cluster.user)\n        self.assertEqual(expected, actual)\n\n    def test_topology_show_empty_config(self):\n        
self.dump_and_cp_topology(topology={})\n        actual = self.run_prestoadmin('topology show')\n        self.assertEqual(local_topology.format(user=self.cluster.user), actual)\n\n    def test_topology_show_bad_json(self):\n        self.cluster.copy_to_host(\n            os.path.join(LOCAL_RESOURCES_DIR, 'invalid_json.json'),\n            self.cluster.master\n        )\n        self.cluster.exec_cmd_on_host(\n            self.cluster.master,\n            'cp %s %s' %\n            (os.path.join(self.cluster.mount_dir, 'invalid_json.json'), get_config_file_path())\n        )\n        self.assertRaisesRegexp(OSError,\n                                'Expecting , delimiter: line 3 column 3 '\n                                '\\(char 21\\)  More detailed information '\n                                'can be found in '\n                                '.*/.prestoadmin/log/presto-admin.log\\n',\n                                self.run_prestoadmin,\n                                'topology show')\n"
  },
  {
    "path": "tests/product/timing_test_decorator.py",
    "content": "import functools\nimport logging\nimport sys\n\nfrom time import time\n\n\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\nmessage_format = '%(levelname)s - %(message)s'\nformatter = logging.Formatter(message_format)\nconsole_handler = logging.StreamHandler(sys.stdout)\nconsole_handler.setLevel(logging.INFO)\nconsole_handler.setFormatter(formatter)\nlogger.addHandler(console_handler)\n\n\ndef log_function_time():\n    \"\"\"\n    Decorator factory that logs the execution time of the decorated function\n    to the console. If the execution time exceeds 10 minutes (the Travis\n    output time limit), the message is logged at the 'error' level.\n    Otherwise, it is logged at the 'info' level.\n    \"\"\"\n    def name_wrapper(function):\n        @functools.wraps(function)\n        def time_wrapper(*args, **kwargs):\n            function_name = function.__name__\n\n            start_time = time()\n            return_value = function(*args, **kwargs)\n            elapsed_time = time() - start_time\n\n            travis_output_time_limit = 600\n            message_level = logging.ERROR if elapsed_time >= travis_output_time_limit \\\n                else logging.INFO\n            # Re-enable logging (it may have been disabled elsewhere) just\n            # long enough to emit the timing message, then suppress it again.\n            logging.disable(logging.NOTSET)\n            logger.log(message_level,\n                       \"%s completed in %s seconds...\",\n                       function_name,\n                       str(elapsed_time))\n            logging.disable(logging.CRITICAL)\n\n            return return_value\n        return time_wrapper\n    return name_wrapper\n"
  },
  {
    "path": "tests/product/topology_installer.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for setting the topology on the presto-admin host prior to installing\npresto\n\"\"\"\n\nfrom tests.base_installer import BaseInstaller\nfrom tests.product.config_dir_utils import get_config_file_path\n\n\nclass TopologyInstaller(BaseInstaller):\n    def __init__(self, testcase):\n        self.testcase = testcase\n\n    @staticmethod\n    def get_dependencies():\n        return []\n\n    def install(self):\n        self.testcase.upload_topology(cluster=self.testcase.cluster)\n\n    @staticmethod\n    def assert_installed(testcase, msg=None):\n        testcase.cluster.exec_cmd_on_host(\n            testcase.cluster.master,\n            'test -r %s' % get_config_file_path())\n\n    def get_keywords(self):\n        return {}\n"
  },
  {
    "path": "tests/rpm/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n"
  },
  {
    "path": "tests/rpm/test_rpm.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nProduct tests for the presto-server RPM\n\"\"\"\n\nimport os\nfrom tests.no_hadoop_bare_image_provider import NoHadoopBareImageProvider\nfrom tests.product.base_product_case import BaseProductTestCase, docker_only\nfrom tests.product.cluster_types import STANDALONE_PA_CLUSTER, STANDALONE_PRESTO_CLUSTER\nfrom tests.product.standalone.presto_installer import StandalonePrestoInstaller\nfrom tests.product.test_server_install import relocate_jdk_directory\n\n\nclass TestRpm(BaseProductTestCase):\n    def setUp(self):\n        super(TestRpm, self).setUp()\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PA_CLUSTER)\n\n    @docker_only\n    def test_install_fails_java8_not_found(self):\n        installer = StandalonePrestoInstaller(self)\n        with relocate_jdk_directory(self.cluster, '/usr'):\n            self.upload_topology()\n            cmd_output = installer.install(pa_raise_error=False)\n            actual = cmd_output.splitlines()\n            num_failures = 0\n            for line in actual:\n                if line.find('Error: Required Java version'\n                             ' could not be found') != -1:\n                    num_failures += 1\n\n            self.assertEqual(4, num_failures)\n\n            for container in self.cluster.all_hosts():\n                
installer.assert_uninstalled(container)\n\n    @docker_only\n    def test_server_starts_java8_in_bin_java(self):\n        installer = StandalonePrestoInstaller(self)\n\n        with relocate_jdk_directory(self.cluster, '/usr') as new_java_home:\n            java_bin = os.path.join(new_java_home, 'bin', 'java')\n\n            for container in self.cluster.all_hosts():\n                self.cluster.exec_cmd_on_host(\n                    container, 'ln -s %s /bin/java' % (java_bin,))\n\n            self.upload_topology()\n\n            installer.install()\n\n            # starts successfully with java8_home set\n            output = self.run_prestoadmin('server start')\n            self.assertFalse(\n                'Warning: No value found for JAVA8_HOME. Default Java will be '\n                'used.' in output)\n\n    @docker_only\n    def test_server_starts_no_java8_variable(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        self.run_script_from_prestoadmin_dir('rm /etc/presto/env.sh')\n        # tests that no error is encountered\n        self.run_prestoadmin('server start')\n\n    @docker_only\n    def test_started_with_presto_user(self):\n        self.setup_cluster(NoHadoopBareImageProvider(), STANDALONE_PRESTO_CLUSTER)\n        start_output = self.run_prestoadmin('server start').splitlines()\n        process_per_host = self.get_process_per_host(start_output)\n\n        for host, pid in process_per_host:\n            user_for_pid = self.run_script_from_prestoadmin_dir(\n                'uid=$(awk \\'/^Uid:/{print $2}\\' /proc/%s/status);'\n                'getent passwd \"$uid\" | awk -F: \\'{print $1}\\'' % pid,\n                host)\n            self.assertEqual(user_for_pid.strip(), 'presto')\n"
  },
  {
    "path": "tests/unit/__init__.py",
    "content": "class SudoResult(object):\n    def __init__(self):\n        super(SudoResult, self).__init__()\n        self.return_code = 0\n        self.failed = False\n"
  },
  {
    "path": "tests/unit/base_unit_case.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom mock import patch\n\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.presto_config import PrestoConfig\n\nfrom tests.base_test_case import BaseTestCase\n\nPRESTO_CONFIG = PrestoConfig({\n    'http-server.http.enabled': 'true',\n    'http-server.https.enabled': 'false',\n    'http-server.http.port': '8080',\n    'http-server.https.port': '7878',\n    'http-server.https.keystore.path': '/UPDATE/THIS/PATH',\n    'http-server.https.keystore.key': 'UPDATE PASSWORD'},\n    \"TEST_PATH\",\n    \"TEST_HOST\")\n\n\nclass BaseUnitCase(BaseTestCase):\n\n    '''\n    Tasks generally require that the configuration they need to run has been\n    loaded. This takes care of loading the config without going to the\n    filesystem. 
For cases where you want to test the configuration load process\n    itself, you should pass load_config=False to setUp.\n    '''\n    def setUp(self, capture_output=False, load_config=True):\n        super(BaseUnitCase, self).setUp(capture_output=capture_output)\n        if load_config:\n            @patch('tests.unit.base_unit_case.StandaloneConfig.'\n                   '_get_conf_from_file')\n            def loader(mock_get_conf):\n                mock_get_conf.return_value = {'username': 'user',\n                                              'port': 1234,\n                                              'coordinator': 'master',\n                                              'workers': ['slave1', 'slave2']}\n\n                config = StandaloneConfig()\n                config.get_config()\n            loader()\n"
  },
  {
    "path": "tests/unit/resources/empty.txt",
    "content": ""
  },
  {
    "path": "tests/unit/resources/invalid.properties",
    "content": "abcd"
  },
  {
    "path": "tests/unit/resources/invalid_json_conf.json",
    "content": "{\n  \"user\": \"me\"\n  Invalid!!!\n}\n"
  },
  {
    "path": "tests/unit/resources/server_status_out.txt",
    "content": "Server Status:\n\tNode1(IP: IP1, Roles: coordinator, worker): Running\n\tNode URI(http): http://active/statement\n\tPresto Version: presto-main:0.97-SNAPSHOT\n\tNode status:    active\n\tCatalogs:     hive, system, tpch\nServer Status:\n\tNode2(IP: IP2, Roles: worker): Running\n\tNode URI(http): http://inactive/stmt\n\tPresto Version: presto-main:0.99-SNAPSHOT\n\tNode status:    inactive\n\tCatalogs:     hive, system, tpch\nServer Status:\n\tNode3(IP: IP3, Roles: worker): Running\n\tNo information available: the coordinator has not yet discovered this node\nServer Status:\n\tNode4(IP: Unknown, Roles: worker): Not Running\n\tTimed out trying to connect to Node4\n"
  },
  {
    "path": "tests/unit/resources/slider-extended-help.txt",
    "content": "Usage: presto-admin [options] <command> [arg]\n\nOptions:\n  --version             show program's version number and exit\n  -h, --help            show this help message and exit\n  -d, --display         print detailed information about command\n  --extended-help       print out all options, including advanced ones\n  -I, --initial-password-prompt\n                        Force password prompt up-front\n  -p PASSWORD, --password=PASSWORD\n                        password for use with authentication and/or sudo\n\n  Advanced Options:\n    -a, --no_agent      don't use the running SSH agent\n    -A, --forward-agent\n                        forward local agent to remote end\n    --colorize-errors   Color error output\n    -D, --disable-known-hosts\n                        do not load user known_hosts file\n    -g HOST, --gateway=HOST\n                        gateway host to connect through\n    -H HOSTS, --hosts=HOSTS\n                        comma-separated list of hosts to operate on\n    -i PATH             path to SSH private key file. 
May be repeated.\n    -k, --no-keys       don't load private key files from ~/.ssh/\n    --keepalive=N       enables a keepalive every N seconds\n    -n M, --connection-attempts=M\n                        make M attempts to connect before giving up\n    --port=PORT         SSH connection port\n    -r, --reject-unknown-hosts\n                        reject unknown hosts\n    --system-known-hosts=SYSTEM_KNOWN_HOSTS\n                        load system known_hosts file before reading user\n                        known_hosts\n    -t N, --timeout=N   set connection timeout to N seconds\n    -T N, --command-timeout=N\n                        set remote command timeout to N seconds\n    -u USER, --user=USER\n                        username to use when connecting to remote hosts\n    -x HOSTS, --exclude-hosts=HOSTS\n                        comma-separated list of hosts to exclude\n    --serial            default to serial execution method\n\nCommands:\n    server install\n    server uninstall\n    slider install\n    slider uninstall\n\n"
  },
  {
    "path": "tests/unit/resources/slider-help.txt",
    "content": "Usage: presto-admin [options] <command> [arg]\n\nOptions:\n  --version             show program's version number and exit\n  -h, --help            show this help message and exit\n  -d, --display         print detailed information about command\n  --extended-help       print out all options, including advanced ones\n  -I, --initial-password-prompt\n                        Force password prompt up-front\n  -p PASSWORD, --password=PASSWORD\n                        password for use with authentication and/or sudo\n\n\nCommands:\n    server install\n    server uninstall\n    slider install\n    slider uninstall\n\n"
  },
  {
    "path": "tests/unit/resources/standalone-extended-help.txt",
    "content": "Usage: presto-admin [options] <command> [arg]\n\nOptions:\n  --version             show program's version number and exit\n  -h, --help            show this help message and exit\n  -d, --display         print detailed information about command\n  --extended-help       print out all options, including advanced ones\n  -I, --initial-password-prompt\n                        Force password prompt up-front\n  -p PASSWORD, --password=PASSWORD\n                        password for use with authentication and/or sudo\n\n  Advanced Options:\n    -a, --no_agent      don't use the running SSH agent\n    -A, --forward-agent\n                        forward local agent to remote end\n    --colorize-errors   Color error output\n    -D, --disable-known-hosts\n                        do not load user known_hosts file\n    -g HOST, --gateway=HOST\n                        gateway host to connect through\n    -H HOSTS, --hosts=HOSTS\n                        comma-separated list of hosts to operate on\n    -i PATH             path to SSH private key file. 
May be repeated.\n    -k, --no-keys       don't load private key files from ~/.ssh/\n    --keepalive=N       enables a keepalive every N seconds\n    -n M, --connection-attempts=M\n                        make M attempts to connect before giving up\n    --port=PORT         SSH connection port\n    -r, --reject-unknown-hosts\n                        reject unknown hosts\n    --system-known-hosts=SYSTEM_KNOWN_HOSTS\n                        load system known_hosts file before reading user\n                        known_hosts\n    -t N, --timeout=N   set connection timeout to N seconds\n    -T N, --command-timeout=N\n                        set remote command timeout to N seconds\n    -u USER, --user=USER\n                        username to use when connecting to remote hosts\n    -x HOSTS, --exclude-hosts=HOSTS\n                        comma-separated list of hosts to exclude\n    --serial            default to serial execution method\n\nCommands:\n    catalog add\n    catalog remove\n    collect logs\n    collect query_info\n    collect system_info\n    configuration deploy\n    configuration show\n    file copy\n    file run\n    package install\n    package uninstall\n    plugin add_jar\n    server install\n    server restart\n    server start\n    server status\n    server stop\n    server uninstall\n    server upgrade\n    topology show\n\n"
  },
  {
    "path": "tests/unit/resources/standalone-help.txt",
    "content": "Usage: presto-admin [options] <command> [arg]\n\nOptions:\n  --version             show program's version number and exit\n  -h, --help            show this help message and exit\n  -d, --display         print detailed information about command\n  --extended-help       print out all options, including advanced ones\n  -I, --initial-password-prompt\n                        Force password prompt up-front\n  -p PASSWORD, --password=PASSWORD\n                        password for use with authentication and/or sudo\n\n\nCommands:\n    catalog add\n    catalog remove\n    collect logs\n    collect query_info\n    collect system_info\n    configuration deploy\n    configuration show\n    file copy\n    file run\n    package install\n    package uninstall\n    plugin add_jar\n    server install\n    server restart\n    server start\n    server status\n    server stop\n    server uninstall\n    server upgrade\n    topology show\n\n"
  },
  {
    "path": "tests/unit/resources/valid.config",
    "content": "prop1\nprop2\nprop3"
  },
  {
    "path": "tests/unit/resources/valid.properties",
    "content": "a=1\nb:2\nc   3\n! A comment\n# another comment\nd\\== 4\ne\\:=5\nf===6\ng:= 7\nh=:8\ni   = 9\n\n"
  },
  {
    "path": "tests/unit/resources/valid_rest_response_level1.txt",
    "content": "{\"id\":\"2015_harih\",\"infoUri\":\"http://localhost:8080/v1/query/2015_harih\",\"nextUri\":\"http://localhost:8080/v1/statement/2015_harih/2\"}"
  },
  {
    "path": "tests/unit/resources/valid_rest_response_level2.txt",
    "content": "{\"id\":\"2015_harih\",\"infoUri\":\"http://localhost:8080/v1/query/2015_harih\",\"nextUri\":\"\",\"data\":[[\"uuid1\",\"http://localhost:8080\",\"presto-main:0.97\",true], [\"uuid2\",\"http://worker:8080\",\"presto-main:0.97\",false]]}"
  },
  {
    "path": "tests/unit/standalone/__init__.py",
    "content": ""
  },
  {
    "path": "tests/unit/standalone/test_help.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom mock import patch\nimport os\n\nimport prestoadmin\nfrom prestoadmin import main\n\nfrom tests.unit.test_main import BaseMainCase\n\n\n# Consult the comment on yarn_slider.test_help.TestSliderHelp for more info.\nclass TestStandaloneHelp(BaseMainCase):\n    @patch('prestoadmin.mode.get_mode', return_value='standalone')\n    def setUp(self, mode_mock):\n        super(TestStandaloneHelp, self).setUp()\n        reload(prestoadmin)\n        reload(main)\n\n    def get_short_help_path(self):\n        return os.path.join('resources', 'standalone-help.txt')\n\n    def get_extended_help_path(self):\n        return os.path.join('resources', 'standalone-extended-help.txt')\n\n    def test_standalone_help_text_short(self):\n        self._run_command_compare_to_file(\n            [\"-h\"], 0, self.get_short_help_path())\n\n    def test_standalone_help_text_long(self):\n        self._run_command_compare_to_file(\n            [\"--help\"], 0, self.get_short_help_path())\n\n    def test_standalone_help_displayed_with_no_args(self):\n        self._run_command_compare_to_file(\n            [], 0, self.get_short_help_path())\n\n    def test_standalone_extended_help(self):\n        self._run_command_compare_to_file(\n            ['--extended-help'], 0, self.get_extended_help_path())\n"
  },
  {
    "path": "tests/unit/test_base_test_case.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nModule for validating functionality in BaseTestCase.\n\"\"\"\n\nfrom base_unit_case import BaseUnitCase\n\n\nclass TestBaseTestCase(BaseUnitCase):\n    def testLazyPass(self):\n        self.assertLazyMessage(\n            lambda: self.fail(\"shouldn't be called\"), self.assertEqual, 1, 1)\n\n    def testLazyFail(self):\n        a = 2\n        e = 1\n\n        self.assertRaisesRegexp(\n            AssertionError, 'asdfasdfasdf 2 1', self.assertLazyMessage,\n            lambda: 'asdfasdfasdf %d %d' % (a, e), self.assertEqual, a, e)\n"
  },
  {
    "path": "tests/unit/test_bdist_prestoadmin.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\n\nfrom distutils.dir_util import remove_tree\nfrom distutils.dir_util import mkpath\nfrom mock import patch\n\nfrom mock import call\n\nfrom tests.base_test_case import BaseTestCase\nfrom packaging.bdist_prestoadmin import bdist_prestoadmin\nfrom distutils.dist import Distribution\n\n\n# Hello future maintainer! Several tests in here include a version number in a\n# path. It is by pure coincidence that these happen to match the current\n# version number, if in fact they still do. 
We set the version number for the\n# tests in self.attrs, and it can be anything as long as the other version\n# numbers in the file match.\nclass TestBDistPrestoAdmin(BaseTestCase):\n    def setUp(self):\n        super(TestBDistPrestoAdmin, self).setUp()\n        self.attrs = {\n            'name': 'prestoadmin',\n            'cmdclass': {'bdist_prestoadmin': bdist_prestoadmin},\n            'version': '1.2',\n            'packages': ['prestoadmin'],\n            'package_dir': {'prestoadmin': 'prestoadmin'},\n            'install_requires': ['fabric']\n        }\n\n        # instantiation of the object calls\n        # initialize_options which is what we are testing\n        dist = Distribution(attrs=self.attrs)\n        self.bdist = dist.get_command_obj('bdist_prestoadmin')\n        self.bdist.finalize_options()\n\n    def test_initialize(self):\n        # we don't use the dist from setUp because\n        # we want to test before finalize is called\n        dist = Distribution(attrs=self.attrs)\n        bdist = dist.get_command_obj('bdist_prestoadmin')\n\n        self.assertEquals(bdist.bdist_dir, None)\n        self.assertEquals(bdist.dist_dir, None)\n        self.assertEquals(bdist.virtualenv_version, None)\n        self.assertEquals(bdist.keep_temp, False)\n        self.assertEquals(bdist.online_install, False)\n\n    def test_finalize(self):\n        self.assertRegexpMatches(\n            self.bdist.bdist_dir,\n            'build/bdist.*/prestoadmin')\n        self.assertEquals(self.bdist.dist_dir, 'dist')\n        self.assertEquals(self.bdist.default_virtualenv_version, '12.0.7')\n        self.assertEquals(self.bdist.keep_temp, False)\n\n    def test_finalize_argvs(self):\n        self.attrs['script_args'] = ['bdist_prestoadmin',\n                                     '--bdist-dir=junk',\n                                     '--dist-dir=tmp',\n                                     '--virtualenv-version=12.0.1',\n                                     '-k'\n     
                                ]\n\n        # we don't use the dist from setUp because\n        # we want to test with additional arguments\n        dist = Distribution(attrs=self.attrs)\n        dist.parse_command_line()\n        bdist = dist.get_command_obj('bdist_prestoadmin')\n        bdist.finalize_options()\n\n        self.assertEquals(bdist.bdist_dir, 'junk')\n        self.assertEquals(bdist.dist_dir, 'tmp')\n        self.assertEquals(bdist.virtualenv_version, '12.0.1')\n        self.assertEquals(bdist.keep_temp, True)\n\n    @patch('distutils.core.Command.run_command')\n    def test_build_wheel(self, run_command_mock):\n        self.assertEquals('prestoadmin-1.2-py2-none-any',\n                          self.bdist.build_wheel('build'))\n\n    @patch('packaging.bdist_prestoadmin.pip.main')\n    def test_package_dependencies_for_offline_installer(self, pip_mock):\n        build_path = os.path.join('build', 'prestoadmin')\n        self.bdist.package_dependencies(build_path)\n\n        calls = [call(['wheel',\n                       '--wheel-dir=build/prestoadmin/third-party',\n                       '--no-cache',\n                       'fabric']),\n                 call(['install',\n                       '-d',\n                       'build/prestoadmin/third-party',\n                       '--no-cache',\n                       '--no-use-wheel',\n                       'virtualenv==12.0.7'])]\n        pip_mock.assert_has_calls(calls, any_order=False)\n\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.'\n           'generate_install_script')\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.build_wheel')\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.'\n           'package_dependencies')\n    def test_package_dependencies_for_online_installer(\n            self, package_dependencies_mock, build_wheel_mock,\n            generate_install_script_mock):\n        self.bdist.online_install = True\n\n        self.bdist.run()\n\n        
assert not package_dependencies_mock.called, 'method should not have been called'\n\n    def test_generate_online_install_script(self):\n        test_input = ['virtualenv-%VIRTUALENV_VERSION%.tar.gz\\n',\n                      'pip install %WHEEL_NAME%.whl %ONLINE_OR_OFFLINE_INSTALL%']\n        self.bdist.online_install = True\n        output = self.bdist._fill_in_template(test_input, 'my_wheel')\n        self.assertEqual(output, 'virtualenv-12.0.7.tar.gz\\npip install my_wheel.whl ')\n\n    def test_generate_offline_install_script(self):\n        test_input = ['virtualenv-%VIRTUALENV_VERSION%.tar.gz\\n',\n                      'pip install %WHEEL_NAME%.whl %ONLINE_OR_OFFLINE_INSTALL%']\n        self.bdist.online_install = False\n        output = self.bdist._fill_in_template(test_input, 'my_wheel')\n        self.assertEqual(output,\n                         'virtualenv-12.0.7.tar.gz\\npip install my_wheel.whl --no-index --find-links third-party')\n\n    def test_archive_dist_offline(self):\n        build_path = os.path.join('build', 'prestoadmin')\n        try:\n            mkpath(build_path)\n            self.bdist.archive_dist(build_path, 'dist')\n\n            archive = os.path.join('dist', 'prestoadmin-1.2-offline.tar.gz')\n            self.assertTrue(os.path.exists(archive))\n        finally:\n            remove_tree(os.path.dirname(build_path))\n            remove_tree('dist')\n\n    def test_archive_dist_online(self):\n        build_path = os.path.join('build', 'prestoadmin')\n        try:\n            mkpath(build_path)\n            self.bdist.online_install = True\n            self.bdist.archive_dist(build_path, 'dist')\n\n            archive = os.path.join('dist', 'prestoadmin-1.2-online.tar.gz')\n            self.assertTrue(os.path.exists(archive))\n        finally:\n            remove_tree(os.path.dirname(build_path))\n            remove_tree('dist')\n\n    @patch('distutils.core.Command.mkpath')\n    @patch('packaging.bdist_prestoadmin.remove_tree')\n  
  @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.build_wheel',\n           return_value='wheel_name')\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.' +\n           'generate_install_script')\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.' +\n           'package_dependencies')\n    @patch('packaging.bdist_prestoadmin.bdist_prestoadmin.archive_dist')\n    def test_run(self,\n                 archive_dist_mock,\n                 package_dependencies_mock,\n                 install_script_mock,\n                 build_wheel_mock,\n                 remove_tree_mock,\n                 mkpath_mock):\n        self.bdist.run()\n\n        def matching_regex(expected_regex):\n            class RegexMatcher:\n                def __eq__(self, other):\n                    return re.match(expected_regex, other)\n            return RegexMatcher()\n\n        build_path_re = matching_regex(\n            'build/bdist.*/prestoadmin')\n        build_wheel_mock.assert_called_once_with(build_path_re)\n        install_script_mock.assert_called_once_with('wheel_name',\n                                                    build_path_re)\n        package_dependencies_mock.assert_called_once_with(\n            build_path_re)\n        archive_dist_mock.assert_called_once_with(build_path_re, 'dist')\n\n    def test_description(self):\n        self.assertEquals('create a distribution for prestoadmin',\n                          self.bdist.description)\n\n    def test_user_options(self):\n        expected = [('bdist-dir=', 'b',\n                     'temporary directory for creating the distribution'),\n                    ('dist-dir=', 'd',\n                     'directory to put final built distributions in'),\n                    ('virtualenv-version=', None,\n                     'version of virtualenv to download'),\n                    ('keep-temp', 'k',\n                     'keep the pseudo-installation tree around after ' +\n                     'creating 
the distribution archive'),\n                    ('online-install', None, 'boolean flag indicating if ' +\n                     'the installation should pull dependencies from the ' +\n                     'Internet or use the ones supplied in the third party ' +\n                     'directory')\n                    ]\n\n        self.assertEquals(expected, self.bdist.user_options)\n"
  },
  {
    "path": "tests/unit/test_catalog.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\ntests for catalog module\n\"\"\"\nimport os\n\nimport fabric.api\nfrom fabric.operations import _AttributeString\nfrom mock import patch\n\nfrom prestoadmin import catalog\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.exception import ConfigurationError, \\\n    ConfigFileNotFoundError\nfrom prestoadmin.standalone.config import PRESTO_STANDALONE_USER_GROUP\nfrom prestoadmin.util.local_config_util import get_catalog_directory\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestCatalog(BaseUnitCase):\n    def setUp(self):\n        super(TestCatalog, self).setUp(capture_output=True)\n\n    @patch('prestoadmin.catalog.os.path.isfile')\n    def test_add_not_exist(self, isfile_mock):\n        isfile_mock.return_value = False\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Configuration for catalog dummy not found',\n                                catalog.add, 'dummy')\n\n    @patch('prestoadmin.catalog.validate')\n    @patch('prestoadmin.catalog.deploy_files')\n    @patch('prestoadmin.catalog.os.path.isfile')\n    def test_add_exists(self, isfile_mock, deploy_mock, validate_mock):\n        isfile_mock.return_value = True\n        catalog.add('tpch')\n        filenames = ['tpch.properties']\n        deploy_mock.assert_called_with(filenames,\n                                       
get_catalog_directory(),\n                                       constants.REMOTE_CATALOG_DIR,\n                                       PRESTO_STANDALONE_USER_GROUP)\n        validate_mock.assert_called_with(filenames)\n\n    @patch('prestoadmin.catalog.deploy_files')\n    @patch('prestoadmin.catalog.os.path.isdir')\n    @patch('prestoadmin.catalog.os.listdir')\n    @patch('prestoadmin.catalog.validate')\n    def test_add_all(self, mock_validate, listdir_mock, isdir_mock,\n                     deploy_mock):\n        catalogs = ['tpch.properties', 'another.properties']\n        listdir_mock.return_value = catalogs\n        catalog.add()\n        deploy_mock.assert_called_with(catalogs,\n                                       get_catalog_directory(),\n                                       constants.REMOTE_CATALOG_DIR,\n                                       PRESTO_STANDALONE_USER_GROUP)\n\n    @patch('prestoadmin.catalog.deploy_files')\n    @patch('prestoadmin.catalog.os.path.isdir')\n    def test_add_all_fails_if_dir_not_there(self, isdir_mock, deploy_mock):\n        isdir_mock.return_value = False\n        self.assertRaisesRegexp(ConfigFileNotFoundError,\n                                r'Cannot add catalogs because directory .+'\n                                r' does not exist',\n                                catalog.add)\n        self.assertFalse(deploy_mock.called)\n\n    @patch('prestoadmin.catalog.sudo')\n    @patch('prestoadmin.catalog.os.path.exists')\n    @patch('prestoadmin.catalog.os.remove')\n    def test_remove(self, local_rm_mock, exists_mock, sudo_mock):\n        script = ('if [ -f /etc/presto/catalog/tpch.properties ] ; '\n                  'then rm /etc/presto/catalog/tpch.properties ; '\n                  'else echo \"Could not remove catalog \\'tpch\\'. 
'\n                  'No such file \\'/etc/presto/catalog/tpch.properties\\'\"; fi')\n        exists_mock.return_value = True\n        fabric.api.env.host = 'localhost'\n        catalog.remove('tpch')\n        sudo_mock.assert_called_with(script)\n        local_rm_mock.assert_called_with(get_catalog_directory() +\n                                         '/tpch.properties')\n\n    @patch('prestoadmin.catalog.sudo')\n    @patch('prestoadmin.catalog.os.path.exists')\n    def test_remove_failure(self, exists_mock, sudo_mock):\n        exists_mock.return_value = False\n        fabric.api.env.host = 'localhost'\n        out = _AttributeString()\n        out.succeeded = False\n        sudo_mock.return_value = out\n        self.assertRaisesRegexp(SystemExit,\n                                '\\\\[localhost\\\\] Failed to remove catalog tpch.',\n                                catalog.remove,\n                                'tpch')\n\n    @patch('prestoadmin.catalog.sudo')\n    @patch('prestoadmin.catalog.os.path.exists')\n    def test_remove_no_such_file(self, exists_mock, sudo_mock):\n        exists_mock.return_value = False\n        fabric.api.env.host = 'localhost'\n        error_msg = ('Could not remove catalog tpch: No such file ' +\n                     os.path.join(get_catalog_directory(), 'tpch.properties'))\n        out = _AttributeString(error_msg)\n        out.succeeded = True\n        sudo_mock.return_value = out\n        self.assertRaisesRegexp(SystemExit,\n                                '\\\\[localhost\\\\] %s' % error_msg,\n                                catalog.remove,\n                                'tpch')\n\n    @patch('prestoadmin.catalog.os.listdir')\n    @patch('prestoadmin.catalog.os.path.isdir')\n    def test_warning_if_connector_dir_empty(self, isdir_mock, listdir_mock):\n        isdir_mock.return_value = True\n        listdir_mock.return_value = []\n        catalog.add()\n        self.assertEqual('\\nWarning: Directory %s is empty. 
No catalogs will'\n                         ' be deployed\\n\\n' % get_catalog_directory(),\n                         self.test_stderr.getvalue())\n\n    @patch('prestoadmin.catalog.os.listdir')\n    @patch('prestoadmin.catalog.os.path.isdir')\n    def test_add_permission_denied(self, isdir_mock, listdir_mock):\n        isdir_mock.return_value = True\n        error_msg = ('Permission denied')\n        listdir_mock.side_effect = OSError(13, error_msg)\n        fabric.api.env.host = 'localhost'\n        self.assertRaisesRegexp(SystemExit, '\\[localhost\\] %s' % error_msg,\n                                catalog.add)\n\n    @patch('prestoadmin.catalog.os.remove')\n    @patch('prestoadmin.catalog.remove_file')\n    def test_remove_os_error(self, remove_file_mock, remove_mock):\n        fabric.api.env.host = 'localhost'\n        error = OSError(13, 'Permission denied')\n        remove_mock.side_effect = error\n        self.assertRaisesRegexp(OSError, 'Permission denied',\n                                catalog.remove, 'tpch')\n\n    @patch('prestoadmin.catalog.secure_create_directory')\n    @patch('prestoadmin.util.fabricapi.put')\n    def test_deploy_files(self, put_mock, create_dir_mock):\n        local_dir = '/my/local/dir'\n        remote_dir = '/my/remote/dir'\n        catalog.deploy_files(['a', 'b'], local_dir, remote_dir,\n                             PRESTO_STANDALONE_USER_GROUP)\n        create_dir_mock.assert_called_with(remote_dir, PRESTO_STANDALONE_USER_GROUP)\n        put_mock.assert_any_call('/my/local/dir/a', remote_dir, use_sudo=True,\n                                 mode=0600)\n        put_mock.assert_any_call('/my/local/dir/b', remote_dir, use_sudo=True,\n                                 mode=0600)\n\n    @patch('prestoadmin.catalog.os.path.isfile')\n    @patch(\"__builtin__.open\")\n    def test_validate(self, open_mock, is_file_mock):\n        is_file_mock.return_value = True\n        file_obj = open_mock.return_value.__enter__.return_value\n      
  file_obj.read.return_value = 'connector.noname=example'\n\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Catalog configuration example.properties '\n                                'does not contain connector.name',\n                                catalog.add, 'example')\n\n    @patch('prestoadmin.catalog.os.path.isfile')\n    def test_validate_fail(self, is_file_mock):\n        is_file_mock.return_value = True\n\n        self.assertRaisesRegexp(\n            SystemExit,\n            'Error validating ' + os.path.join(get_catalog_directory(), 'example.properties') + '\\n\\n'\n            'Underlying exception:\\n    No such file or directory',\n            catalog.add, 'example')\n\n    @patch('prestoadmin.catalog.get')\n    @patch('prestoadmin.catalog.files.exists')\n    @patch('prestoadmin.catalog.ensure_directory_exists')\n    @patch('prestoadmin.catalog.os.path.exists')\n    def test_gather_connectors(self, path_exists, ensure_dir_exists,\n                               files_exists, get_mock):\n        fabric.api.env.host = 'any_host'\n        path_exists.return_value = False\n        files_exists.return_value = True\n        catalog.gather_catalogs('local_config_dir')\n        get_mock.assert_called_once_with(\n            constants.REMOTE_CATALOG_DIR, 'local_config_dir/any_host/catalog', use_sudo=True)\n\n        # if remote catalog dir does not exist\n        get_mock.reset_mock()\n        files_exists.return_value = False\n        results = catalog.gather_catalogs('local_config_dir')\n        self.assertEqual([], results)\n        self.assertFalse(get_mock.called)\n"
  },
  {
    "path": "tests/unit/test_collect.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the presto diagnostic information using presto-admin collect\n\"\"\"\nimport os\nfrom os import path\n\nimport requests\nfrom fabric.api import env\nfrom mock import patch\n\nimport prestoadmin\nfrom prestoadmin import collect\nfrom prestoadmin.collect import \\\n    TMP_PRESTO_DEBUG, \\\n    PRESTOADMIN_LOG_NAME, \\\n    OUTPUT_FILENAME_FOR_LOGS, \\\n    OUTPUT_FILENAME_FOR_SYS_INFO, \\\n    TMP_PRESTO_DEBUG_REMOTE\nfrom prestoadmin.util.local_config_util import get_log_directory\nfrom tests.unit.base_unit_case import BaseUnitCase, PRESTO_CONFIG\n\n\nclass TestCollect(BaseUnitCase):\n    @patch('prestoadmin.collect.lookup_launcher_log_file')\n    @patch('prestoadmin.collect.lookup_server_log_file')\n    @patch('prestoadmin.collect.get_files')\n    @patch(\"prestoadmin.collect.tarfile.open\")\n    @patch(\"prestoadmin.collect.shutil.copy\")\n    @patch(\"prestoadmin.collect.ensure_directory_exists\")\n    def test_collect_logs(self, mkdirs_mock, copy_mock,\n                          tarfile_open_mock, get_files_mock, server_log_mock,\n                          launcher_log_mock):\n        downloaded_logs_loc = path.join(TMP_PRESTO_DEBUG, \"logs\")\n\n        collect.logs()\n\n        mkdirs_mock.assert_called_with(downloaded_logs_loc)\n        copy_mock.assert_called_with(path.join(get_log_directory(),\n                                               
PRESTOADMIN_LOG_NAME),\n                                     downloaded_logs_loc)\n\n        tarfile_open_mock.assert_called_with(OUTPUT_FILENAME_FOR_LOGS, 'w:gz')\n        tar = tarfile_open_mock.return_value\n        tar.add.assert_called_with(downloaded_logs_loc,\n                                   arcname=path.basename(downloaded_logs_loc))\n\n    @patch(\"prestoadmin.collect.os.makedirs\")\n    @patch(\"prestoadmin.collect.get\")\n    def test_get_files(self, get_mock, makedirs_mock):\n        remote_path = \"/a/b\"\n        local_path = \"/c/d\"\n        env.host = \"myhost\"\n        path_with_host_name = path.join(local_path, env.host)\n\n        collect.get_files(remote_path, local_path)\n\n        makedirs_mock.assert_called_with(os.path.join(local_path, env.host))\n        get_mock.assert_called_with(remote_path, path_with_host_name, use_sudo=True)\n\n    @patch(\"prestoadmin.collect.os.makedirs\")\n    @patch(\"prestoadmin.collect.warn\")\n    @patch(\"prestoadmin.collect.get\")\n    def test_get_files_warning(self, get_mock, warn_mock, makedirs_mock):\n        remote_path = \"/a/b\"\n        local_path = \"/c/d\"\n        env.host = \"remote_host\"\n        get_mock.side_effect = SystemExit\n\n        collect.get_files(remote_path, local_path)\n\n        warn_mock.assert_called_with(\"remote path \" + remote_path +\n                                     \" not found on \" + env.host)\n\n    @patch(\"prestoadmin.collect.requests.get\")\n    def test_query_info_not_run_on_workers(self, req_get_mock):\n        env.host = [\"worker1\"]\n        env.roledefs[\"worker\"] = [\"worker1\"]\n        collect.query_info(\"any_query_id\")\n        assert not req_get_mock.called\n\n    @patch('prestoadmin.collect.request_url')\n    @patch(\"prestoadmin.collect.requests.get\")\n    def test_query_info_fail_invalid_id(self, req_get_mock, requests_url):\n        env.host = \"myhost\"\n        env.roledefs[\"coordinator\"] = [\"myhost\"]\n        query_id = 
\"invalid_id\"\n        req_get_mock.return_value.status_code = requests.codes.ok + 10\n        self.assertRaisesRegexp(SystemExit, \"Unable to retrieve information. \"\n                                            \"Please check that the query_id \"\n                                            \"is correct, or check that server \"\n                                            \"is up with command: \"\n                                            \"server status\",\n                                collect.query_info, query_id)\n\n    @patch(\"prestoadmin.collect.json.dumps\")\n    @patch(\"prestoadmin.collect.requests.models.json\")\n    @patch(\"__builtin__.open\")\n    @patch(\"prestoadmin.collect.os.makedirs\")\n    @patch(\"prestoadmin.collect.requests.get\")\n    @patch('prestoadmin.collect.request_url')\n    def test_collect_query_info(self, requests_url_mock, requests_get_mock,\n                                mkdir_mock, open_mock,\n                                req_json_mock, json_dumps_mock):\n        query_id = \"1234_abcd\"\n        query_info_file_name = path.join(TMP_PRESTO_DEBUG,\n                                         \"query_info_\" + query_id + \".json\")\n        file_obj = open_mock.return_value.__enter__.return_value\n        requests_get_mock.return_value.json.return_value = req_json_mock\n        requests_get_mock.return_value.status_code = requests.codes.ok\n        env.host = \"myhost\"\n        env.roledefs[\"coordinator\"] = [\"myhost\"]\n\n        collect.query_info(query_id)\n\n        mkdir_mock.assert_called_with(TMP_PRESTO_DEBUG)\n\n        open_mock.assert_called_with(query_info_file_name, \"w\")\n\n        json_dumps_mock.assert_called_with(req_json_mock, indent=4)\n\n        file_obj.write.assert_called_with(json_dumps_mock.return_value)\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch(\"prestoadmin.collect.make_tarfile\")\n    
@patch('prestoadmin.collect.get_catalog_info_from')\n    @patch(\"prestoadmin.collect.json.dumps\")\n    @patch(\"prestoadmin.collect.requests.models.json\")\n    @patch('prestoadmin.collect.execute')\n    @patch(\"__builtin__.open\")\n    @patch(\"prestoadmin.collect.os.makedirs\")\n    @patch(\"prestoadmin.collect.requests.get\")\n    @patch('prestoadmin.collect.request_url')\n    def test_collect_system_info(self, requests_url_mock, requests_get_mock,\n                                 makedirs_mock, open_mock,\n                                 execute_mock, req_json_mock,\n                                 json_dumps_mock, catalog_info_mock,\n                                 make_tarfile_mock, mock_presto_config):\n        downloaded_sys_info_loc = path.join(TMP_PRESTO_DEBUG, \"sysinfo\")\n        node_info_file_name = path.join(downloaded_sys_info_loc,\n                                        \"node_info.json\")\n        conn_info_file_name = path.join(downloaded_sys_info_loc,\n                                        \"catalog_info.txt\")\n\n        file_obj = open_mock.return_value.__enter__.return_value\n        requests_get_mock.return_value.json.return_value = req_json_mock\n        requests_get_mock.return_value.status_code = requests.codes.ok\n        catalog_info = catalog_info_mock.return_value\n\n        env.host = \"myhost\"\n        env.roledefs[\"coordinator\"] = [\"myhost\"]\n        collect.system_info()\n\n        makedirs_mock.assert_called_with(downloaded_sys_info_loc)\n\n        open_mock.assert_any_call(node_info_file_name, \"w\")\n\n        json_dumps_mock.assert_called_with(req_json_mock, indent=4)\n\n        file_obj.write.assert_any_call(json_dumps_mock.return_value)\n\n        open_mock.assert_any_call(conn_info_file_name, \"w\")\n\n        assert catalog_info_mock.called\n\n        file_obj.write.assert_any_call(catalog_info + '\\n')\n\n        
execute_mock.assert_called_with(collect.get_system_info,\n                                        downloaded_sys_info_loc, roles=[])\n\n        make_tarfile_mock.assert_called_with(OUTPUT_FILENAME_FOR_SYS_INFO,\n                                             downloaded_sys_info_loc)\n\n    @patch(\"prestoadmin.collect.get_files\")\n    @patch(\"prestoadmin.collect.append\")\n    @patch(\"prestoadmin.collect.get_presto_version\")\n    @patch(\"prestoadmin.collect.get_java_version\")\n    @patch(\"prestoadmin.collect.get_platform_information\")\n    @patch('prestoadmin.collect.run')\n    def test_get_system_info(self, run_collect_mock,\n                             plat_info_mock, java_version_mock,\n                             server_version_mock,\n                             append_mock, get_files_mock):\n        downloaded_sys_info_loc = path.join(TMP_PRESTO_DEBUG, \"sysinfo\")\n        version_info_file_name = path.join(TMP_PRESTO_DEBUG_REMOTE,\n                                           \"version_info.txt\")\n\n        platform_info = \"platform abcd\"\n        server_version = \"dummy_version\"\n        java_version = \"java dummy version\"\n        plat_info_mock.return_value = platform_info\n        java_version_mock.return_value = java_version\n        server_version_mock.return_value = server_version\n\n        collect.get_system_info(downloaded_sys_info_loc)\n\n        run_collect_mock.assert_any_call('mkdir -p ' + TMP_PRESTO_DEBUG_REMOTE)\n\n        append_mock.assert_any_call(version_info_file_name,\n                                    'platform information : ' +\n                                    platform_info + '\\n')\n\n        append_mock.assert_any_call(version_info_file_name,\n                                    'Java version: ' +\n                                    java_version + '\\n')\n\n        append_mock.assert_any_call(version_info_file_name,\n                                    'Presto-admin version: ' +\n                                  
  prestoadmin.__version__ + '\\n')\n\n        append_mock.assert_any_call(version_info_file_name,\n                                    'Presto server version: ' +\n                                    server_version + '\\n')\n\n        get_files_mock.assert_called_with(version_info_file_name,\n                                          downloaded_sys_info_loc)\n"
  },
  {
    "path": "tests/unit/test_config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom mock import patch\n\nfrom prestoadmin import config\nfrom prestoadmin.util.exception import ConfigurationError, \\\n    ConfigFileNotFoundError\nfrom tests.base_test_case import BaseTestCase\n\n\nDIR = os.path.abspath(os.path.dirname(__file__))\n\n\nclass TestConfiguration(BaseTestCase):\n    def test_file_does_not_exist_json(self):\n        self.assertRaisesRegexp(ConfigFileNotFoundError,\n                                'Missing configuration file ',\n                                config.get_conf_from_json_file,\n                                'does/not/exist/conf.json')\n\n    def test_file_is_empty_json(self):\n        emptyconf = {}\n        conf = config.get_conf_from_json_file(DIR + '/resources/empty.txt')\n        self.assertEqual(conf, emptyconf)\n\n    def test_file_is_empty_properties(self):\n        emptyconf = {}\n        conf = config.get_conf_from_properties_file(\n            DIR + '/resources/empty.txt')\n        self.assertEqual(conf, emptyconf)\n\n    def test_file_is_empty_config(self):\n        emptyconf = []\n        conf = config.get_conf_from_config_file(DIR + '/resources/empty.txt')\n        self.assertEqual(conf, emptyconf)\n\n    def test_invalid_json(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Expecting , delimiter: line 3 column 3 '\n                                '\\(char 
19\\)',\n                                config.get_conf_from_json_file,\n                                DIR + '/resources/invalid_json_conf.json')\n\n    def test_get_config(self):\n        config_file = os.path.join(DIR, 'resources', 'valid.config')\n        conf = config.get_conf_from_config_file(config_file)\n        self.assertEqual(conf, ['prop1', 'prop2', 'prop3'])\n\n    def test_get_properties(self):\n        config_file = os.path.join(DIR, 'resources', 'valid.properties')\n        conf = config.get_conf_from_properties_file(config_file)\n        self.assertEqual(conf, {'a': '1', 'b': '2', 'c': '3',\n                                'd\\\\=': '4', 'e\\\\:': '5', 'f': '==6',\n                                'g': '= 7', 'h': ':8', 'i': '9'})\n\n    @patch('__builtin__.open')\n    def test_get_properties_ignores_whitespace(self, open_mock):\n        file_manager = open_mock.return_value.__enter__.return_value\n        file_manager.read.return_value = ' key1 =value1 \\n   \\n key2= value2'\n        conf = config.get_conf_from_properties_file('/dummy/path')\n        self.assertEqual(conf, {'key1': 'value1', 'key2': 'value2'})\n\n    def test_get_properties_invalid(self):\n        config_file = os.path.join(DIR, 'resources', 'invalid.properties')\n        self.assertRaisesRegexp(ConfigurationError,\n                                'abcd is not in the expected format: '\n                                '<property>=<value>, <property>:<value> or '\n                                '<property> <value>',\n                                config.get_conf_from_properties_file,\n                                config_file)\n\n    def test_fill_defaults_no_missing(self):\n        orig = {'key1': 'val1', 'key2': 'val2', 'key3': 'val3'}\n        defaults = {'key1': 'default1', 'key2': 'default2'}\n        filled = orig.copy()\n        config.fill_defaults(filled, defaults)\n        self.assertEqual(filled, orig)\n\n    def test_fill_defaults(self):\n        orig = {'key1': 
'val1', 'key3': 'val3'}\n        defaults = {'key1': 'default1', 'key2': 'default2'}\n        filled = orig.copy()\n        config.fill_defaults(filled, defaults)\n        self.assertEqual(filled,\n                         {'key1': 'val1', 'key2': 'default2', 'key3': 'val3'})\n"
  },
  {
    "path": "tests/unit/test_configure_cmds.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom fabric.state import env\nfrom mock import patch\nfrom prestoadmin.util import constants\nfrom prestoadmin import configure_cmds\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestConfigureCmds(BaseUnitCase):\n    @patch('prestoadmin.configure_cmds.get')\n    @patch('prestoadmin.configure_cmds.files.exists')\n    def test_config_show(self, mock_file_exists, mock_get):\n        mock_file_exists.return_value = True\n\n        configure_cmds.show(\"Node\")\n        file_path_node = os.path.join(constants.REMOTE_CONF_DIR,\n                                      \"node.properties\")\n        args, kwargs = mock_get.call_args\n        self.assertEqual(args[0], file_path_node)\n\n        configure_cmds.show(\"jvm\")\n        file_path_jvm = os.path.join(constants.REMOTE_CONF_DIR, \"jvm.config\")\n        args, kwargs = mock_get.call_args\n        self.assertEqual(args[0], file_path_jvm)\n\n        configure_cmds.show(\"conFig\")\n        file_path_config = os.path.join(constants.REMOTE_CONF_DIR,\n                                        \"config.properties\")\n        args, kwargs = mock_get.call_args\n        self.assertEqual(args[0], file_path_config)\n\n    @patch('prestoadmin.configure_cmds.configuration_show')\n    def test_config_show_all(self, mock_show):\n        configure_cmds.show()\n        mock_show.assert_any_call(\"node.properties\")\n      
  mock_show.assert_any_call(\"jvm.config\")\n        mock_show.assert_any_call(\"config.properties\")\n        mock_show.assert_any_call(\"log.properties\", should_warn=False)\n\n    @patch('prestoadmin.configure_cmds.abort')\n    @patch('prestoadmin.configure_cmds.warn')\n    @patch('prestoadmin.configure_cmds.files.exists')\n    def test_config_show_fail(self, mock_file_exists, mock_warn, mock_abort):\n        mock_file_exists.return_value = False\n        env.host = \"any_host\"\n        configure_cmds.configuration_show(\"any_path\")\n        file_path = os.path.join(constants.REMOTE_CONF_DIR, \"any_path\")\n        mock_warn.assert_called_with(\"No configuration file found \"\n                                     \"for %s at %s\" % (env.host, file_path))\n\n        configure_cmds.show(\"invalid_config\")\n        mock_abort.assert_called_with(\"Invalid Argument. Possible values: \"\n                                      \"node, jvm, config, log\")\n\n    @patch('prestoadmin.configure_cmds.warn')\n    @patch('prestoadmin.configure_cmds.files.exists')\n    def test_config_show_fail_no_warn(self, mock_file_exists, mock_warn):\n        mock_file_exists.return_value = False\n        env.host = \"any_host\"\n        configure_cmds.configuration_show(\"any_path\", should_warn=False)\n        self.assertFalse(mock_warn.called)\n\n    @patch('prestoadmin.configure_cmds.abort')\n    @patch('prestoadmin.deploy.workers')\n    @patch('prestoadmin.deploy.coordinator')\n    def test_config_deploy(self, mock_coordinator, mock_workers, mock_abort):\n        env.host = \"any_host\"\n        configure_cmds.deploy(\"invalid_config\")\n        mock_abort.assert_called_with(\"Invalid Argument. 
\"\n                                      \"Possible values: coordinator, workers\")\n\n        configure_cmds.deploy()\n        mock_workers.assert_called_with()\n        mock_coordinator.assert_called_with()\n\n    @patch('prestoadmin.deploy.workers')\n    @patch('prestoadmin.deploy.coordinator')\n    def test_config_deploy_coord(self, mock_coordinator, mock_workers):\n        env.host = \"any_host\"\n        configure_cmds.deploy(\"coordinator\")\n        mock_coordinator.assert_called_with()\n        assert not mock_workers.called\n\n    @patch('prestoadmin.deploy.workers')\n    @patch('prestoadmin.deploy.coordinator')\n    def test_config_deploy_workers(self, mock_coordinator, mock_workers):\n        env.host = \"any_host\"\n        configure_cmds.deploy(\"workers\")\n        mock_workers.assert_called_with()\n        assert not mock_coordinator.called\n"
  },
  {
    "path": "tests/unit/test_coordinator.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the coordinator module\n\"\"\"\nfrom fabric.api import env\nfrom mock import patch\n\nfrom prestoadmin import coordinator\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestCoordinator(BaseTestCase):\n\n    def test_build_all_defaults(self):\n        env.roledefs['coordinator'] = 'a'\n        env.roledefs['workers'] = ['b', 'c']\n        actual_default = coordinator.Coordinator().build_all_defaults()\n        expected = {'node.properties':\n                    {'node.environment': 'presto',\n                     'node.data-dir': '/var/lib/presto/data',\n                     'node.launcher-log-file': '/var/log/presto/launcher.log',\n                     'node.server-log-file': '/var/log/presto/server.log',\n                     'catalog.config-dir': '/etc/presto/catalog',\n                     'plugin.dir': '/usr/lib/presto/lib/plugin'},\n                    'jvm.config': ['-server',\n                                   '-Xmx16G',\n                                   '-XX:-UseBiasedLocking',\n                                   '-XX:+UseG1GC',\n                                   '-XX:G1HeapRegionSize=32M',\n                                   '-XX:+ExplicitGCInvokesConcurrent',\n                                   '-XX:+HeapDumpOnOutOfMemoryError',\n                                   
'-XX:+UseGCOverheadLimit',\n                                   '-XX:+ExitOnOutOfMemoryError',\n                                   '-XX:ReservedCodeCacheSize=512M',\n                                   '-DHADOOP_USER_NAME=hive'],\n                    'config.properties': {\n                        'coordinator': 'true',\n                        'discovery-server.enabled': 'true',\n                        'discovery.uri': 'http://a:8080',\n                        'http-server.http.port': '8080',\n                        'node-scheduler.include-coordinator': 'false',\n                        'query.max-memory': '50GB',\n                        'query.max-memory-per-node': '8GB'}\n                    }\n\n        self.assertEqual(actual_default, expected)\n\n    def test_defaults_coord_is_worker(self):\n        env.roledefs['coordinator'] = ['a']\n        env.roledefs['worker'] = ['a', 'b', 'c']\n        actual_default = coordinator.Coordinator().build_all_defaults()\n        expected = {'node.properties': {\n                    'node.environment': 'presto',\n                    'node.data-dir': '/var/lib/presto/data',\n                    'node.launcher-log-file': '/var/log/presto/launcher.log',\n                    'node.server-log-file': '/var/log/presto/server.log',\n                    'catalog.config-dir': '/etc/presto/catalog',\n                    'plugin.dir': '/usr/lib/presto/lib/plugin'},\n                    'jvm.config': ['-server',\n                                   '-Xmx16G',\n                                   '-XX:-UseBiasedLocking',\n                                   '-XX:+UseG1GC',\n                                   '-XX:G1HeapRegionSize=32M',\n                                   '-XX:+ExplicitGCInvokesConcurrent',\n                                   '-XX:+HeapDumpOnOutOfMemoryError',\n                                   '-XX:+UseGCOverheadLimit',\n                                   '-XX:+ExitOnOutOfMemoryError',\n                                   
'-XX:ReservedCodeCacheSize=512M',\n                                   '-DHADOOP_USER_NAME=hive'],\n                    'config.properties': {\n                        'coordinator': 'true',\n                        'discovery-server.enabled': 'true',\n                        'discovery.uri': 'http://a:8080',\n                        'http-server.http.port': '8080',\n                        'node-scheduler.include-coordinator': 'true',\n                        'query.max-memory': '50GB',\n                        'query.max-memory-per-node': '8GB'}\n                    }\n\n        self.assertEqual(actual_default, expected)\n\n    def test_validate_valid(self):\n        conf = {'node.properties': {},\n                'jvm.config': [],\n                'config.properties': {'coordinator': 'true',\n                                      'discovery.uri': 'http://uri'}}\n        self.assertEqual(conf, coordinator.Coordinator.validate(conf))\n\n    def test_validate_default(self):\n        env.roledefs['coordinator'] = 'localhost'\n        env.roledefs['workers'] = ['localhost']\n        conf = coordinator.Coordinator().build_all_defaults()\n        self.assertEqual(conf, coordinator.Coordinator.validate(conf))\n\n    def test_invalid_conf(self):\n        conf = {'node.propoerties': {}}\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Missing configuration for required file: ',\n                                coordinator.Coordinator.validate, conf)\n\n    def test_invalid_conf_missing_coordinator(self):\n        conf = {'node.properties': {},\n                'jvm.config': [],\n                'config.properties': {'discovery.uri': 'http://uri'}\n                }\n\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Must specify coordinator=true in '\n                                'coordinator\\'s config.properties',\n                                coordinator.Coordinator.validate, conf)\n\n    
def test_invalid_conf_coordinator(self):\n        conf = {'node.properties': {},\n                'jvm.config': [],\n                'config.properties': {'coordinator': 'false',\n                                      'discovery.uri': 'http://uri'}\n                }\n\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Coordinator cannot be false in the '\n                                'coordinator\\'s config.properties',\n                                coordinator.Coordinator.validate, conf)\n\n    @patch('prestoadmin.node.config.write_conf_to_file')\n    @patch('prestoadmin.node.get_presto_conf')\n    def test_get_conf_empty_is_default(self, get_conf_from_file_mock,\n                                       write_mock):\n        env.roledefs['coordinator'] = 'j'\n        env.roledefs['workers'] = ['K', 'L']\n        get_conf_from_file_mock.return_value = {}\n        self.assertEqual(coordinator.Coordinator().get_conf(),\n                         coordinator.Coordinator().build_all_defaults())\n\n    @patch('prestoadmin.node.config.write_conf_to_file')\n    @patch('prestoadmin.node.get_presto_conf')\n    def test_get_conf(self, get_conf_from_file_mock, write_mock):\n        env.roledefs['coordinator'] = 'j'\n        env.roledefs['workers'] = ['K', 'L']\n        file_conf = {'node.properties': {'my-property': 'value',\n                                         'node.environment': 'test'}}\n        get_conf_from_file_mock.return_value = file_conf\n        expected = {'node.properties':\n                    {'my-property': 'value',\n                     'node.environment': 'test'},\n                    'jvm.config': ['-server',\n                                   '-Xmx16G',\n                                   '-XX:-UseBiasedLocking',\n                                   '-XX:+UseG1GC',\n                                   '-XX:G1HeapRegionSize=32M',\n                                   '-XX:+ExplicitGCInvokesConcurrent',\n             
                      '-XX:+HeapDumpOnOutOfMemoryError',\n                                   '-XX:+UseGCOverheadLimit',\n                                   '-XX:+ExitOnOutOfMemoryError',\n                                   '-XX:ReservedCodeCacheSize=512M',\n                                   '-DHADOOP_USER_NAME=hive'],\n                    'config.properties': {\n                        'coordinator': 'true',\n                        'discovery-server.enabled': 'true',\n                        'discovery.uri': 'http://j:8080',\n                        'http-server.http.port': '8080',\n                        'node-scheduler.include-coordinator': 'false',\n                        'query.max-memory': '50GB',\n                        'query.max-memory-per-node': '8GB'}\n                    }\n\n        self.assertEqual(coordinator.Coordinator().get_conf(), expected)\n"
  },
  {
    "path": "tests/unit/test_deploy.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests deploying the presto configuration\n\"\"\"\nfrom mock import patch\n\nfrom fabric.api import env\nfrom prestoadmin import deploy\nfrom tests.base_test_case import BaseTestCase\nfrom tests.unit import SudoResult\n\n\nclass TestDeploy(BaseTestCase):\n    def test_output_format_dict(self):\n        conf = {'a': 'b', 'c': 'd'}\n        self.assertEqual(deploy.output_format(conf),\n                         \"a=b\\nc=d\")\n\n    def test_output_format_list(self):\n        self.assertEqual(deploy.output_format(['a', 'b']),\n                         'a\\nb')\n\n    def test_output_format_string(self):\n        conf = \"A string\"\n        self.assertEqual(deploy.output_format(conf), conf)\n\n    def test_output_format_int(self):\n        conf = 1\n        self.assertEqual(deploy.output_format(conf), str(conf))\n\n    @patch('prestoadmin.deploy.configure_presto')\n    @patch('prestoadmin.deploy.util.get_coordinator_role')\n    @patch('prestoadmin.deploy.env')\n    def test_worker_is_coordinator(self, env_mock, coord_mock, configure_mock):\n        env_mock.host = \"my.host\"\n        coord_mock.return_value = [\"my.host\"]\n        deploy.workers()\n        assert not configure_mock.called\n\n    @patch('prestoadmin.deploy.w.Worker')\n    @patch('prestoadmin.deploy.configure_presto')\n    def test_worker_not_coordinator(self,  configure_mock, get_conf_mock):\n        
env.host = \"my.host1\"\n        env.roledefs[\"worker\"] = [\"my.host1\"]\n        env.roledefs[\"coordinator\"] = [\"my.host2\"]\n        deploy.workers()\n        assert configure_mock.called\n\n    @patch('prestoadmin.deploy.configure_presto')\n    @patch('prestoadmin.deploy.coord.Coordinator')\n    def test_coordinator(self, coord_mock, configure_mock):\n        env.roledefs['coordinator'] = ['master']\n        env.host = 'master'\n        deploy.coordinator()\n        assert configure_mock.called\n\n    @patch('prestoadmin.deploy.sudo')\n    def test_deploy(self, sudo_mock):\n        sudo_mock.return_value = SudoResult()\n        files = {\"jvm.config\": \"a=b\"}\n        deploy.deploy(files, \"/my/remote/dir\")\n        sudo_mock.assert_any_call(\"mkdir -p /my/remote/dir\")\n        sudo_mock.assert_any_call(\"echo 'a=b' > /my/remote/dir/jvm.config\")\n\n    @patch('__builtin__.open')\n    @patch('prestoadmin.deploy.exists')\n    @patch('prestoadmin.deploy.files.append')\n    @patch('prestoadmin.deploy.sudo')\n    def test_deploy_node_properties(self, sudo_mock, append_mock, exists_mock, open_mock):\n        sudo_mock.return_value = SudoResult()\n        exists_mock.return_value = True\n        file_manager = open_mock.return_value.__enter__.return_value\n        file_manager.read.return_value = (\"key=value\")\n        command = (\n            \"if ! 
( grep -q -s 'node.id' /my/remote/dir/node.properties ); \"\n            \"then \"\n            \"uuid=$(uuidgen); \"\n            \"echo node.id=$uuid >> /my/remote/dir/node.properties;\"\n            \"fi; \"\n            \"sed -i '/node.id/!d' /my/remote/dir/node.properties; \")\n        deploy.deploy_node_properties(\"key=value\", \"/my/remote/dir\")\n        sudo_mock.assert_called_with(command)\n        append_mock.assert_called_with(\"/my/remote/dir/node.properties\",\n                                       \"key=value\", True, shell=True)\n\n    @patch('prestoadmin.deploy.sudo')\n    @patch('prestoadmin.deploy.secure_create_file')\n    def test_deploys_as_presto_user(self, secure_create_file_mock, sudo_mock):\n        deploy.deploy({'my_file': 'hello!'}, '/remote/path')\n        secure_create_file_mock.assert_called_with('/remote/path/my_file', 'presto:presto', 600)\n        sudo_mock.assert_called_with(\"echo 'hello!' > /remote/path/my_file\")\n\n    @patch('prestoadmin.deploy.deploy')\n    @patch('prestoadmin.deploy.deploy_node_properties')\n    def test_configure_presto(self, deploy_node_mock, deploy_mock):\n        env.host = 'localhost'\n        conf = {\"node.properties\": {\"key\": \"value\"}, \"jvm.config\": [\"list\"]}\n        remote_dir = \"/my/remote/dir\"\n        deploy.configure_presto(conf, remote_dir)\n        deploy_mock.assert_called_with({\"jvm.config\": \"list\"}, remote_dir)\n\n    def test_escape_quotes_do_nothing(self):\n        text = 'basic_text'\n        self.assertEqual('basic_text', deploy.escape_single_quotes(text))\n\n    def test_escape_quotes_has_quote(self):\n        text = \"A quote! ' A quote!\"\n        self.assertEqual(\"A quote! '\\\\'' A quote!\",\n                         deploy.escape_single_quotes(text))\n"
  },
  {
    "path": "tests/unit/test_expand.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom prestoadmin.standalone.config import _expand_host\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestExpandHost(BaseUnitCase):\n\n    def test_basic_expand_host_01(self):\n        input_host = \"worker0[1-2].example.com\"\n        expected = [\"worker01.example.com\", \"worker02.example.com\"]\n        self.assertEqual(expected, _expand_host(input_host))\n\n    def test_basic_expand_host_02(self):\n        input_host = \"worker[01-02].example.com\"\n        expected = [\"worker01.example.com\", \"worker02.example.com\"]\n        self.assertEqual(expected, _expand_host(input_host))\n\n    def test_expand_host_include_hyphen(self):\n        input_host = \"cdh5-[1-2].example.com\"\n        expected = [\"cdh5-1.example.com\", \"cdh5-2.example.com\"]\n        self.assertEqual(expected, _expand_host(input_host))\n\n    def test_not_expand_host(self):\n        input_host = \"worker1.example.com\"\n        expected = [\"worker1.example.com\"]\n        self.assertEqual(expected, _expand_host(input_host))\n\n    def test_except_expand_host(self):\n        input_host = \"worker0[3-2].example.com\"\n        self.assertRaises(ValueError, _expand_host, input_host)\n"
  },
  {
    "path": "tests/unit/test_fabric_patches.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\nimport logging\n\nfrom fabric import state\nfrom fabric.context_managers import hide, settings\nfrom fabric.decorators import hosts, parallel, roles, serial\nfrom fabric.exceptions import NetworkError\nfrom fabric.tasks import Task\nfrom fudge import Fake, patched_context, with_fakes, clear_expectations\nfrom fabric.state import env\nimport fabric.api\nimport fabric.operations\nimport fabric.utils\nfrom mock import call\nfrom mock import patch\nfrom tests.base_test_case import BaseTestCase\n\nfrom prestoadmin.util.application import Application\nfrom prestoadmin.fabric_patches import execute\n\n\nAPPLICATION_NAME = 'foo'\n\n\n@patch('prestoadmin.util.application.filesystem')\n@patch('prestoadmin.util.application.logging.config')\nclass FabricPatchesTest(BaseTestCase):\n\n    def setUp(self):\n        # basicConfig is a noop if there are already handlers\n        # present on the root logger, remove them all here\n        self.__old_log_handlers = []\n        for handler in logging.root.handlers:\n            self.__old_log_handlers.append(handler)\n            logging.root.removeHandler(handler)\n        # Load prestoadmin so that the monkeypatching is in place\n        BaseTestCase.setUp(self, capture_output=True)\n\n    def tearDown(self):\n        # restore the old log handlers\n        for handler in logging.root.handlers:\n            
logging.root.removeHandler(handler)\n        for handler in self.__old_log_handlers:\n            logging.root.addHandler(handler)\n        BaseTestCase.tearDown(self)\n\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_warn_api_prints_out_message(self, logger_mock, log_conf_mock,\n                                         filesystem_mock):\n        with Application(APPLICATION_NAME):\n            fabric.api.warn(\"Test warning.\")\n\n        logger_mock.warn.assert_has_calls(\n            [\n                call('Test warning.\\n\\nNone\\n'),\n            ]\n        )\n        self.assertEqual(\n            '\\nWarning: Test warning.\\n\\n',\n            self.test_stderr.getvalue()\n        )\n\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_warn_utils_prints_out_message(self, logger_mock, log_conf_mock,\n                                           filesystem_mock):\n        with Application(APPLICATION_NAME):\n            fabric.utils.warn(\"Test warning.\")\n\n        logger_mock.warn.assert_has_calls(\n            [\n                call('Test warning.\\n\\nNone\\n'),\n                ]\n        )\n        self.assertEqual(\n            '\\nWarning: Test warning.\\n\\n',\n            self.test_stderr.getvalue()\n        )\n\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_warn_utils_prints_out_message_with_host(self, logger_mock,\n                                                     log_conf_mock, fs_mock):\n        fabric.api.env.host = 'host'\n        with Application(APPLICATION_NAME):\n            fabric.utils.warn(\"Test warning.\")\n\n        logger_mock.warn.assert_has_calls(\n            [\n                call('[host] Test warning.\\n\\nNone\\n'),\n                ]\n        )\n        self.assertEqual(\n            '\\nWarning: [host] Test warning.\\n\\n',\n            self.test_stderr.getvalue()\n        )\n\n    @patch('fabric.operations._run_command')\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    
def test_run_api_logs_stdout(self, logger_mock, run_command_mock,\n                                 logging_config_mock, filesystem_mock):\n        self._execute_operation_test(run_command_mock, logger_mock,\n                                     fabric.api.run)\n\n    @patch('fabric.operations._run_command')\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_run_op_logs_stdout(self, logger_mock, run_command_mock,\n                                logging_config_mock, filesystem_mock):\n        self._execute_operation_test(run_command_mock, logger_mock,\n                                     fabric.operations.run)\n\n    @patch('fabric.operations._run_command')\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_sudo_api_logs_stdout(self, logger_mock, run_command_mock,\n                                  logging_config_mock, filesystem_mock):\n        self._execute_operation_test(run_command_mock, logger_mock,\n                                     fabric.api.sudo)\n\n    @patch('fabric.operations._run_command')\n    @patch('prestoadmin.fabric_patches._LOGGER')\n    def test_sudo_op_logs_stdout(self, logger_mock, run_command_mock,\n                                 logging_config_mock, filesystem_mock):\n        self._execute_operation_test(run_command_mock, logger_mock,\n                                     fabric.operations.sudo)\n\n    def _execute_operation_test(self, run_command_mock, logger_mock, func):\n        out = fabric.operations._AttributeString('Test warning')\n        out.command = 'echo \"Test warning\"'\n        out.real_command = '/bin/bash echo \"Test warning\"'\n        out.stderr = ''\n        run_command_mock.return_value = out\n\n        fabric.api.env.host_string = 'localhost'\n        with Application(APPLICATION_NAME):\n            func('echo \"Test warning\"')\n            pass\n\n        logger_mock.info.assert_has_calls(\n            [\n                call('\\nCOMMAND: echo \"Test warning\"\\nFULL COMMAND: 
/bin/bash'\n                     ' echo \"Test warning\"\\nSTDOUT: Test warning\\nSTDERR: '),\n                ]\n        )\n\n\n# Most of these tests were taken or modified from fabric's test_tasks.py\n# Below is the license for the fabric code:\n# Copyright (c) 2009-2015 Jeffrey E. Forcier\n# Copyright (c) 2008-2009 Christian Vest Hansen\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n#     * Redistributions of source code must retain the above copyright notice,\n#       this list of conditions and the following disclaimer.\n#     * Redistributions in binary form must reproduce the above copyright\n#       notice,\n#       this list of conditions and the following disclaimer in the\n#       documentation and/or other materials provided with the distribution.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\nclass TestExecute(BaseTestCase):\n    def setUp(self):\n        clear_expectations()\n        super(TestExecute, self).setUp(capture_output=True)\n\n    @with_fakes\n    def test_calls_task_function_objects(self):\n        \"\"\"\n        should execute the passed-in function object\n        \"\"\"\n        execute(Fake(callable=True, expect_call=True))\n\n    @with_fakes\n    def test_should_look_up_task_name(self):\n        \"\"\"\n        should also be able to handle task name strings\n        \"\"\"\n        name = 'task1'\n        commands = {name: Fake(callable=True, expect_call=True)}\n        with patched_context(fabric.state, 'commands', commands):\n            execute(name)\n\n    @with_fakes\n    def test_should_handle_name_of_Task_object(self):\n        \"\"\"\n        handle corner case of Task object referred to by name\n        \"\"\"\n        name = 'task2'\n\n        class MyTask(Task):\n            run = Fake(callable=True, expect_call=True)\n        mytask = MyTask()\n        mytask.name = name\n        commands = {name: mytask}\n        with patched_context(fabric.state, 'commands', commands):\n            execute(name)\n\n    def test_should_abort_if_task_name_not_found(self):\n        \"\"\"\n        should abort if given an invalid task name\n        \"\"\"\n        self.assertRaisesRegexp(SystemExit,\n                                \"'thisisnotavalidtaskname' is not callable or\"\n                                
\" a valid task name\",\n                                execute, 'thisisnotavalidtaskname')\n\n    def test_should_not_abort_if_task_name_not_found_with_skip(self):\n        \"\"\"\n        should not abort if given an invalid task name\n        and skip_unknown_tasks in env\n        \"\"\"\n        env.skip_unknown_tasks = True\n        execute('thisisnotavalidtaskname')\n        del env['skip_unknown_tasks']\n\n    @with_fakes\n    def test_should_pass_through_args_kwargs(self):\n        \"\"\"\n        should pass in any additional args, kwargs to the given task.\n        \"\"\"\n        task = (\n            Fake(callable=True, expect_call=True)\n            .with_args('foo', biz='baz')\n        )\n        execute(task, 'foo', biz='baz')\n\n    @with_fakes\n    def test_should_honor_hosts_kwarg(self):\n        \"\"\"\n        should use hosts kwarg to set run list\n        \"\"\"\n        # Make two full copies of a host list\n        hostlist = ['a', 'b', 'c']\n        hosts = hostlist[:]\n\n        # Side-effect which asserts the value of env.host_string when it runs\n        def host_string():\n            self.assertEqual(env.host_string, hostlist.pop(0))\n        task = Fake(callable=True, expect_call=True).calls(host_string)\n        with hide('everything'):\n            execute(task, hosts=hosts)\n\n    def test_should_honor_hosts_decorator(self):\n        \"\"\"\n        should honor @hosts on passed-in task objects\n        \"\"\"\n        # Make two full copies of a host list\n        hostlist = ['a', 'b', 'c']\n\n        @hosts(*hostlist[:])\n        def task():\n            self.assertEqual(env.host_string, hostlist.pop(0))\n        with hide('running'):\n            execute(task)\n\n    def test_should_honor_roles_decorator(self):\n        \"\"\"\n        should honor @roles on passed-in task objects\n        \"\"\"\n        # Make two full copies of a host list\n        roledefs = {'role1': ['a', 'b', 'c'], 'role2': ['d', 'e']}\n        role_copy 
= roledefs['role1'][:]\n\n        @roles('role1')\n        def task():\n            self.assertEqual(env.host_string, role_copy.pop(0))\n        with settings(hide('running'), roledefs=roledefs):\n            execute(task)\n\n    @with_fakes\n    def test_should_set_env_command_to_string_arg(self):\n        \"\"\"\n        should set env.command to any string arg, if given\n        \"\"\"\n        name = \"foo\"\n\n        def command():\n            self.assert_(env.command, name)\n        task = Fake(callable=True, expect_call=True).calls(command)\n        with patched_context(fabric.state, 'commands', {name: task}):\n            execute(name)\n\n    @with_fakes\n    def test_should_set_env_command_to_name_attr(self):\n        \"\"\"\n        should set env.command to TaskSubclass.name if possible\n        \"\"\"\n        name = \"foo\"\n\n        def command():\n            self.assertEqual(env.command, name)\n        task = (\n            Fake(callable=True, expect_call=True)\n            .has_attr(name=name)\n            .calls(command)\n        )\n        execute(task)\n\n    @with_fakes\n    def test_should_set_all_hosts(self):\n        \"\"\"\n        should set env.all_hosts to its derived host list\n        \"\"\"\n        hosts = ['a', 'b']\n        roledefs = {'r1': ['c', 'd']}\n        roles = ['r1']\n        exclude_hosts = ['a']\n\n        def command():\n            self.assertEqual(set(env.all_hosts), set(['b', 'c', 'd']))\n        task = Fake(callable=True, expect_call=True).calls(command)\n        with settings(hide('everything'), roledefs=roledefs):\n            execute(\n                task, hosts=hosts, roles=roles, exclude_hosts=exclude_hosts\n            )\n\n    def test_should_print_executing_line_per_host(self):\n        \"\"\"\n        should print \"Executing\" line once per host\n        \"\"\"\n        state.output.running = True\n\n        def task():\n            pass\n        execute(task, hosts=['host1', 'host2'])\n        
self.assertEqual(sys.stdout.getvalue(),\n                         \"\"\"[host1] Executing task 'task'\n[host2] Executing task 'task'\n\"\"\")\n\n    def test_should_not_print_executing_line_for_singletons(self):\n        \"\"\"\n        should not print \"Executing\" line for non-networked tasks\n        \"\"\"\n\n        def task():\n            pass\n        with settings(hosts=[]):  # protect against really odd test bleed :(\n            execute(task)\n        self.assertEqual(sys.stdout.getvalue(), \"\")\n\n    def test_should_return_dict_for_base_case(self):\n        \"\"\"\n        Non-network-related tasks should return a dict w/ special key\n        \"\"\"\n        def task():\n            return \"foo\"\n        self.assertEqual(execute(task), {'<local-only>': 'foo'})\n\n    def test_should_return_dict_for_serial_use_case(self):\n        \"\"\"\n        Networked but serial tasks should return per-host-string dict\n        \"\"\"\n        ports = [2200, 2201]\n        hosts = map(lambda x: '127.0.0.1:%s' % x, ports)\n\n        @serial\n        def task():\n            return \"foo\"\n        with hide('everything'):\n            self.assertEqual(execute(task, hosts=hosts), {\n                '127.0.0.1:2200': 'foo',\n                '127.0.0.1:2201': 'foo'\n            })\n\n    @patch('fabric.operations._run_command')\n    @patch('prestoadmin.fabric_patches.log_output')\n    def test_should_preserve_None_for_non_returning_tasks(self, log_mock,\n                                                          run_mock):\n        \"\"\"\n        Tasks which don't return anything should still show up in the dict\n        \"\"\"\n        def local_task():\n            pass\n\n        def remote_task():\n            with hide('everything'):\n                run_mock.return_value = 'hello'\n                fabric.api.run('a command')\n        self.assertEqual(execute(local_task), {'<local-only>': None})\n        with hide('everything'):\n            
self.assertEqual(\n                execute(remote_task, hosts=['host']),\n                {'host': None}\n            )\n\n    def test_should_use_sentinel_for_tasks_that_errored(self):\n        \"\"\"\n        Tasks which errored but didn't abort should contain an eg NetworkError\n        \"\"\"\n        def task():\n            fabric.api.run(\"whoops\")\n        host_string = 'localhost:1234'\n        with settings(hide('everything'), skip_bad_hosts=True):\n            retval = execute(task, hosts=[host_string])\n        assert isinstance(retval[host_string], NetworkError)\n\n    def test_parallel_return_values(self):\n        \"\"\"\n        Parallel mode should still return values as in serial mode\n        \"\"\"\n        @parallel\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task():\n            return env.host_string.split(':')[1]\n        with hide('everything'):\n            retval = execute(task)\n        self.assertEqual(retval, {'127.0.0.1:2200': '2200',\n                                  '127.0.0.1:2201': '2201'})\n\n    @with_fakes\n    def test_should_work_with_Task_subclasses(self):\n        \"\"\"\n        should work for Task subclasses, not just WrappedCallableTask\n        \"\"\"\n        class MyTask(Task):\n            name = \"mytask\"\n            run = Fake(callable=True, expect_call=True)\n        mytask = MyTask()\n        execute(mytask)\n\n    @patch('prestoadmin.fabric_patches.error')\n    def test_parallel_network_error(self, error_mock):\n        \"\"\"\n        network error should call error\n        \"\"\"\n\n        network_error = NetworkError('Network message')\n        fabric.state.env.warn_only = False\n\n        @parallel\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task():\n            raise network_error\n        with hide('everything'):\n            execute(task)\n        error_mock.assert_called_with('Network message',\n                                      
exception=network_error.wrapped,\n                                      func=fabric.utils.abort)\n\n    @patch('prestoadmin.fabric_patches.error')\n    def test_base_exception_error(self, error_mock):\n        \"\"\"\n        base exception should call error\n        \"\"\"\n\n        value_error = ValueError('error message')\n        fabric.state.env.warn_only = True\n\n        @parallel\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task():\n            raise value_error\n        with hide('everything'):\n            execute(task)\n        args = error_mock.call_args\n        self.assertEqual(args[0], ('error message',))\n        self.assertEqual(type(args[1]['exception']), type(value_error))\n        self.assertEqual(args[1]['exception'].args, value_error.args)\n\n    def test_abort_should_not_raise_error(self):\n        \"\"\"\n        abort should not raise an error\n        \"\"\"\n\n        fabric.state.env.warn_only = False\n\n        @parallel\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task():\n            fabric.utils.abort('aborting')\n        with hide('everything'):\n            execute(task)\n\n    def test_abort_in_serial_should_not_raise_error(self):\n        \"\"\"\n        abort in serial should not raise an error\n        \"\"\"\n\n        fabric.state.env.warn_only = False\n\n        @serial\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task():\n            fabric.utils.abort('aborting')\n        with hide('everything'):\n            execute(task)\n\n    def test_arg_exception_should_raise_error(self):\n        @hosts('127.0.0.1:2200', '127.0.0.1:2201')\n        def task(arg):\n            pass\n        with hide('everything'):\n            self.assertRaisesRegexp(TypeError,\n                                    'task\\(\\) takes exactly 1 argument'\n                                    ' \\(0 given\\)', execute, task)\n"
  },
  {
    "path": "tests/unit/test_file.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the script module\n\"\"\"\n\nfrom mock import patch, call\nfrom prestoadmin import file\n\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestFile(BaseUnitCase):\n\n    @patch('prestoadmin.file.sudo')\n    @patch('prestoadmin.file.put')\n    def test_script_basic(self, put_mock, sudo_mock):\n        file.run('/my/local/path/script.sh')\n        put_mock.assert_called_with('/my/local/path/script.sh',\n                                    '/tmp/script.sh')\n        sudo_mock.assert_has_calls(\n            [call('chmod u+x /tmp/script.sh'), call('/tmp/script.sh'),\n             call('rm /tmp/script.sh')], any_order=False)\n\n    @patch('prestoadmin.file.sudo')\n    @patch('prestoadmin.file.put')\n    def test_script_specify_dir(self, put_mock, sudo_mock):\n        file.run('/my/local/path/script.sh', '/my/remote/path')\n        put_mock.assert_called_with('/my/local/path/script.sh',\n                                    '/my/remote/path/script.sh')\n        sudo_mock.assert_has_calls(\n            [call('chmod u+x /my/remote/path/script.sh'),\n             call('/my/remote/path/script.sh'),\n             call('rm /my/remote/path/script.sh')], any_order=False)\n"
  },
  {
    "path": "tests/unit/test_main.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\ntest_prestoadmin\n----------------------------------\n\nTests for `prestoadmin` module.\n\"\"\"\nfrom optparse import Values\nimport os\nimport unittest\nfrom fabric import state\n\nimport fabric\nfrom fabric.state import env\nfrom mock import patch\n\nimport prestoadmin\nfrom prestoadmin import main\nfrom prestoadmin import topology\n\n# LINTED: the @patch decorators in mock_load_topology and mock_empty_topology\n# require that this import be here in order to work properly.\nfrom prestoadmin.standalone.config import StandaloneConfig  # noqa\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\n#\n# There is a certain amount of magic happening here.\n#\n# Most of the tests in test_main require that there's configuration information\n# loaded in order to validate the argument parsing logic. In order to avoid\n# every test in here having to know about the internals of how that\n# configuration gets loaded, main.py provides load_config as a patch point.\n#\n# The tests that need config loaded can patch that with one of the following\n# functions as a side-effect. Instead of main.load_config being called, the\n# function returned by e.g. 
mock_load_topology gets called, and it patches\n# the config load implementation to achieve the desired result.\n#\n# The downside of this approach is that any test function that uses this ends\n# up getting an unused mock as a parameter. The upside is that when config load\n# inevitably changes, there will be 3 places to change instead of every test.\n#\ndef mock_load_topology():\n    @patch('tests.unit.test_main.StandaloneConfig._get_conf_from_file')\n    def loader(load_config_callback, get_conf_mock):\n        get_conf_mock.return_value = {'username': 'user',\n                                      'port': 1234,\n                                      'coordinator': 'master',\n                                      'workers': ['slave1', 'slave2']}\n        return load_config_callback()\n    return loader\n\n\ndef mock_empty_topology():\n    @patch('tests.unit.test_main.StandaloneConfig._get_conf_from_file')\n    def loader(load_config_callback, get_conf_mock):\n        get_conf_mock.return_value = {}\n        return load_config_callback()\n    return loader\n\n\ndef mock_error_topology():\n    @patch('tests.unit.test_main.StandaloneConfig._get_conf_from_file')\n    @patch('prestoadmin.standalone.config.validate',\n           side_effect=ConfigurationError())\n    def loader(load_config_callback, validate_mock, get_conf_mock):\n        return load_config_callback()\n    return loader\n\n\nclass BaseMainCase(BaseUnitCase):\n    def setUp(self):\n        super(BaseMainCase, self).setUp(capture_output=True, load_config=False)\n        # Empty out commands from previous tests.\n        fabric.state.commands = {}\n\n    def _run_command_compare_to_file(self, command, exit_status, filename):\n        \"\"\"\n            Compares stdout from the CLI to the given file\n        \"\"\"\n        current_dir = os.path.abspath(os.path.dirname(__file__))\n        expected_path = os.path.join(current_dir, filename)\n        input_file = open(expected_path, 'r')\n        text = 
\"\".join(input_file.readlines())\n        input_file.close()\n        self._run_command_compare_to_string(command, exit_status,\n                                            stdout_text=text)\n\n    def _format_expected_actual(self, expected, actual):\n        return '\\t\\t======== vv EXPECTED vv ========\\n%s\\n' \\\n               '\\t\\t========       !=       ========\\n%s\\n' \\\n               '\\t\\t======== ^^  ACTUAL  ^^ ========\\n' % (expected, actual)\n\n    def _run_command_compare_to_string(self, command, exit_status,\n                                       stdout_text=None, stderr_text=None):\n        \"\"\"\n            Compares stdout from the CLI to the given string\n        \"\"\"\n        try:\n            main.parse_and_validate_commands(command)\n        except SystemExit as e:\n            self.assertEqual(e.code, exit_status)\n\n        if stdout_text is not None:\n            actual = self.test_stdout.getvalue()\n            self.assertEqual(stdout_text, actual,\n                             self._format_expected_actual(stdout_text, actual))\n\n        if stderr_text is not None:\n            actual = self.test_stderr.getvalue()\n            self.assertEqual(stderr_text, self.test_stderr.getvalue(),\n                             self._format_expected_actual(stderr_text, actual))\n\n\nclass TestMain(BaseMainCase):\n\n    # Everything in here needs some kind of mode set. 
Since they were all\n    # written against standalone originally, standalone it is.\n    @patch('prestoadmin.mode.get_mode', return_value='standalone')\n    def setUp(self, mode_mock):\n        super(TestMain, self).setUp()\n        reload(prestoadmin)\n\n    def test_version(self):\n        # Note: this will have to be updated whenever we have a new version.\n        self._run_command_compare_to_string([\"--version\"], 0,\n                                            stdout_text=\"presto-admin %s\\n\" %\n                                            prestoadmin.__version__)\n\n    @patch('prestoadmin.main._LOGGER')\n    def test_argument_parsing_with_invalid_command(self, logger_mock):\n        self._run_command_compare_to_string(\n            [\"hello\", \"world\"],\n            2,\n            stderr_text=\"\\nWarning: Command not found:\\n    hello world\\n\\n\"\n        )\n        self.assertTrue(\"Commands:\" in self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main._LOGGER')\n    def test_argument_parsing_with_short_command(self, logger_mock):\n        self._run_command_compare_to_string(\n            [\"topology\"],\n            2,\n            stderr_text=\"\\nWarning: Command not found:\\n    topology\\n\\n\"\n        )\n        self.assertTrue(\"Commands:\" in self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_argument_parsing_with_valid_command(self, unused_load_mock):\n        commands = main.parse_and_validate_commands([\"topology\", \"show\"])\n        self.assertEqual(commands[0][0], \"topology.show\")\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_argument_parsing_with_arguments(self, unused_load_mock):\n        commands = main.parse_and_validate_commands([\"topology\", \"show\", \"f\"])\n        self.assertEqual(commands[0][0], \"topology.show\")\n        self.assertEqual(commands[0][1], [\"f\"])\n\n    def 
test_arbitrary_remote_shell_disabled(self):\n        self._run_command_compare_to_string(\n            [\"--\", \"echo\", \"hello\"],\n            2,\n            stderr_text=\"\\nWarning: Arbitrary remote shell commands not \"\n                        \"supported.\\n\\n\"\n        )\n        self.assertTrue(\"Commands:\" in self.test_stdout.getvalue())\n\n    def assertDefaultRoledefs(self):\n        self.assertEqual(main.state.env.roledefs,\n                         {'coordinator': ['master'],\n                          'worker': ['slave1', 'slave2'],\n                          'all': ['master', 'slave1', 'slave2']})\n\n    def assertDefaultHosts(self):\n        self.assertEqual(main.state.env.hosts, ['master', 'slave1', 'slave2'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_hosts_on_cli_overrides_topology(self, unused_mock_load):\n        try:\n            main.main(['--hosts', 'master,slave1', 'topology', 'show'])\n        except SystemExit as e:\n            self.assertEqual(e.code, 0)\n\n        self.assertDefaultRoledefs()\n        self.assertEqual(main.state.env.hosts, ['master', 'slave1'])\n        self.assertEqual(main.api.env.hosts, ['master', 'slave1'])\n\n    def test_describe(self):\n        self._run_command_compare_to_string(\n            ['-d', 'topology', 'show'],\n            0,\n            \"Displaying detailed information for task 'topology show':\\n\\n   \"\n            \" Shows the current topology configuration for the cluster \"\n            \"(including the\\n    coordinators, workers, SSH port, and SSH \"\n            \"username)\\n\\n\"\n        )\n\n    def test_describe_with_args(self):\n        self._run_command_compare_to_string(\n            ['-d', 'topology', 'show', 'arg'],\n            0,\n            \"Displaying detailed information for task 'topology show':\\n\\n   \"\n            \" Shows the current topology configuration for the cluster \"\n            \"(including the\\n   
 coordinators, workers, SSH port, and SSH \"\n            \"username)\\n\\n\"\n        )\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    @patch('prestoadmin.main.getpass.getpass')\n    def test_initial_password(self, pass_mock, unused_mock_load):\n        try:\n            main.parse_and_validate_commands(['-I', 'topology', 'show'])\n        except SystemExit as e:\n            self.assertEqual(0, e.code)\n        pass_mock.assert_called_once_with('Initial value for env.password: ')\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_env_vars_persisted(self, unused_mock_load):\n        try:\n            main.main(['topology', 'show'])\n        except SystemExit as e:\n            self.assertEqual(e.code, 0)\n        self.assertDefaultHosts()\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_empty_topology())\n    def test_topology_defaults_override_fabric_defaults(\n            self, unused_mock_load):\n        self.remove_runs_once_flag(topology.show)\n        try:\n            main.main(['topology', 'show'])\n        except SystemExit as e:\n            self.assertEqual(e.code, 0)\n        self.assertEqual(['localhost'], main.state.env.hosts)\n        self.assertEqual({'coordinator': ['localhost'],\n                          'worker': ['localhost'], 'all': ['localhost']},\n                         main.state.env.roledefs)\n        self.assertEqual(22, main.state.env.port)\n        self.assertEqual('root', main.state.env.user)\n\n    def test_fabfile_option_not_present(self):\n        self._run_command_compare_to_string([\"--fabfile\"], 2)\n        self.assertTrue(\"no such option: --fabfile\" in\n                        self.test_stderr.getvalue())\n\n    def test_rcfile_option_not_present(self):\n        self._run_command_compare_to_string([\"--config\"], 2)\n        self.assertTrue(\"no such option: --config\" in\n                        self.test_stderr.getvalue())\n\n  
  @patch('prestoadmin.main.crawl')\n    @patch('prestoadmin.fabric_patches.crawl')\n    def test_has_args_expecting_none(self, crawl_mock, crawl_mock_main):\n        def task():\n            \"\"\"This is my task\"\"\"\n            pass\n\n        crawl_mock.return_value = task\n        crawl_mock_main.return_value = task\n        state.env.nodeps = False\n        try:\n            main.run_tasks([('my task', ['arg1'], {}, [], [], [])])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertEqual('Incorrect number of arguments to task.\\n\\n'\n                         'Displaying detailed information for task '\n                         '\\'my task\\':\\n\\n    This is my task\\n\\n',\n                         self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.crawl')\n    @patch('prestoadmin.fabric_patches.crawl')\n    def test_too_few_args(self, crawl_mock, crawl_mock_main):\n        def task(arg1):\n            \"\"\"This is my task\"\"\"\n            pass\n\n        crawl_mock.return_value = task\n        crawl_mock_main.return_value = task\n        state.env.nodeps = False\n        try:\n            main.run_tasks([('my task', [], {}, [], [], [])])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertEqual('Incorrect number of arguments to task.\\n\\n'\n                         'Displaying detailed information for task '\n                         '\\'my task\\':\\n\\n    This is my task\\n\\n',\n                         self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.crawl')\n    @patch('prestoadmin.fabric_patches.crawl')\n    def test_too_many_args(self, crawl_mock, crawl_mock_main):\n        def task(arg1):\n            \"\"\"This is my task\"\"\"\n            pass\n\n        crawl_mock.return_value = task\n        crawl_mock_main.return_value = task\n        state.env.nodeps = False\n        try:\n            main.run_tasks([('my task', ['arg1', 'arg2'], 
{}, [], [], [])])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertEqual('Incorrect number of arguments to task.\\n\\n'\n                         'Displaying detailed information for task '\n                         '\\'my task\\':\\n\\n    This is my task\\n\\n',\n                         self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.crawl')\n    @patch('prestoadmin.fabric_patches.crawl')\n    def test_too_many_args_has_optionals(self, crawl_mock, crawl_mock_main):\n        def task(optional=None):\n            \"\"\"This is my task\"\"\"\n            pass\n\n        crawl_mock.return_value = task\n        crawl_mock_main.return_value = task\n        state.env.nodeps = False\n        try:\n            main.run_tasks([('my task', ['arg1', 'arg2'], {}, [], [], [])])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertEqual('Incorrect number of arguments to task.\\n\\n'\n                         'Displaying detailed information for task '\n                         '\\'my task\\':\\n\\n    This is my task\\n\\n',\n                         self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.crawl')\n    @patch('prestoadmin.fabric_patches.crawl')\n    def test_too_few_args_has_optionals(self, crawl_mock, crawl_mock_main):\n        def task(arg1, optional=None):\n            \"\"\"This is my task\"\"\"\n            pass\n\n        crawl_mock.return_value = task\n        crawl_mock_main.return_value = task\n        state.env.nodeps = False\n        try:\n            main.run_tasks([('my task', [], {}, [], [], [])])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertEqual('Incorrect number of arguments to task.\\n\\n'\n                         'Displaying detailed information for task '\n                         '\\'my task\\':\\n\\n    This is my task\\n\\n',\n                         self.test_stdout.getvalue())\n\n    
@patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_env_parallel(self, unused_mock_load):\n        main.parse_and_validate_commands(['server', 'install',\n                                          \"local_path\", \"--serial\"])\n        self.assertEqual(env.parallel, False)\n\n        main.parse_and_validate_commands(['server', 'install',\n                                          \"local_path\"])\n        self.assertEqual(env.parallel, True)\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_set_vars(self, unused_mock_load_topology):\n        main.parse_and_validate_commands(\n            ['--set', 'skip_bad_hosts,shell=,hosts=master\\,slave1\\,slave2,'\n                      'skip_unknown_tasks=True,use_shell=False',\n             'server', 'install', \"local_path\"])\n        self.assertEqual(env.skip_bad_hosts, True)\n        self.assertEqual(env.shell, '')\n        self.assertEqual(env.hosts, ['master', 'slave1', 'slave2'])\n        self.assertEqual(env.use_shell, False)\n        self.assertEqual(env.skip_unknown_tasks, True)\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_nodeps_check(self, unused_mock_load):\n        env.nodeps = True\n        try:\n            main.main(['topology', 'show', '--nodeps'])\n        except SystemExit as e:\n            self.assertEqual(e.code, 2)\n        self.assertTrue('Invalid argument --nodeps to task: topology.show\\n'\n                        in self.test_stderr.getvalue())\n        self.assertTrue('Displaying detailed information for task '\n                        '\\'topology show\\':\\n\\n    Shows the current topology '\n                        'configuration for the cluster (including the\\n    '\n                        'coordinators, workers, SSH port, and SSH username)'\n                        '\\n\\n' in self.test_stdout.getvalue())\n\n    @patch('prestoadmin.main.load_config', 
side_effect=mock_load_topology())\n    def test_skip_bad_hosts(self, unused_mock_load):\n        main.parse_and_validate_commands(['server', 'install',\n                                          \"local_path\"])\n        self.assertEqual(env.skip_bad_hosts, True)\n\n    def test_get_default_options(self):\n        options = Values({'k1': 'dv1', 'k2': 'dv2'})\n        non_default_options = Values({'k2': 'V2', 'k3': 'V3'})\n        default_options = main.get_default_options(options,\n                                                   non_default_options)\n        self.assertEqual(default_options, Values({'k1': 'dv1'}))\n\n    #\n    # The env.port situation is currently a special kind of hell. There are a\n    # bunch of different ways for port to get set:\n    # 1) Topology exists, port in it: port is an int.\n    # 2) Topology exists, port is NOT in it: port is an int.\n    # 3) --port CLI option: port is a string.\n    # 4) Interactive config: port is an int.\n    #\n    # What should it be? Probably an int, since it's a port *number* and all.\n    # What will we likely settle on? Probably string, because that's what\n    #   fabric sets the default to in env.\n    # Is this a terrible situation? Yes; we need to clean it up.\n    #\n    # Note that interactive config isn't tested here, because getting input fed\n
    # into main.main()'s stdin seems problematic with all the magic the tests\n    # are already doing.\n    #\n\n    # PORT CASE 1\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_unchanged_hosts(self, unused_mock_load):\n        \"\"\"\n        Possible alternate name for the test: test_does_my_magic_work\n        \"\"\"\n        main.parse_and_validate_commands(\n            args=['server', 'uninstall'])\n        self.assertDefaultHosts()\n        self.assertDefaultRoledefs()\n        self.assertEqual(env.port, 1234)\n        self.assertEqual(env.user, 'user')\n        self.assertNotIn('conf_hosts', env)\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_specific_hosts_long_option(self, unused_mock_load):\n        main.parse_and_validate_commands(\n            args=['--hosts', 'master', 'server', 'uninstall'])\n        self.assertEqual(env.hosts, ['master'])\n        self.assertNotIn('cli_hosts', env)\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_specific_hosts_short_option(self, unused_mock_load):\n        main.parse_and_validate_commands(\n            args=['-H', 'master,slave2', 'server', 'uninstall'])\n        self.assertEqual(env.hosts, ['master', 'slave2'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_generic_set_hosts(self, unused_mock_load):\n        main.parse_and_validate_commands(\n            args=['--set', 'hosts=master\\,slave2', 'server', 'uninstall'])\n        self.assertEqual(env.hosts, ['master', 'slave2'])\n        self.assertNotIn('env_settings', env)\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_generic_invalid_host(self, unused_mock_load):\n        self.assertRaises(\n            ConfigurationError, main.parse_and_validate_commands,\n            args=['--set', 'hosts=bogushost\\,slave2', 
'server', 'uninstall'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_specific_overrides_generic(self, unused_mock_load):\n        main.parse_and_validate_commands(\n            args=['-H', 'master,slave1', '--set', 'hosts=master\\,slave2',\n                  'server', 'uninstall'])\n        self.assertEqual(env.hosts, ['master', 'slave1'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_host_not_in_conf(self, unused_mock_load):\n        self.assertRaises(\n            ConfigurationError, main.parse_and_validate_commands,\n            args=['--hosts', 'non_conf_host', 'server', 'uninstall'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_host_not_in_conf_short_option(self, unused_mock_load):\n        self.assertRaises(\n            ConfigurationError, main.parse_and_validate_commands,\n            args=['-H', 'non_conf_host', 'server', 'uninstall'])\n\n    # PORT CASE 3\n    @patch('prestoadmin.main.load_config', side_effect=mock_load_topology())\n    def test_cli_overrides_config(self, unused_mock_load):\n        main.parse_and_validate_commands(\n            args=['-H', 'master,slave1', '-u', 'other_user', '--port', '2179',\n                  'server', 'uninstall'])\n        self.assertEqual(env.hosts, ['master', 'slave1'])\n        self.assertEqual(env.user, 'other_user')\n        self.assertEqual(env.port, '2179')\n\n    # PORT CASE 2\n    @patch('prestoadmin.main.load_config', side_effect=mock_empty_topology())\n    def test_default_topology(self, unused_mock_load):\n        main.parse_and_validate_commands(args=['server', 'uninstall'])\n        self.assertEqual(env.port, 22)\n        self.assertEqual(env.user, 'root')\n        self.assertEqual(env.hosts, ['localhost'])\n\n    @patch('prestoadmin.main.load_config', side_effect=mock_error_topology())\n    def test_error_topology(self, unused_mock_load):\n        
self.assertRaises(ConfigurationError, main.parse_and_validate_commands,\n                          args=['server', 'uninstall'])\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/unit/test_package.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom fabric.state import env\nfrom fabric.operations import _AttributeString\nfrom mock import patch\nfrom prestoadmin import package\nfrom prestoadmin.util import constants\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestPackage(BaseUnitCase):\n\n    @patch('prestoadmin.package.os.path.isfile')\n    @patch('prestoadmin.package.sudo')\n    @patch('prestoadmin.package.put')\n    def test_deploy_is_called(self, mock_put, mock_sudo, mock_isfile):\n        env.host = 'any_host'\n        mock_isfile.return_value = True\n        package.deploy('/any/path/rpm')\n        mock_sudo.assert_called_with('mkdir -p ' +\n                                     constants.REMOTE_PACKAGES_PATH)\n        mock_put.assert_called_with('/any/path/rpm',\n                                    constants.REMOTE_PACKAGES_PATH,\n                                    use_sudo=True)\n\n    @patch('prestoadmin.package.sudo')\n    def test_rpm_install(self, mock_sudo):\n        env.host = 'any_host'\n        env.nodeps = False\n        package.rpm_install('test.rpm')\n        mock_sudo.assert_called_with('rpm -i '\n                                     '/opt/prestoadmin/packages/test.rpm')\n\n    @patch('prestoadmin.package.sudo')\n    def test_rpm_install_nodeps(self, mock_sudo):\n        env.host = 'any_host'\n        env.nodeps = True\n        package.rpm_install('test.rpm')\n        
mock_sudo.assert_called_with('rpm -i --nodeps '\n                                     '/opt/prestoadmin/packages/test.rpm')\n\n    @patch('prestoadmin.package._rpm_upgrade')\n    @patch('prestoadmin.package.sudo')\n    def test_rpm_upgrade(self, mock_sudo, mock_rpm_upgrade):\n        env.host = 'any_host'\n        env.nodeps = False\n        mock_sudo.return_value = _AttributeString('test_package_name')\n        mock_sudo.return_value.succeeded = True\n        package.rpm_upgrade('test.rpm')\n\n        mock_sudo.assert_any_call('rpm -qp --queryformat \\'%{NAME}\\' '\n                                  '/opt/prestoadmin/packages/test.rpm',\n                                  quiet=True)\n\n        mock_rpm_upgrade.assert_any_call('/opt/prestoadmin/packages/test.rpm')\n\n    @patch('prestoadmin.package.rpm_install')\n    @patch('prestoadmin.package.deploy')\n    @patch('prestoadmin.package.check_if_valid_rpm')\n    def test_install(self, mock_chksum, mock_deploy, mock_install):\n        env.host = 'any_host'\n        self.remove_runs_once_flag(package.install)\n        package.install('/any/path/rpm')\n        mock_chksum.assert_called_with('/any/path/rpm')\n        mock_deploy.assert_called_with('/any/path/rpm')\n        mock_install.assert_called_with('rpm')\n\n    @patch('prestoadmin.package.local')\n    @patch('prestoadmin.package.abort')\n    def test_check_rpm_checksum(self, mock_abort, mock_local):\n        mock_local.return_value = lambda: None\n        setattr(mock_local.return_value, 'stderr', '')\n        setattr(mock_local.return_value, 'stdout', 'sha1 MD5 NOT OK')\n        package.check_if_valid_rpm('/any/path/rpm')\n\n        mock_local.assert_called_with('rpm -K --nosignature /any/path/rpm',\n                                      capture=True)\n        mock_abort.assert_called_with('Corrupted RPM. 
'\n                                      'Try downloading the RPM again.')\n\n    @patch('prestoadmin.package.local')\n    @patch('prestoadmin.package.abort')\n    def test_check_rpm_checksum_err(self, mock_abort, mock_local):\n        mock_local.return_value = lambda: None\n        setattr(mock_local.return_value, 'stderr', 'Not an rpm package')\n        setattr(mock_local.return_value, 'stdout', '')\n        package.check_if_valid_rpm('/any/path/rpm')\n\n        mock_local.assert_called_with('rpm -K --nosignature /any/path/rpm',\n                                      capture=True)\n        mock_abort.assert_called_with('Not an rpm package')\n\n    @patch('prestoadmin.package.os.path.isfile')\n    @patch('prestoadmin.package.sudo')\n    @patch('prestoadmin.package.put')\n    def test_deploy_with_fallback_location(self, mock_put, mock_sudo, mock_isfile):\n        env.host = 'any_host'\n        mock_isfile.return_value = True\n        package.deploy('/any/path/rpm')\n        mock_put.return_value = lambda: None\n        setattr(mock_put.return_value, 'succeeded', False)\n        package.deploy('/any/path/rpm')\n        mock_put.assert_called_with('/any/path/rpm',\n                                    constants.REMOTE_PACKAGES_PATH,\n                                    use_sudo=True,\n                                    temp_dir='/tmp')\n\n    @patch('prestoadmin.package.os.path.isfile')\n    def test_deploy_invalid_local_path(self, mock_isfile):\n        mock_isfile.return_value = False\n        invalid_path = '/invalid/path'\n        self.assertRaisesRegexp(SystemExit, 'RPM file not found at %s' % invalid_path, package.deploy, invalid_path)\n\n    @patch('prestoadmin.package.uninstall')\n    def test_uninstall(self, mock_uninstall):\n        env.host = 'any_host'\n        env.nodeps = False\n        self.remove_runs_once_flag(package.uninstall)\n\n        package.uninstall('any_rpm')\n\n        mock_uninstall.assert_called_once_with('any_rpm')\n\n    
@patch('prestoadmin.package.sudo')\n    def test_rpm_uninstall(self, mock_sudo):\n        env.host = 'any_host'\n        env.nodeps = False\n\n        package.rpm_uninstall('anyrpm')\n\n        mock_sudo.assert_called_with('rpm -e anyrpm')\n\n    @patch('prestoadmin.package.sudo')\n    def test_rpm_uninstall_nodeps(self, mock_sudo):\n        env.host = 'any_host'\n        env.nodeps = True\n\n        package.rpm_uninstall('anyrpm')\n\n        mock_sudo.assert_called_with('rpm -e --nodeps anyrpm')\n
\n    @patch('prestoadmin.package.is_rpm_installed')\n    def test_rpm_uninstall_non_existing(self, mock_is_rpm_installed):\n        env.host = 'any_host'\n        env.force = False\n        mock_is_rpm_installed.return_value = False\n\n        try:\n            package.rpm_uninstall('anyrpm')\n            self.fail('expected exception to be raised here')\n        except SystemExit as e:\n            self.assertEqual(e.message, '[any_host] Package is not installed: anyrpm')\n\n    @patch('prestoadmin.package.is_rpm_installed')\n    @patch('prestoadmin.package.sudo')\n    def test_rpm_uninstall_non_existing_with_force(self, mock_sudo, mock_is_rpm_installed):\n        env.host = 'any_host'\n        env.force = True\n        env.nodeps = False\n        mock_is_rpm_installed.return_value = False\n\n        package.rpm_uninstall('anyrpm')\n\n        self.assertEqual(0, mock_sudo.call_count)\n"
  },
  {
    "path": "tests/unit/test_plugin.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nunit tests for plugin module\n\"\"\"\nfrom mock import patch\nfrom prestoadmin import plugin\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestPlugin(BaseUnitCase):\n    @patch('prestoadmin.plugin.write')\n    def test_add_jar(self, write_mock):\n        plugin.add_jar('/my/local/path.jar', 'hive-hadoop2')\n        write_mock.assert_called_with(\n            '/my/local/path.jar', '/usr/lib/presto/lib/plugin/hive-hadoop2')\n\n    @patch('prestoadmin.plugin.write')\n    def test_add_jar_provide_dir(self, write_mock):\n        plugin.add_jar('/my/local/path.jar', 'hive-hadoop2',\n                       '/etc/presto/plugin')\n        write_mock.assert_called_with('/my/local/path.jar',\n                                      '/etc/presto/plugin/hive-hadoop2')\n"
  },
  {
    "path": "tests/unit/test_presto_conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nTest the presto_conf module\n\"\"\"\n\nimport re\nfrom mock import patch\nfrom prestoadmin.presto_conf import get_presto_conf, validate_presto_conf\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestPrestoConf(BaseTestCase):\n\n    @patch('prestoadmin.presto_conf.os.path.isdir')\n    @patch('prestoadmin.presto_conf.os.listdir')\n    @patch('prestoadmin.presto_conf.get_conf_from_properties_file')\n    @patch('prestoadmin.presto_conf.get_conf_from_config_file')\n    def test_get_presto_conf(self, config_mock, props_mock, listdir_mock,\n                             isdir_mock):\n        isdir_mock.return_value = True\n        listdir_mock.return_value = ['log.properties', 'jvm.config', ]\n        config_mock.return_value = ['prop1', 'prop2']\n        props_mock.return_value = {'a': '1', 'b': '2'}\n        conf = get_presto_conf('dummy/dir')\n        config_mock.assert_called_with('dummy/dir/jvm.config')\n        props_mock.assert_called_with('dummy/dir/log.properties')\n        self.assertEqual(conf, {'log.properties': {'a': '1', 'b': '2'},\n                                'jvm.config': ['prop1', 'prop2']})\n\n    @patch('prestoadmin.presto_conf.os.listdir')\n    @patch('prestoadmin.presto_conf.os.path.isdir')\n    @patch('prestoadmin.presto_conf.get_conf_from_properties_file')\n    def 
test_get_non_presto_file(self, get_mock, isdir_mock, listdir_mock):\n        isdir_mock.return_value = True\n        listdir_mock.return_value = ['test.properties']\n        get_presto_conf('dummy/dir')\n        self.assertFalse(get_mock.called)\n
\n    def test_conf_not_exists_is_empty(self):\n        self.assertEqual(get_presto_conf('/does/not/exist'), {})\n\n    def test_valid_conf(self):\n        conf = {'node.properties': {}, 'jvm.config': [],\n                'config.properties': {'discovery.uri': 'http://uri'}}\n        self.assertEqual(validate_presto_conf(conf), conf)\n\n    def test_invalid_conf(self):\n        conf = {'jvm.config': [],\n                'config.properties': {}}\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Missing configuration for required file:',\n                                validate_presto_conf,\n                                conf)\n
\n    def test_invalid_node_type(self):\n        conf = {'node.properties': '', 'jvm.config': [],\n                'config.properties': {}}\n        self.assertRaisesRegexp(ConfigurationError,\n                                'node.properties must be an object with key-'\n                                'value property pairs',\n                                validate_presto_conf,\n                                conf)\n\n    def test_invalid_jvm_type(self):\n        conf = {'node.properties': {}, 'jvm.config': {},\n                'config.properties': {}}\n        self.assertRaisesRegexp(ConfigurationError,\n                                re.escape('jvm.config must contain a json '\n                                          'array of jvm arguments ([arg1, '\n                                          'arg2, arg3])'),\n                                validate_presto_conf,\n                                conf)\n
\n    def test_invalid_config_type(self):\n        conf = {'node.properties': {}, 'jvm.config': [],\n                'config.properties': []}\n        self.assertRaisesRegexp(ConfigurationError,\n                                'config.properties must be an object with key-'\n                                'value property pairs',\n                                validate_presto_conf,\n                                conf)\n"
  },
  {
    "path": "tests/unit/test_presto_config.py",
"content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom StringIO import StringIO\n\nfrom prestoadmin.util.presto_config import PrestoConfig\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestPrestoConfig(BaseUnitCase):\n    realworld = \"\"\"\ncoordinator=true\ndiscovery-server.enabled=true\ndiscovery.uri=http://localhost:8285\nhttp-server.http.port=8285\nnode-scheduler.include-coordinator=true\nquery.max-memory-per-node=8GB\nquery.max-memory=50GB\nhttp-server.https.port=8444\nhttp-server.https.enabled=true\nhttp-server.https.keystore.path=/tmp/mykeystore.jks\nhttp-server.https.keystore.key=testldap\nhttp-server.authentication.type=LDAP\nauthentication.ldap.url=ldaps://10.25.171.180:636\nauthentication.ldap.user-bind-pattern=${USER}@presto.testldap.com\n    \"\"\"\n
\n    def _get_presto_config(self, config):\n        config_file = StringIO(config)\n        return PrestoConfig.from_file(config_file)\n\n    def _assert_use_https(self, expected, config):\n        presto_config = self._get_presto_config(config)\n        self.assertEqual(presto_config.use_https(), expected)\n\n    def test_use_https(self):\n        self._assert_use_https(False, \"\")\n        self._assert_use_https(False, \"http-server.http.enabled=true\")\n        self._assert_use_https(False, \"http-server.https.enabled=true\")\n\n        self._assert_use_https(False, \"\"\"\nhttp-server.http.enabled=true\nhttp-server.https.enabled=true\n        \"\"\")\n
\n        self._assert_use_https(True, \"\"\"\nhttp-server.http.enabled=false\nhttp-server.https.enabled=true\n        \"\"\")\n\n        self._assert_use_https(False, self.realworld)\n\n    def _assert_use_ldap(self, expected, config):\n        presto_config = self._get_presto_config(config)\n        self.assertEqual(presto_config.use_ldap(), expected)\n
\n    def test_use_ldap(self):\n        self._assert_use_ldap(False, \"\")\n        self._assert_use_ldap(False, \"http-server.authentication.type=LDAP\")\n\n        self._assert_use_ldap(False, \"\"\"\nhttp-server.http.enabled=false\nhttp-server.https.enabled=true\nhttp-server.authentication.type=A_BIG_BRASS_KEY\n        \"\"\")\n\n        self._assert_use_ldap(True, \"\"\"\nhttp-server.http.enabled=false\nhttp-server.https.enabled=true\nhttp-server.authentication.type=LDAP\n        \"\"\")\n\n        self._assert_use_ldap(False, self.realworld)\n"
  },
  {
    "path": "tests/unit/test_prestoclient.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport socket\nfrom httplib import HTTPException, HTTPConnection\n\nfrom fabric.operations import _AttributeString\nfrom mock import patch, PropertyMock\n\nfrom prestoadmin.prestoclient import URL_TIMEOUT_MS, PrestoClient\nfrom prestoadmin.util.exception import InvalidArgumentError\nfrom tests.base_test_case import BaseTestCase\nfrom tests.unit.base_unit_case import PRESTO_CONFIG\n\n\n@patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n       return_value=PRESTO_CONFIG)\nclass TestPrestoClient(BaseTestCase):\n    def test_no_sql(self, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        self.assertRaisesRegexp(InvalidArgumentError,\n                                \"SQL query missing\",\n                                client.run_sql, \"\", )\n\n    def test_no_server(self, mock_presto_config):\n        client = PrestoClient(\"\", 'any_user')\n        self.assertRaisesRegexp(InvalidArgumentError,\n                                \"Server IP missing\",\n                                client.run_sql, \"any_sql\")\n\n    def test_no_user(self, mock_presto_config):\n        client = PrestoClient('any_host', \"\")\n        self.assertRaisesRegexp(InvalidArgumentError,\n                                \"Username missing\",\n                                client.run_sql, \"any_sql\")\n\n    
@patch('prestoadmin.prestoclient.HTTPConnection')\n    def test_default_request_called(self, mock_conn, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        headers = {\"X-Presto-Catalog\": \"hive\", \"X-Presto-Schema\": \"default\",\n                   \"X-Presto-User\": 'any_user', \"X-Presto-Source\": \"presto-admin\"}\n\n        client.run_sql(\"any_sql\")\n        mock_conn.assert_called_with('any_host', 8080, False, URL_TIMEOUT_MS)\n        mock_conn().request.assert_called_with(\"POST\", \"/v1/statement\",\n                                               \"any_sql\", headers)\n        self.assertTrue(mock_conn().getresponse.called)\n\n    @patch('prestoadmin.prestoclient.HTTPConnection')\n    def test_connection_failed(self, mock_conn, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        client.run_sql(\"any_sql\")\n\n        self.assertTrue(mock_conn().close.called)\n        self.assertFalse(client.run_sql(\"any_sql\"))\n\n    @patch('prestoadmin.prestoclient.HTTPConnection')\n    def test_http_call_failed(self, mock_conn, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        mock_conn.side_effect = HTTPException(\"Error\")\n        self.assertFalse(client.run_sql(\"any_sql\"))\n\n        mock_conn.side_effect = socket.error(\"Error\")\n        self.assertFalse(client.run_sql(\"any_sql\"))\n\n    @patch.object(HTTPConnection, 'request')\n    @patch.object(HTTPConnection, 'getresponse')\n    def test_http_answer_valid(self, mock_response, mock_request, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        mock_response.return_value.read.return_value = '{}'\n        type(mock_response.return_value).status = \\\n            PropertyMock(return_value=200)\n        self.assertEquals(client.run_sql('any_sql'), [])\n\n    @patch.object(HTTPConnection, 'request')\n    @patch.object(HTTPConnection, 'getresponse')\n    def 
test_http_answer_not_json(self, mock_response,\n                                  mock_request, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        mock_response.return_value.read.return_value = 'NOT JSON!'\n        type(mock_response.return_value).status =\\\n            PropertyMock(return_value=200)\n        self.assertRaisesRegexp(ValueError, 'No JSON object could be decoded',\n                                client.run_sql, 'any_sql')\n
\n    @patch('prestoadmin.prestoclient.HTTPConnection')\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_run_sql_get_port(self, sudo_mock, conn_mock, mock_presto_config):\n        client = PrestoClient('any_host', 'any_user')\n        client.rows = ['hello']\n        client.next_uri = 'hello'\n        client.response_from_server = {'hello': 'hello'}\n        sudo_mock.return_value = _AttributeString('http-server.http.port=8080')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 0\n        client.run_sql('select * from nation')\n        self.assertEqual(client.port, 8080)\n        self.assertEqual(client.rows, [])\n        self.assertEqual(client.next_uri, '')\n        self.assertEqual(client.response_from_server, {})\n
\n    def test_create_authorization_headers(self, mock_presto_config):\n        auth_headers = PrestoClient._create_auth_headers(\"Aladdin\", \"open sesame\")\n        expected_auth_headers = {\"Authorization\": \"Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\"}\n        self.assertEqual(auth_headers, expected_auth_headers)\n
\n    @patch('prestoadmin.prestoclient.error')\n    def test_create_authorization_headers_fails_with_empty_user(self, mock_error, mock_presto_config):\n        PrestoClient._create_auth_headers(\"\", \"open sesame\")\n        error_message = 'LDAP user (taken from internal-communication.authentication.ldap.user in ' \\\n                        '/etc/presto/config.properties on the coordinator) cannot be null or empty'\n        mock_error.assert_called_once_with(error_message)\n
\n    @patch('prestoadmin.prestoclient.error')\n    def test_create_authorization_headers_fails_with_null_user(self, mock_error, mock_presto_config):\n        PrestoClient._create_auth_headers(None, \"open sesame\")\n        error_message = 'LDAP user (taken from internal-communication.authentication.ldap.user in ' \\\n                        '/etc/presto/config.properties on the coordinator) cannot be null or empty'\n        mock_error.assert_called_once_with(error_message)\n\n    @patch('prestoadmin.prestoclient.error')\n    def test_create_authorization_headers_fails_with_empty_password(self, mock_error, mock_presto_config):\n        PrestoClient._create_auth_headers(\"Aladdin\", \"\")\n        error_message = 'LDAP password (taken from internal-communication.authentication.ldap.password in ' \\\n                        '/etc/presto/config.properties on the coordinator) cannot be null or empty'\n        mock_error.assert_called_once_with(error_message)\n\n    @patch('prestoadmin.prestoclient.error')\n    def test_create_authorization_headers_fails_with_colon_in_user(self, mock_error, mock_presto_config):\n        PrestoClient._create_auth_headers(\"Aladdin:1\", \"open sesame\")\n        error_message = \"LDAP user cannot contain ':': Aladdin:1\"\n        mock_error.assert_called_once_with(error_message)\n"
  },
  {
    "path": "tests/unit/test_server.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the presto install\n\"\"\"\nimport os\nimport tempfile\n\nfrom fabric.api import env\nfrom fabric.operations import _AttributeString\nfrom mock import patch, call, MagicMock\n\nfrom prestoadmin import server\nfrom prestoadmin.prestoclient import PrestoClient\nfrom prestoadmin.server import INIT_SCRIPTS\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.exception import ConfigFileNotFoundError, \\\n    ConfigurationError\nfrom prestoadmin.util.fabricapi import get_host_list\nfrom prestoadmin.util.local_config_util import get_catalog_directory\nfrom tests.unit.base_unit_case import BaseUnitCase, PRESTO_CONFIG\n\n\nclass TestInstall(BaseUnitCase):\n    SERVER_FAIL_MSG = 'Could not verify server status for: failed_node1\\n' \\\n                      'This could mean that the server failed to start or that there was no coordinator or worker up.' 
\\\n                      ' Please check ' \\\n                      + constants.DEFAULT_PRESTO_SERVER_LOG_FILE + ' and ' + \\\n                      constants.DEFAULT_PRESTO_LAUNCHER_LOG_FILE\n\n    def setUp(self):\n        self.remove_runs_once_flag(server.status)\n        self.remove_runs_once_flag(server.install)\n        self.maxDiff = None\n        super(TestInstall, self).setUp(capture_output=True)\n\n    @patch('prestoadmin.server.package.check_if_valid_rpm')\n    def check_corrupt_rpm_removed_and_returns_none(self, mock_valid_rpm, is_absolute_path):\n        mock_valid_rpm.side_effect = SystemExit('...Corrupted RPM...')\n        fd = -1\n        absolute_path_corrupt_rpm = None\n        try:\n            fd, absolute_path_corrupt_rpm = tempfile.mkstemp()\n            if is_absolute_path:\n                local_finder = server.LocalPrestoRpmFinder(absolute_path_corrupt_rpm)\n            else:\n                relative_path_corrupt_rpm = os.path.basename(absolute_path_corrupt_rpm)\n                local_finder = server.LocalPrestoRpmFinder(relative_path_corrupt_rpm)\n            self.assertTrue(local_finder.find_local_presto_rpm() is None)\n            self.assertTrue(mock_valid_rpm.called)\n        finally:\n            os.close(fd)\n            self.assertRaises(OSError, os.remove, absolute_path_corrupt_rpm)\n\n    def test_check_corrupt_rpm_at_absolute_path_is_removed_and_returns_none(self):\n        self.check_corrupt_rpm_removed_and_returns_none(is_absolute_path=True)\n\n    def test_check_corrupt_rpm_at_relative_path_is_removed_and_returns_none(self):\n        self.check_corrupt_rpm_removed_and_returns_none(is_absolute_path=False)\n\n    @patch('prestoadmin.server.package.check_if_valid_rpm')\n    def check_nonexistent_rpm_returns_none(self, mock_valid_rpm, is_absolute_path):\n        mock_valid_rpm.side_effect = SystemExit('...File does not exist...')\n        fd = -1\n        absolute_path_nonexistent_rpm = None\n        try:\n            fd, 
absolute_path_nonexistent_rpm = tempfile.mkstemp()\n            if is_absolute_path:\n                local_finder = server.LocalPrestoRpmFinder(absolute_path_nonexistent_rpm)\n            else:\n                relative_path_nonexistent_rpm = os.path.basename(absolute_path_nonexistent_rpm)\n                local_finder = server.LocalPrestoRpmFinder(relative_path_nonexistent_rpm)\n        finally:\n            os.close(fd)\n            os.remove(absolute_path_nonexistent_rpm)\n        self.assertTrue(local_finder.find_local_presto_rpm() is None)\n\n    def test_check_nonexistent_rpm_at_absolute_path_returns_none(self):\n        self.check_nonexistent_rpm_returns_none(is_absolute_path=True)\n\n    def test_check_nonexistent_rpm_at_relative_path_returns_none(self):\n        self.check_nonexistent_rpm_returns_none(is_absolute_path=False)\n\n    @patch('prestoadmin.server.package.check_if_valid_rpm')\n    def check_find_valid_rpm_returns_absolute_path(self, mock_valid_rpm, is_absolute_path):\n        fd = -1\n        absolute_path_valid_rpm = None\n        try:\n            fd, absolute_path_valid_rpm = tempfile.mkstemp()\n            if is_absolute_path:\n                local_finder = server.LocalPrestoRpmFinder(absolute_path_valid_rpm)\n            else:\n                relative_path_valid_rpm = os.path.basename(absolute_path_valid_rpm)\n                local_finder = server.LocalPrestoRpmFinder(relative_path_valid_rpm)\n            self.assertEqual(local_finder.find_local_presto_rpm(), absolute_path_valid_rpm)\n            self.assertTrue(mock_valid_rpm.called)\n        finally:\n            os.close(fd)\n            os.remove(absolute_path_valid_rpm)\n\n    def test_check_find_valid_rpm_at_absolute_path_returns_absolute_path(self):\n        self.check_find_valid_rpm_returns_absolute_path(is_absolute_path=True)\n\n    def test_check_find_valid_rpm_at_relative_path_returns_absolute_path(self):\n        
self.check_find_valid_rpm_returns_absolute_path(is_absolute_path=False)\n\n    @patch('prestoadmin.server.urllib2.urlopen')\n    def check_content_length(self, mock_urlopen, is_header_present):\n        url_response = MagicMock()\n        if is_header_present:\n            url_response.info.return_value = {'Content-Length': '123'}\n        else:\n            url_response.info.return_value = {}\n\n        mock_urlopen.return_value = url_response\n        url_handler = server.UrlHandler('https://www.google.com')\n        if is_header_present:\n            self.assertEqual(url_handler.get_content_length(), 123)\n        else:\n            self.assertTrue(url_handler.get_content_length() is None)\n\n    def test_get_content_length_returns_content_length(self):\n        self.check_content_length(is_header_present=True)\n\n    def test_get_content_length_missing_header_returns_none(self):\n        self.check_content_length(is_header_present=False)\n\n    @patch('prestoadmin.server.urllib2.urlopen')\n    def check_download_file_name(self, mock_urlopen, is_header_present, is_version_present):\n        url_response = MagicMock()\n        if is_header_present:\n            url_response.info.return_value = {'Content-Disposition': 'attachment; filename=\"test.txt\"'}\n        else:\n            url_response.info.return_value = {}\n        mock_urlopen.return_value = url_response\n        url_handler = server.UrlHandler('https://www.google.com')\n        if is_header_present:\n            self.assertEqual(url_handler.get_download_file_name(), 'test.txt')\n        else:\n            if is_version_present:\n                self.assertEqual(url_handler.get_download_file_name('0.148'), 'presto-server-rpm-0.148.rpm')\n            else:\n                self.assertEqual(url_handler.get_download_file_name(), server.DEFAULT_RPM_NAME)\n\n    def test_get_download_file_name_without_version_returns_header_file_name(self):\n        self.check_download_file_name(is_header_present=True, 
is_version_present=False)\n\n    def test_get_download_file_name_with_version_returns_header_file_name(self):\n        self.check_download_file_name(is_header_present=True, is_version_present=True)\n\n    def test_get_download_file_name_not_in_header_without_version_returns_default_name(self):\n        self.check_download_file_name(is_header_present=False, is_version_present=False)\n\n    def test_get_download_file_name_not_in_header_with_version_returns_default_name(self):\n        self.check_download_file_name(is_header_present=False, is_version_present=True)\n\n    @patch('prestoadmin.server.UrlHandler')\n    def test_download_rpm(self, mock_url_handler):\n        instance_url_handler = mock_url_handler.return_value\n        instance_url_handler.read_block.side_effect = ['abc', 'def', None]\n        instance_url_handler.get_content_length.return_value = 6\n        fd = -1\n        absolute_path_valid_rpm = None\n        try:\n            fd, absolute_path_valid_rpm = tempfile.mkstemp()\n            instance_url_handler.get_download_file_name.return_value = os.path.basename(absolute_path_valid_rpm)\n            downloader = server.PrestoRpmDownloader(instance_url_handler)\n            downloader.download_rpm('0.148')\n            instance_url_handler.get_download_file_name.assert_called_with('0.148')\n            with open(absolute_path_valid_rpm) as download_file:\n                self.assertEqual(download_file.read(), 'abcdef')\n        finally:\n            os.close(fd)\n            os.remove(absolute_path_valid_rpm)\n\n    def check_version(self, version, expect_valid):\n        rpm_fetcher = server.PrestoRpmFetcher(version)\n        is_valid_version = rpm_fetcher.check_valid_version()\n        if expect_valid:\n            self.assertTrue(is_valid_version)\n        else:\n            self.assertFalse(is_valid_version)\n\n    def test_check_version_empty_string_fails(self):\n        self.check_version('', False)\n\n    def 
test_check_version_major_succeeds(self):\n        self.check_version('1', True)\n\n    def test_check_version_major_extra_period_fails(self):\n        self.check_version('1.', False)\n\n    def test_check_version_minor_succeeds(self):\n        self.check_version('1.2', True)\n\n    def test_check_version_minor_extra_period_fails(self):\n        self.check_version('1.2.', False)\n\n    def test_check_version_patch_succeeds(self):\n        self.check_version('1.2.3', True)\n\n    def test_check_version_patch_extra_period_fails(self):\n        self.check_version('1.2.3.', False)\n\n    def test_check_version_multiple_numbers_succeeds(self):\n        self.check_version('111.222.333', True)\n\n    def test_check_version_with_dashes_fails(self):\n        self.check_version('1-2-3', False)\n\n    def test_check_version_extra_fields_fails(self):\n        self.check_version('1.2.3.4', False)\n\n    @staticmethod\n    def set_up_specifier_find_and_download_mocks(mock_download_rpm, mock_find_local, rpm_path, location=None):\n        if location == 'local':\n            mock_find_local.return_value = rpm_path\n        elif location == 'download':\n            mock_download_rpm.return_value = rpm_path\n            mock_find_local.return_value = None\n        elif location == 'none':\n            mock_download_rpm.return_value = None\n            mock_find_local.return_value = None\n        else:\n            exit('Cannot mock because of invalid location: %s' % location)\n\n    def call_and_assert_install_with_rpm_specifier(self, mock_download_rpm, mock_check_rpm, mock_execute, location,\n                                                   rpm_specifier, rpm_path):\n        if location == 'local' or location == 'download':\n            server.install(rpm_specifier)\n            if location == 'local':\n                mock_download_rpm.assert_not_called()\n            else:\n                self.assertTrue(mock_download_rpm.called)\n            
mock_check_rpm.assert_called_with(rpm_path)\n            mock_execute.assert_called_with(server.deploy_install_configure,\n                                            rpm_path, hosts=get_host_list())\n        elif location == 'none':\n            self.assertRaises(SystemExit, server.install, rpm_specifier)\n            mock_check_rpm.assert_not_called()\n            mock_execute.assert_not_called()\n        else:\n            exit('Cannot assert because of invalid location: %s' % location)\n\n    @patch('prestoadmin.server.execute')\n    @patch('prestoadmin.server.package.check_if_valid_rpm')\n    @patch('prestoadmin.server.LocalPrestoRpmFinder.find_local_presto_rpm')\n    @patch('prestoadmin.server.PrestoRpmDownloader.download_rpm')\n    def check_rpm_specifier_with_location(self, mock_download_rpm, mock_find_local,\n                                          mock_check_rpm, mock_execute, rpm_specifier, location=None):\n        # This function should not mock the UrlHandler class so that urls will be opened\n        # This checks that the urls that the installer tries to reach are still valid\n        rpm_path = '/path/to/download_or_found/rpm'\n        TestInstall.set_up_specifier_find_and_download_mocks(mock_download_rpm, mock_find_local, rpm_path, location)\n        self.call_and_assert_install_with_rpm_specifier(mock_download_rpm, mock_check_rpm, mock_execute, location,\n                                                        rpm_specifier, rpm_path)\n\n    def test_specifier_as_latest_download(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='latest', location='download')\n\n    def test_specifier_as_latest_found_locally(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='latest', location='local')\n\n    def test_specifier_as_latest_not_located(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='latest', location='none')\n\n    def test_specifier_as_url_download(self):\n        
self.check_rpm_specifier_with_location(rpm_specifier='http://search.maven.org/remotecontent?filepath=com/'\n                                                             'facebook/presto/presto-server-rpm/0.148/'\n                                                             'presto-server-rpm-0.148.rpm',\n                                               location='download')\n\n    def test_specifier_as_url_found_locally(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='http://search.maven.org/remotecontent?filepath=com/'\n                                                             'facebook/presto/presto-server-rpm/0.148/'\n                                                             'presto-server-rpm-0.148.rpm',\n                                               location='local')\n\n    def test_specifier_as_url_not_located(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='http://search.maven.org/remotecontent?filepath=com/'\n                                                             'facebook/presto/presto-server-rpm/0.148/'\n                                                             'presto-server-rpm-0.148.rpm',\n                                               location='none')\n\n    def test_specifier_as_version_download(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='0.144.6', location='download')\n\n    def test_specifier_as_version_found_locally(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='0.144.6', location='local')\n\n    def test_specifier_as_version_not_located(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='0.144.6', location='none')\n\n    def test_specifier_as_local_path_without_file_scheme_found_locally(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='/path/to/rpm', location='local')\n\n    def test_specifier_as_local_path_without_file_scheme_not_located(self):\n        
self.check_rpm_specifier_with_location(rpm_specifier='/path/to/rpm', location='none')\n\n    def test_specifier_as_local_path_with_file_scheme_found_locally(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='file:///path/to/rpm', location='local')\n\n    def test_specifier_as_local_path_with_file_scheme_not_located(self):\n        self.check_rpm_specifier_with_location(rpm_specifier='file:///path/to/rpm', location='none')\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.package.deploy_install')\n    @patch('prestoadmin.server.update_configs')\n    def test_deploy_install_configure(self, mock_update, mock_install,\n                                      mock_sudo):\n        rpm_specifier = \"/any/path/rpm\"\n        mock_sudo.side_effect = self.mock_fail_then_succeed()\n\n        server.deploy_install_configure(rpm_specifier)\n        mock_install.assert_called_with(rpm_specifier)\n        self.assertTrue(mock_update.called)\n        mock_sudo.assert_called_with('getent passwd presto', quiet=True)\n\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.package.is_rpm_installed')\n    @patch('prestoadmin.package.rpm_uninstall')\n    def test_uninstall_is_called(self, mock_package_rpm_uninstall, mock_package_is_rpm_installed, mock_version_check):\n        env.host = \"any_host\"\n        mock_package_is_rpm_installed.side_effect = [False, True]\n\n        server.uninstall()\n\n        mock_version_check.assert_called_with()\n        mock_package_is_rpm_installed.assert_called_with('presto-server')\n        mock_package_rpm_uninstall.assert_called_with('presto-server')\n        self.assertTrue(mock_package_is_rpm_installed.call_count == 2)\n        self.assertTrue(mock_package_rpm_uninstall.call_count == 1)\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch('prestoadmin.util.remote_config_util.lookup_in_config')\n    
@patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.query_server_for_status')\n    @patch('prestoadmin.server.warn')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_start_fail(self, mock_port_in_use,\n                               mock_version_check, mock_warn,\n                               mock_query_for_status, mock_sudo, mock_run, mock_config,\n                               mock_presto_config):\n        mock_query_for_status.return_value = False\n        env.host = \"failed_node1\"\n        mock_version_check.return_value = ''\n        mock_port_in_use.return_value = 0\n        mock_config.return_value = None\n        server.start()\n        mock_sudo.assert_called_with('set -m; ' + INIT_SCRIPTS + ' start')\n        mock_version_check.assert_called_with()\n        mock_warn.assert_called_with(self.SERVER_FAIL_MSG)\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_server_status')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_start(self, mock_port_in_use, mock_version_check,\n                          mock_check_status, mock_sudo):\n        env.host = 'good_node'\n        mock_version_check.return_value = ''\n        mock_check_status.return_value = True\n        mock_port_in_use.return_value = 0\n        server.start()\n        mock_sudo.assert_called_with('set -m; ' + INIT_SCRIPTS + ' start')\n        mock_version_check.assert_called_with()\n        self.assertEqual('Waiting to make sure we can connect to the Presto '\n                         'server on good_node, please wait. 
This check will '\n                         'time out after 2 minutes if the server does not '\n                         'respond.\\nServer started successfully on: '\n                         'good_node\\n', self.test_stdout.getvalue())\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_start_bad_presto_version(self, mock_port_in_use,\n                                             mock_version_check, mock_sudo):\n        env.host = \"good_node\"\n        mock_version_check.return_value = 'Presto not installed'\n        server.start()\n        mock_version_check.assert_called_with()\n        self.assertEqual(False, mock_sudo.called)\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_start_port_in_use(self, mock_port_in_use,\n                                      mock_version_check, mock_sudo):\n        env.host = \"good_node\"\n        mock_version_check.return_value = ''\n        mock_port_in_use.return_value = 1\n        server.start()\n        mock_version_check.assert_called_with()\n        mock_port_in_use.assert_called_with('good_node')\n        self.assertEqual(False, mock_sudo.called)\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_status_for_control_commands')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_restart_port_in_use(self, mock_port_in_use,\n                                        mock_version_check, mock_check_status,\n                                        mock_sudo):\n        env.host = \"good_node\"\n        mock_version_check.return_value = ''\n        mock_port_in_use.return_value = 1\n        server.restart()\n        mock_sudo.assert_called_with('set -m; ' + INIT_SCRIPTS + ' stop')\n        
mock_version_check.assert_called_with()\n        self.assertEqual(False, mock_check_status.called)\n\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    @patch('prestoadmin.server.sudo')\n    def test_server_stop(self, mock_sudo, mock_port_in_use,\n                         mock_version_check):\n        mock_version_check.return_value = ''\n        server.stop()\n        mock_version_check.assert_called_with()\n        self.assertEqual(False, mock_port_in_use.called)\n        mock_sudo.assert_called_with('set -m; ' + INIT_SCRIPTS + ' stop')\n\n    @patch('prestoadmin.util.remote_config_util.lookup_in_config')\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_server_status')\n    @patch('prestoadmin.server.warn')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_restart_fail(self, mock_port_in_use, mock_version_check,\n                                 mock_warn, mock_status, mock_sudo,\n                                 mock_config):\n        mock_status.return_value = False\n        mock_config.return_value = None\n        env.host = \"failed_node1\"\n        mock_version_check.return_value = ''\n        mock_port_in_use.return_value = 0\n        server.restart()\n        mock_sudo.assert_any_call('set -m; ' + INIT_SCRIPTS + ' stop')\n        mock_sudo.assert_any_call('set -m; ' + INIT_SCRIPTS + ' start')\n        mock_version_check.assert_called_with()\n\n        mock_warn.assert_called_with(self.SERVER_FAIL_MSG)\n\n    @patch('prestoadmin.util.remote_config_util.lookup_port')\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.check_server_status')\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.is_port_in_use')\n    def test_server_restart(self, mock_port_in_use, mock_version_check,\n                            mock_status, mock_sudo, 
mock_lookup_host):\n        mock_status.return_value = True\n        env.host = 'good_node'\n        mock_version_check.return_value = ''\n        mock_port_in_use.return_value = 0\n        server.restart()\n        mock_sudo.assert_any_call('set -m; ' + INIT_SCRIPTS + ' stop')\n        mock_sudo.assert_any_call('set -m; ' + INIT_SCRIPTS + ' start')\n        mock_version_check.assert_called_with()\n        self.assertEqual('Waiting to make sure we can connect to the Presto '\n                         'server on good_node, please wait. This check will '\n                         'time out after 2 minutes if the server does not '\n                         'respond.\\nServer started successfully on: '\n                         'good_node\\n', self.test_stdout.getvalue())\n\n    @patch('prestoadmin.server.catalog')\n    @patch('prestoadmin.server.configure_cmds.deploy')\n    @patch('prestoadmin.server.os.path.exists')\n    @patch('prestoadmin.server.os.makedirs')\n    @patch('prestoadmin.server.util.filesystem.os.fdopen')\n    @patch('prestoadmin.server.util.filesystem.os.open')\n    def test_update_config(self, mock_open, mock_fdopen, mock_makedir,\n                           mock_path_exists, mock_config, mock_connector):\n        e = ConfigFileNotFoundError(\n            message='problems', config_path='config_path')\n        mock_connector.add.side_effect = e\n        mock_path_exists.side_effect = [False, False]\n\n        server.update_configs()\n\n        mock_config.assert_called_with()\n        mock_makedir.assert_called_with(get_catalog_directory())\n        mock_open.assert_called_with(os.path.join(get_catalog_directory(),\n                                                  'tpch.properties'),\n                                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)\n        file_manager = mock_fdopen.return_value.__enter__.return_value\n        file_manager.write.assert_called_with(\"connector.name=tpch\")\n\n    
@patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.lookup_string_config')\n    @patch.object(PrestoClient, 'run_sql')\n    def test_check_success_status(self, mock_run_sql, string_config_mock, mock_run, mock_presto_config):\n        env.roledefs = {\n            'coordinator': ['Node1'],\n            'worker': ['Node1', 'Node2', 'Node3', 'Node4'],\n            'all': ['Node1', 'Node2', 'Node3', 'Node4']\n        }\n        env.hosts = env.roledefs['all']\n        env.host = 'Node1'\n        string_config_mock.return_value = 'Node1'\n        mock_run_sql.return_value = [['Node2', 'some stuff'], ['Node1', 'some other stuff']]\n        self.assertEqual(server.check_server_status(), True)\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.lookup_string_config')\n    @patch('prestoadmin.server.query_server_for_status')\n    def test_check_success_fail(self, mock_query_for_status, string_config_mock, mock_run,\n                                mock_presto_config):\n        env.roledefs = {\n            'coordinator': ['Node1'],\n            'worker': ['Node1', 'Node2', 'Node3', 'Node4'],\n            'all': ['Node1', 'Node2', 'Node3', 'Node4']\n        }\n        env.hosts = env.roledefs['all']\n        env.host = 'Node1'\n        string_config_mock.return_value = 'Node1'\n        mock_query_for_status.return_value = False\n        self.assertEqual(server.check_server_status(), False)\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch('prestoadmin.server.execute')\n    @patch('prestoadmin.server.get_presto_version')\n    @patch('prestoadmin.server.presto_installed')\n    @patch.object(PrestoClient, 'run_sql')\n    def 
test_status_from_each_node(\n            self, mock_run_sql, mock_presto_installed, mock_get_presto_version, mock_execute, mock_presto_config):\n        env.roledefs = {\n            'coordinator': ['Node1'],\n            'worker': ['Node1', 'Node2', 'Node3', 'Node4'],\n            'all': ['Node1', 'Node2', 'Node3', 'Node4']\n        }\n        env.hosts = env.roledefs['all']\n\n        mock_get_presto_version.return_value = '0.97-SNAPSHOT'\n        mock_run_sql.side_effect = [\n            [['select * from system.runtime.nodes']],\n            [['hive'], ['system'], ['tpch']],\n            [['http://active/statement', 'presto-main:0.97-SNAPSHOT', True]],\n            [['http://inactive/stmt', 'presto-main:0.99-SNAPSHOT', False]],\n            [[]],\n            [['http://servrdown/statement', 'any', True]]\n        ]\n        mock_execute.side_effect = [{\n            'Node1': ('IP1', True, ''),\n            'Node2': ('IP2', True, ''),\n            'Node3': ('IP3', True, ''),\n            'Node4': Exception('Timed out trying to connect to Node4')\n        }]\n        env.host = 'Node1'\n        server.status()\n\n        expected = self.read_file_output('/resources/server_status_out.txt')\n        self.assertEqual(\n            expected.splitlines(),\n            self.test_stdout.getvalue().splitlines()\n        )\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch('prestoadmin.server.check_presto_version')\n    @patch('prestoadmin.server.service')\n    @patch('prestoadmin.server.get_ext_ip_of_node')\n    def test_collect_node_information(self, mock_ext_ip, mock_service,\n                                      mock_version, mock_presto_config):\n        env.roledefs = {\n            'coordinator': ['Node1'],\n            'all': ['Node1']\n        }\n        mock_ext_ip.side_effect = ['IP1', 'IP3', 'IP4']\n        mock_service.side_effect = [True, False, Exception('Not running')]\n       
 mock_version.side_effect = ['', 'Presto not installed', '', '']\n\n        self.assertEqual(('IP1', True, ''), server.collect_node_information())\n        self.assertEqual(('Unknown', False, 'Presto not installed'),\n                         server.collect_node_information())\n        self.assertEqual(('IP3', False, ''), server.collect_node_information())\n        self.assertEqual(('IP4', False, ''),\n                         server.collect_node_information())\n\n    @patch('prestoadmin.server.sudo')\n    def test_get_external_ip(self, mock_nodeuuid):\n        client_mock = MagicMock(PrestoClient)\n        client_mock.run_sql.return_value = [['IP']]\n        self.assertEqual(server.get_ext_ip_of_node(client_mock), 'IP')\n\n    @patch('prestoadmin.server.sudo')\n    @patch('prestoadmin.server.warn')\n    def test_warn_external_ip(self, mock_warn, mock_nodeuuid):\n        env.host = 'node'\n        client_mock = MagicMock(PrestoClient)\n        client_mock.run_sql.return_value = [['IP1'], ['IP2']]\n        server.get_ext_ip_of_node(client_mock)\n        mock_warn.assert_called_with(\"More than one external ip found for \"\n                                     \"node. 
There could be multiple nodes \"\n                                     \"associated with the same node.id\")\n\n    def read_file_output(self, filename):\n        dir = os.path.abspath(os.path.dirname(__file__))\n        result_file = open(dir + filename, 'r')\n        file_content = \"\".join(result_file.readlines())\n        result_file.close()\n        return file_content\n\n    @patch('prestoadmin.util.presto_config.PrestoConfig.coordinator_config',\n           return_value=PRESTO_CONFIG)\n    @patch.object(PrestoClient, 'run_sql')\n    @patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.warn')\n    def test_warning_presto_version_not_installed(self, mock_warn, mock_run,\n                                                  mock_run_sql, mock_presto_config):\n        env.host = 'node1'\n        env.roledefs['coordinator'] = ['node1']\n        env.roledefs['worker'] = ['node1']\n        env.roledefs['all'] = ['node1']\n        env.hosts = env.roledefs['all']\n        output = _AttributeString('package presto is not installed')\n        output.succeeded = False\n        mock_run.return_value = output\n        env.host = 'node1'\n        server.collect_node_information()\n        installation_warning = 'Presto is not installed.'\n        mock_warn.assert_called_with(installation_warning)\n\n    @patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.lookup_port')\n    @patch('prestoadmin.server.error')\n    def test_fail_if_port_is_in_use(self, mock_error, mock_port, mock_run):\n        mock_port.return_value = 1010\n        env.host = 'any_host'\n        mock_run.return_value = 'some_string'\n        server.is_port_in_use(env.host)\n        mock_error.assert_called_with('Server failed to start on any_host. 
'\n                                      'Port 1010 already in use')\n\n    @patch('prestoadmin.server.run')\n    @patch('prestoadmin.server.lookup_port')\n    @patch('prestoadmin.server.warn')\n    def test_no_warn_if_port_free(self, mock_warn, mock_port, mock_run):\n        mock_port.return_value = 1010\n        env.host = 'any_host'\n        mock_run.return_value = ''\n        server.is_port_in_use(env.host)\n        self.assertEqual(False, mock_warn.called)\n\n    @patch('prestoadmin.server.lookup_port')\n    @patch('prestoadmin.server.warn')\n    def test_no_warn_if_port_lookup_fail(self, mock_warn, mock_port):\n        e = ConfigurationError()\n        mock_port.side_effect = e\n        env.host = 'any_host'\n        self.assertFalse(server.is_port_in_use(env.host))\n        self.assertEqual(False, mock_warn.called)\n\n    @patch('prestoadmin.server.run')\n    def test_multiple_version_rpms(self, mock_run):\n        output1 = _AttributeString('package presto is not installed')\n        output1.succeeded = False\n        output2 = _AttributeString('presto-server-rpm-0.115t-1.x86_64')\n        output2.succeeded = True\n        output3 = _AttributeString('Presto is not installed.')\n        output3.succeeded = False\n        output4 = _AttributeString('0.111.SNAPSHOT')\n        output4.succeeded = True\n\n        mock_run.side_effect = [output1, output2, output3, output4]\n\n        expected = server.check_presto_version()\n        mock_run.assert_has_calls([\n            call('rpm -q presto'),\n            call('rpm -q presto-server-rpm')\n        ])\n        self.assertEqual(expected, '')\n\n    def mock_fail_then_succeed(self):\n        output1 = _AttributeString()\n        output1.succeeded = False\n        output2 = _AttributeString()\n        output2.succeeded = True\n        return [output1, output2]\n"
  },
  {
    "path": "tests/unit/test_topology.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the presto topology config\n\"\"\"\nimport unittest\n\nfrom mock import patch\nfrom fabric.state import env\n\nfrom prestoadmin import topology\nfrom prestoadmin.standalone import config\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestTopologyConfig(BaseUnitCase):\n    def setUp(self):\n        super(TestTopologyConfig, self).setUp(capture_output=True)\n\n    @patch('tests.unit.test_topology.StandaloneConfig._get_conf_from_file')\n    def test_fill_conf(self, get_conf_from_file_mock):\n        get_conf_from_file_mock.return_value = \\\n            {\"username\": \"john\", \"port\": \"100\"}\n\n        config = StandaloneConfig()\n        conf = config.read_conf()\n\n        self.assertEqual(conf, {\"username\": \"john\", \"port\": 100,\n                                \"coordinator\": \"localhost\",\n                                \"workers\": [\"localhost\"]})\n\n    def test_invalid_property(self):\n        conf = {\"username\": \"me\",\n                \"port\": \"1234\",\n                \"coordinator\": \"coordinator\",\n                \"workers\": [\"node1\", \"node2\"],\n                \"invalid property\": \"fake\"}\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Invalid 
property: invalid property\",\n                                config.validate, conf)\n\n    def test_basic_valid_conf(self):\n        conf = {\"username\": \"user\",\n                \"port\": 1234,\n                \"coordinator\": \"my.coordinator\",\n                \"workers\": [\"my.worker1\", \"my.worker2\", \"my.worker3\"]}\n        self.assertEqual(config.validate(conf.copy()), conf)\n\n    def test_valid_string_port_to_int(self):\n        conf = {'username': 'john',\n                'port': '123',\n                'coordinator': 'master',\n                'workers': ['worker1', 'worker2']}\n        validated_conf = config.validate(conf.copy())\n        self.assertEqual(validated_conf['port'], 123)\n\n    def test_empty_host(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"'' is not a valid ip address or host name\",\n                                config.validate_coordinator, (\"\"))\n\n    def test_valid_workers(self):\n        workers = [\"172.16.1.10\", \"myslave\", \"FE80::0202:B3FF:FE1E:8329\"]\n        self.assertEqual(config.validate_workers(workers), workers)\n\n    def test_no_workers(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Must specify at least one worker\",\n                                config.validate_workers, ([]))\n\n    def test_invalid_workers_type(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Workers must be of type list.  \"\n                                \"Found <type 'str'>\",\n                                config.validate_workers, (\"not a list\"))\n\n    def test_invalid_coordinator_type(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Host must be of type string.  
\"\n                                \"Found <type 'list'>\",\n                                config.validate_coordinator,\n                                ([\"my\", \"list\"]))\n\n    def test_validate_workers_for_prompt(self):\n        workers_input = \"172.16.1.10 myslave FE80::0202:B3FF:FE1E:8329\"\n        workers_list = [\"172.16.1.10\", \"myslave\", \"FE80::0202:B3FF:FE1E:8329\"]\n        self.assertEqual(config.validate_workers_for_prompt(workers_input),\n                         workers_list)\n\n    def test_show(self):\n        env.roledefs = {'coordinator': ['hello'], 'worker': ['a', 'b'],\n                        'all': ['a', 'b', 'hello']}\n        env.user = 'user'\n        env.port = '22'\n\n        self.remove_runs_once_flag(topology.show)\n        topology.show()\n        self.assertEqual(\"\", self.test_stderr.getvalue())\n        self.assertEqual(\"{'coordinator': 'hello',\\n 'port': '22',\\n \"\n                         \"'username': 'user',\\n 'workers': ['a',\\n\"\n                         \"             'b']}\\n\",\n                         self.test_stdout.getvalue())\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/unit/test_workers.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nTests the workers module\n\"\"\"\nfrom fabric.api import env\nfrom mock import patch\n\nfrom prestoadmin import workers\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestWorkers(BaseTestCase):\n    def test_build_defaults(self):\n        env.roledefs['coordinator'] = 'a'\n        env.roledefs['workers'] = ['b', 'c']\n        actual_default = workers.Worker().build_all_defaults()\n        expected = {'node.properties':\n                    {'node.environment': 'presto',\n                     'node.data-dir': '/var/lib/presto/data',\n                     'node.launcher-log-file': '/var/log/presto/launcher.log',\n                     'node.server-log-file': '/var/log/presto/server.log',\n                     'catalog.config-dir': '/etc/presto/catalog',\n                     'plugin.dir': '/usr/lib/presto/lib/plugin'},\n                    'jvm.config': ['-server',\n                                   '-Xmx16G',\n                                   '-XX:-UseBiasedLocking',\n                                   '-XX:+UseG1GC',\n                                   '-XX:G1HeapRegionSize=32M',\n                                   '-XX:+ExplicitGCInvokesConcurrent',\n                                   '-XX:+HeapDumpOnOutOfMemoryError',\n                                   '-XX:+UseGCOverheadLimit',\n             
                      '-XX:+ExitOnOutOfMemoryError',\n                                   '-XX:ReservedCodeCacheSize=512M',\n                                   '-DHADOOP_USER_NAME=hive'],\n                    'config.properties': {'coordinator': 'false',\n                                          'discovery.uri': 'http://a:8080',\n                                          'http-server.http.port': '8080',\n                                          'query.max-memory': '50GB',\n                                          'query.max-memory-per-node': '8GB'}\n                    }\n\n        self.assertEqual(actual_default, expected)\n\n    def test_validate_valid(self):\n        conf = {'node.properties': {},\n                'jvm.config': [],\n                'config.properties': {'coordinator': 'false',\n                                      'discovery.uri': 'http://host:8080'}}\n\n        self.assertEqual(conf, workers.Worker.validate(conf))\n\n    def test_validate_default(self):\n        env.roledefs['coordinator'] = 'localhost'\n        conf = workers.Worker().build_all_defaults()\n        self.assertEqual(conf, workers.Worker.validate(conf))\n\n    def test_invalid_conf(self):\n        conf = {'node.propoerties': {}}\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Missing configuration for required file: ',\n                                workers.Worker.validate, conf)\n\n    def test_invalid_conf_missing_coordinator(self):\n        conf = {'node.properties': {},\n                'jvm.config': [],\n                'config.properties': {'discovery.uri': 'http://uri'}\n                }\n\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Must specify coordinator=false in '\n                                'worker\\'s config.properties',\n                                workers.Worker.validate, conf)\n\n    def test_invalid_conf_coordinator(self):\n        conf = {'node.properties': {},\n     
           'jvm.config': [],\n                'config.properties': {'coordinator': 'true',\n                                      'discovery.uri': 'http://uri'}\n                }\n\n        self.assertRaisesRegexp(ConfigurationError,\n                                'Coordinator must be false in the '\n                                'worker\\'s config.properties',\n                                workers.Worker.validate, conf)\n\n    @patch('prestoadmin.node.config.write_conf_to_file')\n    @patch('prestoadmin.node.get_presto_conf')\n    def test_get_conf_empty_is_default(self, get_conf_mock, write_mock):\n        env.roledefs['coordinator'] = ['j']\n        get_conf_mock.return_value = {}\n        self.assertEqual(workers.Worker().get_conf(),\n                         workers.Worker().build_all_defaults())\n\n    @patch('prestoadmin.node.config.write_conf_to_file')\n    @patch('prestoadmin.node.get_presto_conf')\n    def test_get_conf(self, get_presto_conf_mock, write_mock):\n        env.roledefs['coordinator'] = ['j']\n        file_conf = {'node.properties': {'my-property': 'value',\n                                         'node.environment': 'test'}}\n        get_presto_conf_mock.return_value = file_conf\n        expected = {'node.properties':\n                    {'my-property': 'value',\n                     'node.environment': 'test'},\n                    'jvm.config': ['-server',\n                                   '-Xmx16G',\n                                   '-XX:-UseBiasedLocking',\n                                   '-XX:+UseG1GC',\n                                   '-XX:G1HeapRegionSize=32M',\n                                   '-XX:+ExplicitGCInvokesConcurrent',\n                                   '-XX:+HeapDumpOnOutOfMemoryError',\n                                   '-XX:+UseGCOverheadLimit',\n                                   '-XX:+ExitOnOutOfMemoryError',\n                                   '-XX:ReservedCodeCacheSize=512M',\n                  
                 '-DHADOOP_USER_NAME=hive'],\n                    'config.properties': {'coordinator': 'false',\n                                          'discovery.uri': 'http://j:8080',\n                                          'http-server.http.port': '8080',\n                                          'query.max-memory': '50GB',\n                                          'query.max-memory-per-node': '8GB'}\n                    }\n        self.assertEqual(workers.Worker().get_conf(), expected)\n\n    @patch('prestoadmin.node.config.write_conf_to_file')\n    @patch('prestoadmin.node.get_presto_conf')\n    @patch('prestoadmin.workers.util.get_coordinator_role')\n    def test_worker_not_localhost(self, coord_mock, get_conf_mock, write_mock):\n        get_conf_mock.return_value = {}\n        coord_mock.return_value = ['localhost']\n        env.roledefs['all'] = ['localhost', 'remote-host']\n        self.assertRaisesRegexp(ConfigurationError,\n                                'discovery.uri should not be localhost in a '\n                                'multi-node cluster',\n                                workers.Worker().get_conf)\n"
  },
  {
    "path": "tests/unit/util/__init__.py",
    "content": ""
  },
  {
    "path": "tests/unit/util/test_application.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport logging\n\nfrom mock import patch\nfrom mock import call\n\nfrom prestoadmin.util import constants\nfrom prestoadmin.util.application import Application\nfrom prestoadmin.util.local_config_util import get_log_directory\n\nfrom tests.base_test_case import BaseTestCase\n\nAPPLICATION_NAME = 'foo'\n\n\n@patch('prestoadmin.util.application.filesystem')\n@patch('prestoadmin.util.application.logging.config')\nclass ApplicationTest(BaseTestCase):\n\n    def setUp(self):\n        # basicConfig is a noop if there are already handlers\n        # present on the root logger, remove them all here\n        self.__old_log_handlers = []\n        for handler in logging.root.handlers:\n            self.__old_log_handlers.append(handler)\n            logging.root.removeHandler(handler)\n\n    def tearDown(self):\n        # restore the old log handlers\n        for handler in logging.root.handlers:\n            logging.root.removeHandler(handler)\n        for handler in self.__old_log_handlers:\n            logging.root.addHandler(handler)\n\n    @patch('prestoadmin.util.application.os.path.exists')\n    def test_configures_default_log_file(\n        self,\n        path_exists_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        path_exists_mock.return_value = True\n\n        with Application(APPLICATION_NAME):\n            pass\n\n        
file_path = os.path.join(\n            get_log_directory(),\n            APPLICATION_NAME + '.log'\n        )\n        self.__assert_logging_setup_with_file(\n            file_path,\n            filesystem_mock,\n            logging_mock\n        )\n\n        path_exists_mock.assert_called_once_with(\n            constants.LOGGING_CONFIG_FILE_NAME\n        )\n\n    def __assert_logging_setup_with_file(\n        self,\n        log_file_path,\n        filesystem_mock,\n        logging_mock\n    ):\n        parent_dirs_mock = filesystem_mock.ensure_parent_directories_exist\n        parent_dirs_mock.assert_called_once_with(log_file_path)\n\n        file_config_mock = logging_mock.fileConfig\n        file_config_mock.assert_called_once_with(\n            constants.LOGGING_CONFIG_FILE_NAME,\n            defaults={'log_file_path': log_file_path},\n            disable_existing_loggers=False\n        )\n\n    @patch('prestoadmin.util.application.os.path.exists')\n    def test_configures_custom_log_file(\n        self,\n        path_exists_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        path_exists_mock.return_value = True\n\n        log_file_path = 'bar.log'\n        with Application(\n            APPLICATION_NAME,\n            log_file_path=log_file_path\n        ):\n            pass\n\n        file_path = os.path.join(\n            get_log_directory(),\n            log_file_path\n        )\n        self.__assert_logging_setup_with_file(\n            file_path,\n            filesystem_mock,\n            logging_mock\n        )\n\n    @patch('prestoadmin.util.application.os.path.exists')\n    @patch('prestoadmin.util.application.sys.stderr')\n    def test_configures_invalid_log_file(\n        self,\n        stderr_mock,\n        path_exists_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        path_exists_mock.return_value = True\n\n        expected_error = FakeError('Error')\n        logging_mock.fileConfig.side_effect = 
expected_error\n\n        try:\n            with Application(APPLICATION_NAME):\n                pass\n        except SystemExit as e:\n            self.assertEqual('Error', e.message)\n\n        stderr_mock.write.assert_has_calls(\n            [\n                call('Please run %s with sudo.\\n' % APPLICATION_NAME),\n            ]\n        )\n\n    @patch('prestoadmin.util.application.os.path.exists')\n    def test_configures_absolute_path_to_log_file(\n        self,\n        path_exists_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        path_exists_mock.return_value = True\n\n        log_file_path = '/tmp/bar.log'\n        with Application(\n            APPLICATION_NAME,\n            log_file_path=log_file_path\n        ):\n            pass\n\n        self.__assert_logging_setup_with_file(\n            log_file_path,\n            filesystem_mock,\n            logging_mock\n        )\n\n    @patch('prestoadmin.util.application.os.path.exists')\n    def test_uses_logging_configs_in_order(\n        self,\n        path_exists_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        path_exists_mock.side_effect = [False, True]\n\n        log_file_path = '/tmp/bar.log'\n        with Application(\n            APPLICATION_NAME,\n            log_file_path=log_file_path\n        ):\n            pass\n\n        parent_dirs_mock = filesystem_mock.ensure_parent_directories_exist\n        parent_dirs_mock.assert_called_once_with(log_file_path)\n\n        file_config_mock = logging_mock.fileConfig\n        file_config_mock.assert_called_once_with(\n            log_file_path + '.ini',\n            defaults={'log_file_path': log_file_path},\n            disable_existing_loggers=False\n        )\n\n    @patch('prestoadmin.util.application.sys.stderr')\n    def test_handles_errors(\n        self,\n        stderr_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        def should_fail():\n            with 
Application(APPLICATION_NAME):\n                raise Exception('User facing error message')\n\n        self.assertRaises(SystemExit, should_fail)\n\n        stderr_mock.write.assert_has_calls(\n            [\n                call('User facing error message'),\n                call('\\n')\n            ]\n        )\n\n    @patch('prestoadmin.util.application.logger')\n    def test_handles_system_abnormal_exits(\n        self,\n        logger_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        def should_exit():\n            with Application(APPLICATION_NAME):\n                sys.exit(2)\n\n        self.assertRaises(SystemExit, should_exit)\n        logger_mock.debug.assert_has_calls(\n            [\n                call('Application exiting with status %d', 2),\n            ]\n        )\n\n    @patch('prestoadmin.util.application.logger')\n    def test_handles_system_normal_exits(\n        self,\n        logger_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        def should_exit():\n            with Application(APPLICATION_NAME):\n                sys.exit()\n\n        self.assertRaises(SystemExit, should_exit)\n        logger_mock.debug.assert_has_calls(\n            [\n                call('Application exiting with status %d', 0),\n            ]\n        )\n\n    @patch('prestoadmin.util.application.logger')\n    def test_handles_system_exit_none(\n        self,\n        logger_mock,\n        logging_mock,\n        filesystem_mock\n    ):\n        def should_exit_zero_with_none():\n            with Application(APPLICATION_NAME):\n                sys.exit(None)\n\n        self.assertRaises(SystemExit, should_exit_zero_with_none)\n        logger_mock.debug.assert_has_calls(\n            [\n                call('Application exiting with status %d', 0),\n            ]\n        )\n\n    @patch('prestoadmin.util.application.logger')\n    def test_handles_system_exit_string(\n            self,\n            logger_mock,\n            
logging_mock,\n            filesystem_mock\n    ):\n        def should_exit_one_with_str():\n            with Application(APPLICATION_NAME):\n                sys.exit(\"exit\")\n\n        self.assertRaises(SystemExit, should_exit_one_with_str)\n        logger_mock.debug.assert_has_calls(\n            [\n                call('Application exiting with status %d', 1),\n                ]\n        )\n\n\nclass FakeError(Exception):\n    pass\n"
  },
  {
    "path": "tests/unit/util/test_base_config.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n'''\nTests for the base_config module.\n'''\n\nfrom prestoadmin.yarn_slider.config import SliderConfig\nfrom prestoadmin.standalone.config import StandaloneConfig\nfrom prestoadmin.util.base_config import requires_config\nfrom prestoadmin.util.exception import ConfigFileNotFoundError, \\\n    ConfigurationError\n\nfrom mock import patch\n\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestBaseConfig(BaseTestCase):\n    @patch('tests.unit.util.test_base_config.SliderConfig.'\n           'get_conf_interactive')\n    @patch('tests.unit.util.test_base_config.SliderConfig.read_conf')\n    @patch('tests.unit.util.test_base_config.SliderConfig.set_env_from_conf')\n    def test_get_config_already_loaded(\n            self, set_env_mock, file_conf_mock, interactive_conf_mock):\n        config = SliderConfig()\n        config.set_config_loaded()\n        config.get_config()\n        self.assertFalse(file_conf_mock.called)\n        self.assertFalse(interactive_conf_mock.called)\n        self.assertFalse(set_env_mock.called)\n\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.'\n           'get_conf_interactive')\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.read_conf')\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.'\n           'set_env_from_conf')\n    def test_get_config_load_file(\n            self, set_env_mock, 
file_conf_mock, interactive_conf_mock):\n        config = StandaloneConfig()\n        config.get_config()\n        self.assertTrue(file_conf_mock.called)\n        self.assertFalse(interactive_conf_mock.called)\n        self.assertTrue(set_env_mock.called)\n        self.assertTrue(config.is_config_loaded())\n\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.'\n           'get_conf_interactive')\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.read_conf')\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.write_conf')\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.'\n           'set_env_from_conf')\n    def test_get_config_load_interactive(\n            self, set_env_mock, store_conf_mock, file_conf_mock,\n            interactive_conf_mock):\n        file_conf_mock.side_effect = ConfigFileNotFoundError(\n            message='oops', config_path='/asdf')\n        config = StandaloneConfig()\n        config.get_config()\n        self.assertTrue(file_conf_mock.called)\n        self.assertTrue(interactive_conf_mock.called)\n        self.assertTrue(set_env_mock.called)\n        self.assertTrue(store_conf_mock.called)\n        self.assertTrue(config.is_config_loaded())\n\n    @patch('tests.unit.util.test_base_config.SliderConfig.is_config_loaded')\n    def test_decorator_has_topology(self, mock_is_config_loaded):\n        mock_is_config_loaded.return_value = True\n\n        @requires_config(SliderConfig)\n        def func():\n            return 'runs'\n\n        self.assertEquals(func(), 'runs')\n\n    @patch('tests.unit.util.test_base_config.StandaloneConfig.'\n           'is_config_loaded')\n    def test_decorator_no_topology(self, mock_is_config_loaded):\n        mock_is_config_loaded.return_value = False\n\n        @requires_config(StandaloneConfig)\n        def func():\n            return 'runs'\n\n        self.assertRaises(ConfigurationError, func)\n"
  },
  {
    "path": "tests/unit/util/test_exception.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom prestoadmin.util.exception import ExceptionWithCause, \\\n    ConfigFileNotFoundError\n\nimport pickle\nimport re\nfrom unittest import TestCase\n\n\nclass ExceptionTest(TestCase):\n\n    def test_exception_with_cause(self):\n        try:\n            try:\n                raise ValueError('invalid parameter!')\n            except:\n                raise ExceptionWithCause('outer exception')\n        except ExceptionWithCause as e:\n            self.assertEqual(str(e), 'outer exception')\n            m = re.match(\n                r'Traceback \\(most recent call last\\):\\n  File \".*\", line \\d+,'\n                ' in test_exception_with_cause\\n    raise ValueError\\('\n                '\\'invalid parameter!\\'\\)\\nValueError: invalid parameter!\\n',\n                e.inner_exception\n            )\n            self.assertTrue(m is not None)\n        else:\n            self.fail('ExceptionWithCause should have been raised')\n\n    def test_can_pickle_ConfigFileNotFound(self):\n        config_path = '/usa/georgia/macon'\n        message = 'I woke up this morning, I had them Statesboro Blues'\n        e = ConfigFileNotFoundError(config_path=config_path, message=message)\n\n        ps = pickle.dumps(e, pickle.HIGHEST_PROTOCOL)\n        a = pickle.loads(ps)\n\n        self.assertEquals(message, a.message)\n        self.assertEquals(config_path, 
a.config_path)\n"
  },
  {
    "path": "tests/unit/util/test_fabric_application.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom prestoadmin.util.fabric_application import FabricApplication\n\nfrom tests.base_test_case import BaseTestCase\n\nfrom mock import patch\n\nimport sys\nimport logging\n\n\nAPPLICATION_NAME = 'foo'\n\n\n@patch('prestoadmin.util.application.logging.config')\nclass FabricApplicationTest(BaseTestCase):\n\n    def setUp(self):\n        # basicConfig is a noop if there are already handlers\n        # present on the root logger, remove them all here\n        self.__old_log_handlers = []\n        for handler in logging.root.handlers:\n            self.__old_log_handlers.append(handler)\n            logging.root.removeHandler(handler)\n        super(FabricApplicationTest, self).setUp(capture_output=True)\n\n    def tearDown(self):\n        # restore the old log handlers\n        for handler in logging.root.handlers:\n            logging.root.removeHandler(handler)\n        for handler in self.__old_log_handlers:\n            logging.root.addHandler(handler)\n        BaseTestCase.tearDown(self)\n\n    @patch('prestoadmin.util.fabric_application.disconnect_all', autospec=True)\n    def test_disconnect_all(self, disconnect_mock, logging_conf_mock):\n        def should_disconnect():\n            with FabricApplication(APPLICATION_NAME):\n                sys.exit()\n\n        self.assertRaises(SystemExit, should_disconnect)\n        disconnect_mock.assert_called_with()\n\n    
@patch('prestoadmin.util.application.logger')\n    @patch('prestoadmin.util.filesystem.os.makedirs')\n    def test_keyboard_interrupt(self, make_dirs_mock, logger_mock,\n                                logging_conf_mock):\n        def should_not_error():\n            with FabricApplication(APPLICATION_NAME):\n                raise KeyboardInterrupt\n\n        try:\n            should_not_error()\n        except SystemExit as e:\n            self.assertEqual(0, e.code)\n            self.assertEqual(\"Stopped.\\n\", self.test_stderr.getvalue())\n        else:\n            self.fail('Keyboard interrupt did not cause a system exit.')\n\n    def test_handles_errors(self, logging_mock):\n        def should_fail():\n            with FabricApplication(APPLICATION_NAME):\n                raise Exception('error message')\n\n        self.assertRaises(SystemExit, should_fail)\n        self.assertEqual(self.test_stderr.getvalue(), 'error message\\n')\n"
  },
  {
    "path": "tests/unit/util/test_fabricapi.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the utility\n\"\"\"\nfrom fabric.api import env\n\nfrom mock import Mock\n\nfrom prestoadmin.util import fabricapi\n\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestFabricapi(BaseTestCase):\n    def test_get_host_with_exclude(self):\n        env.hosts = ['a', 'b', 'bad']\n        env.exclude_hosts = ['bad']\n        self.assertEqual(fabricapi.get_host_list(), ['a', 'b'])\n\n    TEST_ROLEDEFS = {\n        'coordinator': ['coordinator'],\n        'worker': ['worker0', 'worker1', 'worker2']\n        }\n\n    def test_by_role_coordinator(self):\n        env.roledefs = self.TEST_ROLEDEFS\n\n        callback = Mock()\n\n        fabricapi.by_role_coordinator('worker0', callback)\n        self.assertFalse(callback.called, 'coordinator callback called for ' +\n                         'worker')\n        fabricapi.by_role_coordinator('coordinator', callback)\n        callback.assert_any_call()\n\n    def test_by_role_worker(self):\n        env.roledefs = self.TEST_ROLEDEFS\n\n        callback = Mock()\n\n        fabricapi.by_role_worker('coordinator', callback)\n        self.assertFalse(callback.called, 'worker callback called for ' +\n                         'coordinator')\n        fabricapi.by_role_worker('worker0', callback)\n        callback.assert_any_call()\n\n    def assert_is_worker(self, roledefs):\n        def check(*args, **kwargs):\n            
self.assertTrue(env.host in roledefs.get('worker'))\n        return check\n\n    def assert_is_coordinator(self, roledefs):\n        def check(*args, **kwargs):\n            self.assertTrue(env.host in roledefs.get('coordinator'))\n        return check\n\n    def test_by_rolename_worker(self):\n        callback = Mock()\n        callback.side_effect = self.assert_is_worker(self.TEST_ROLEDEFS)\n        env.roledefs = self.TEST_ROLEDEFS\n\n        env.host = 'coordinator'\n        fabricapi.by_rolename(env.host, 'worker', callback)\n        self.assertFalse(callback.called)\n\n        env.host = 'worker0'\n        fabricapi.by_rolename(env.host, 'worker', callback)\n        self.assertTrue(callback.called)\n\n    def test_by_rolename_coordinator(self):\n        callback = Mock()\n        callback.side_effect = self.assert_is_coordinator(self.TEST_ROLEDEFS)\n        env.roledefs = self.TEST_ROLEDEFS\n\n        env.host = 'worker0'\n        fabricapi.by_rolename(env.host, 'coordinator', callback)\n        self.assertFalse(callback.called)\n\n        env.host = 'coordinator'\n        fabricapi.by_rolename(env.host, 'coordinator', callback)\n        self.assertTrue(callback.called)\n\n    def test_by_rolename_all(self):\n        callback = Mock()\n        env.roledefs = self.TEST_ROLEDEFS\n\n        env.host = 'worker0'\n        fabricapi.by_rolename(env.host, None, callback)\n        self.assertTrue(callback.called)\n\n        callback.reset_mock()\n\n        env.host = 'coordinator'\n        fabricapi.by_rolename(env.host, None, callback)\n        self.assertTrue(callback.called)\n"
  },
  {
    "path": "tests/unit/util/test_filesystem.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport errno\nfrom mock import patch\nfrom prestoadmin.util import filesystem\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestFilesystem(BaseTestCase):\n    @patch('prestoadmin.util.filesystem.os.fdopen')\n    @patch('prestoadmin.util.filesystem.os.open')\n    @patch('prestoadmin.util.filesystem.os.makedirs')\n    def test_write_file_exits(self, makedirs_mock, open_mock, fdopen_mock):\n        makedirs_mock.side_effect = OSError(errno.EEXIST, 'message')\n        open_mock.side_effect = OSError(errno.EEXIST, 'message')\n        filesystem.write_to_file_if_not_exists('content', 'path/to/anyfile')\n        self.assertFalse(fdopen_mock.called)\n\n    @patch('prestoadmin.util.filesystem.os.makedirs')\n    def test_write_file_error_in_dirs(self, makedirs_mock):\n        makedirs_mock.side_effect = OSError(errno.EACCES, 'message')\n        self.assertRaisesRegexp(OSError, 'message',\n                                filesystem.write_to_file_if_not_exists,\n                                'content', 'path/to/anyfile')\n\n    @patch('prestoadmin.util.filesystem.os.makedirs')\n    @patch('prestoadmin.util.filesystem.os.open')\n    def test_write_file_error_in_files(self, open_mock, makedirs_mock):\n        open_mock.side_effect = OSError(errno.EACCES, 'message')\n        self.assertRaisesRegexp(OSError, 'message',\n                                
filesystem.write_to_file_if_not_exists,\n                                'content', 'path/to/anyfile')\n"
  },
  {
    "path": "tests/unit/util/test_local_config_util.py",
    "content": "# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\n\nfrom mock import patch\nfrom prestoadmin.util import local_config_util\nfrom prestoadmin.util.constants import DEFAULT_LOCAL_CONF_DIR\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestLocalConfigUtil(BaseTestCase):\n    @patch('prestoadmin.util.local_config_util.os.environ.get')\n    def test_get_default_config_dir(self, get_mock):\n        get_mock.return_value = None\n        self.assertEqual(local_config_util.get_config_directory(), DEFAULT_LOCAL_CONF_DIR)\n\n    @patch('prestoadmin.util.local_config_util.os.environ.get')\n    def test_get_configured_config_dir(self, get_mock):\n        non_default_directory = '/not/the/default'\n        get_mock.return_value = non_default_directory\n        self.assertEqual(local_config_util.get_config_directory(), non_default_directory)\n\n    @patch('prestoadmin.util.local_config_util.os.environ.get')\n    def test_get_default_log_dir(self, get_mock):\n        get_mock.return_value = None\n        self.assertEqual(local_config_util.get_log_directory(), os.path.join(DEFAULT_LOCAL_CONF_DIR, 'log'))\n\n    @patch('prestoadmin.util.local_config_util.os.environ.get')\n    def test_get_configured_log_dir(self, get_mock):\n        non_default_directory = '/not/the/default'\n        get_mock.return_value = non_default_directory\n        self.assertEqual(local_config_util.get_log_directory(), non_default_directory)\n"
  },
  {
    "path": "tests/unit/util/test_parser.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests the LoggingOptionParser\n\"\"\"\nfrom StringIO import StringIO\n\nfrom prestoadmin.util.parser import LoggingOptionParser\nfrom prestoadmin.util.hiddenoptgroup import HiddenOptionGroup\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestParser(BaseTestCase):\n    def test_print_extended_help(self):\n        parser = LoggingOptionParser(usage=\"Hello World\")\n        parser.add_option_group(\"a\")\n        hidden_group = HiddenOptionGroup(parser, \"b\", suppress_help=True)\n        non_hidden_group = HiddenOptionGroup(parser, \"c\", suppress_help=False)\n        parser.add_option_group(hidden_group)\n        parser.add_option_group(non_hidden_group)\n\n        help_out = StringIO()\n        parser.print_help(help_out)\n        self.assertEqual(help_out.getvalue(),\n                         \"Usage: Hello World\\n\\nOptions:\\n  -h, --help  show \"\n                         \"this help message and exit\\n\\n  a:\\n\\n\\n  c:\\n\")\n\n        extended_help_out = StringIO()\n        parser.print_extended_help(extended_help_out)\n        self.assertEqual(extended_help_out.getvalue(),\n                         \"Usage: Hello World\\n\\nOptions:\\n  -h, --help  show \"\n                         \"this help message and exit\\n\\n  a:\\n\\n  b:\\n\\n  \"\n                         \"c:\\n\")\n"
  },
  {
    "path": "tests/unit/util/test_remote_config_util.py",
    "content": "# -*- coding: utf-8 -*-\n\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom fabric.operations import _AttributeString\nfrom mock import patch\nfrom prestoadmin.util.exception import ConfigurationError\nfrom prestoadmin.util.remote_config_util import lookup_port,\\\n    lookup_string_config, NODE_CONFIG_FILE\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestRemoteConfigUtil(BaseTestCase):\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_port_failure(self, sudo_mock):\n        sudo_mock.return_value = Exception('File not found')\n\n        self.assertRaisesRegexp(\n            ConfigurationError,\n            'Could not access config file /etc/presto/config.properties on host any_host',\n            lookup_port, 'any_host'\n        )\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_port_not_integer_failure(self, sudo_mock):\n        sudo_mock.return_value = _AttributeString(\n            'http-server.http.port=hello')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 0\n\n        self.assertRaisesRegexp(\n            ConfigurationError,\n            'Invalid port number hello: port must be a number between 1 and'\n            ' 65535 for property http-server.http.port on host any_host.',\n            lookup_port, 'any_host'\n        )\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_port_not_in_file(self, 
sudo_mock):\n        sudo_mock.return_value = _AttributeString('')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 1\n        port = lookup_port('any_host')\n        self.assertEqual(port, 8080)\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_port_out_of_range(self, sudo_mock):\n        sudo_mock.return_value = _AttributeString(\n            'http-server.http.port=99999')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 0\n        self.assertRaisesRegexp(\n            ConfigurationError,\n            'Invalid port number 99999: port must be a number between 1 and '\n            '65535 for property http-server.http.port on host any_host.',\n            lookup_port, 'any_host'\n        )\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_string_config(self, sudo_mock):\n        sudo_mock.return_value = _AttributeString(\n            'config.to.lookup=/path/hello')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 0\n        config_value = lookup_string_config('config.to.lookup',\n                                            NODE_CONFIG_FILE, 'any_host')\n        self.assertEqual(config_value, '/path/hello')\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_string_config_not_in_file(self, sudo_mock):\n        sudo_mock.return_value = _AttributeString('')\n        sudo_mock.return_value.failed = False\n        sudo_mock.return_value.return_code = 1\n        config_value = lookup_string_config('config.to.lookup',\n                                            NODE_CONFIG_FILE, 'any_host')\n        self.assertEqual(config_value, '')\n\n    @patch('prestoadmin.util.remote_config_util.sudo')\n    def test_lookup_string_config_file_not_found(self, sudo_mock):\n        sudo_mock.return_value = _AttributeString(\n            'grep: /etc/presto/node.properties 
does not exist')\n        sudo_mock.return_value.return_code = 2\n\n        self.assertRaisesRegexp(\n            ConfigurationError,\n            'Could not access config file /etc/presto/node.properties on host any_host',\n            lookup_string_config, 'config.to.lookup', NODE_CONFIG_FILE,\n            'any_host'\n        )\n"
  },
  {
    "path": "tests/unit/util/test_validators.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTest the various validators\n\"\"\"\nfrom prestoadmin.util import validators\nfrom prestoadmin.util.exception import ConfigurationError\nfrom tests.base_test_case import BaseTestCase\n\n\nclass TestValidators(BaseTestCase):\n    def test_valid_ipv4(self):\n        ipv4 = \"10.14.1.10\"\n        self.assertEqual(validators.validate_host(ipv4), ipv4)\n\n    def test_valid_full_ipv6(self):\n        ipv6 = \"FE80:0000:0000:0000:0202:B3FF:FE1E:8329\"\n        self.assertEqual(validators.validate_host(ipv6), ipv6)\n\n    def test_valid_collapsed_ipv6(self):\n        ipv6 = \"FE80::0202:B3FF:FE1E:8329\"\n        self.assertEqual(validators.validate_host(ipv6), ipv6)\n\n    def test_valid_hostname(self):\n        host = \"master\"\n        self.assertEqual(validators.validate_host(host), host)\n\n    def test_invalid_host(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"'.1234' is not a valid ip address \"\n                                \"or host name\",\n                                validators.validate_host,\n                                (\".1234\"))\n\n    def test_invalid_host_type(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Host must be of type string.  
\"\n                                \"Found <type 'list'>\",\n                                validators.validate_host,\n                                ([\"my\", \"list\"]))\n\n    def test_valid_port(self):\n        port = 1234\n        self.assertEqual(validators.validate_port(port), port)\n\n    def test_invalid_port(self):\n        self.assertRaisesRegexp(ConfigurationError,\n                                \"Invalid port number 99999999: port must be \"\n                                \"a number between 1 and 65535\",\n                                validators.validate_port,\n                                (\"99999999\"))\n\n    def test_invalid_port_type(self):\n        self.assertRaises(ConfigurationError,\n                          validators.validate_port, ([\"123\"]))\n"
  },
  {
    "path": "tests/unit/util/test_version_util.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nTests for version ranges\n\"\"\"\n\nfrom prestoadmin.util.version_util import VersionRange, VersionRangeList, \\\n    strip_tag, split_version\n\nfrom tests.unit.base_unit_case import BaseUnitCase\n\n\nclass TestVersionRange(BaseUnitCase):\n    def test_pad_tuple_bad_len(self):\n        self.assertRaises(AssertionError, VersionRange.pad_tuple, (1, 2), 0, 0)\n        self.assertRaises(AssertionError, VersionRange.pad_tuple, (1, 2), 1, 0)\n\n    def test_pad_tuple(self):\n        self.assertEquals((1, 2, 0, 0), VersionRange.pad_tuple((1, 2), 4, 0))\n\n    def test_invalid_range(self):\n        # Empty intervals, min == max\n        self.assertRaises(AssertionError, VersionRange, (1, 0), (1, 0))\n        self.assertRaises(AssertionError, VersionRange, (1, 0), (1, ))\n        self.assertRaises(AssertionError, VersionRange, (1, ), (1, 0))\n\n        # Empty interval max > min\n        self.assertRaises(AssertionError, VersionRange, (2, 0), (1, 0))\n\n        # Bare integers for min, max disallowed\n        self.assertRaises(AssertionError, VersionRange, (0), (2,))\n        self.assertRaises(AssertionError, VersionRange, (1,), (2))\n\n    def test_contains(self):\n        vr = VersionRange((2171, 0), (2179, 0))\n        self.assertNotIn(('2170', '9'), vr)\n        self.assertNotIn((2170, 9, 2, 718, 28, 18284, 590, 4523, 536), vr)\n        self.assertIn((2171, 0, 0), vr)\n   
     self.assertIn([2171, 1], vr)\n        self.assertIn(('2175',), vr)\n        self.assertIn((2178, 3, 1, 4, 1, 59, 26535, 89793), vr)\n        self.assertNotIn([2179], vr)\n        self.assertNotIn((2179, 1), vr)\n\n    def test_strip_td(self):\n        self.assertEquals((0, 123), VersionRange.strip_td_suffix((0, '123t')))\n        self.assertEquals((0, 123, 1, 0), VersionRange.strip_td_suffix((0, 123, 't', 1, 0)))\n\n    def test_contains_teradata(self):\n        vr = VersionRange((0,), (0, 128))\n        self.assertIn((0, '115t'), vr)\n        self.assertIn(('0', '115t'), vr)\n        self.assertIn((0, 125, 't', 0, 1), vr)\n\n\nclass TestVersionRangeSet(BaseUnitCase):\n    def test_0_length_list(self):\n        vl = VersionRangeList()\n        self.assertRaises(KeyError, vl.for_version, (1, 0))\n\n    def test_1_length_list(self):\n        vl = VersionRangeList(\n            VersionRange((0,), (1, 0)))\n        self.assertIsNone(vl.for_version((0, 5)))\n\n    def test_valid_2_length_list(self):\n        vl = VersionRangeList(\n            VersionRange((0,), (1, 0), '0'),\n            VersionRange((1, 0), (2, 0), '1'))\n        self.assertEqual('0', vl.for_version((0, 5)))\n        self.assertEqual('1', vl.for_version((1, 5)))\n\n    def test_discontinuous_2_length_list(self):\n        self.assertRaises(\n            AssertionError, VersionRangeList,\n            VersionRange((0,), (1, 0)), VersionRange((1, 1), (2, 0)))\n\n    def test_bad_order_2_length_list(self):\n        self.assertRaises(\n            AssertionError, VersionRangeList,\n            VersionRange((1, 0), (2, 0)), VersionRange((0,), (1, 0)))\n\n    def test_overlapping_2_length_list(self):\n        self.assertRaises(\n            AssertionError, VersionRangeList,\n            VersionRange((0,), (1, 0)), VersionRange((0, 9), (2, 0)))\n\n\nclass TestVersionUtils(BaseUnitCase):\n    def test_all_numeric(self):\n        self.assertEqual((1, 2), strip_tag(('1', '2')))\n        self.assertEqual((1, 
2), strip_tag(['1', '2']))\n\n    def test_trailing_non_numeric(self):\n        self.assertEqual(\n            (1, 2), strip_tag(('1', '2', 'THREE', 'FOUR')))\n        self.assertEqual(\n            (1, 2), strip_tag(['1', '2', 'THR']))\n\n    def test_ancient_tags(self):\n        # Teradata and non-Teradata versions\n        self.assertEqual(\n            (0, '97t'), strip_tag(('0', '97t', 'SNAPSHOT')))\n        self.assertEqual(\n            (0, 99), strip_tag(('0', '99', 'SNAPSHOT')))\n\n    def test_non_trailing_non_numeric(self):\n        self.assertEqual(\n            (1, 3, 't', 4, 't'), strip_tag(('1', 'TWO', '3', 't', '4', 't')))\n\n    def test_no_numeric(self):\n        self.assertEqual(\n            (), strip_tag(('ONE', 'TWO', 'THREE'))\n        )\n\n    def test_split(self):\n        self.assertEqual(['1', '2', '3'], split_version(' \\t 1.2.3  \\t '))\n        self.assertEqual(['0', '115t'], split_version('0.115t'))\n        self.assertEqual(['0', '115t', 'SNAPSHOT'], split_version('0.115t-SNAPSHOT'))\n\n    def test_old_teradata_version(self):\n        self.assertEqual(\n            (0, '115t'), strip_tag(('0', '115t')))\n        self.assertEqual(\n            (0, '123t'), strip_tag(('0', '123t', 'SNAPSHOT')))\n\n    def test_new_teradata_version(self):\n        self.assertEqual(\n            (0, 148, 't'), strip_tag(('0', '148', 't'))\n        )\n        self.assertEqual(\n            (0, 148, 't', 0, 1), strip_tag(('0', '148', 't', '0', '1'))\n        )\n        self.assertEqual(\n            (0, 148, 't'), strip_tag(('0', '148', 'snapshot', 't', 'snapshot'))\n        )\n"
  },
  {
    "path": "tests/unit/yarn_slider/__init__.py",
    "content": ""
  },
  {
    "path": "tests/unit/yarn_slider/test_help.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom mock import patch\nimport os\n\nimport prestoadmin\nfrom prestoadmin import main\n\nfrom tests.unit.test_main import BaseMainCase\n\n\n#\n# Getting this and TestStandaloneHelp running, and subsequently running\n# successfully in the same run was a treat and a joy not to be missed.\n#\n#                                      A\n# Because the import runs way up there |, and __init__.py runs get_mode, it's\n# basically impossible to patch get_mode using the usual mechanisms; the mode\n# has long been set by the time we get to setUp or any of the tests. Instead,\n# we patch it down here, and then reload the prestoadmin module to re-execute\n# the code that calls get_mode and sets up the imports and __all__.\n#\n# The other thing to keep in mind is that the help tests end up (many levels\n# in) updating fabric.state.commands, and you need to clear it out in order for\n# the second test case to run correctly. BaseMainCase.setUp does this because\n# TestMain also ends up updating fabric.state.commands, and therefore ought to\n# clear it too.\n#\n# There's a lot of duplication between this and TestStandaloneHelp. Here are a\n# few things that don't work to remove it:\n#\n# Have a common abstract base class. Nosetests tries to instantiate it.\n# Mark the base class @nottest. 
Nosetests doesn't find the tests in the\n#     concrete classes.\n# Common non-abstract base class with additional constructor args. Nosetest\n#     will probably try to instantiate that too.\n# Multiple inheritance. Now you have two problems ;-)\n#\nclass TestSliderHelp(BaseMainCase):\n    @patch('prestoadmin.mode.get_mode', return_value='yarn_slider')\n    def setUp(self, mode_mock):\n        super(TestSliderHelp, self).setUp()\n        reload(prestoadmin)\n        reload(main)\n\n    def get_short_help_path(self):\n        return os.path.join('resources', 'slider-help.txt')\n\n    def get_extended_help_path(self):\n        return os.path.join('resources', 'slider-extended-help.txt')\n\n    def test_standalone_help_text_short(self):\n        self._run_command_compare_to_file(\n            [\"-h\"], 0, self.get_short_help_path())\n\n    def test_standalone_help_text_long(self):\n        self._run_command_compare_to_file(\n            [\"--help\"], 0, self.get_short_help_path())\n\n    def test_standalone_help_displayed_with_no_args(self):\n        self._run_command_compare_to_file(\n            [], 0, self.get_short_help_path())\n\n    def test_standalone_extended_help(self):\n        self._run_command_compare_to_file(\n            ['--extended-help'], 0, self.get_extended_help_path())\n"
  },
  {
    "path": "tox.ini",
    "content": "[testenv]\nsetenv =\n    PYTHONPATH = {toxinidir}:{toxinidir}/prestoadmin\ncommands = nosetests --with-timer --timer-ok 60s --timer-warning 300s {posargs}\ndeps =\n    -r{toxinidir}/requirements.txt\npassenv = DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH\n"
  },
  {
    "path": "util/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Modules within util must use only the standard library, because setup.py\n# may import them before dependencies are installed. If a util module imported\n# a third-party package, setup.py could raise an ImportError while installing\n# dependencies, since that package would not be installed yet.\nimport os\n\nmain_dir = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))\n\nwith open(os.path.join(main_dir, 'prestoadmin/_version.py')) as version_file:\n    __version__ = version_file.readlines()[-1].split()[-1].strip(\"\\\"'\")\n"
  },
  {
    "path": "util/http.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nModule for sending HTTP requests\n\"\"\"\nimport urllib2\n\n\ndef send_get_request(url):\n    response = None\n    try:\n        response = urllib2.urlopen(url)\n        if response.getcode() != 200:\n            exit('Get request to %s responded with status of %s' % (url, str(response.getcode())))\n        else:\n            headers = response.info()\n            contents = response.read()\n            return headers, contents\n    finally:\n        if response:\n            response.close()\n\n\ndef send_authorized_post_request(url, data, authorization_string, content_type, content_length):\n    response = None\n    try:\n        request = urllib2.Request(url, data,\n                                  {'Content-Type': '%s' % content_type,\n                                   'Content-Length': content_length,\n                                   'Authorization': 'Basic %s' % authorization_string})\n        response = urllib2.urlopen(request)\n        status = response.getcode()\n        headers = response.info()\n        contents = response.read()\n        if status != 201:\n            print headers\n            print contents\n            exit('Failed to post to %s' % url)\n    finally:\n        if response:\n            response.close()\n"
  },
  {
    "path": "util/semantic_version.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nModule for parsing and processing semantic versions\n\"\"\"\nclass SemanticVersion(object):\n    def __init__(self, version):\n        self.version = version\n        version_fields = self.version.split('.')\n        if len(version_fields) > 3:\n            exit('Version %s has more than 3 fields' % self.version)\n        self.major_version = self._get_version_field_value(version_fields, 0)\n        self.minor_version = self._get_version_field_value(version_fields, 1)\n        self.patch_version = self._get_version_field_value(version_fields, 2)\n\n    def _get_version_field_value(self, version_fields, index):\n        try:\n            return int(version_fields[index])\n        except IndexError:\n            # The field value was omitted for the version\n            return 0\n        except ValueError:\n            exit('Version %s has a non-numeric field' % self.version)\n\n    def __lt__(self, other):\n        if self.major_version == other.major_version:\n            if self.minor_version == other.minor_version:\n                return self.patch_version < other.patch_version\n            else:\n                return self.minor_version < other.minor_version\n        else:\n            return self.major_version < other.major_version\n\n    def __eq__(self, other):\n        return self.major_version == other.major_version and \\\n               self.minor_version == 
other.minor_version and \\\n               self.patch_version == other.patch_version\n\n    def __str__(self):\n        return self.version\n\n    @staticmethod\n    def _bump_version(version_field):\n        return str(int(version_field) + 1)\n\n    def _get_acceptable_major_version_bumps(self):\n        acceptable_major = self._bump_version(self.major_version)\n        return [acceptable_major,\n                acceptable_major + '.0',\n                acceptable_major + '.0.0']\n\n    def _get_acceptable_minor_version_bumps(self):\n        acceptable_minor = self._bump_version(self.minor_version)\n        return [str(self.major_version) + '.' + acceptable_minor,\n                str(self.major_version) + '.' + acceptable_minor + '.0']\n\n    def _get_acceptable_patch_version_bumps(self):\n        acceptable_patch = self._bump_version(self.patch_version)\n        return [str(self.major_version) + '.' + str(self.minor_version) + '.' + acceptable_patch]\n\n    def get_acceptable_version_bumps(self):\n        \"\"\"\n        Returns a list of strings containing all acceptable version bumps of\n        this version's major, minor, and patch semver fields. For each field\n        bump, lower fields may be omitted or set to 0. For instance, bumping\n        0.1.2's major version can result in 1, 1.0, or 1.0.0.\n        \"\"\"\n        major_bumps = self._get_acceptable_major_version_bumps()\n        minor_bumps = self._get_acceptable_minor_version_bumps()\n        patch_bumps = self._get_acceptable_patch_version_bumps()\n        return major_bumps + minor_bumps + patch_bumps\n"
  }
]