[
  {
    "path": ".gitignore",
    "content": "build/*\nchaperone.egg*\ndist/*\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## 0.3.00 (2015-10-04)\n\nThis is a major release that adds a number of important features and refinements.   Most importantly, a new automated test harness that simulates various process mixes has been added to the release process to assure that Chaperone manages processes in a consistent and reliable way from release to release.\n\nIn addition, Chaperone now recognizes `NOTIFY_SOCKET` upon start-up, and will inform the host's `systemd` of the status of the container.   This adds to Chaperone's existing support for notify-type processes within the container.  This means that container designers can choose any of a number of methods of signalling process readiness inside the container while Chaperone will translate those actions into suitable `systemd` notifications for the host.\n\nThis version is completely backward-compatible with older Chaperone versions.\n\nEnhancements:\n\n- Chaperone will recognize the `NOTIFY_SOCKET` environment variable if passed upon start-up and provide full `systemd` compatible notifications to the host.\n- The [detect_exit](http://garywiz.github.io/chaperone/ref/config-global.html#settings-detect-exit) global setting, which defaults to `true` tells Chaperone to attempt to determine when all processes have completed and automatically terminate the container.  
This was the previous default behavior, but the new setting provides flexibility for containers which remain dormant until processes are started manually.\n- There is now a `telchap shutdown` command which provides orderly container shutdown from scripts.\n- Added the `sdnotify-exec` utility which is a multi-purpose wrapper which can be used to proxy `NOTIFY_SOCKET` communication to the host, or can be used to determine if a container is properly started even outside of `systemd` contexts.\n\nRefinements:\n\n- Exit detection is now smarter about `cron` and `inetd` jobs and will not cause container exit if either of those types have scheduled operations which have not yet been triggered.\n- The [--disable-services](http://garywiz.github.io/chaperone/ref/command-line.html#option-disable-services) switch now truly disables services rather than not defining them.  Therefore, services in such containers can now be started manually.\n- Cron-type services now have better-defined behavior for `telchap stop`, which will unschedule the service, and `telchap reset`, which will merely kill the current job and reschedule another.\n- If Chaperone `notify`-type services signal with `ERRNO=n`, then Chaperone will intelligently pass this error number up to `systemd` if the error was the direct cause of container termination; otherwise it is noted in the logs and `systemd` won't find out about it.\n\n## 0.2.40 (2015-09-08)\n\nEnhancements:\n\n- Both `uid` and `gid` can be specified using the path-format of the [--create-user](http://garywiz.github.io/chaperone/ref/command-line.html#option-create-user) command-line switch.\n\nRefinements:\n\n- The `${ENV:-foo}` expansion format now behaves like `bash`, where 'foo' is the result if the variable `ENV` is undefined or null (blank).  Previously, it required that the variable be undefined.  
This behavior is now consistent throughout all expansion operators.\n- Improved the environment expansion code to handle outlying cases, as well as to be significantly more readable.  Used coverage analysis to improve unit test coverage for complex expansions involving recursion.\n\nBug fixes:\n\n- Newer versions of Python's `asyncio` (present in some distros) could hang when starting an **inetd**-style socket process.\n\n## 0.2.37 (2015-08-24)\n\nEnhancements:\n\n- Add support for **inetd**-compatible dynamic TCP socket connections.  See the [port configuration parameter](http://garywiz.github.io/chaperone/ref/config-service.html#service-port) for a complete description of this feature.\n- Added [_CHAP_SERVICE_SERIAL](http://garywiz.github.io/chaperone/ref/env.html#env-chap-service-serial) and [_CHAP_SERVICE_TIME](http://garywiz.github.io/chaperone/ref/env.html#env-chap-service-time) environment variables to provide useful information to 'cron' and 'inetd' services which may execute multiple times.\n- Added the ability to add a `gid` number to the path-based format of the [--create-user](http://garywiz.github.io/chaperone/ref/command-line.html#option-create-user) command-line switch.\n\nBug Fixes:\n\n- Fixed `telchap stop` so that it no longer causes service restarts.\n- Improved the service restart logic to handle a wider variety of service failure situations.\n\n## 0.2.31 (2015-08-11)\n\nEnhancements:\n\n- Add support for --create-user name:/path so that user identity can be based upon\n  the permissions set for a given path.  
This helps work around the file permission\n  issues under OSX/VirtualBox where you can't really modify the mounted file\n  permissions and instead \"get what you get\".\n\n## 0.2.30 (2015-08-07)\n\nEnhancements:\n\n- Add support for --archive/-a to envcp.\n\n## 0.2.29 (2015-08-05)\n\nRefinements:\n\n- Allow backslash-escaping of VBAR construct contents in environment variable\n  if-then-else constructs.\n\n## 0.2.28 (2015-08-03)\n\nRefinements:\n\n- Create a special-case syntax for shell escapes: ``$(`shell-command`)`` mainly to\n  assure that such syntaxes are properly supported instead of being expanded as a\n  side-effect.  Previously, the syntax above would treat the result of the command\n  as the name of an environment variable, and since it was not found, would insert\n  the results.   Since it was a useful trick, formalizing the use and eliminating\n  edge cases was important.\n- Disabled shell escapes by default in ``envcp`` and added the ``--shell-enable``\n  switch to enable them.\n- Added further documentation about shell escapes to clarify exactly how they\n  work and how they should be used.\n\n## 0.2.27 (2015-08-01)\n\nEnhancements:\n\n- Added documentation for ``envcp`` in the new utilities section of the documentation.\n- Enhanced environment-variable expansions so they are smart about nesting.\n- Fixed syslog receiver so that trailing newlines are stripped (programs like ``sudo``\n  and ``openvpn`` terminate their log lines this way, even though it is a questionable\n  practice).\n\n## 0.2.26 (2015-07-28)\n\nEnhancements:\n\n- Added the ``:/`` regex substitution expansion option, which provides a more extensive and useful\n  feature set than the bash-compatible options.\n- Updated the documentation to reflect the new expansion option and added a footnote about\n  bash compatibility.\n\n## 0.2.25 (2015-07-27)\n\nEnhancements:\n\n - Added the ``:?`` and ``:|`` environment variable expansion options.  
The first works similarly\n   to bash and raises an error if a variable is not defined.  The second adds more versatility to\n   expansions by allowing the expansion to depend upon the particular value of a variable.\n - Added documentation for the above.\n\n## 0.2.24 (2015-07-27)\n\nBug Fixes:\n\n - Made `setproctitle` an optional install so that `--no-install-recommends` can be used\n   on `apt-get` installs to streamline image size ([#1, @mc0e](https://github.com/garywiz/chaperone/issues/1))\n\nOther:\n\n - PyPI distribution is no longer done in \"wheel\" format, since that limits the ability\n   to include optional dependencies.  Source format is used instead.\n"
  },
  {
    "path": "LICENSE",
    "content": "Copyright (c) 2015, Gary J. Wisniewski <garyw@blueseastech.com>\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "README",
    "content": "Chaperone is a lean, full-featured top-level system manager, similar to init, systemd, and others,\nbut designed for lean container environments like Docker.  It is a single, small program which provides\nprocess clean-up, rudimentary logging, and service management without the overhead of additional\ncomplex configuration.\n\n================   ======================================================\nDocumentation      http://garywiz.github.io/chaperone\nchaperone Source   http://github.com/garywiz/chaperone\npypi link          http://pypi.python.org/pypi/chaperone\n================   ======================================================\n"
  },
  {
    "path": "README.md",
    "content": "\n# ![](https://s.gravatar.com/avatar/62c4c783c4d7233c73f3a114578df650.jpg?s=50) Chaperone\n\n[![Gitter](https://badges.gitter.im/Join_Chat.svg)](https://gitter.im/garywiz/chaperone?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![PyPI version](https://badge.fury.io/py/chaperone.svg)](https://badge.fury.io/py/chaperone)\n\nChaperone is a lean init-style startup manager for Docker-like containers.  It runs as a single lightweight full-featured process which runs at the root of a docker container tree and provides all of the following functionality, plus much more:\n\n* Monitoring for all processes in the container, automatically shutting down the\n  container when the last process exits.\n* A complete, configurable syslog facility built in and provided on /dev/log\n  so daemons and other services can have output captured.  Configurable\n  to handle log-file rotation, duplication to stdout/stderr, and full Linux\n  logging facility, severity support.  No syslog daemon is required in your\n  container.\n* The ability to start up system services in dependency order, with options\n  for per-service environment variables, restart options, and stdout/stderr capture either\n  to the log service or stdout.\n* A built-in cron scheduling service.\n* Emulation of systemd notifications (sd_notify) so services can post\n  ready and status notifications to chaperone.\n* Process monitoring and zombie elimination, along with organized system\n  shutdown to assure all daemons shut-down gracefully.\n* The ability to have an optional controlling process, specified on the \n  docker command line, to simplify creating containers which have development\n  mode vs. 
production mode.\n* Complete configuration using a ``chaperone.d`` directory which can be located\n  in various places, and even allows different configurations\n  within the container, triggered based upon which user is selected at start-up.\n* Default behavior designed out-of-the-box to work with simple Docker containers\n  for quick start-up for lean containers.\n* More...\n\nIf you want to try it out quickly, the best place to start is on the\n[chaperone-docker](https://github.com/garywiz/chaperone-docker) repository\npage.  There is a quick section called \"Try it out\" that uses images\navailable now on Docker Hub.\n\nFor full details of features\nand usage: [see the documentation](http://garywiz.github.io/chaperone/index.html).\n\nThere is some debate about whether docker containers should be transformed into\ncomplete systems (so-called \"fat containers\").  However, it is clear that many\ncontainers contain one or more services to provide a single \"composite feature\",\nbut that such containers need a special, more streamlined approach to managing\na number of common daemons.\n\nChaperone is the best answer I've come up with so far, and was inspired by\nthe [Phusion baseimage-docker](http://phusion.github.io/baseimage-docker/) approach.\nHowever, unlike the Phusion image, it does not require adding daemons for logging or\nsystem services (such as runit).  Chaperone is designed to be self-contained.\n\nStatus\n------\n\nChaperone is now stable and ready for production.  If you are currently starting up your\ncontainer services with Bash scripts, Chaperone is probably a much better choice. 
\n\nFull status is [now part of the documentation](http://garywiz.github.io/chaperone/status.html).\n\nDownloading and Installing\n--------------------------\n\nThe easiest way to install ``chaperone`` is using ``pip`` from the https://pypi.python.org/pypi/chaperone package:\n\n    # Ubuntu or Debian prerequisites...\n    apt-get install python3-pip\n\n    # chaperone installation (may be all you need)\n    pip3 install chaperone\n\nLicense\n-------\n\nCopyright (c) 2015, Gary J. Wisniewski <garyw@blueseastech.com>\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "chaperone/__init__.py",
    "content": "# Placeholder\n"
  },
  {
    "path": "chaperone/cproc/__init__.py",
    "content": "# Placeholder\n\nfrom chaperone.cproc.process_manager import TopLevelProcess\n"
  },
  {
    "path": "chaperone/cproc/client.py",
    "content": "import asyncio\n\nclass CommandClient(asyncio.Protocol):\n\n    @classmethod\n    def sendCommand(cls, cmd):\n        loop = asyncio.get_event_loop()\n        coro = loop.create_unix_connection(lambda: CommandClient(cmd, loop), path = \"/dev/chaperone.sock\")\n        (transport, protocol) = loop.run_until_complete(coro)\n        loop.run_forever()\n        loop.close()\n        return protocol.result\n\n    def __init__(self, message, loop):\n        self.message = message\n        self.loop = loop\n        self.result = None\n\n    def connection_made(self, transport):\n        transport.write(self.message.encode())\n\n    def data_received(self, data):\n        msg = data.decode()\n        lines = msg.split(\"\\n\")\n        error = None\n\n        if lines[0] in {'COMMAND-ERROR', 'RESULT'}:\n            self.result = \"\\n\".join(lines[1:])\n        else:\n            error = \"Unexpected response from chaperone: \" + str(msg)\n\n        if error:\n            raise Exception(error)\n\n    def connection_lost(self, exc):\n        self.loop.stop()\n\n"
  },
  {
    "path": "chaperone/cproc/commands.py",
    "content": "import os\nimport asyncio\nimport stat\nimport shlex\nfrom functools import partial\nfrom docopt import docopt\n\nfrom chaperone.cutil.servers import Server, ServerProtocol\nfrom chaperone.cutil.misc import maybe_remove\nfrom chaperone.cutil.logging import debug, warn, info\nimport chaperone.cutil.syslog_info as syslog_info\n\nCOMMAND_DOC = \"\"\"\nUsage: telchap status\n       telchap loglevel [<level>]\n       telchap stop [--force] [--wait] [--disable] [<servname> ...]\n       telchap start [--force] [--wait] [--enable] [<servname> ...]\n       telchap reset [--force] [--wait] [<servname> ...]\n       telchap enable [<servname> ...]\n       telchap disable [<servname> ...]\n       telchap dependencies\n       telchap shutdown [<delay>]\n\"\"\"\n\nCHAP_FIFO = \"/dev/chaperone\"\nCHAP_SOCK = \"/dev/chaperone.sock\"\n\nclass _BaseCommand(object):\n\n    command_name = \"X\"\n    interactive_only = False\n    interactive = False\n\n    def match(self, opts):\n        if isinstance(self.command_name, tuple):\n            return all(opts.get(name, False) for name in self.command_name)\n        return opts.get(self.command_name, False)\n\n    @asyncio.coroutine\n    def exec(self, opts, protocol):\n        #result = yield from self.do_exec(opts, controller)\n        #return str(result)\n        self.interactive = protocol.interactive\n        try:\n            result = yield from self.do_exec(opts, protocol.owner.controller)\n            return str(result)\n        except Exception as ex:\n            return \"Command error: \" + str(ex)\n\n\nSTMSG = \"\"\"\nRunning:           {0.version}\nUptime:            {0.uptime}\nManaged processes: {1} ({2} enabled)\n\"\"\"\n\nclass statusCommand(_BaseCommand):\n\n    command_name = \"status\"\n    interactive_only = True\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        serv = controller.services\n        msg = STMSG.format(controller, len(serv), len([s for s in serv.values() if 
s.enabled]))\n        msg += \"\\nServices:\\n\\n\" + str(serv.get_status_formatter().get_formatted_data()) + \"\\n\"\n        return msg\n\nclass dependenciesCommand(_BaseCommand):\n\n    command_name = \"dependencies\"\n    interactive_only = True\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        graph = controller.services.services_config.get_dependency_graph()\n        return \"\\n\".join(graph)\n\nclass serviceReset(_BaseCommand):\n\n    command_name = 'reset'\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        wait = opts['--wait'] and self.interactive\n        yield from controller.services.reset(opts['<servname>'], force = opts['--force'], wait = wait)\n        return \"services reset.\"\n\nclass serviceEnable(_BaseCommand):\n\n    command_name = 'enable'\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        yield from controller.services.enable(opts['<servname>'])\n        return \"services enabled.\"\n\nclass serviceDisable(_BaseCommand):\n\n    command_name = 'disable'\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        yield from controller.services.disable(opts['<servname>'])\n        return \"services disabled.\"\n\nclass serviceStart(_BaseCommand):\n\n    command_name = 'start'\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        wait = opts['--wait'] and self.interactive\n        yield from controller.services.start(opts['<servname>'], force = opts['--force'],\n                                             wait = wait,\n                                             enable = opts['--enable'])\n        if wait:\n            return \"services started.\"\n        return \"service start-up queued.\"\n\nclass serviceStop(_BaseCommand):\n\n    command_name = 'stop'\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        wait = opts['--wait'] and self.interactive\n        yield from 
controller.services.stop(opts['<servname>'], force = opts['--force'], \n                                            wait = wait,\n                                            disable = opts['--disable'])\n        if wait:\n            return \"services stopped.\"\n        return \"services stopping.\"\n\nclass loglevelCommand(_BaseCommand):\n\n    command_name = \"loglevel\"\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        lev = opts['<level>']\n        if lev is None:\n            curlev = controller.force_log_level()\n            if curlev is None:\n                return \"Forced Logging Level: NOT SET\"\n            try:\n                pri = \"*.\" + syslog_info.PRIORITY[curlev]\n            except IndexError:\n                pri = \"Forced Logging Level: UNKNOWN\"\n            return pri\n        if lev.startswith('*.'):\n            lev = lev[2:]\n        controller.force_log_level(lev)\n        return \"All logging set to include priorities >= *.\" + lev.lower()\n            \nclass shutdownCommand(_BaseCommand):\n\n    command_name = \"shutdown\"\n\n    @asyncio.coroutine\n    def do_exec(self, opts, controller):\n        delay = opts['<delay>']\n\n        if delay is None or delay.lower() == \"now\":\n            delay = 0.1\n            message = \"Shutting down now\"\n        else:\n            try:\n                delay = float(delay)\n            except ValueError:\n                return \"Specified delay is not a valid decimal number: \" + str(delay)\n            message = \"Shutting down in {0} seconds\".format(delay)\n\n        info(\"requested shutdown scheduled to occur in {0} seconds\".format(delay))\n        asyncio.get_event_loop().call_later(delay, controller.kill_system)\n\n        return message\n            \n##\n## Register all commands here\n##\n\nCOMMANDS = (\n    loglevelCommand(),\n    shutdownCommand(),\n    statusCommand(),\n    serviceStop(),\n    serviceStart(),\n    serviceReset(),\n    
serviceEnable(),\n    serviceDisable(),\n    dependenciesCommand(),\n)\n\nclass CommandProtocol(ServerProtocol):\n\n    interactive = False\n\n    @asyncio.coroutine\n    def _interpret_command(self, msg):\n        if not msg:\n            return\n        try:\n            options = docopt(COMMAND_DOC, shlex.split(msg), help=False)\n        except Exception as ex:\n            result = \"EXCEPTION\\n\" + str(ex)\n        except SystemExit as ex:\n            result = \"COMMAND-ERROR\\n\" + str(ex)\n        else:\n            result = \"?\"\n            for c in COMMANDS:\n                if c.match(options) and (not c.interactive_only or self.interactive):\n                    result = yield from c.exec(options, self)\n                    break\n            result = \"RESULT\\n\" + result\n        return result\n\n    @asyncio.coroutine\n    def _command_task(self, cmd, interactive = False):\n        result = yield from self._interpret_command(cmd)\n        if interactive:\n            self.transport.write(result.encode())\n            self.transport.close()\n\n    def data_received(self, data):\n        if self.interactive:\n            asyncio.async(self._command_task(data.decode(), True))\n        else:\n            commands = data.decode().split(\"\\n\")\n            for c in commands:\n                asyncio.async(self._command_task(c))\n\nclass _InteractiveServer(Server):\n\n    def _create_server(self):\n        maybe_remove(CHAP_SOCK)\n        return asyncio.get_event_loop().create_unix_server(CommandProtocol.buildProtocol(self, interactive=True), \n                                                           path=CHAP_SOCK)\n\n    @asyncio.coroutine\n    def server_running(self):\n        os.chmod(CHAP_SOCK, 0o777)\n\n    def close(self):\n        super().close()\n        maybe_remove(CHAP_SOCK)\n\n\nclass CommandServer(Server):\n\n    controller = None\n    _fifoname = None\n    _iserve = None\n\n    def __init__(self, controller, filename = CHAP_FIFO, 
**kwargs):\n        \"\"\"\n        Creates a new command FIFO and socket.  The controller is the object to which commands and interactions\n        will occur, usually a chaperone.cproc.process_manager.TopLevelProcess.\n        \"\"\"\n        super().__init__(**kwargs)\n\n        self.controller = controller\n        self._fifoname = filename\n\n    @asyncio.coroutine\n    def server_running(self):\n        self._iserve = _InteractiveServer()\n        self._iserve.controller = self.controller # share this with our domain socket\n        yield from self._iserve.run()\n\n    def _open(self):\n        name = self._fifoname\n\n        maybe_remove(name)\n        if not os.path.exists(name):\n            os.mkfifo(name)\n\n        if not stat.S_ISFIFO(os.stat(name).st_mode):\n            raise TypeError(\"File is not a fifo: \" + str(name))\n\n        os.chmod(name, 0o777)\n\n        return open(os.open(name, os.O_RDWR|os.O_NONBLOCK))\n            \n    def _create_server(self):\n        return asyncio.get_event_loop().connect_read_pipe(CommandProtocol.buildProtocol(self), self._open())\n\n    def close(self):\n        super().close()\n        maybe_remove(CHAP_FIFO)\n        if self._iserve:\n            self._iserve.close()\n\n"
  },
  {
    "path": "chaperone/cproc/process_manager.py",
    "content": "import os\nimport pwd\nimport errno\nimport asyncio\nimport shlex\nimport signal\nimport datetime\n\nfrom functools import partial\nfrom time import time, sleep\n\nimport chaperone.cutil.syslog_info as syslog_info\n\nfrom chaperone.cproc.commands import CommandServer\nfrom chaperone.cproc.version import DISPLAY_VERSION\nfrom chaperone.cproc.watcher import InitChildWatcher\nfrom chaperone.cproc.subproc import SubProcess, SubProcessFamily\nfrom chaperone.cutil.config import ServiceConfig\nfrom chaperone.cutil.env import Environment\nfrom chaperone.cutil.notify import NotifySink\nfrom chaperone.cutil.logging import warn, info, debug, error, set_log_level\nfrom chaperone.cutil.misc import lazydict, objectplus\nfrom chaperone.cutil.syslog import SyslogServer\nfrom chaperone.cutil.errors import get_errno_from_exception\n\nclass CustomEventLoop(asyncio.SelectorEventLoop):\n    def _make_socket_transport(self, sock, protocol, waiter=None, *,\n                               extra=None, server=None):\n        \"\"\"\n        Supports a special protocol method 'acquire_socket' which acceps only a socket.\n        If it returns True, then the passed socket has been detached and no further\n        action will be taken.  
This is to support inetd-style processes.\n        \"\"\"\n        if hasattr(protocol, 'acquire_socket') and protocol.acquire_socket(sock):\n            if waiter:\n                waiter.set_result(None)\n            return None\n        return super()._make_socket_transport(sock, protocol, waiter, extra=extra, server=server)\n\nasyncio.DefaultEventLoopPolicy._loop_factory = CustomEventLoop\n\n\nclass TopLevelProcess(objectplus):\n             \n    send_sighup = False\n    detect_exit = True\n\n    _shutdown_timeout = None\n    _ignore_signals = False\n    _services_started = False\n    _syslog = None\n    _command = None\n    _minimum_syslog_level = None\n    _start_time = None\n    _status_interval = None\n    _family = None\n    _exitcode = None\n\n    _all_killed = False\n    _killing_system = False\n    _kill_future = None\n    _config = None\n    _pending = None\n\n    _notify_enabled = False\n    notify = None\n\n    def __init__(self, config):\n        self._config = config\n        self._start_time = time()\n        self._pending = set()\n\n        self.notify = NotifySink() # whether or not we actually have a notify socket\n\n        # wait at least 0.5 seconds, zero is totally pointless\n        settings = config.get_settings()\n        self._shutdown_timeout = settings.get('shutdown_timeout', 8) or 0.5\n\n        self.detect_exit = settings.get('detect_exit', True)\n        self.enable_syslog = settings.get('enable_syslog', True)\n\n        policy = asyncio.get_event_loop_policy()\n        w = self._watcher = InitChildWatcher(onNoProcesses = self._queue_no_processes)\n        policy.set_child_watcher(w)\n        self.loop.add_signal_handler(signal.SIGTERM, self.kill_system)\n        self.loop.add_signal_handler(signal.SIGINT, self._got_sigint)\n\n        self._status_interval = settings.get('status_interval', 30)\n\n    @property\n    def debug(self):\n        return asyncio.get_event_loop().get_debug()\n    @debug.setter\n    def debug(self, val):\n 
       asyncio.get_event_loop().set_debug(val)\n\n    @property\n    def loop(self):\n        return asyncio.get_event_loop()\n\n    @property\n    def system_alive(self):\n        \"\"\"\n        Returns true if the system is considered \"alive\" and new processes, restarts, and other\n        normal operations should proceed.   Generally, the system is alive until it is killed,\n        but the process of shutting down the system may be complex and time consuming, and\n        in the future there may be other factors which cause us to suspend\n        normal system operation.\n        \"\"\"\n        return not self._killing_system\n\n    @property\n    def version(self):\n        \"Returns version identifier\"\n        return \"chaperone version {0}\".format(DISPLAY_VERSION)\n\n    @property\n    def uptime(self):\n        return datetime.timedelta(seconds = time() - self._start_time)\n\n    @property\n    def services(self):\n        return self._family\n\n    def force_log_level(self, level = None):\n        \"\"\"\n        Specifies the *minimum* logging level that will be applied to all syslog entries.\n        This is primarily useful for debugging, where you want to override any limitations\n        imposed on log file entries.\n\n        As a (convenient) side-effect, if the level is DEBUG, then debug features of both\n        asyncio as well as chaperone will be enabled.\n\n        If level is not provided, then returns the current setting.\n        \"\"\"\n        if level is None:\n            return self._minimum_syslog_level\n\n        levid = syslog_info.PRIORITY_DICT.get(level.lower(), None)\n        if not levid:\n            raise Exception(\"Not a valid log level: {0}\".format(level))\n        set_log_level(levid)\n        self._minimum_syslog_level = levid\n        self.debug = (levid == syslog_info.LOG_DEBUG)\n        if self._syslog:\n            self._syslog.reset_minimum_priority(levid)\n        info(\"Forcing all log output to '{0}' or 
greater\", level)\n\n    def _queue_no_processes(self):\n        # Any output from dead processes won't get queued into the logs if we\n        # don't return to the event loop.\n        self.loop.call_later(0.05, self._no_processes)\n\n    def _no_processes(self, ignore_service_state = False):\n        if not (ignore_service_state or self._services_started):\n            return    # do not react during system initialization\n\n        self._all_killed = True\n\n        if not self._killing_system:\n            if not self.detect_exit:\n                return\n            if self._family:\n                ss = self._family.get_scheduled_services()\n                if ss:\n                    warn(\"system will remain active since there are scheduled services: \" + \", \".join(s.name for s in ss))\n                    return\n\n        # Passed all checks, now kill system\n\n        self.notify.stopping()\n\n        debug(\"Final termination phase.\")\n\n        self._services_started = False\n        if self._kill_future and not self._kill_future.cancelled():\n            self._kill_future.cancel()\n        self.activate(self._final_system_stop())\n\n    @asyncio.coroutine\n    def _final_system_stop(self):\n        yield from asyncio.sleep(0.1)\n        if self._syslog:\n            self._syslog.close()\n        if self._command:\n            self._command.close()\n\n        self._cancel_pending()\n        self.loop.stop()\n\n    def _got_sigint(self):\n        print(\"\\nCtrl-C ... killing chaperone.\")\n        self.kill_system(4, True)\n        \n    def signal_ready(self):\n        \"\"\"\n        Tells any notify listener that the system is ready.  
Does nothing if the system\n        is dying due to errors, or if a kill is in progress.\n        \"\"\"\n        if not self._services_started or self._killing_system:\n            return\n        self.notify.ready()\n\n        # This is the time to set up the status monitor\n\n        if self._status_interval and self._family and self._notify_enabled:\n            self.activate(self._report_status())\n\n    @asyncio.coroutine\n    def _report_status(self):\n        while self._status_interval:\n            if self._family:\n                self.notify.status(self._family.get_status())\n                yield from asyncio.sleep(self._status_interval)\n\n    def kill_system(self, errno = None, force = False):\n        \"\"\"\n        Systematically shuts down the system.  With the 'force' argument set to true,\n        does so even if a kill is already in progress.\n        \"\"\"\n        if force:\n            self._services_started = True\n        elif self._killing_system:\n            return\n\n        if self._exitcode is None and errno is not None:\n            self._exitcode = 1   # default exit for an error\n            self.notify.error(errno)\n\n        warn(\"Request made to kill system.\" + ((force and \" (forced)\") or \"\"))\n        self._killing_system = True\n        self._kill_future = asyncio.async(self._kill_system_co())\n\n    def _cancel_pending(self):\n        \"Cancel any pending activated tasks\"\n\n        for p in list(self._pending):\n            if not p.cancelled():\n                p.cancel()\n\n    @asyncio.coroutine\n    def _kill_system_co(self):\n\n        self.notify.stopping()\n\n        self._cancel_pending()\n\n        # Tell the family it's been nice.  
It's rare to have no process family, but\n        # it's optional, so we handle that case as well.\n\n        wait_done = False       # indicates if shutdown_timeout has expired\n\n        if self._family:\n            for f in self._family.values():\n                yield from f.final_stop()\n            # let normal shutdown happen\n            if self._watcher.number_of_waiters > 0 and self._shutdown_timeout:\n                debug(\"still have {0} waiting, sleeping for shutdown_timeout={1}\".format(self._watcher.number_of_waiters, self._shutdown_timeout))\n                yield from asyncio.sleep(self._shutdown_timeout)\n                wait_done = True\n\n        try:\n            os.kill(-1, signal.SIGTERM) # first, try SIGTERM\n            if self.send_sighup:\n                os.kill(-1, signal.SIGHUP)\n        except ProcessLookupError:\n            debug(\"No processes remain when attempting to kill system, just stop.\")\n            self._no_processes(True)\n            return\n\n        if wait_done:                   # give a short wait just so the signals fire\n            yield from asyncio.sleep(1) # these processes are unknowns\n        else:\n            yield from asyncio.sleep(self._shutdown_timeout)\n            \n        if self._all_killed:\n            return\n\n        info(\"Some processes remain after {0} second(s).  
Forcing kill\".format(self._shutdown_timeout))\n\n        try:\n            os.kill(-1, signal.SIGKILL)\n        except ProcessLookupError:\n            debug(\"No processes when attempting to force quit\")\n            self._no_processes(True)\n            return\n\n    def activate_result(self, future):\n        self._pending.discard(future)\n\n    def activate(self, cr):\n       future = asyncio.async(cr)\n       future.add_done_callback(self.activate_result)\n       self._pending.add(future)\n       return future\n\n    def _system_coro_check(self, f):\n        if f.exception():\n            error(\"system startup cancelled due to error: {0}\".format(f.exception()))\n            self.kill_system(get_errno_from_exception(f.exception()))\n\n    def _system_started(self, startup, future=None):\n        if future and not future.cancelled() and future.exception():\n            self._system_coro_check(future)\n            return\n        info(self.version + \", ready.\")\n        if startup:\n            future = self.activate(startup)\n            future.add_done_callback(self._system_coro_check)\n\n    @asyncio.coroutine\n    def _start_system_services(self):\n\n        self._notify_enabled = yield from self.notify.connect()\n\n        if self.enable_syslog:\n            self._syslog = SyslogServer()\n            self._syslog.configure(self._config, self._minimum_syslog_level)\n\n            try:\n                yield from self._syslog.run()\n            except PermissionError as ex:\n                self._syslog = None\n                warn(\"syslog service cannot be started: {0}\", ex)\n            else:\n                self._syslog.capture_python_logging()\n                info(\"Switching all chaperone logging to /dev/log\")\n\n        self._command = CommandServer(self)\n\n        try:\n            yield from self._command.run()\n        except PermissionError as ex:\n            self._command = None\n            warn(\"command service cannot be started: 
{0}\", ex)\n\n    def run_event_loop(self, startup_coro = None, exit_when_done = True):\n        \"\"\"\n        Sets up the event loop and runs it, setting up basic services such as syslog\n        as well as the command services sockets.   Then, calls the startup coroutine (if any)\n        to tailor the environment and start up other services as needed.\n        \"\"\"\n\n        initfuture = asyncio.async(self._start_system_services())\n        initfuture.add_done_callback(lambda f: self._system_started(startup_coro, f))\n\n        self.loop.run_forever()\n        self.loop.close()\n\n        if exit_when_done:\n            exit(self._exitcode or 0)\n\n    @asyncio.coroutine\n    def run_services(self, extra_services, disable_others = False):\n        \"Run all services.\"\n\n        # First, determine our overall configuration for the services environment.\n\n        services = self._config.get_services()\n\n        if extra_services:\n            services = services.deepcopy()\n            if disable_others:\n                for s in services.values():\n                    s.enabled = False\n            for s in extra_services:\n                services.add(s)\n\n        family = self._family = SubProcessFamily(self, services)\n        tried_any = False\n        errno = None\n\n        try:\n            tried_any = yield from family.run()\n        except asyncio.CancelledError:\n            pass\n        finally:\n            self._services_started = True\n\n        if self.detect_exit:\n            if not tried_any:\n                warn(\"No service startups attempted (all disabled?) - exiting due to 'detect_exit=true'\")\n                self.kill_system()\n            else:\n                self._watcher.check_processes()\n"
  },
  {
    "path": "chaperone/cproc/pt/__init__.py",
    "content": "# Placeholder\n\nfrom chaperone.cproc.process_manager import TopLevelProcess\n"
  },
  {
    "path": "chaperone/cproc/pt/cron.py",
    "content": "import asyncio\nfrom aiocron import crontab\nfrom chaperone.cutil.logging import error, warn, debug, info\nfrom chaperone.cutil.syslog_info import LOG_CRON\nfrom chaperone.cproc.subproc import SubProcess\nfrom chaperone.cutil.errors import ChParameterError\n\n_CRON_SPECIALS = {\n    '@yearly':      '0 0 1 1 *',\n    '@annually':    '0 0 1 1 *',\n    '@monthly':     '0 0 1 * *',\n    '@weekly':      '0 0 * * 0',\n    '@daily':       '0 0 * * *',\n    '@hourly':      '0 * * * *',\n}\n\nclass CronProcess(SubProcess):\n\n    syslog_facility = LOG_CRON\n\n    _cron = None\n    _fut_monitor = None\n\n    def __init__(self, service, family=None):\n        super().__init__(service, family)\n        if not self.interval:\n            raise ChParameterError(\"interval= property missing, required for cron service '{0}'\".format(self.name))\n\n        # Support specials with or without the @\n        real_interval = _CRON_SPECIALS.get(self.interval) or _CRON_SPECIALS.get('@'+self.interval) or self.interval\n\n        # make a status note\n        self.note = \"{0} ({1})\".format(self.interval, real_interval) if self.interval != real_interval else real_interval\n\n        self._cron = crontab(real_interval, func=self._cron_hit, start=False)\n\n    def default_status(self):\n        if self._cron.handle:\n            return 'waiting'\n        return None\n\n    @property\n    def scheduled(self):\n        return self._cron and self._cron.handle\n        \n    @asyncio.coroutine\n    def start(self):\n        \"\"\"\n        Takes over startup and sets up our cron loop to handle starts instead.\n        \"\"\"\n        if not self.enabled or self._cron.handle:\n            return\n\n        self.start_attempted = True\n\n        # Start up cron\n        try:\n            self._cron.start()\n        except Exception:\n            raise ChParameterError(\"not a valid cron interval specification, '{0}'\".format(self.interval))\n\n        self.loginfo(\"cron service 
{0} scheduled using interval spec '{1}'\".format(self.name, self.interval))\n\n    @asyncio.coroutine\n    def _cron_hit(self):\n        if self.enabled:\n            if not self.family.system_alive:\n                return\n            if self.running:\n                self.logwarn(\"cron service {0} still running when the next interval expired; skipping this run\", self.name)\n            else:\n                self.loginfo(\"cron service {0} running CMD ( {1} )\", self.name, self.command)\n                try:\n                    yield from super().start()\n                except Exception as ex:\n                    self.logerror(ex, \"cron service {0} failed to start: {1}\", self.name, ex)\n                    yield from self.reset()\n\n    @property\n    def stoppable(self):\n        return self.scheduled\n\n    @asyncio.coroutine\n    def stop(self):\n        self._cron.stop()\n        yield from super().stop()\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        if self._fut_monitor and not self._fut_monitor.cancelled():\n            self._fut_monitor.cancel()\n            self._fut_monitor = None\n\n        # We have a successful start.  Monitor this service.\n\n        self._fut_monitor = asyncio.async(self._monitor_service())\n        self.add_pending(self._fut_monitor)\n\n    @asyncio.coroutine\n    def _monitor_service(self):\n        result = yield from self.wait()\n        if isinstance(result, int) and result > 0:\n            yield from self._abnormal_exit(result)\n        else:\n            yield from self.reset()\n"
  },
  {
    "path": "chaperone/cproc/pt/forking.py",
    "content": "import asyncio\nfrom chaperone.cproc.subproc import SubProcess\nfrom chaperone.cutil.errors import ChProcessError\n\nclass ForkingProcess(SubProcess):\n\n    defer_exit_kills = True\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        result = yield from self.timed_wait(self.process_timeout, self._exit_timeout)\n        if result is not None and not result.normal_exit:\n            if self.ignore_failures:\n                self.logwarn(\"{0} (ignored) failure on start-up with result '{1}'\".format(self.name, result))\n            else:\n                raise ChProcessError(\"{0} failed on start-up with result '{1}'\".format(self.name, result), resultcode = result)\n        yield from self.wait_for_pidfile()\n        \n    def _exit_timeout(self):\n        service = self.service\n        message = \"forking service '{1}' did not exit after {2} second(s), {3}\".format(\n            service.type,\n            service.name, self.process_timeout, \n            \"proceeding due to 'ignore_failures=True'\" if service.ignore_failures else\n            \"terminating due to 'ignore_failures=False'\")\n        if not service.ignore_failures:\n            self.terminate()\n        raise Exception(message)\n"
  },
  {
    "path": "chaperone/cproc/pt/inetd.py",
    "content": "import os\nimport asyncio\nfrom copy import copy\nfrom chaperone.cutil.logging import error, warn, debug, info\nfrom chaperone.cproc.subproc import SubProcess\nfrom chaperone.cutil.syslog_info import LOG_DAEMON\nfrom chaperone.cutil.errors import ChParameterError\nfrom chaperone.cutil.servers import Server, ServerProtocol\n\nclass InetdServiceProtocol(ServerProtocol):\n\n    _fd = None\n\n    def acquire_socket(self, sock):\n        # Prepare the socket so it's inheritable\n        sock.setblocking(True)\n        self._fd = sock.detach()\n        sock.close()\n\n        future = asyncio.async(self.start_socket_process(self._fd))\n        future.add_done_callback(self._done)\n\n        self.process.counter += 1\n\n        return True\n\n    def _done(self, f):\n        # Close the socket regardless\n        if self._fd is not None:\n            os.close(self._fd)\n\n    @asyncio.coroutine\n    def start_socket_process(self, fd):\n        process = self.process\n        service = process.service\n\n        if not process.family.system_alive:\n            process.logdebug(\"{0} received connection on port {1}; ignored, system no longer alive\".format(service.name, service.port))\n            return\n\n        process.logdebug(\"{0} received connection on port {2}; attempting start '{1}'... 
\".format(service.name, \" \".join(service.exec_args),\n                         service.port))\n\n        kwargs = {'stdout': fd,\n                  'stderr': fd,\n                  'stdin': fd}\n\n        if service.directory:\n            kwargs['cwd'] = service.directory\n\n        env = process.get_expanded_environment().get_public_environment()\n\n        if service.debug:\n            if not env:\n                process.logdebug(\"{0} environment is empty\", service.name)\n            else:\n                process.logdebug(\"{0} environment:\", service.name)\n                for k,v in env.items():\n                    process.logdebug(\" {0} = '{1}'\".format(k,v))\n\n        create = asyncio.create_subprocess_exec(*service.exec_args, preexec_fn=process._setup_subprocess,\n                                                env=env, **kwargs)\n\n        proc = self._proc = yield from create\n        self.pid = proc.pid\n\n        process.logdebug(\"{0} instance connected to port {1}\", service.name, service.port)\n\n        process.add_process(proc)\n        yield from proc.wait()\n        process.remove_process(proc)\n\n        if not proc.returncode.normal_exit:\n            self.logerror(\"{2} exit status for pid={0} is '{1}'\".format(proc.pid, proc.returncode, service.name))\n\n\nclass InetdService(Server):\n    \n    def __init__(self, process):\n        super().__init__()\n        self.process = process\n\n    def _create_server(self):\n        return asyncio.get_event_loop().create_server(InetdServiceProtocol.buildProtocol(self, process=self.process),\n                                                      '0.0.0.0',\n                                                      self.process.port)\n\nclass InetdProcess(SubProcess):\n\n    syslog_facility = LOG_DAEMON\n    server = None\n    counter = 0\n\n    def __init__(self, service, family=None):\n        super().__init__(service, family)\n        self._proclist = set()\n\n        if not service.port:\n      
      raise ChParameterError(\"inetd-type service {0} requires 'port=' parameter\".format(self.name))\n\n    def add_process(self, proc):\n        self._proclist.add(proc)\n\n    def remove_process(self, proc):\n        self._proclist.discard(proc)\n\n    @property\n    def scheduled(self):\n        return self.server is not None\n\n    @property\n    def note(self):\n        if self.server:\n            msg = \"waiting on port \" + str(self.port)\n            if self.counter:\n                msg += \"; req recvd = \" + str(self.counter)\n            if len(self._proclist):\n                msg += \"; running = \" + str(len(self._proclist))\n            return msg\n\n    @asyncio.coroutine\n    def start_subprocess(self):\n        \"\"\"\n        Takes over process startup and sets up our own server socket.\n        \"\"\"\n        \n        self.server = InetdService(self)\n        yield from self.server.run()\n\n        self.loginfo(\"inetd service {0} listening on port {1}\".format(self.name, self.port))\n\n    @asyncio.coroutine\n    def reset(self, dependents = False, enable = False, restarts_ok = False):\n        if self.server:\n            self.server.close()\n            self.server = None\n        plist = copy(self._proclist)\n        if plist:\n            self.logwarn(\"{0} terminating {1} processes on port {2} that are still running\".format(self.name, len(plist), self.port))\n            for p in plist:\n                p.terminate()\n        yield from super().reset(dependents, enable, restarts_ok)\n\n    @asyncio.coroutine\n    def final_stop(self):\n        yield from self.reset()\n"
  },
  {
    "path": "chaperone/cproc/pt/notify.py",
    "content": "import asyncio\nimport socket\nimport re\nfrom functools import partial\n\nfrom chaperone.cutil.errors import ChProcessError\nfrom chaperone.cutil.proc import ProcStatus\nfrom chaperone.cutil.notify import NotifyListener\nfrom chaperone.cproc.subproc import SubProcess\n\nclass NotifyProcess(SubProcess):\n\n    process_timeout = 300\n    defer_exit_kills = True\n\n    _fut_monitor = None\n    _listener = None\n    _ready_event = None\n    \n    def _close_listener(self):\n        if self._listener:\n            self._listener.close()\n            self._listener = None\n\n    @asyncio.coroutine\n    def process_prepare_co(self, environ):\n        if not self._listener:\n            self._listener = NotifyListener('@/chaperone/' + self.service.name,\n                                            onNotify = self._notify_received)\n            yield from self._listener.run()\n\n        environ['NOTIFY_SOCKET'] = self._listener.socket_name\n\n        # Now, set up an event which is triggered upon ready\n        self._ready_event = asyncio.Event()\n\n    def _notify_timeout(self):\n        service = self.service\n        message = \"notify service '{1}' did not receive ready notification after {2} second(s), {3}\".format(\n            service.type,\n            service.name, self.process_timeout, \n            \"proceeding due to 'ignore_failures=True'\" if service.ignore_failures else\n            \"terminating due to 'ignore_failures=False'\")\n        if not service.ignore_failures:\n            self.terminate()\n        raise ChProcessError(message)\n\n    @asyncio.coroutine\n    def reset(self, dependents = False, enable = False, restarts_ok = False):\n        yield from super().reset(dependents, enable, restarts_ok)\n        self._close_listener()\n\n    @asyncio.coroutine\n    def final_stop(self):\n        yield from super().final_stop()\n        self._close_listener()\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        if 
self._fut_monitor and not self._fut_monitor.cancelled():\n            self._fut_monitor.cancel()\n            self._fut_monitor = None\n\n        yield from self.do_startup_pause()\n\n        self._fut_monitor = asyncio.async(self._monitor_service())\n        self.add_pending(self._fut_monitor)\n\n        if self._ready_event:\n            try:\n                if not self.process_timeout:\n                    raise asyncio.TimeoutError()\n                yield from asyncio.wait_for(self._ready_event.wait(), self.process_timeout)\n            except asyncio.TimeoutError:\n                self._ready_event = None\n                self._notify_timeout()\n            else:\n                if self._ready_event:\n                    self._ready_event = None\n                    rc = self.returncode\n                    if rc is not None and not rc.normal_exit:\n                        if self.ignore_failures:\n                            self.logwarn(\"{0} (ignored) failure on start-up with result '{1}'\".format(self.name, rc))\n                        else:\n                            raise ChProcessError(\"{0} failed with reported error {1}\".format(self.name, rc), resultcode = rc)\n\n    @asyncio.coroutine\n    def _monitor_service(self):\n        \"\"\"\n        We only care about errors here.  
The rest is handled by the notification\n        callbacks.\n        \"\"\"\n        result = yield from self.wait()\n        if isinstance(result, int) and result > 0:\n            self._setready()    # simulate ready\n            self._ready_event = None\n            self._close_listener()\n            yield from self._abnormal_exit(result)\n            \n    def _notify_received(self, which, var, value):\n        callfunc = getattr(self, \"notify_\" + var.upper(), None)\n        if callfunc:\n            callfunc(value)\n\n    def _setready(self):\n        if self._ready_event:\n            self._ready_event.set()\n            return True\n        return False\n\n    def notify_MAINPID(self, value):\n        try:\n            pid = int(value)\n        except ValueError:\n            self.logdebug(\"{0} got MAINPID={1}, but not a valid pid#\", self.name, value)\n            return\n        self.pid = pid\n\n    def notify_BUSERROR(self, value):\n        code = ProcStatus(value)\n        if not self._setready():\n            self.process_exit(code)\n        else:\n            self.returncode = code\n\n    def notify_ERRNO(self, value):\n        try:\n            intval = int(value)\n        except ValueError:\n            self.logdebug(\"{0} got ERRNO={1}, not a valid error code\", self.name, value)\n            return\n        code = ProcStatus(intval << 8)\n        if not self._setready():\n            self.process_exit(code)\n        else:\n            self.returncode = code\n\n    def notify_READY(self, value):\n        if value == \"1\":\n            self._setready()\n\n    def notify_STATUS(self, value):\n        self.note = value\n\n    @property\n    def status(self):\n        if self._ready_event:\n            return \"activating\"\n        return super().status\n"
  },
  {
    "path": "chaperone/cproc/pt/oneshot.py",
    "content": "import asyncio\nfrom chaperone.cproc.subproc import SubProcess\nfrom chaperone.cutil.errors import ChProcessError\n\nclass OneshotProcess(SubProcess):\n\n    process_timeout = 60.0       # default timeout for a oneshot is 60 seconds\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        result = yield from self.timed_wait(self.process_timeout, self._exit_timeout)\n        if result is not None and not result.normal_exit:\n            if self.ignore_failures:\n                self.logwarn(\"{0} (ignored) failure on start-up with result '{1}'\".format(self.name, result))\n            else:\n                raise ChProcessError(\"{0} failed on start-up with result '{1}'\".format(self.name, result), resultcode = result)\n        \n    def _exit_timeout(self):\n        service = self.service\n        message = \"{0} service '{1}' did not exit after {2} second(s), {3}\".format(\n            service.type,\n            service.name, self.process_timeout, \n            \"proceeding due to 'ignore_failures=True'\" if service.ignore_failures else\n            \"terminating due to 'ignore_failures=False'\")\n        if not service.ignore_failures:\n            self.terminate()\n        raise Exception(message)\n"
  },
  {
    "path": "chaperone/cproc/pt/simple.py",
    "content": "import asyncio\nfrom chaperone.cproc.subproc import SubProcess\n\nclass SimpleProcess(SubProcess):\n\n    _fut_monitor = None\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        if self._fut_monitor and not self._fut_monitor.cancelled():\n            self._fut_monitor.cancel()\n            self._fut_monitor = None\n\n        # We wait a short time just to see if the process errors out immediately.  This avoids a retry loop\n        # and catches any immediate failures now.\n\n        yield from self.do_startup_pause()\n\n        # If there is a pidfile, sit here and wait for a bit\n        yield from self.wait_for_pidfile()\n\n        # We have a successful start.  Monitor this service.\n\n        self._fut_monitor = asyncio.async(self._monitor_service())\n        self.add_pending(self._fut_monitor)\n\n    @asyncio.coroutine\n    def _monitor_service(self):\n        result = yield from self.wait()\n        if isinstance(result, int) and result > 0:\n            yield from self._abnormal_exit(result)\n"
  },
  {
    "path": "chaperone/cproc/subproc.py",
    "content": "import os\nimport asyncio\nimport shlex\nimport importlib\nimport signal\nimport errno\nfrom functools import partial\n\nfrom time import time, sleep\n\nimport chaperone.cutil.syslog_info as syslog_info\n\nfrom chaperone.cutil.env import Environment, ENV_SERIAL, ENV_SERVTIME\nfrom chaperone.cutil.logging import warn, info, debug, error\nfrom chaperone.cutil.proc import ProcStatus\nfrom chaperone.cutil.misc import lazydict, lookup_user, get_signal_name, executable_path\nfrom chaperone.cutil.errors import ChNotFoundError, ChProcessError, ChParameterError\nfrom chaperone.cutil.format import TableFormatter\n\n@asyncio.coroutine\ndef _process_logger(stream, kind, service):\n    name = service.name.replace('.service', '')\n    while True:\n        data = yield from stream.readline()\n        if not data:\n            return\n        line = data.decode('ascii', 'ignore').rstrip()\n        if not line:\n            continue            # ignore blank lines in stdout/stderr\n        if kind == 'stderr':\n            # we map to warning because stderr output is \"to be considered\" and not strictly\n            # erroneous\n            warn(line, program=name, pid=service.pid, facility=syslog_info.LOG_DAEMON)\n        else:\n            info(line, program=name, pid=service.pid, facility=syslog_info.LOG_DAEMON)\n\n\nclass SubProcess(object):\n\n    service = None              # service object\n    family = None\n    process_timeout = 30.0      # process_timeout will be set to this unless it is overridden by \n                                # the service entry\n    syslog_facility = None      # specifies any additional syslog facility to use when using\n                                # logerror, logdebug, logwarn, etc...\n    start_attempted = False     # used to determine if a service is truly dormant\n    defer_exit_kills = False    # if true, then exit_kills will wait until a proper PID is returned\n                                # from a subprocess, then 
will kill when the real process exits\n    error_count = 0             # counts errors for informational purposes\n\n    _proc = None\n    _pid = None                 # the pid, often associated with _proc, but not necessarily in the\n                                # case of notify processes\n    _returncode = None          # an alternate returncode, set with returncode property\n    _exit_event = None          # an event to be fired if an exit occurs, in the case of an\n                                # attached PID\n    _orig_executable = None     # original unexpanded exec_args[0]\n\n    _pwrec = None               # the pwrec looked up for execution user/group\n    _cond_starting = None       # a condition which, if present, indicates that this service is starting\n    _cond_exception = None      # exception which was raised during startup (for other waiters)\n\n    _started = False            # true if a start has occurred, either successful or not\n    _restarts_allowed = None    # number of starts permitted before we give up (if None then restarts allowed according to service def)\n    _prereq_cache = None\n    _procenv = None             # process environment ready to be expanded\n\n    _pending = None             # pending futures\n    _note = None\n\n    # Class variables\n    _cls_ptdict = lazydict()    # dictionary of process types\n    _cls_serial = 0             # serial number for process creation\n\n    def __new__(cls, service, family=None):\n        \"\"\"\n        New Subprocesses are managed by subclasses derived from SubProcess so that \n        complex process behavior can be isolated and loaded only when needed.  
That\n        keeps this basic superclass logic less convoluted.\n        \"\"\"\n        # If we are trying to create a subclass, just inherit __new__ simply\n        if cls is not SubProcess:\n            return super(SubProcess, cls).__new__(cls)\n\n        # Lookup and cache the class object used to create this type.\n        stype = service.type\n        ptcls = SubProcess._cls_ptdict.get(stype)\n        if not ptcls:\n            mod = importlib.import_module('chaperone.cproc.pt.' + stype)\n            ptcls = SubProcess._cls_ptdict[stype] = getattr(mod, stype.capitalize() + 'Process')\n            assert issubclass(ptcls, cls)\n\n        return ptcls(service, family)\n            \n    def __init__(self, service, family=None):\n\n        self.service = service\n        self.family = family\n\n        self._pending = set()\n\n        if service.process_timeout is not None:\n            self.process_timeout = service.process_timeout\n\n        if not service.environment:\n            self._procenv = Environment()\n        else:\n            self._procenv = service.environment\n\n        if not service.exec_args:\n            raise ChParameterError(\"No command or arguments provided for service\")\n\n        # If the service is enabled, assure we check for the presence of the executable now.  
This is\n        # to catch any start-up situations (such as cron jobs without their executables being present).\n        # However, we don't check this if a service is disabled.\n\n        self._orig_executable = service.exec_args[0]\n\n        if service.enabled:\n            self._try_to_enable()\n\n    def __getattr__(self, name):\n        \"Proxies values from the service description if we don't override them.\"\n        return getattr(self.service, name)\n\n    def __setattr__(self, name, value):\n        \"\"\"\n        Any service object attribute supersedes our own except for privates or those we\n        keep separately, in which case there is a distinction.\n        \"\"\"\n        if name[0:1] != '_' and hasattr(self.service, name) and not hasattr(self, name):\n            setattr(self.service, name, value)\n        else:\n            object.__setattr__(self, name, value)\n\n    def _setup_subprocess(self):\n        if self._pwrec:\n            os.setgid(self._pwrec.pw_gid)\n            os.setuid(self._pwrec.pw_uid)\n            if self.setpgrp:\n                os.setpgrp()\n            if not self.directory:\n                try:\n                    os.chdir(self._pwrec.pw_dir)\n                except Exception as ex:\n                    pass\n        return\n\n    def _get_states(self):\n        states = list()\n        if self.started:\n            states.append('started')\n        if self.failed:\n            states.append('failed')\n        if self.ready:\n            states.append('ready')\n        if self.running:\n            states.append('running')\n        return ' '.join(states)\n\n    # pid and returncode management\n\n    @property\n    def pid(self):\n        return self._pid\n\n    @pid.setter\n    def pid(self, newpid):\n        if self._pid is not None and newpid is not None and self._pid != newpid:\n            self.logdebug(\"{0} changing PID to {1} (from {2})\", self.name, newpid, self._pid)\n            try:\n                
pgid = os.getpgid(newpid)\n            except ProcessLookupError as ex:\n                raise ChProcessError(\"{0} attempted to attach the process with PID={1} but there is no such process\".\n                                     format(self.name, newpid), errno = ex.errno)\n            self._attach_pid(newpid)\n        self._pid = newpid\n\n    @property\n    def returncode(self):\n        if self._returncode is not None:\n            return self._returncode\n        return self._proc and self._proc.returncode\n\n    @returncode.setter\n    def returncode(self, val):\n        self._returncode = ProcStatus(val)\n        self.logdebug(\"{0} got explicit return code '{1}'\", self.name, self._returncode)\n\n    # Logging methods which may do special things for this service\n\n    def loginfo(self, *args, **kwargs):\n        info(*args, facility=self.syslog_facility, **kwargs)\n\n    def logerror(self, *args, **kwargs):\n        self.error_count += 1\n        error(*args, facility=self.syslog_facility, **kwargs)\n\n    def logwarn(self, *args, **kwargs):\n        warn(*args, facility=self.syslog_facility, **kwargs)\n\n    def logdebug(self, *args, **kwargs):\n        debug(*args, facility=self.syslog_facility, **kwargs)\n\n    @property\n    def note(self):\n        return self._note\n    @note.setter\n    def note(self, value):\n        self._note = value\n\n    @property\n    def status(self):\n        serv = self.service\n        proc = self._proc\n\n        rs = \"\"\n        if serv.restart and self._restarts_allowed is not None and self._restarts_allowed > 0:\n            rs = \"+r#\" + str(self._restarts_allowed)\n\n        if self._cond_starting:\n            return \"starting\"\n\n        if proc:\n            rc = self._returncode if self._returncode is not None else proc.returncode\n            if rc is None:\n                return \"running\"\n            elif rc.normal_exit and self._started:\n                return \"started\"\n            elif rc:\n    
            return rc.briefly + rs\n                    \n        if not serv.enabled:\n            return \"disabled\"\n\n        return self.default_status()\n\n    def default_status(self):\n        if self.ready:\n            return 'ready'\n        return None\n\n    @property\n    def enabled(self):\n        return self.service.enabled\n    @enabled.setter\n    def enabled(self, val):\n        if val:\n            if not self.service.enabled:\n                self._try_to_enable()\n        else:\n            self.service.enabled = False\n\n    def _try_to_enable(self):\n        service = self.service\n        if self._orig_executable:\n            try:\n                service.exec_args[0] = executable_path(self._orig_executable, service.environment.expanded())\n            except FileNotFoundError:\n                if service.optional:\n                    service.enabled = False\n                    self.loginfo(\"optional service {0} disabled since '{1}' is not present\".format(self.name, self._orig_executable))\n                    return\n                elif service.ignore_failures:\n                    service.enabled = False\n                    self.logwarn(\"(ignored) service {0} executable '{1}' is not present\".format(self.name, self._orig_executable))\n                    return\n                raise ChNotFoundError(\"executable '{0}' not found\".format(service.exec_args[0]))\n\n        # Now that we know this service is truly enabled, we need to assure its credentials\n        # are correct.\n\n        senv = service.environment\n\n        if senv and senv.uid is not None and not self._pwrec:\n            self._pwrec = lookup_user(senv.uid, senv.gid)\n\n        service.enabled = True\n\n    @property\n    def scheduled(self):\n        \"\"\"\n        True if this is a process which WILL fire up a process in the future.\n        A \"scheduled\" process does not include one which will be started manually,\n        nor does it include processes which will be started due 
to dependencies.\n        Processes like \"cron\" and \"inetd\" return True if they are active and \n        may start processes in the future.\n        \"\"\"\n        return False\n\n    @property\n    def kill_signal(self):\n        ksig = self.service.kill_signal\n        if ksig is not None:\n            return ksig\n        return signal.SIGTERM\n\n    @property\n    def running(self):\n        \"True if this process has started, is running, and has a pid\"\n        return self._proc and self._proc.returncode is None\n        \n    @property\n    def started(self):\n        \"\"\"\n        True if this process has started normally. It may have forked, or executed, or is scheduled.\n        \"\"\"\n        return self._started\n \n    @property\n    def stoppable(self):\n        \"\"\"\n        True if this process can be stopped.  By default, returns True if the service is started,\n        but some job types such as cron and inetd may be stoppable even when processes themselves\n        are not running.\n        \"\"\"\n        return self.started\n\n    @property\n    def failed(self):\n        \"True if this process has failed, either during startup or later.\"\n        return ((self._returncode is not None and not self._returncode.normal_exit) or \n                self._proc and (self._proc.returncode is not None and not self._proc.returncode.normal_exit))\n\n    @property\n    def ready(self):\n        \"\"\"\n        True if this process is ready to run, or is running.  If not running, all of its\n        prerequisites must also be ready before it can run.\n        \"\"\"\n        if not self.enabled or self.failed:\n            return False\n        if self.started:\n            return True\n        if any(p.enabled and not p.ready for p in self.prerequisites):\n            return False\n        return True\n\n    @property\n    def prerequisites(self):\n        \"\"\"\n        Return a list of prerequisite objects.  
Right now, these must be within our family\n        but this may change, so don't refer to the family or the prereq in services.  Use this\n        instead.\n        \"\"\"\n        if self._prereq_cache is None:\n            prereq = (self.family and self.service.prerequisites) or ()\n            prereq = self._prereq_cache = tuple(self.family[p] for p in prereq if p in self.family)\n        return self._prereq_cache\n\n    @asyncio.coroutine\n    def start(self):\n        \"\"\"\n        Runs this service if it is enabled and has not already been started.  Starts\n        prerequisite services first.  A service is considered started if\n           a) It is enabled, and started up normally.\n           b) It is disabled, and an attempt was made to start it.\n           c) An error occurred, it did not start, but failures were an acceptable\n              outcome and the service has not been reset since the errors occurred.\n        \"\"\"\n\n        service = self.service\n\n        if self._started:\n            self.logdebug(\"service {0} already started.  further starts ignored.\", service.name)\n            return\n\n        if not service.enabled:\n            self.logdebug(\"service {0} not enabled, will be skipped\", service.name)\n            return\n        else:\n            self.logdebug(\"service {0} enabled, queueing start request\", service.name)\n\n        # If this service is already starting, then just wait until it completes.\n\n        cond_starting = self._cond_starting\n\n        if cond_starting:\n            yield from cond_starting.acquire()\n            yield from cond_starting.wait()\n            cond_starting.release()\n            # This is an odd situation.  
Since every waiter expects start() to succeed, or\n            # raise an exception, we need to be sure we raise the exception that happened\n            # in the original start() request.\n            if self._cond_exception:\n                raise self._cond_exception\n            return\n\n        cond_starting = self._cond_starting = asyncio.Condition()\n        self._cond_exception = None\n\n        # Now we can proceed\n\n        self.start_attempted = True\n\n        try:\n\n            prereq = self.prerequisites\n            if prereq:\n                for p in prereq:\n                    yield from p.start()\n                self.logdebug(\"service {0} prerequisites satisfied\", service.name)\n\n            if self.family:\n                # idle only makes sense for families\n                if \"IDLE\" in service.service_groups and service.idle_delay and not hasattr(self.family, '_idle_hit'):\n                    self.family._idle_hit = True\n                    self.logdebug(\"IDLE transition hit.  delaying for {0} seconds\", service.idle_delay)\n                    yield from asyncio.sleep(service.idle_delay)\n\n                # STOP if the system is no longer alive because a prerequisite failed\n                if not self.family.system_alive:\n                    return\n\n            try:\n                yield from self.start_subprocess()\n            except Exception as ex:\n                if service.ignore_failures:\n                    self.loginfo(\"service {0} ignoring failures. Exception: {1}\", service.name, ex)\n                else:\n                    self._cond_exception = ex\n                    self.logdebug(\"{0} received exception during attempted start. 
Exception: {1}\", service.name, ex)\n                    raise\n\n        finally:\n            self._started = True\n\n            yield from cond_starting.acquire()\n            cond_starting.notify_all()\n            cond_starting.release()\n            self._cond_starting = None\n            self.logdebug(\"{0} notified waiters upon completion\", service.name)\n\n    def get_expanded_environment(self):\n        SubProcess._cls_serial += 1\n        penv = self._procenv\n        penv[ENV_SERIAL] = str(SubProcess._cls_serial)\n        penv[ENV_SERVTIME] = str(int(time()))\n        return penv.expanded()\n\n    @asyncio.coroutine\n    def start_subprocess(self):\n        service = self.service\n\n        self.logdebug(\"{0} attempting start '{1}'... \".format(service.name, \" \".join(service.exec_args)))\n\n        kwargs = dict()\n\n        if service.stdout == 'log':\n            kwargs['stdout'] = asyncio.subprocess.PIPE\n        if service.stderr == 'log':\n            kwargs['stderr'] = asyncio.subprocess.PIPE\n        if service.directory:\n            kwargs['cwd'] = service.directory\n\n        env = self.get_expanded_environment()\n\n        yield from self.process_prepare_co(env)\n\n        if env:\n            env = env.get_public_environment()\n\n        if service.debug:\n            if not env:\n                self.logdebug(\"{0} environment is empty\", service.name)\n            else:\n                self.logdebug(\"{0} environment:\", service.name)\n                for k,v in env.items():\n                    self.logdebug(\" {0} = '{1}'\".format(k,v))\n\n        create = asyncio.create_subprocess_exec(*service.exec_args, preexec_fn=self._setup_subprocess,\n                                                env=env, **kwargs)\n        if service.exit_kills:\n            self.logwarn(\"system will be killed when '{0}' exits\", service.exec_args[0])\n            yield from asyncio.sleep(0.2)\n\n        proc = self._proc = yield from create\n\n        
self.pid = proc.pid\n\n        if service.stdout == 'log':\n            self.add_pending(asyncio.async(_process_logger(proc.stdout, 'stdout', self)))\n        if service.stderr == 'log':\n            self.add_pending(asyncio.async(_process_logger(proc.stderr, 'stderr', self)))\n\n        if service.exit_kills and not self.defer_exit_kills:\n            self.add_pending(asyncio.async(self._wait_kill_on_exit()))\n\n        yield from self.process_started_co()\n\n        self.logdebug(\"{0} successfully started\", service.name)\n\n    @asyncio.coroutine\n    def process_prepare_co(self, environment):\n        pass\n\n    @asyncio.coroutine\n    def process_started_co(self):\n        pass\n\n    @asyncio.coroutine\n    def wait_for_pidfile(self):\n        \"\"\"\n        If the pidfile option was specified, then wait until we find a valid pidfile,\n        and register the new PID.  This is not done automatically, but is implemented\n        here as a utility for process types that need it.\n        \"\"\"\n        if not self.pidfile:\n            return\n\n        self.logdebug(\"{0} waiting for PID file: {1}\".format(self.name, self.pidfile))\n\n        pidsleep = 0.02         # work incrementally up to no more than process_timeout\n        minsleep = 3\n        expires = time() + self.process_timeout\n        last_ex = None\n\n        while time() < expires:\n            if not self.family.system_alive:\n                return\n            yield from asyncio.sleep(pidsleep)\n            # ramp up until we hit the minsleep ceiling\n            pidsleep = min(pidsleep*2, minsleep)\n            try:\n                newpid = int(open(self.pidfile, 'r').read().strip())\n            except FileNotFoundError:\n                continue\n            except Exception as ex:\n                # Don't raise this immediately.  
The service may create the file before writing the PID.\n                last_ex = ChProcessError(\"{0} found pid file '{1}' but contents did not contain an integer\".format(\n                                         self.name, self.pidfile), errno = errno.EINVAL)\n                continue\n\n            self.pid = newpid\n            return\n\n        if last_ex is not None:\n            raise last_ex\n\n        raise ChProcessError(\"{0} did not find pid file '{1}' before {2}sec process_timeout expired\".format(\n                             self.name, self.pidfile, self.process_timeout), errno = errno.ENOENT)\n        \n    @asyncio.coroutine\n    def _wait_kill_on_exit(self):\n        yield from self.wait()\n        self._kill_system()\n\n    def _attach_pid(self, newpid):\n        \"\"\"\n        Attach this process to a new PID, creating a condition which will be used by \n        the child watcher to determine when the PID has exited.\n        \"\"\"\n        with asyncio.get_child_watcher() as watcher:\n            watcher.add_child_handler(newpid, self._child_watcher_callback)\n\n        self._exit_event = asyncio.Event()\n        \n    def _child_watcher_callback(self, pid, returncode):\n        asyncio.get_event_loop().call_soon_threadsafe(self.process_exit, returncode)\n\n    def process_exit(self, code):\n        self.returncode = code\n\n        if self._exit_event:\n            self._exit_event.set()\n            self._exit_event = None\n\n        if self.exit_kills:\n            self.logwarn(\"{0} terminated with exit_kills enabled\", self.service.name)\n            # Since we're dead, and the system is going away, disable any process management\n            self._proc = None\n            self.pid = None\n            self._kill_system()\n\n        if code.normal_exit or self.kill_signal == code.signal:\n            return\n\n        asyncio.async(self._abnormal_exit(code))\n    \n    @asyncio.coroutine\n    def _abnormal_exit(self, code):\n        
service = self.service\n\n        if service.exit_kills:\n            self.logwarn(\"{0} terminated abnormally with {1}\", service.name, code)\n            return\n\n        # A disabled service should not do recovery\n\n        if not service.enabled:\n            return\n\n        if self._started and service.restart:\n            if self._restarts_allowed is None:\n                self._restarts_allowed = service.restart_limit\n            if self._restarts_allowed > 0:\n                self._restarts_allowed -= 1\n                controller = self.family.controller\n                if controller.system_alive:\n                    if service.restart_delay:\n                        self.loginfo(\"{0} pausing between restart retries ({1} left)\", service.name, self._restarts_allowed)\n                        yield from asyncio.sleep(service.restart_delay)\n                if controller.system_alive:\n                    yield from self.reset()\n                    #yield from self.start()\n                    f = asyncio.async(self.start()) # queue it since we will just return here\n                    f.add_done_callback(self._restart_callback)\n                return\n\n        if service.ignore_failures:\n            self.logdebug(\"{0} abnormal process exit ignored due to ignore_failures=true\", service.name)\n            yield from self.reset()\n            return\n\n        self.logerror(\"{0} terminated abnormally with {1}\", service.name, code)\n\n    def _restart_callback(self, fut):\n        # Catches a restart result, reporting it as a warning, and either passing back to _abnormal_exit\n        # or accepting glorious success.\n        ex = fut.exception()\n        if not ex:\n            self.logdebug(\"{0} restart succeeded\", self.name)\n        else:\n            self.logwarn(\"{0} restart failed: {1}\", self.name, ex)\n            asyncio.async(self._abnormal_exit(self._proc and self._proc.returncode))\n\n    def _kill_system(self):\n        
self.family.controller.kill_system()\n\n    def add_pending(self, future):\n        self._pending.add(future)\n        future.add_done_callback(lambda f: self._pending.discard(future))\n\n    @asyncio.coroutine\n    def reset(self, dependents = False, enable = False, restarts_ok = False):\n        self.logdebug(\"{0} received reset\", self.name)\n\n        if self._exit_event:\n            self.terminate()\n        elif self._proc:\n            if self._proc.returncode is None:\n                self.terminate()\n                yield from self.wait()\n            self.pid = None\n            self._proc = None\n\n        self._started = False\n        \n        if restarts_ok:\n            self._restarts_allowed = None\n        if enable:\n            self.enabled = True\n\n        # If there is a pidfile, then remove it\n\n        if self.pidfile:\n            try:\n                os.remove(self.pidfile)\n            except Exception:\n                pass\n\n        # Reset any non-ready dependents\n\n        if dependents:\n            for p in self.prerequisites:\n                if not p.ready and (enable or p.enabled):\n                    yield from p.reset(dependents, enable, restarts_ok)\n                \n    @asyncio.coroutine\n    def stop(self):\n        yield from self.reset(restarts_ok = True)\n        \n    @asyncio.coroutine\n    def final_stop(self):\n        \"Called when the whole system is killed, but before drastic measures are taken.\"\n        self._exit_event = None\n        self.terminate()\n        for p in list(self._pending):\n            if not p.cancelled():\n                p.cancel()\n\n    def terminate(self):\n        proc = self._proc\n        otherpid = self.pid\n\n        if proc:\n            if otherpid == proc.pid:\n                otherpid = None\n            if proc.returncode is None:\n                if self.service.kill_signal is not None: # explicitly check service\n                    self.logdebug(\"using {0} to 
terminate {1}\", get_signal_name(self.kill_signal), self.name)\n                    proc.send_signal(self.kill_signal)\n                else:\n                    proc.terminate()\n\n        if otherpid:\n            self.logdebug(\"using {0} to terminate {1}\", get_signal_name(self.kill_signal), self.name)\n            try:\n                os.kill(otherpid, self.kill_signal)\n            except Exception as ex:\n                warn(\"{0} could not be killed using PID={1}: {2}\".format(self.name, otherpid, ex))\n\n        self._pid = None\n        \n    @asyncio.coroutine\n    def do_startup_pause(self):\n        \"\"\"\n        Wait a short time just to see if the process errors out immediately.  This avoids a retry loop\n        and catches any immediate failures now.  Can be used by process implementations if needed.\n        \"\"\"\n\n        if not self.startup_pause:\n            return\n\n        try:\n            result = yield from self.timed_wait(self.startup_pause)\n        except asyncio.TimeoutError:\n            result = None\n        if result is not None and not result.normal_exit:\n            if self.ignore_failures:\n                warn(\"{0} (ignored) failure on start-up with result '{1}'\".format(self.name, result))\n            else:\n                raise ChProcessError(\"{0} failed on start-up with result '{1}'\".format(self.name, result),\n                                     resultcode = result)\n\n    @asyncio.coroutine\n    def timed_wait(self, timeout, func = None):\n        \"\"\"\n        Timed wait waits for process completion.  If process completion occurs normally, the\n        returncode for process startup is returned.\n\n        Upon timeout either:\n        1.  asyncio.TimeoutError is raised if 'func' is not provided, or...\n        2.  
func is called and the result is returned from timed_wait().\n        \"\"\"\n        try:\n            if not timeout:\n                raise asyncio.TimeoutError() # funny situation, but settings can cause this if users attempt it\n            result =  yield from asyncio.wait_for(asyncio.shield(self.wait()), timeout)\n        except asyncio.TimeoutError:\n            if not func:\n                raise\n            result = func()\n        except asyncio.CancelledError:\n            result = self.returncode\n\n        return result\n\n    @asyncio.coroutine\n    def wait(self):\n        proc = self._proc\n\n        if self._exit_event:\n            yield from self._exit_event.wait()\n        elif proc:\n            yield from proc.wait()\n        else:\n            raise Exception(\"Process not running (or attached), can't wait\")\n\n        if proc.returncode is not None and proc.returncode.normal_exit:\n            self.logdebug(\"{2} exit status for pid={0} is '{1}'\".format(proc.pid, proc.returncode, self.name))\n        else:\n            self.loginfo(\"{2} exit status for pid={0} is '{1}'\".format(proc.pid, proc.returncode, self.name))\n\n        return proc.returncode\n\n\nclass SubProcessFamily(lazydict):\n\n    controller = None           # top level system controller\n    services_config = None\n\n    _start_time = None\n\n    def __init__(self, controller, services_config):\n        \"\"\"\n        Given a pre-analyzed list of processes, complete with prerequisites, build a process\n        family.\n        \"\"\"\n        super().__init__()\n\n        self.controller = controller\n        self.services_config = services_config\n\n        for s in services_config.get_startup_list():\n            self[s.name] = SubProcess(s, family = self)\n\n    def get_status_formatter(self):\n        df = TableFormatter('pid', 'name', 'enabled', 'status', 'note', sort='name')\n        df.add_rows(self.values())\n        return df\n    \n    @property\n    def 
system_alive(self):\n        return self.controller.system_alive\n\n    def get_scheduled_services(self):\n        return [s for s in self.values() if s.scheduled]\n\n    def get_status(self):\n        if not self._start_time:\n            return \"Not yet started\"\n\n        secs = time() - self._start_time\n\n        total = len(self.values())\n        scheduled = started = failed = errors = 0\n\n        for s in self.values():\n            if s.scheduled:\n                scheduled += 1\n            if s.started:\n                started += 1\n            if s.failed:\n                failed += 1\n            errors += s.error_count\n\n        m,s = divmod(int(secs), 60)\n        h,m = divmod(m, 60)\n\n        msg = \"Uptime {0:02}:{1:02}:{2:02}; {3} service{4} started\".format(h, m, s, started or \"No\", started != 1 and 's' or '')\n        if scheduled:\n            msg += \"; {0} scheduled\".format(scheduled)\n        if failed:\n            msg += \"; {0} failed\".format(failed)\n        if errors:\n            msg += \"; {0} total errors\".format(errors)\n\n        return msg\n\n    @asyncio.coroutine\n    def run(self, servicelist = None):\n        \"\"\"\n        Runs the family, starting up services in dependency order.  If any problems\n        occur, an exception is raised.  
Returns True if any attempts were made to\n        start services, otherwise False if the configuration contained no services\n        that were enabled and ready to run.\n        \"\"\"\n        # Note that all tasks are started simultaneously, but they resolve their\n        # interdependencies themselves.\n        if not servicelist:\n            servicelist = self.values()\n        yield from asyncio.gather(*[s.start() for s in servicelist])\n\n        self._start_time = time()\n\n        # Indicate if any attempts were made\n        return any(s.start_attempted for s in servicelist)\n\n    def _lookup_services(self, names):\n        result = set()\n        for name in names:\n            serv = self.get(name)\n            if not serv:\n                serv = self.get(name + \".service\")\n            if not serv:\n                raise ChParameterError(\"no such service: \" + name)\n            result.add(serv)\n        return result\n\n    @asyncio.coroutine\n    def start(self, service_names, force = False, wait = False, enable = False):\n        slist = self._lookup_services(service_names)\n\n        not_enab = [s for s in slist if not s.enabled]\n\n        if not force:\n            if not_enab and not enable:\n                raise Exception(\"can only start services which have been enabled: \" + \", \".join([s.shortname for s in not_enab]))\n            started = [s for s in slist if s.started]\n            if started:\n                raise Exception(\"can't restart services without stop/reset: \" + \", \".join([s.shortname for s in started]))\n            notready = [s for s in slist if not s.ready and (s.enabled and not enable)]\n            if notready:\n                raise Exception(\"services or their prerequisites are not ready: \" + \", \".join([s.shortname for s in notready]))\n\n        resets = ()\n\n        if not_enab and enable:\n            resets = not_enab\n\n        # If forcing, then reset all services, as well as any non-ready 
dependents.\n\n        if force:\n            resets = [s for s in slist if (not s.ready or s.started)]\n\n        for s in resets:\n            yield from s.reset(dependents=True, enable=enable, restarts_ok=True)\n\n        if not wait:\n            asyncio.async(self._queued_start(slist, service_names))\n        else:\n            yield from self.run(slist)\n\n    @asyncio.coroutine\n    def _queued_start(self, slist, names):\n        try:\n            yield from self.run(slist)\n        except Exception as ex:\n            error(\"queued start (for {0}) failed: {1}\", names, ex)\n            \n    @asyncio.coroutine\n    def stop(self, service_names, force = False, wait = False, disable = False):\n        slist = self._lookup_services(service_names)\n        started = [s for s in slist if s.stoppable]\n\n        if not force:\n            if len(started) != len(slist):\n                raise Exception(\"can't stop services which aren't started: \" + \n                                \", \".join([s.shortname for s in slist if not s.stoppable]))\n\n        if not wait:\n            asyncio.async(self._queued_stop(slist, service_names, disable))\n        else:\n            for s in slist:\n                yield from s.stop()\n                if disable:\n                    s.enabled = False\n\n    @asyncio.coroutine\n    def _queued_stop(self, slist, names, disable):\n        try:\n            for s in slist:\n                yield from s.stop()\n                if disable:\n                    s.enabled = False\n        except Exception as ex:\n            error(\"queued stop (for {0}) failed: {1}\", names, ex)\n\n    @asyncio.coroutine\n    def reset(self, service_names, force = False, wait = False):\n        slist = self._lookup_services(service_names)\n\n        if not force:\n            running = [s for s in slist if s.running]\n            if running:\n                raise Exception(\"can't reset services which are running: \" + \", \".join([s.shortname 
for s in running]))\n\n        if not wait:\n            asyncio.async(self._queued_reset(slist, service_names))\n        else:\n            for s in slist:\n                yield from s.reset(restarts_ok = True)\n\n    @asyncio.coroutine\n    def _queued_reset(self, slist, names):\n        try:\n            for s in slist:\n                yield from s.reset(restarts_ok = True)\n        except Exception as ex:\n            error(\"queued reset (for {0}) failed: {1}\", names, ex)\n\n    @asyncio.coroutine\n    def enable(self, service_names):\n        slist = self._lookup_services(service_names)\n        for s in slist:\n            s.enabled = True\n\n    @asyncio.coroutine\n    def disable(self, service_names):\n        slist = self._lookup_services(service_names)\n        for s in slist:\n            s.enabled = False\n"
  },
  {
    "path": "chaperone/cproc/version.py",
    "content": "# This file is designed to be used as a package module, but also as a main program runnable\n# by Python2 or Python3 which will print the version.  Used in setup.py\n\nVERSION = (0,3,9)\nDISPLAY_VERSION = \".\".join([str(v) for v in VERSION])\n\nLICENSE = \"Apache License, Version 2.0\"\n\nMAINTAINER = \"Gary Wisniewski <garyw@blueseastech.com>\"\n\nLINK_PYPI = \"https://pypi.python.org/pypi/chaperone\"\nLINK_DOC = \"http://garywiz.github.io/chaperone\"\nLINK_SOURCE = \"http://github.com/garywiz/chaperone\"\nLINK_QUICKSTART = \"http://github.com/garywiz/chaperone-baseimage\"\nLINK_LICENSE = \"http://www.apache.org/licenses/LICENSE-2.0\"\n\nimport sys\nimport os\n\nVERSION_MESSAGE = \"\"\"\nThis is '{1}' version {0.DISPLAY_VERSION}.\n\nDocumentation and source is available at {0.LINK_SOURCE}.\nLicensed under the {0.LICENSE}.\n\"\"\".format(sys.modules[__name__], os.path.basename(sys.argv[0]))\n\nif __name__ == '__main__':\n    print(DISPLAY_VERSION)\n"
  },
  {
    "path": "chaperone/cproc/watcher.py",
    "content": "import os\nimport asyncio\nimport threading\n\nfrom functools import partial\nfrom asyncio.unix_events import BaseChildWatcher\n\nfrom chaperone.cutil.logging import warn, info, debug\nfrom chaperone.cutil.proc import ProcStatus\nfrom chaperone.cutil.misc import get_signal_name\nfrom chaperone.cutil.events import EventSource\n\nclass InitChildWatcher(BaseChildWatcher):\n    \"\"\"An init-responsible child watcher.\n\n    Plugs into the asyncio child watcher framework to allow harvesting of both known and unknown\n    child processes.\n    \"\"\"\n    def __init__(self, **kwargs):\n        super().__init__()\n        self.events = EventSource(**kwargs)\n        self._callbacks = {}\n        self._lock = threading.Lock()\n        self._zombies = {}\n        self._forks = 0\n        self._no_processes = None\n        self._had_children = False\n\n    def close(self):\n        self._callbacks.clear()\n        self._zombies.clear()\n        super().close()\n\n    def __enter__(self):\n        with self._lock:\n            self._forks += 1\n\n            return self\n\n    def __exit__(self, a, b, c):\n        with self._lock:\n            self._forks -= 1\n\n            if self._forks or not self._zombies:\n                return\n\n            collateral_victims = str(self._zombies)\n            self._zombies.clear()\n\n        info(\n            \"Caught subprocesses termination from unknown pids: %s\",\n            collateral_victims)\n\n    @property\n    def number_of_waiters(self):\n        return len(self._callbacks)\n\n    def add_child_handler(self, pid, callback, *args):\n        assert self._forks, \"Must use the context manager\"\n        with self._lock:\n            try:\n                returncode = self._zombies.pop(pid)\n            except KeyError:\n                # The child is running.\n                self._callbacks[pid] = callback, args\n                return\n\n        # The child is dead already. 
We can fire the callback.\n        callback(pid, returncode, *args)\n\n    def remove_child_handler(self, pid):\n        try:\n            del self._callbacks[pid]\n            return True\n        except KeyError:\n            return False\n\n    def check_processes(self):\n        # Checks to see if any processes terminated, and triggers onNoProcesses\n        self._do_waitpid_all()\n\n    def _do_waitpid_all(self):\n        # Because of signal coalescing, we must keep calling waitpid() as\n        # long as we're able to reap a child.\n        while True:\n            try:\n                pid, status = os.waitpid(-1, os.WNOHANG)\n                debug(\"REAP pid={0},status={1}\".format(pid,status))\n            except ChildProcessError:\n                # No more child processes exist.\n                if self._had_children:\n                    debug(\"no child processes present\")\n                    self.events.onNoProcesses()\n                return\n            else:\n                self._had_children = True\n                if pid == 0:\n                    # A child process is still alive.\n                    return\n\n                returncode = ProcStatus(status)\n\n            with self._lock:\n                try:\n                    callback, args = self._callbacks.pop(pid)\n                except KeyError:\n                    # unknown child\n                    if self._forks:\n                        # It may not be registered yet.\n                        self._zombies[pid] = returncode\n                        continue\n                    callback = None\n\n            if callback is None:\n                info(\n                    \"Caught subprocess termination from unknown pid: \"\n                    \"%d -> %d\", pid, returncode)\n            else:\n                callback(pid, returncode, *args)\n"
  },
  {
    "path": "chaperone/cutil/__init__.py",
    "content": "# Placeholder\n"
  },
  {
    "path": "chaperone/cutil/config.py",
    "content": "import os\nimport re\nimport pwd\nimport shlex\nfrom operator import attrgetter\nfrom copy import deepcopy\nfrom itertools import chain\n\nimport yaml\nimport voluptuous as V\n\nfrom chaperone.cutil.env import Environment, ENV_CONFIG_DIR, ENV_SERVICE\nfrom chaperone.cutil.errors import ChParameterError\nfrom chaperone.cutil.logging import info, warn, debug\nfrom chaperone.cutil.misc import lazydict, lookup_user, get_signal_number\n\n@V.message('not an executable file', cls=V.FileInvalid)\n@V.truth\ndef IsExecutable(v):\n    return os.path.isfile(v) and os.access(v, os.X_OK)\n    \n_config_schema = V.Any(\n    { V.Match('^.+\\.service$'): {\n        'after': str,\n        'before': str,\n        V.Required('command'): str,\n        'directory': str,\n        'debug': bool,\n        'enabled': V.Any(bool, str),\n        'env_inherit': [ str ],\n        'env_set': { str: str },\n        'env_unset': [ str ],\n        'exit_kills': bool,\n        'gid': V.Any(str, int),\n        'ignore_failures': bool,\n        'interval': str,\n        'kill_signal': str,\n        'optional': bool,\n        'port': V.Any(str, int),\n        'pidfile': str,\n        'process_timeout': V.Any(float, int),\n        'startup_pause': V.Any(float, int),\n        'restart': bool,\n        'restart_limit': int,\n        'restart_delay': int,\n        'service_groups': str,\n        'setpgrp': bool,\n        'stderr': V.Any('log', 'inherit'),\n        'stdout': V.Any('log', 'inherit'),\n        'type': V.Any('oneshot', 'simple', 'forking', 'notify', 'cron', 'inetd'),\n        'uid': V.Any(str, int),\n      },\n      V.Match('^settings$'): {\n        'debug': bool,\n        'detect_exit': bool,\n        'env_inherit': [ str ],\n        'env_set': { str: str },\n        'env_unset': [ str ],\n        'gid': V.Any(str, int),\n        'idle_delay': V.Any(float, int),\n        'ignore_failures': bool,\n        'process_timeout': V.Any(float, int),\n        'startup_pause': 
V.Any(float, int),\n        'shutdown_timeout': V.Any(float, int),\n        'uid': V.Any(str, int),\n        'logrec_hostname': str,\n        'enable_syslog': bool,\n        'status_interval': V.Any(float, int),\n      },\n      V.Match('^.+\\.logging'): {\n        'enabled': V.Any(bool, str),\n        'extended': bool,\n        'file': str,\n        'syslog_host': str,\n        'selector': str,\n        'stderr': bool,\n        'stdout': bool,\n        'overwrite': bool,\n        'uid': V.Any(str, int),\n        'gid': V.Any(str, int),\n        'logrec_hostname': str,\n     },\n   }\n)\n    \nvalidator = V.Schema(_config_schema)\n\n_RE_LISTSEP = re.compile(r'\\s*,\\s*')\n\ndef print_services(label, svlist):\n    # Useful for debugging startup order\n    print(label)\n    for s in svlist:\n        print(s)\n        p = getattr(s, 'prerequisites', None)\n        if p:\n            print('  prereq:', p)\n\n# Note that we extend YAML by allowing an empty string to mean \"false\".  This makes some macro\n# expansions work better, such as ... enabled:\"$(MYSQL_ENABLED:+true)\"\n\n_RE_YAML_BOOL = re.compile(r'^\\s*(?:(?P<true>y|true|yes|on)|(n|false|no|off|))\\s*$', re.IGNORECASE)\n\nclass _BaseConfig(object):\n\n    name = None\n    environment = None\n    env_set = None\n    env_unset = None\n    env_inherit = ['*']\n\n    _repr_pat = None\n    _expand_these = {}\n    _typecheck = {}\n    _settings_defaults = {}\n    \n    @classmethod\n    def createConfig(cls, config=None, **kwargs):\n        \"\"\"\n        Creates a new configuration given a system configuration object.  
Initializes the\n        environment and triggers any per-configuration attribute initialization.\n        \"\"\"\n        return cls(kwargs, \n                   env=config.get_environment(),\n                   settings=config.get_settings())\n\n    def _typecheck_assure_bool(self, attr):\n        \"Assures that the specified attribute is a legal boolean.\"\n        val = getattr(self, attr)\n        if val is None or isinstance(val, bool):\n            return\n        # First, try both 'true' and 'false' according to YAML conventions\n        match = _RE_YAML_BOOL.match(str(val))\n        if not match:\n            raise ChParameterError(\"invalid boolean parameter for '{0}': '{1}'\".format(attr, val))\n        setattr(self, attr, bool(match.group('true')))\n\n    def _typecheck_assure_int(self, attr):\n        \"Assures that the specified attribute is a legal integer.\"\n        val = getattr(self, attr)\n        if val is None or isinstance(val, int):\n            return\n        try:\n            setattr(self, attr, int(val))\n        except ValueError:\n            raise ChParameterError(\"invalid integer parameter for '{0}': '{1}'\".format(attr, val))\n\n    def __init__(self, initdict, name = \"MAIN\", env = None, settings = None):\n        self.name = name\n\n        if settings:\n            for sd in self._settings_defaults:\n                if sd not in initdict:\n                    val = settings.get(sd)\n                    if val is not None:\n                        setattr(self, sd, val)\n\n        for k,v in initdict.items():\n            setattr(self, k, v)\n\n        # User names always have .xxx qualifier because of schema restrictions.  
Otherwise, it's a user\n        # defined name subject to restrictions.\n\n        splitname = self.name.rsplit('.', 1)\n        if len(splitname) == 2 and splitname[0] == splitname[0].upper():\n            raise ChParameterError(\"all-uppercase names such as '{0}' are reserved for the system.\".format(self.name))\n\n        # UID and GID are expanded according to the incoming environment,\n        # since the new environment depends upon these.\n        if env:\n            env.expand_attributes(self, 'uid', 'gid')\n\n        uid = self.get('uid')\n        gid = self.get('gid')\n\n        if gid is not None and uid is None:\n            raise Exception(\"cannot specify 'gid' without 'uid'\")\n\n        # We can now use 'self' as our config, with all defaults. \n\n        env = self.environment = Environment(env, uid=uid, gid=gid, config=self, \n                                             resolve_xid = not self.get('optional', False))\n        self.augment_environment(env)\n\n        if self._expand_these:\n            env.expand_attributes(self, *self._expand_these)\n\n        for attr,func in self._typecheck.items():\n            getattr(self, '_typecheck_'+func)(attr)\n\n        self.post_init()\n\n    def shortname(self):\n        return self.name\n\n    def post_init(self):\n        pass\n\n    def augment_environment(self, env):\n        pass\n\n    def get(self, attr, default = None):\n        return getattr(self, attr, default)\n        \n    def __repr__(self):\n        if self._repr_pat:\n            return self._repr_pat.format(self)\n        return super().__repr__()\n\n\nclass ServiceConfig(_BaseConfig):\n\n    after = None\n    before = None\n    command = None\n    debug = None\n    directory = None\n    enabled = True\n    exit_kills = False\n    gid = None\n    interval = None\n    ignore_failures = False\n    kill_signal = None\n    optional = False\n    pidfile = None              # the pidfile to monitor\n    port = None                 # used 
for inetd processes\n    process_timeout = None      # time to elapse before we decide a process has misbehaved\n    startup_pause = 0.5         # time to wait momentarily to see if a service starts (if needed)\n    restart = False\n    restart_limit = 5           # number of times to invoke a restart before giving up\n    restart_delay = 3           # number of seconds to delay between restarts\n    setpgrp = True              # if this process should run in its own process group\n    service_groups = \"default\"  # will be transformed into a tuple() upon construction\n    stderr = \"log\"\n    stdout = \"log\"\n    type = 'simple'\n    uid = None\n\n    exec_args = None            # derived from bin/command/args, but may be preset using createConfig\n    idle_delay = 1.0            # present, but mirrored from settings, not settable per-service\n                                # since it is only triggered once when the first IDLE group item executes\n\n    prerequisites = None        # a list of service names which are prerequisites to this one\n\n    _repr_pat = \"Service:{0.name}(service_groups={0.service_groups}, after={0.after}, before={0.before})\"\n    _expand_these = {'command', 'stdout', 'stderr', 'interval', 'directory', 'exec_args', 'pidfile', 'enabled', 'port'}\n    _typecheck = {'enabled': 'assure_bool', 'port': 'assure_int'}\n    _assure_bool = {'enabled'}\n    _settings_defaults = {'debug', 'idle_delay', 'process_timeout', 'startup_pause', 'ignore_failures'}\n\n    system_group_names = ('IDLE', 'INIT')\n    system_service_names = ('CONSOLE', 'MAIN')\n\n    @property\n    def shortname(self):\n        return self.name.replace('.service', '')\n\n    def augment_environment(self, env):\n        if self.name:\n            env[ENV_SERVICE] = self.name\n\n    def post_init(self):\n        # Assure that exec_args is set to the actual arguments used for execution\n        if self.command:\n            self.exec_args = shlex.split(self.command)\n\n        # 
Lookup signal number\n        if self.kill_signal is not None:\n            self.kill_signal = get_signal_number(self.kill_signal)\n\n        # Expand before, after and service_groups into sets/tuples\n        self.before = set(_RE_LISTSEP.split(self.before)) if self.before is not None else set()\n        self.after = set(_RE_LISTSEP.split(self.after)) if self.after is not None else set()\n        self.service_groups = tuple(_RE_LISTSEP.split(self.service_groups)) if self.service_groups is not None else tuple()\n\n        for sname in chain(self.before, self.after):\n            if sname.upper() == sname and sname not in chain(self.system_group_names, self.system_service_names):\n                raise ChParameterError(\"{0} dependency reference not valid; '{1}' is not a recognized system name\"\n                                       .format(self.name, sname))\n\n        for sname in self.service_groups:\n            if sname.upper() == sname and sname not in self.system_group_names:\n                raise ChParameterError(\"{0} contains an unrecognized system group name '{1}'\".format(self.name, sname))\n\n        if 'IDLE' in self.after:\n            raise Exception(\"{0} cannot specify services which start *after* service_group IDLE\".format(self.name))\n        if 'INIT' in self.before:\n            raise Exception(\"{0} cannot specify services which start *before* service_group INIT\".format(self.name))\n\n        \nclass LogConfig(_BaseConfig):\n\n    selector = '*.*'\n    file = None\n    stderr = False\n    stdout = False\n    enabled = True\n    overwrite = False\n    extended = False            # include facility/priority information\n    uid = None                  # used to control permissions on logfile creation\n    gid = None\n    logrec_hostname = None      # hostname used to override hostname in syslog record\n    syslog_host = None          # remote IP of syslog handler\n\n    _expand_these = {'selector', 'file', 'enabled', 'logrec_hostname', 
'syslog_host'}\n    _typecheck = {'enabled': 'assure_bool'}\n    _settings_defaults = {'logrec_hostname'}\n\n    @property\n    def shortname(self):\n        return self.name.replace('.logging', '')\n\n\nclass ServiceDict(lazydict):\n\n    _ordered_startup = None\n\n    def __init__(self, servdict, env = None, settings = None):\n        \"\"\"\n        Accepts a dictionary of values to be turned into services.\n        \"\"\"\n        super().__init__(\n            ((k,ServiceConfig(v,k,env,settings)) for (k,v) in servdict)\n        )\n\n    def add(self, service):\n        self[service.name] = service\n\n    def clear(self):\n        super().clear()\n        self._ordered_startup = None\n\n    def get_dependency_graph(self):\n        \"\"\"\n        Returns a set of dependency groups.  Each group represents a set of dependencies starting at the\n        root of the dependency tree.  This is valuable for debugging dependencies.   The output graph\n        is ascii-art which shows the earliest start times and latest stop times for each service,\n        roughly in order of start-up.\n        \"\"\"\n\n        sep = ' | '\n        sulist = self.get_startup_list()\n        \n        curcol = 0\n        maxwidth = 0\n        for s in sulist:\n            ourlen = len(s.shortname)\n            s._column = curcol + ourlen - 1\n            curcol += ourlen + len(sep)\n            maxwidth = max(maxwidth, ourlen)\n\n        def histogram(serv):\n            # find the earliest prerequisite, or 0 if there is none\n            pcols = tuple(s._column for s in sulist if s.name in serv.prerequisites)\n            start = (pcols and max(pcols) + 1) or 0\n            return (' ' * start) + ('=' * (serv._column - start + 1))\n\n        lines = list()\n\n        lines.append(' ' * (maxwidth + len(sep)) + sep.join(s.shortname for s in sulist))\n\n        for s in sulist:\n            lines.append(s.shortname.ljust(maxwidth) + sep + histogram(s))\n\n        lines.append(('-' * 
(maxwidth)) + '-> depends on...')\n\n        for s in sulist:\n            lines.append(s.shortname.ljust(maxwidth) + sep + ', '.join(pr.replace('.service', '') for pr in s.prerequisites))\n\n        return lines\n\n    def get_startup_list(self):\n        \"\"\"\n        Returns the list of start-up items in priority order by examining before: and after: \n        attributes.\n        \"\"\"\n        if self._ordered_startup is not None:\n            return self._ordered_startup\n\n        services = self.deepcopy()\n        groups = lazydict()\n        for k,v in services.items():\n            for g in v.service_groups:\n                groups.setdefault(g, lambda: lazydict())[k] = v\n\n        #print_services('initial', services.values())\n\n        # The \"IDLE\" and \"INIT\" groups are special.  Revamp things so that any services in the \"IDLE\" group\n        # have an implicit \"after: 'all-others'\" and any services in \"INIT\" have an implicit \"before: 'all-others'\n        # where all-others is an explicit list of all services NOT in the respective group\n\n        if 'IDLE' in groups:\n            nonidle = set(k for k,v in services.items() if \"IDLE\" not in v.service_groups)\n            for s in groups['IDLE'].values():\n                s.after.update(nonidle)\n        if 'INIT' in groups:\n            noninit = set(k for k,v in services.items() if \"INIT\" not in v.service_groups)\n            for s in groups['INIT'].values():\n                s.before.update(noninit)\n\n        # We want to only look at the \"after:\" attribute, so we will eliminate the relevance\n        # of befores...\n\n        for k,v in services.items():\n            for bef in v.before:\n                if bef in groups:\n                    for g in groups[bef].values():\n                        g.after.add(v.name)\n                elif bef in services:\n                    services[bef].after.add(v.name)\n            v.before = None\n\n        # Before is now gone, make 
sure that all \"after... groups\" are translated into \"after.... service\"\n\n        for group in groups.values():\n            afters = set()\n            for item in group.values():\n                afters.update(item.after)\n            for a in afters:\n                if a in groups:\n                    names = groups[a].keys()\n                    for item in group.values():\n                        item.after.update(names)\n                \n        # Now remove any undefined services or groups and turn the 'after' attribute into a definitive\n        # graph.\n        #\n        # Note: sorted() occurs a couple times below.  The main reason is so that the results\n        #       are deterministic in cases where exact order is not defined.\n\n        afters = set(services.keys())\n        for v in services.values():\n            v.refs = sorted(map(lambda n: services[n], v.after.intersection(afters)), key=attrgetter('name'))\n\n        #print_services('before add nodes', services.values())\n\n        svlist = list()         # this will be our final list, containing original items\n        svseen = set()\n\n        def add_nodes(items):\n            for item in items:\n                if hasattr(item, 'active'):\n                    raise Exception(\"circular dependency in service declaration\")\n                item.active = True\n                add_nodes(item.refs)\n                del item.active\n                if item.name not in svseen:\n                    svseen.add(item.name)\n                    svlist.append(self[item.name])\n                    # set startup prerequisite dependencies\n                    svlist[-1].prerequisites = set(r.name for r in item.refs)\n        add_nodes(sorted(services.values(), key=attrgetter('name')))\n\n        #print_services('final service list', svlist)\n\n        self._ordered_startup = svlist\n\n        return svlist\n            \nclass Configuration(object):\n\n    uid = None                  # specifies 
if a system-wide user was provided\n    gid = None\n    _conf = None\n    _env = None                 # calculated environment\n\n    @classmethod\n    def configFromCommandSpec(cls, spec, user = None, default = None, extra_settings = None, disable_console_log = False):\n        \"\"\"\n        A command specification (typically specified with the --config=<file_or_dir> command\n        line option) is used to create a configuration object.   The target may be either a file\n        or a directory.  If it is a file, then the file itself will be the only configuration\n        read.  If it is a directory, then a search is made for any top-level files which end in\n        .conf or .yaml, and those will be combined according to lexicographic order.\n\n        If the configuration path is a relative path, then it is relative to either the root\n        directory, or the home directory of the given user.  This allows a user-specific\n        configuration to automatically take effect if desired.\n        \"\"\"\n\n        frombase = '/'\n\n        if user:\n            frombase = lookup_user(user).pw_dir\n\n        trypath = os.path.join(frombase, spec)\n\n        debug(\"TRY CONFIG PATH: {0}\".format(trypath))\n\n        if not os.path.exists(trypath):\n            return cls(default = default)\n        else:\n            os.environ[ENV_CONFIG_DIR] = os.path.dirname(trypath)\n\n        if os.path.isdir(trypath):\n            return cls(*[os.path.join(trypath, f) for f in sorted(os.listdir(trypath))\n                         if f.endswith('.yaml') or f.endswith('.conf')],\n                       default = default, uid = user, extra_settings = extra_settings, disable_console_log = disable_console_log)\n\n\n        return cls(trypath, default = default, uid = user, extra_settings = extra_settings, disable_console_log = disable_console_log)\n        \n    def __init__(self, *args, default = None, uid = None, extra_settings = None, disable_console_log = False):\n        
\"\"\"\n        Given one or more files, load our configuration.  If no configuration is provided,\n        then use the configuration specified by the default.\n        \"\"\"\n        debug(\"CONFIG INPUT (uid={1}): '{0}'\".format(args, uid))\n\n        self.uid = uid\n        self._conf = lazydict()\n\n        for fn in args:\n            if os.path.exists(fn):\n                self._merge(yaml.load(open(fn, 'r').read().expandtabs()))\n        \n        if not self._conf and default:\n            self._conf = lazydict(yaml.load(default))\n\n        validator(self._conf)\n\n        if extra_settings:\n            self.update_settings(extra_settings)\n\n        s = self.get_settings()\n        self.uid = s.get('uid', self.uid)\n        self.gid = s.get('gid', self.gid)\n\n        # Special case used by --no-console-log.  It really was just easiest to do it this way\n        # rather than try to build some special notion of \"console logging\" into the log services\n        # backends.\n\n        if disable_console_log:\n            for k,v in self._conf.items():\n                if k.endswith('.logging'):\n                    if 'stdout' in v:\n                        del v['stdout']\n                    if 'stderr' in v:\n                        del v['stderr']\n\n    def _merge(self, items):\n        if type(items) == list:\n            items = {k:dict() for k in items}\n        conf = self._conf\n        for k,v in items.items():\n            if k in conf and not k.endswith('.service'):\n                conf.smart_update(k,v)\n            else:\n                conf[k] = v\n\n    def get_services(self):\n        env = self.get_environment()\n        return ServiceDict( \n            ((k,v) for k,v in self._conf.items() if k.endswith('.service')),\n            env,\n            self._conf.get('settings')\n        )\n\n    def get_logconfigs(self):\n        env = self.get_environment()\n        settings = self._conf.get('settings')\n        return lazydict(\n     
       ((k,LogConfig(v,k,env,settings)) for k,v in self._conf.items() if k.endswith('.logging'))\n        )\n\n    def get_settings(self):\n        return self._conf.get('settings') or {}\n\n    def update_settings(self, updates):\n        curset = self.get_settings()\n        curset.update(updates)\n        self._conf['settings'] = curset\n\n    def get_environment(self):\n        if not self._env:\n            self._env = Environment(config=self.get_settings(), uid=self.uid, gid=self.gid)\n        return self._env\n\n    def dump(self):\n        debug('FULL CONFIGURATION: {0}'.format(self._conf))\n"
  },
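The `get_startup_list()` method above resolves start-up order by rewriting `before:` relationships into a single `after` graph and then performing a depth-first walk with cycle detection, sorting siblings so the result is deterministic. A minimal standalone sketch of that walk (the function name and dict shape here are illustrative, not Chaperone's actual API):

```python
def startup_order(after):
    """Return service names in dependency order.

    after: dict mapping each service name to the set of services it must
    start after.  Raises ValueError on a circular declaration, mirroring
    the add_nodes() check in ServiceDict.get_startup_list().
    """
    order = []        # final start-up order
    seen = set()      # services already emitted
    active = set()    # services on the current DFS path (cycle detector)

    def visit(name):
        if name in active:
            raise ValueError("circular dependency in service declaration")
        if name in seen:
            return
        active.add(name)
        # sorted() keeps the outcome deterministic where order is unspecified
        for dep in sorted(after.get(name, ())):
            if dep in after:        # ignore references to undefined services
                visit(dep)
        active.discard(name)
        seen.add(name)
        order.append(name)

    for name in sorted(after):
        visit(name)
    return order
```

For example, `startup_order({"web": {"db"}, "db": set(), "cron": {"web"}})` yields `["db", "web", "cron"]`: each service appears only after everything it depends on.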
  {
    "path": "chaperone/cutil/env.py",
    "content": "import re\nimport os\nimport subprocess\nfrom fnmatch import fnmatch\n\nfrom chaperone.cutil.logging import error, debug, warn\nfrom chaperone.cutil.misc import lookup_user, lazydict\nfrom chaperone.cutil.errors import ChVariableError, ChParameterError, ChNotFoundError\n\n##\n## ALL chaperone configuration variables defined here for easy reference\n\nENV_CONFIG_DIR       = '_CHAP_CONFIG_DIR'          # directory which CONTAINS the config file *or* directory\nENV_INTERACTIVE      = '_CHAP_INTERACTIVE'         # if this session is interactive (has a ptty attached)\nENV_SERVICE          = '_CHAP_SERVICE'             # name of the current service\nENV_SERIAL           = '_CHAP_SERVICE_SERIAL'      # Contains a monotonic unique serial number for each started service, starting with 1\nENV_SERVTIME         = '_CHAP_SERVICE_TIME'        # Timestamp when service started running\nENV_TASK_MODE        = '_CHAP_TASK_MODE'           # if we are running in --task mode\n\nENV_CHAP_OPTIONS     = '_CHAP_OPTIONS'             # Preset before chaperone runs to set default options\n\n# Technically IEEE 1003.1-2001 states env vars can contain anything except '=' and NUL but we need to\n# obviously exclude the terminator!\n# \n# Minimal support is included for nested parenthesis when operators are used, as in:\n#      $(VAR:-$(VAL))\n# However, more levels of nesting are not supported and will cause substitutions to be unrecognised.\n\n_RE_BACKTICK = re.compile(r'`([^`]+)`', re.DOTALL)\n\n# Parsing for operators within expansions\n_RE_OPERS = re.compile(r'^(?:([^:]+):([-|?+_/])(.*)|(`.+`))$', re.DOTALL)\n_RE_SLASHOP = re.compile(r'^(.+)(?<!\\\\)/(.*)(?<!\\\\)/([i]*)$', re.DOTALL)\n_RE_BAREBAR = re.compile(r'(?<!\\\\)\\|')\n\n_DICT_CONST = dict()            # a dict we must never change, just an optimisation\n\n\nclass EnvScanner:\n    \"\"\"\n    A class which performs basic parsing of strings containing environment variables,\n    with support for nested constructs.  
No, you can't do this with regular expressions.\n    \"\"\"\n\n    open_expansion = '({'\n    quotes = \"\\\"`\";             # we assume that single quotes may not be paired.  This prevents contractions\n                                # from inhibiting expansions\n    escape = \"\\\\\"\n    variable_id = '$'\n    nestlist = ')]}([{'         # arranged so that ending delimiters are first and positions match\n\n    def __init__(self, variable_id = None, open_expansion = None):\n        if variable_id:\n            self.variable_id = variable_id\n        if open_expansion:\n            self.open_expansion = open_expansion\n        self._RE_START = re.compile('(' + re.escape(self.escape) + ')?' + re.escape(self.variable_id) + \n                                    '(' + ('|'.join([re.escape(d[0]) for d in self.open_expansion])) + ')')\n        \n    def parse(self, buf, func, *args):\n        \"\"\"\n        Parses buffer and expands variables using func(exp_data, exp_whole, *args)\n        where, given $(xxx):\n           exp_data is the actual contents of the variable, so 'xxx'\n           exp_whole is the entire expression, so '$(xxx)'\n        \"\"\"\n        \n        # Quickly return if we don't have any expansions\n\n        st = self._RE_START\n        match = st.search(buf)\n        if not match:\n            return buf\n\n        # Now do the hard work\n\n        results = []\n        buflen = len(buf)\n        startpos = 0\n\n        nestlen = len(self.nestlist)\n        halfnest = nestlen // 2 # delims < halfnest are paired closing delimiters\n        lookfor = self.nestlist + self.quotes\n\n        while match:\n\n            pos = match.start()\n            if pos != startpos:\n                results.append(buf[startpos:pos])\n\n            if match.group(1):\n                # just escape the value\n                results.append(self.variable_id)\n                startpos = match.start(2)\n                pos = buflen\n                match = 
st.search(buf, startpos)\n            else:\n                pos = match.start(2)\n                startpos = pos + 1\n\n                # Init the stack.  We know a push will come first\n                stack = []\n\n                # find the very end of the area, counting nested items\n                while True:\n                    ci = lookfor.find(buf[pos])\n                    #print(pos, buf[pos], ci, stack, results)\n                    if ci >= 0:\n                        s0 = (not stack and -1) or stack[-1]\n                        if s0 == ci:\n                            stack.pop()\n                            # We are totally done if the stack is empty\n                            if not stack:\n                                results.append(func(buf[startpos:pos], buf[match.start():pos+1], *args))\n                                startpos = pos + 1\n                                pos = buflen\n                                match = st.search(buf, startpos)\n                                break\n                        elif ci >= halfnest and s0 < nestlen: # don't match within quotes\n                            # at matching end delimiter, which may be nesting, or not\n                            stack.append(ci-halfnest if ci < nestlen else ci)\n                    pos += 1\n                    if pos >= buflen:\n                        startpos = match.start(0)\n                        match = None\n                        break\n        \n        if pos != startpos:\n            results.append(buf[startpos:pos])\n\n        return ''.join(results)\n\n\nclass Environment(lazydict):\n\n    uid = None\n    gid = None\n\n    # This is a cached version of this environment, expanded\n    _expanded = None\n\n    # The _shadow Environment contains a pointer to the environment which contained\n    # the LAST active value for each env_set item so that we can deal with self-referential\n    # cases like:\n    #    'PATH': '/usr/local:$(PATH)'\n    
_shadow = None\n\n    # A class variable to keep track of backtick expansions so we don't do them more than once\n    _cls_btcache = dict()\n    _cls_use_btcache = True     # if shell expansions should be cached once or re-executed\n    _cls_backtick = True        # indicates backticks are enabled\n\n    # Default scanner\n    _cls_scan = EnvScanner()\n\n    @classmethod\n    def set_parse_parameters(cls, variable_id = None, open_expansion = None):\n        cls._cls_scan = EnvScanner(variable_id, open_expansion)\n\n    @classmethod\n    def set_backtick_expansion(cls, enabled = True, cache = True):\n        cls._cls_backtick = enabled\n        cls._cls_use_btcache = cache\n\n    def __init__(self, from_env = os.environ, config = None, uid = None, gid = None, resolve_xid = True):\n        \"\"\"\n        Create a new environment.  An environment may have a user associated with it.  If so,\n        then it will be pre-populated with the user's HOME, USER and LOGNAME so that expansions\n        can reference these.\n        \n        Note that if resolve_xid is False, then credentials which do not exist are ignored, and the\n        uid/gid are left as given.  This means that certain features, like HOME variables, will not\n        be properly set, leading to possible interactions between the optional components and their\n        actual specification.  However, this is better than having optional components trigger\n        errors because uninstalled software did not create uid's needed for operation.  
The onus is on the service itself (in cproc) to assure that checking\n        is performed.\n\n        Note also that environments which use backtick expansions will *still* fail, because the backticks\n        must occur within the context of the specified user, and it would be a security violation to\n        allow a default.\n        \"\"\"\n        super().__init__()\n\n        #print(\"\\n--ENV INIT\", config, uid, from_env, from_env and getattr(from_env, 'uid', None))\n\n        userenv = dict()\n\n        # Inherit user from passed-in environment\n        self._shadow = getattr(from_env, '_shadow', None)\n        shadow = None           # we don't bother to recreate this in any complex fashion unless we need to\n\n        if uid is None:\n            self.uid = getattr(from_env, 'uid', self.uid)\n            self.gid = getattr(from_env, 'gid', self.gid)\n        else:\n            pwrec = None\n            try:\n                pwrec = lookup_user(uid, gid)\n            except ChNotFoundError:\n                if resolve_xid:\n                    raise\n                self.uid = uid\n                self.gid = gid\n\n            if pwrec:\n                self.uid = pwrec.pw_uid\n                self.gid = pwrec.pw_gid\n                userenv['HOME'] = pwrec.pw_dir\n                userenv['USER'] = userenv['LOGNAME'] = pwrec.pw_name\n\n        if not config:\n            if from_env:\n                self.update(from_env)\n            self.update(userenv)\n        else:\n            inherit = config.get('env_inherit') or ['*']\n            if inherit and from_env:\n                self.update({k:v for k,v in from_env.items() if any([fnmatch(k,pat) for pat in inherit])})\n            self.update(userenv)\n\n            add = config.get('env_set')\n            unset = config.get('env_unset')\n\n            if add or unset:\n                self._shadow = shadow = (getattr(self, '_shadow') or _DICT_CONST).copy()\n\n            if add:\n                for 
k,v in add.items():\n                    if from_env and k in from_env:\n                        shadow[k] = from_env # we keep track of the environment where the predecessor originated\n                    self[k] = v\n            if unset:\n                patmatch = lambda p: any([fnmatch(p,pat) for pat in unset])\n                for delkey in [k for k in self.keys() if patmatch(k)]:\n                    del self[delkey]\n                for delkey in [k for k in shadow.keys() if patmatch(k)]:\n                    del shadow[delkey]\n\n        #print('   DONE (.uid={0}): {1}\\n'.format(self.uid, self))\n\n    def _get_shadow_environment(self, var):\n        \"\"\"\n        Returns the environment where var existed before the variable was last set, even if\n        that occurred long ago.  Delays expansion of the parent environment until this point,\n        since self-referential environment variables only rarely need to consult the shadow.\n        \"\"\"\n        try:\n            shadow = self._shadow[var]\n        except (TypeError, KeyError):\n            return None\n\n        try:\n            return shadow.expanded()\n        except AttributeError:\n            pass\n\n        # Note shadow may be None at this point, or a dict()\n        self._shadow[var] = shadow = Environment(shadow)\n\n        return shadow.expanded()\n\n    def __setitem__(self, key, value):\n        super().__setitem__(key, value)\n        self._expanded = None\n\n    def __delitem__(self, key):\n        super().__delitem__(key)\n        self._expanded = None\n\n    def clear(self):\n        super().clear()\n        self._expanded = None\n\n    def _elookup(self, match):\n        whole = match.group(0)\n        return self.get(whole[2:-1], whole)\n\n    def expand(self, instr):\n        \"\"\"\n        Expands an input string by replacing environment variables of the form ${ENV} or $(ENV).\n        If an expansion is not found, the substitution is ignored and the 
original reference remains.\n\n        Two bash features are employed to allow tests:\n            $(VAR:-sub)    Expands to sub if VAR not defined\n            $(VAR:+sub)    Expands to sub if VAR IS defined\n\n        If a list is provided instead of a string, a list will be returned with each item\n        separately expanded.\n        \"\"\"\n        if isinstance(instr, list):\n            return [self.expand(item) for item in instr]\n        if not isinstance(instr, str):\n            return instr\n        return self._cls_scan.parse(instr, self._expand_into, self)\n\n    def expand_attributes(self, obj, *args):\n        \"\"\"\n        Given an object and a set of attributes, expands each and replaces the originals with\n        expanded versions.   Implicitly expands the environment to assure all variable substitutions\n        occur correctly.\n        \"\"\"\n        explist = [k for k in args if hasattr(obj, k)] # a list, so the emptiness test below works\n        if not explist:\n            return\n\n        env = self.expanded()\n        for attr in explist:\n            setattr(obj, attr, env.expand(getattr(obj, attr)))\n            \n    def expanded(self):\n        \"\"\"\n        Does a recursive expansion on all variables until there are no matches.  Circular recursion\n        is halted rather than reported as an error.  Returns a version of this environment\n        which has been expanded.  
Asking an expanded() copy for another expanded() copy returns self\n        unless the expanded copy has been modified.\n        \"\"\"\n        if self._expanded is not None:\n            return self._expanded\n\n        result = Environment(None)\n        for k in sorted(self.keys()): # sorted so outcome is deterministic\n            self._expand_into(k, None, result, k)\n\n        # Copy uid after we expand, since any user information is already present in our\n        # own environment.\n        result.uid = self.uid\n        result.gid = self.gid\n        result._shadow = self._shadow\n\n        # Cache a copy, but also tell the cached copy that its own expanded copy is itself.\n        result._expanded = result\n        self._expanded = result\n\n        return result\n\n    def _expand_into(self, k, wholematch, result, parent = None):\n        \"\"\"\n        Internal workhorse that expands the variable 'k' INTO the given result dictionary.\n        The result dictionary will contain the expanded values.   The result dictionary is\n        also a cache for nested and recursive environment expansion.\n\n        'wholematch' is None unless called from within re.sub() (or a similar context).\n                 If set, it indicates the complete expansion expression, including adornments.\n                 It is used as the default expansion when a variable is not defined.\n        'parent' is the name of the variable which was being expanded in the last\n                 recursion, to catch the special case of self-referential variables.\n        \"\"\"\n\n        match = _RE_OPERS.match(k)\n\n        if match:\n            (k, oper, repl, backtick) = match.groups()\n\n\n        # Phase 1: Base variable value.  Start by determining the value of variable\n        #          'k' within the current context.  
\n\n        # 1A: We have a backtick shortcut, such as $(`date`)\n        if match and backtick:\n            return self._recurse(result, backtick, parent)\n\n        # 1B: We have an embedded self reference such as \"PATH\": \"/bin:$(PATH)\".  We use\n        #     the last defined value in a prior environment as the value.\n        elif parent == k and wholematch is not None:\n            val = (self._get_shadow_environment(k) or _DICT_CONST).get(k) or ''\n\n        # 1C: We have already calculated a result and will use that instead, but only\n        #     in a nested expansion.  We re-evaluate top-levels all the time.\n        elif wholematch is not None and k in result:\n            val = result[k]\n\n        # 1D: We have a variable which is not part of our environment at all, and\n        #     either treat it as empty, or as the wholematch value for further\n        #     processing\n        elif k not in self:\n            val = \"\" if match else wholematch\n        \n        # 1E: Finally, we will store this value and expand further.\n        else:\n            result[k] = self[k] # assure that recursion attempts stop with this value\n            val = result[k] = self._recurse(result, self[k], k)\n            \n        # We now have, in 'val', the fully expanded contents of the variable 'k'\n\n        if not match:\n            return val\n\n\n        # Phase 2: Process any operators to return a possibly modified\n        #          value as the result of the complete expression.\n\n        if oper == '?':\n            if not val:\n                raise ChVariableError(self._recurse(result, repl, parent))\n\n        elif oper == '/':\n            smatch = _RE_SLASHOP.match(repl)\n            if not smatch:\n                raise ChParameterError(\"invalid regex replacement syntax in '{0}'\".format(match.group(0)))\n\n            val = self._recurse(result, re.sub(((\"(?\" + smatch.group(3) + \")\") if smatch.group(3) else \"\") + smatch.group(1),\n               
                                smatch.group(2).replace('\\/', '/'),\n                                               val), parent)\n\n        elif oper == '|':\n            vts = _RE_BAREBAR.split(repl, 3)\n            if len(vts) == 1: # same as +\n                val = '' if not val else self._recurse(result, vts[0], parent)\n            elif len(vts) == 2:\n                val = self._recurse(result, vts[0] if val else vts[1], parent)\n            elif len(vts) >= 3:\n                editval = vts[1] if fnmatch(val.replace(r'\\|', '|').lower(), vts[0].lower()) else vts[2]\n                val = self._recurse(result, editval.replace(r'\\|', '|'), parent)\n\n        elif oper == \"+\":\n            val = '' if not val else self._recurse(result, repl, parent)\n\n        elif oper == \"_\":       # strict opposite of +\n            val = '' if val else self._recurse(result, repl, parent)\n\n        elif oper == \"-\":       # bash :-\n            if not val:\n                val = self._recurse(result, repl, parent)\n\n        return val\n    \n    def _recurse(self, result, buf, parent_var = None):\n        \"Worker method to isolate recursive env variable expansion, with backtick support\"\n        return _RE_BACKTICK.sub(self._backtick_expand,\n                                self._cls_scan.parse(buf, self._expand_into, result, parent_var))\n\n    def _backtick_expand(self, cmd):\n        \"\"\"\n        Performs rudimentary backtick expansion after all other environment variables have been\n        expanded.   
Because these are cached, the user should not expect results to differ\n        for different environment contexts, nor should the environment itself be relied upon.\n        \"\"\"\n\n        # Accepts either a string or match object\n        if not isinstance(cmd, str):\n            cmd = cmd.group(1)\n\n        if not self._cls_backtick:\n            return \"`\" + cmd + \"`\"\n\n        key = '{0}:{1}:{2}'.format(self.uid, self.gid, cmd)\n\n        result = self._cls_btcache.get(key)\n\n        if result is None:\n            if self.uid:\n                try:\n                    pwrec = lookup_user(self.uid, self.gid)\n                except ChNotFoundError as ex:\n                    ex.annotate('(required for backtick expansion `{0}`)'.format(cmd))\n                    raise ex\n            else:\n                pwrec = None\n\n            def _proc_setup():\n                if pwrec:\n                    os.setgid(pwrec.pw_gid)\n                    os.setuid(pwrec.pw_uid)\n\n            try:\n                result = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT,\n                                                 preexec_fn=_proc_setup)\n                result = result.decode()\n            except Exception as ex:\n                error(ex, \"Backtick expansion returned error: \" + str(ex))\n                result = \"\"\n\n            result = result.strip().replace(\"\\n\", \" \")\n            if self._cls_use_btcache:\n                self._cls_btcache[key] = result\n\n        return result\n\n    def get_public_environment(self):\n        \"\"\"\n        Public variables are those which are exported to the application and do NOT start with an\n        underscore.  All underscore names will be kept private.\n        \"\"\"\n        return {k:v for k,v in self.expanded().items() if not (k.startswith('_') or v in (None, ''))}\n"
  },
  {
    "path": "chaperone/cutil/errors.py",
    "content": "import errno\n\nclass ChError(Exception):\n\n    # Named the same as OSError so that exception code can detect the presence\n    # of an errno for reporting purposes\n    errno = None\n    annotation = None\n\n    def annotate(self, text):\n        if self.annotation:\n            self.annotation += ' ' + text\n        else:\n            self.annotation = text\n\n    def __str__(self):\n        supmsg = super().__str__()\n        if self.annotation:\n            supmsg += ' ' + self.annotation\n        return supmsg\n        \n    def __init__(self, message = None, errno = None):\n        super().__init__(message)\n        if errno is not None:\n            self.errno = errno\n\nclass ChParameterError(ChError):\n    errno = errno.EINVAL\n\nclass ChNotFoundError(ChError):\n    errno = errno.ENOENT\n\nclass ChSystemError(ChError):\n    pass\n\nclass ChProcessError(ChError):\n\n    def __init__(Self, message = None, errno = None, resultcode = None):\n        if resultcode is not None and errno is None:\n            errno = resultcode.errno\n        super().__init__(message, errno)\n\nclass ChVariableError(ChError):\n    pass\n\ndef get_errno_from_exception(ex):\n    try:\n        return ex.errno\n    except AttributeError:\n        return None\n"
  },
  {
    "path": "chaperone/cutil/events.py",
    "content": "\nIS_EVENT = lambda e: e.startswith('on') and len(e) > 2 and e[2:3].isupper()\n\ndef SWALLOW_EVENT(*args, **kwargs):\n    pass\n\n\nclass EventSource:\n    \"\"\"\n    This is a elegant generic class to set up and handle events.\n\n    Events are always identified by keyword arguments of the format\n    onXxxxx.\n\n      def __init__(self, **kwargs):\n        events = EventSource()\n        kwargs = events.add(**kwargs)\n\n      def foo(self):\n        self.events.onMiscEvent()\n\n      \n    \"\"\"\n\n    __events = None\n\n    def __init__(self, **kwargs):\n        self.__events = dict()\n        if kwargs:\n            self._exec_kwargs(self._do_add, kwargs)\n\n    def __getattribute__(self, key):\n        if IS_EVENT(key):\n            return self.__events.get(key, SWALLOW_EVENT)\n\n        return object.__getattribute__(self, key)\n\n    def _exec_kwargs(self, oper, kwargs):\n        events = [e for e in kwargs.keys() if IS_EVENT(e)]\n        if not events:\n            return kwargs\n\n        for e in events:\n            oper(e, kwargs[e])\n            del kwargs[e]\n\n        return kwargs\n\n    def clear(self):\n        \"Removes all event handlers.\"\n        self.__events.clear()\n\n    def reset(self, **kwargs):\n        \"Removes all event handlers and sets new ones.\"\n        self.__events.clear()\n        return self._exec_kwargs(self._do_add, kwargs)\n\n    def add(self, **kwargs):\n        \"\"\"\n        Adds one or more events:\n           add(onError = handler, onExit = handler)\n       \n        Returns the kwargs not processed.\n        \"\"\"\n        return self._exec_kwargs(self._do_add, kwargs)\n\n    def remove(self, **kwargs):\n        \"\"\"\n        Removes one or more events:\n           remove(onError = handler, onExit = handler)\n       \n        Returns the kwargs not processed.\n        \"\"\"\n        return self._exec_kwargs(self._do_remove, kwargs)\n\n    def _do_add(self, name, value):\n        assert 
callable(value)\n\n        e = self.__events.get(name)\n\n        # No such event, add a singleton\n        if not e:\n            self.__events[name] = value\n            return\n\n        # Add to multi-event dispatcher\n        try:\n            e.__eventlist.append(value)\n            return\n        except AttributeError:\n            pass\n        \n        # Create multi-event dispatcher\n\n        displist = [e, value]\n        def dispatcher(*args, _displist = displist, **kwargs):\n            for edisp in _displist:\n                edisp(*args, **kwargs)\n        dispatcher.__eventlist = displist\n\n        self.__events[name] = dispatcher\n\n    def _do_remove(self, name, value):\n        e = self.__events.get(name)\n\n        if not e:                       # no handler registered for this event\n            return\n\n        try:\n            e.__eventlist.remove(value)\n        except ValueError:\n            return                      # not in list, ignore\n        except AttributeError:\n            try:\n                del self.__events[name] # singleton\n            except KeyError:\n                return                  # no singleton, ignore\n"
  },
  {
    "path": "chaperone/cutil/format.py",
    "content": "def fstr(s):\n    if s is None:\n        return '-'\n    if isinstance(s, bool):\n        return str(s).lower()\n    return str(s)\n\nclass TableFormatter(list):\n\n    \"\"\"\n    A quick formatting class which allows you to build a table, then output it\n    neatly with columns and headings.\n    \"\"\"\n\n    attributes = None\n    headings = None\n    _sortfield = None\n\n    def __init__(self, *args, sort=None):\n        self.attributes = tuple(isinstance(a, tuple) and a[1] or a for a in args)\n        self.headings = tuple(isinstance(a, tuple) and a[0] or a for a in args)\n        self._hsize = list(len(h) for h in self.headings)\n        if sort in self.attributes:\n            self._sortfield = self.attributes.index(sort)\n\n    def add_rows(self, rows):\n        for r in rows:\n            row = tuple(getattr(r, attr, None) for attr in self.attributes)\n            for i in range(len(row)):\n                self._hsize[i] = max(self._hsize[i], len(fstr(row[i])))\n            self.append(row)\n\n    def get_formatted_data(self):\n        if self._sortfield is not None:\n            rows = sorted(self, key=lambda r: r[self._sortfield])\n        else:\n            rows = self\n\n        hz = self._hsize\n        fieldcount = range(len(hz))\n        sep = \"  \"\n        dividers = tuple(\"-\" * hz[i] for i in fieldcount)\n\n        return \"\\n\".join(sep.join(fstr(row[i]).ljust(hz[i])\n                                  for i in fieldcount) \n                         for row in [self.headings] + [dividers] + rows)\n"
  },
  {
    "path": "chaperone/cutil/logging.py",
    "content": "import logging\nimport os\nimport sys\nimport traceback\nfrom time import strftime\n\nfrom logging.handlers import SysLogHandler\nfrom functools import partial\n\nimport chaperone.cutil.syslog_info as syslog_info\n\nlogger = logging.getLogger(__name__)\n\n_root_logger = logging.getLogger(None)\n_stderr_handler = logging.StreamHandler()\n_cur_level = logging.NOTSET\n\n_format = logging.Formatter()\n_stderr_handler.setFormatter(_format)\n\n_root_logger.addHandler(_stderr_handler)\n\n\ndef set_log_level(lev):\n    global _cur_level\n\n    _cur_level = syslog_info.syslog_to_python_lev(lev)\n    logger.setLevel(_cur_level)\n\n\ndef set_custom_handler(handler, enable = True):\n    if enable:\n        _root_logger.addHandler(handler)\n        _root_logger.removeHandler(_stderr_handler)\n        logger.setLevel(logging.DEBUG)\n    else:\n        _root_logger.removeHandler(handler)\n        _root_logger.addHandler(_stderr_handler)\n        logger.setLevel(_cur_level)\n\n\ndef _versatile_logprint(delegate, fmt, *args, \n                        facility=None, exceptions=False, \n                        program=None, pid=None, **kwargs):\n    \"\"\"\n    In addition to standard log formatting, the following two special cases are\n    covered:\n    1.  If there are no formatting characters (%), then simply concatenate repr() of *args\n    2.  If there are '{' formatting arguments, then apply new-style .format using arguments\n        provided.\n\n    Additionally, you can pass an exception as the first argument:\n    1.  If no other arguments are provided, then the exception message will be the\n        log item.\n    2.  
A traceback will be printed in the case where the logger priority level is set to debug.\n    \"\"\"\n\n    if isinstance(fmt, Exception):\n        ex = fmt\n        args = list(args)\n        if len(args) == 0:\n            fmt = str(ex)\n        else:\n            fmt = args.pop(0)\n    else:\n        ex = None\n\n    if facility is not None or program or pid:\n        extra = kwargs['extra'] = {}\n        if facility:\n            extra['_facility'] = facility\n        if program:\n            extra['program_name'] = str(program)\n        if pid:\n            extra['program_pid'] = str(pid)\n\n    \n    if ex and (exceptions or logger.level == logging.DEBUG): # use python level here\n        trace = \"\\n\" + traceback.format_exc()\n    else:\n        trace = \"\"\n\n    if not len(args):\n        delegate(fmt, **kwargs)\n    elif '%' not in fmt:\n        if '{' in fmt:\n            delegate('%s', fmt.format(*args) + trace, **kwargs)\n        else:\n            delegate('%s', \" \".join([repr(a) for a in args]) + trace, **kwargs)\n    else:\n        delegate(fmt, *args, **kwargs)\n\nwarn = partial(_versatile_logprint, logger.warning)\ninfo = partial(_versatile_logprint, logger.info)\ndebug = partial(_versatile_logprint, logger.debug, exceptions=True)\nerror = partial(_versatile_logprint, logger.error)\n"
  },
  {
    "path": "chaperone/cutil/misc.py",
    "content": "import os\nimport pwd\nimport grp\nimport copy\nimport signal\nimport subprocess\n\nfrom chaperone.cutil.errors import ChNotFoundError, ChParameterError, ChSystemError\n\nclass objectplus:\n    \"\"\"\n    An object which provides some general-purpose useful patterns.\n    \"\"\"\n\n    _cls_singleton = None\n\n    @classmethod\n    def sharedInstance(cls):\n        \"Return a singleton object for this class.\"\n        if not cls._cls_singleton:\n            cls._cls_singleton = cls()\n        return cls._cls_singleton\n\n\nclass lazydict(dict):\n\n    __slots__ = ()              # create no __dict__ overhead for a pure dict subclass\n\n    def __init__(self, *args):\n        \"\"\"\n        Allow a series of iterables as an initializer.\n        \"\"\"\n        super().__init__()\n        for a in args:\n            self.update(a)\n\n    def get(self, key, default = None):\n        \"\"\"\n        A very of get() that accepts lazy defaults.  You can provide a callable which will be invoked only\n        if necessary.\n        \"\"\"\n        if key in self:\n            return self[key]\n\n        return default() if callable(default) else default\n    \n    def setdefault(self, key, default = None):\n        \"\"\"\n        A version of setdefault that works the way it should, by having a lambda that is executed\n        only in the case where the item does not exist.\n        \"\"\"\n        if key in self:\n            return self[key]\n        self[key] = value = default() if callable(default) else default\n\n        return value\n\n    def smart_update(self, key, theirs):\n        \"\"\"\n        Smart update replaces values in our dictionary with values from the other.  However,\n        in the case where both dictionaries contain sub-dictionaries, the sub-dictionaries\n        are updated rather than replaced.  
(This makes things like env_set inheritance easier.)\n        \"\"\"\n        ours = super().get(key)\n        if ours is None:\n            self[key] = theirs # nothing stored yet, so adopt theirs wholesale\n            return\n\n        for k,v in theirs.items():\n            oursub = ours.get(k)\n            if isinstance(oursub, dict) and isinstance(v, dict):\n                oursub.update(v)\n            else:\n                ours[k] = v\n\n    def deepcopy(self):\n        return copy.deepcopy(self)\n\n\ndef maybe_remove(fn, strict = False):\n    \"\"\"\n    Tries to remove a file but ignores a FileNotFoundError or PermissionError.  If an exception\n    would have been raised, returns the exception, otherwise None.\n\n    If \"strict\" then the file must either be missing, or successfully removed.  Other errors\n    will still raise exceptions.\n    \"\"\"\n    try:\n        os.remove(fn)\n    except (FileNotFoundError if strict else (FileNotFoundError, PermissionError)) as ex:\n        return ex\n\n    return None\n\n\n        \ndef is_exe(p):\n    return os.path.isfile(p) and os.access(p, os.X_OK)\n\ndef executable_path(fn, env = os.environ):\n    \"\"\"\n    Returns the fully qualified pathname to an executable.  The PATH is searched, and\n    any tilde expansions are performed.  Exceptions are raised as usual.\n    \"\"\"\n    penv = env.get(\"PATH\")\n    newfn = os.path.expanduser(fn)\n    path,prog = os.path.split(newfn)\n    \n    if not path and penv:\n        for path in penv.split(os.pathsep):\n            if is_exe(os.path.join(path, prog)):\n                newfn = os.path.join(path, prog)\n                break\n\n    if not os.path.isfile(newfn):\n        raise FileNotFoundError(fn)\n    if not os.access(newfn, os.X_OK):\n        raise PermissionError(fn)\n\n    return newfn\n                \n_lookup_user_cache = {}\n\ndef lookup_user(uid, gid = None):\n    \"\"\"\n    Looks up a user using either a name or integer user value.  
If a group is specified,\n    then the group is set explicitly in the returned pwrec.\n    \"\"\"\n    key = (uid, gid)\n    retval = _lookup_user_cache.get(key)\n    if retval:\n        return retval\n\n    # calculate the new entry\n\n    intuid = None\n\n    try:\n        intuid = int(uid)\n    except ValueError:\n        pass\n    \n    try:\n        if intuid is not None:\n            pwrec = pwd.getpwuid(intuid)\n        else:\n            pwrec = pwd.getpwnam(uid)\n    except KeyError:\n        raise ChNotFoundError(\"specified user ('{0}') does not exist\".format(uid))\n\n    if gid is None:\n        return pwrec\n\n    retval = _lookup_user_cache[key] = type(pwrec)(\n        (pwrec.pw_name,\n         pwrec.pw_passwd,\n         pwrec.pw_uid,\n         lookup_group(gid, True),\n         pwrec.pw_gecos,\n         pwrec.pw_dir,\n         pwrec.pw_shell)\n    )\n\n    return retval\n\n\ndef lookup_group(gid, optional = False):\n    \"\"\"\n    Looks up a group using either a name or integer group value.\n    If 'optional' is true, then does not require that the group exist, and always\n    returns the numeric value of 'gid', or the mapping from 'gid' if it is a name.\n    Otherwise returns the group record.\n    \"\"\"\n    intgid = None\n\n    try:\n        intgid = int(gid)\n    except ValueError:\n        pass\n    \n    if intgid is not None:\n        if optional:\n            return intgid\n        findit = grp.getgrgid\n    else:\n        findit = grp.getgrnam\n\n    try:\n        grrec = findit(gid)\n    except KeyError:\n        raise ChNotFoundError(\"specified group ('{0}') does not exist\".format(gid))\n\n    return grrec.gr_gid if optional else grrec\n\n\ndef groupadd(name, gid):\n    \"\"\"\n    Adds a group to the system with the specified name and GID.\n    \"\"\"\n    # First, try the gnu tools way\n    try:\n        if subprocess.call(['groupadd', '-g', str(gid), name]) == 0:\n            return\n        raise ChSystemError(\"Unable to add a group 
with name={0} and GID={1}\".format(name, gid))\n    except FileNotFoundError:\n        pass\n\n    # Now, try using 'addgroup' with the busybox syntax\n    if subprocess.call(\"addgroup -g {0} {1}\".format(gid, name), shell=True) == 0:\n        return\n\n    raise ChSystemError(\"Unable to add a group with name={0} and GID={1}\".format(name, gid))\n\n\ndef useradd(name, uid = None, gid = None, home = None):\n    \"\"\"\n    Adds a user to the system given an optional UID and numeric GID.\n    \"\"\"\n\n    ucmd = ['useradd', '--no-create-home']\n    if uid is not None:\n        ucmd += ['-u', str(uid)]\n    if gid is not None:\n        ucmd += ['-g', str(gid)]\n    if home is not None:\n        ucmd += ['--home-dir', home]\n    \n    ucmd += [name]\n\n    tried = \" \".join(ucmd)\n\n    # try gnu tools first\n    try:\n        if subprocess.call(ucmd) == 0:\n            return\n        raise ChSystemError(\"Error while trying to add user: {0} ({1})\".format(name, tried))\n    except FileNotFoundError:\n        pass\n\n    ucmd = \"adduser -D -H\"\n    if uid is not None:\n        ucmd += \" -u \" + str(uid)\n    if gid is not None:\n        ucmd += \" -G \" + str(gid)\n    if home is not None:\n        ucmd += \" -h '{0}'\".format(home)\n\n    ucmd += \" \" + name\n\n    tried += \"\\n\" + ucmd\n\n    # try busybox-style adduser\n    if subprocess.call(ucmd, shell=True) == 0:\n        return\n\n    raise ChSystemError(\"Error while trying to add user: {0}\\ntried:\\n{1}\".format(name, tried))\n    \n\ndef userdel(name):\n    \"\"\"\n    Removes a user from the system.\n    \"\"\"\n    del_ex = ChSystemError(\"Error while trying to remove user: {0}\".format(name))\n\n    # try gnu tools first\n    try:\n        if subprocess.call(['userdel', name]) == 0:\n            return\n        raise del_ex\n    except FileNotFoundError:\n        pass\n\n    # try busybox-style adduser\n    if subprocess.call(\"deluser \" + name, shell=True) == 0:\n        return\n\n    raise 
del_ex\n    \n    \n# User Directories Directory cache\n_udd = None\n\ndef get_user_directories_directory():\n    \"\"\"\n    Determines the directory where user directories are stored.  This is actually\n    not that easy, and different systems have different ways of doing it.  So,\n    we try adding a user called '_chaptest_' just to see where the directory goes,\n    and use that.\n    \"\"\"\n    global _udd\n\n    if _udd is not None:\n        return _udd\n\n    try:\n        testuser = \"_chaptest_\"\n        useradd(testuser)\n        userinfo = lookup_user(testuser)\n\n        _udd = os.path.dirname(userinfo.pw_dir)\n\n        userdel(testuser)\n    except Exception:\n        _udd = \"/\"              # default if any error occurs\n\n    return _udd\n\ndef maybe_create_user(user, uid = None, gid = None, using_file = None, default_home = None):\n    \"\"\"\n    If the user does not exist, then create one with the given name, and optionally\n    the specified uid.  If a gid is specified, create a group with the same name as the \n    user, and the given gid.\n\n    If the user does exist, then confirm that the uid and gid match, if either\n    or both are specified.\n\n    If 'using_file' is specified, then uid/gid are ignored and replaced with the uid/gid\n    of the specified file.  
The file must exist and be readable.\n    \"\"\"\n\n    if using_file:\n        stat = os.stat(using_file)\n        if uid is None:\n            uid = stat.st_uid\n        if gid is None:\n            gid = stat.st_gid\n\n    if uid is not None:\n        try:\n            uid = int(uid)\n        except ValueError:\n            raise ChParameterError(\"Specified UID is not a number: {0}\".format(uid))\n        \n    try:\n        pwrec = lookup_user(user)\n    except ChNotFoundError:\n        pwrec = None\n\n    # If the user exists, we do nothing, but we do validate that their UID and GID\n    # match.\n\n    if pwrec:\n        if uid is not None and uid != pwrec.pw_uid:\n            raise ChParameterError(\"User {0} exists, but does not have expected UID={1}\".format(user, uid))\n        if gid is not None and lookup_group(gid).gr_gid != pwrec.pw_gid:\n            raise ChParameterError(\"User {0} exists, but does not have expected GID={1}\".format(user, gid))\n        return\n\n    # Now, we need to create the user, and optionally the group.\n\n    if gid is not None:\n\n        create_group = False\n        try:\n            newgid = lookup_group(gid).gr_name # always use name\n        except ChNotFoundError:\n            create_group = True\n            try:\n                newgid = int(gid)   # must be a number at this point\n            except ValueError:\n                # We don't report the numeric error, because we *know* there is no such group\n                # and we won't create a symbolic group with a randomly-created number.\n                raise ChParameterError(\"Group does not exist: {0}\".format(gid))\n\n        if create_group:\n            groupadd(user, newgid)\n            newgid = lookup_group(user).gr_name\n            \n        gid = newgid              # always will be the group name\n\n    # Test to see if the user directory itself already exists, which should be the case.\n    # If it doesn't, then use the default, if provided.\n\n   
 home = None\n\n    if default_home:\n        udd = get_user_directories_directory()\n        if not os.path.exists(os.path.join(udd, user)):\n            home = default_home\n\n    useradd(user, uid, gid, home)\n\n\ndef _assure_dir_for(path, pwrec, gid):\n    # gid is present so we know if we need to set group modes, but\n    # we always use the one in pwrec\n\n    if os.path.exists(path):\n        return\n\n    _assure_dir_for(os.path.dirname(path), pwrec, gid)\n\n    os.mkdir(path, 0o755 if not gid else 0o775)\n    if pwrec:\n        os.chown(path, pwrec.pw_uid, pwrec.pw_gid if gid else -1)\n    \ndef open_foruser(filename, mode = 'r', uid = None, gid = None, exists_ok = True):\n    \"\"\"\n    Similar to open(), but assures all directories exist (similar to os.makedirs)\n    and assures that all created objects are writable by the given user, and\n    optionally by the given group (causing mode to be set accordingly).\n    \"\"\"\n    if uid:\n        pwrec = lookup_user(uid, gid)\n    else:\n        pwrec = None\n        gid = None\n\n    rp = os.path.realpath(filename)\n    _assure_dir_for(os.path.dirname(rp), pwrec, gid)\n\n    fobj = open(rp, mode)\n\n    if pwrec:\n        os.chown(rp, pwrec.pw_uid, pwrec.pw_gid if gid else -1)\n        os.chmod(rp, 0o644 if not gid else 0o664)\n\n    return fobj\n\n\nSIGDICT = dict((v,k) for k,v in sorted(signal.__dict__.items())\n               if k.startswith('SIG') and not k.startswith('SIG_'))\n\ndef remove_for_recreate(filename):\n    \"\"\"\n    Indicates the intention to recreate the file at the given path.  
This function can be used\n    in advance to assure that\n       a) any existing file is gone, and\n       b) full permissions and directories exist for creation of a new file in its place\n    \"\"\"\n    ex = maybe_remove(filename, strict = True)\n    open_foruser(filename, mode='w').close()\n    os.remove(filename)\n\ndef get_signal_name(signum):\n    return SIGDICT.get(signum, \"SIG%d\" % signum)\n\ndef get_signal_number(signame):\n    sup = signame.upper()\n    if sup.startswith('SIG') and not sup.startswith('SIG_'):\n        num = getattr(signal, sup, None)\n    else:\n        try:\n            num = int(signame)\n        except ValueError:\n            num = None\n    \n    if num is None:\n        raise ChParameterError(\"Invalid signal specifier: \" + str(signame))\n\n    return num\n"
  },
  {
    "path": "chaperone/cutil/notify.py",
    "content": "import asyncio\nimport socket\nimport os\nimport re\n\nfrom chaperone.cutil.servers import Server, ServerProtocol\nfrom chaperone.cutil.misc import maybe_remove\nfrom chaperone.cutil.logging import debug\n\n_RE_NOTIFY = re.compile(r'^([A-Za-z]+)=(.+)$')\n\nclass NotifyProtocol(ServerProtocol):\n\n    def datagram_received(self, data, addr):\n        lines = data.decode().split(\"\\n\")\n        for line in lines:\n            m = _RE_NOTIFY.match(line)\n            if m:\n                self.events.onNotify(self.owner, m.group(1), m.group(2))\n\n\nclass NotifyListener(Server):\n\n    def _create_server(self):\n        loop = asyncio.get_event_loop()\n        return loop.create_datagram_endpoint(NotifyProtocol.buildProtocol(self), family=socket.AF_UNIX)\n\n    @property\n    def is_client(self):\n        return False\n\n    @property\n    def socket_name(self):\n        return self._socket_name\n\n    @property\n    def bind_name(self):\n        if self._socket_name.startswith('@'):\n            return self._socket_name.replace('@', \"\\0\")\n        return self._socket_name\n\n    def __init__(self, socket_name, **kwargs):\n        super().__init__(**kwargs)\n        self._socket_name = socket_name\n        \n    @asyncio.coroutine\n    def send(self, message):\n        if not self.server:\n            yield from self.run()\n\n        self.server[0].sendto(message.encode(), self.bind_name)\n\n    @asyncio.coroutine\n    def server_running(self):\n        (transport, protocol) = self.server\n\n        bindname = self.bind_name\n\n        # Clients connect to an existing socket\n        if self.is_client:\n            loop = asyncio.get_event_loop()\n            yield from loop.sock_connect(transport._sock, bindname)\n            return\n\n        # Servers set up a binding to a new one\n        transport._sock.bind(bindname)\n\n        if not bindname.startswith(\"\\0\"): # if not abstract socket\n            os.chmod(bindname, 0o777)\n\n    def 
close(self):\n        super().close()\n        if not (self.is_client or self._socket_name.startswith('@')):\n            maybe_remove(self._socket_name)\n\n\n# A lot like a socket server; there are only subtle differences.\n\nclass NotifyClient(NotifyListener):\n\n    @property\n    def is_client(self):\n        return True\n\n# A sink for specific notify messages.  Can operate with or without a client,\n# and has multiple levels of support.\n\nclass NotifySink:\n\n    # Notification levels:\n    #   level 0: nothing\n    #   level 1: only READY notifications\n    #   level 2: READY and STATUS\n    #   level 3: adds ERRNO and STOPPING messages\n\n    _LEVS = [\n        set(),\n        {'READY'},\n        {'READY', 'STATUS'},\n        {'READY', 'STATUS', 'ERRNO', 'STOPPING'},\n    ]\n\n    _client = None\n    _lev = None\n    _sent = None\n\n    def __init__(self):\n        self.level = 99\n        self._sent = set()\n\n    @property\n    def level(self):\n        try:\n            return self._LEVS.index(self._lev)\n        except ValueError:\n            return None\n\n    @level.setter\n    def level(self, val):\n        if val >= len(self._LEVS):\n            val = len(self._LEVS) - 1\n        self._lev = self._LEVS[val].copy()\n\n    def enable(self, ntype):\n        self._lev.add(ntype.upper())\n\n    def disable(self, ntype):\n        self._lev.discard(ntype.upper())\n\n    def error(self, val):\n        if not self.sent(\"ERRNO\"):\n            self.send(\"ERRNO\", int(val))\n\n    def stopping(self):\n        if not self.sent(\"STOPPING\"):\n            self.send(\"STOPPING\", 1)\n\n    def ready(self):\n        if not self.sent(\"READY\"):\n            self.send(\"READY\", 1)\n\n    def status(self, statmsg):\n        self.send(\"STATUS\", statmsg)\n\n    def mainpid(self):\n        self.send(\"MAINPID\", os.getpid())\n\n    def sent(self, name):\n        return name in self._sent\n\n    def send(self, name, val):\n
        if name not in self._lev:\n            return\n        self._sent.add(name)\n        if self._client:\n            debug(\"queueing '{0}={1}' to notify socket '{2}'\".format(name, val, self._client.socket_name))\n            asyncio.async(self._do_send(\"{0}={1}\".format(name, val)))\n\n    @asyncio.coroutine\n    def _do_send(self, msg):\n        if self._client:\n            yield from self._client.send(msg)\n\n    @asyncio.coroutine\n    def connect(self, socket = None):\n        \"\"\"\n        Connects to the notify socket.  However, if we can't, it's not considered an error.\n        We just return False.\n        \"\"\"\n\n        self.close()\n\n        if socket is None:\n            if \"NOTIFY_SOCKET\" not in os.environ:\n                return False\n            socket = os.environ[\"NOTIFY_SOCKET\"]\n        \n        self._client = NotifyClient(socket, \n                                    onClose = lambda which,exc: self.close(),\n                                    onError = lambda which,exc: debug(\"{0} error, notifications disabled\".format(socket)))\n\n        try:\n            yield from self._client.run()\n        except OSError as ex:\n            debug(\"could not connect to notify socket '{0}' ({1})\".format(socket, ex))\n            self.close()\n            return False\n\n        return True\n        \n    def close(self):\n        if not self._client:\n            return\n        self._client.close()\n        self._client = None\n"
  },
  {
    "path": "chaperone/cutil/patches.py",
"content": "import inspect\nimport importlib\n\n# This module contains patches to Python.  A patch wouldn't appear here if it didn't have major impact,\n# and they are constructed and researched carefully.  Avoid if possible, please.\n\n# Patch routine for patching classes.  Ignore ALL exceptions, since there could be any number of\n# reasons why a distribution may not allow such patching (though most do).  Exact code is compared,\n# so there is little chance of an error in deciding if the patch is relevant.\n\ndef PATCH_CLASS(module, clsname, member, oldstr, newfunc):\n    try:\n        cls = getattr(importlib.import_module(module), clsname)\n        should_be = ''.join(inspect.getsourcelines(getattr(cls, member))[0])\n        if should_be == oldstr:\n            setattr(cls, member, newfunc)\n    except Exception:\n        pass\n\n\n# PATCH  for Issue23140: https://bugs.python.org/issue23140\n# WHERE  asyncio\n# IMPACT Eliminates exceptions during process termination\n# WHY    There is no workaround except upgrading to Python 3.4.3, which dramatically affects\n#        distro compatibility.  Mostly, this benefits Ubuntu 14.04 LTS.\n\nOLD_process_exited = \"\"\"    def process_exited(self):\n        # wake up futures waiting for wait()\n        returncode = self._transport.get_returncode()\n        while self._waiters:\n            waiter = self._waiters.popleft()\n            waiter.set_result(returncode)\n\"\"\"\n\ndef NEW_process_exited(self):\n    # wake up futures waiting for wait()\n    returncode = self._transport.get_returncode()\n    while self._waiters:\n        waiter = self._waiters.popleft()\n        if not waiter.cancelled():\n            waiter.set_result(returncode)\n\nPATCH_CLASS('asyncio.subprocess', 'SubprocessStreamProtocol', 'process_exited', OLD_process_exited, NEW_process_exited)\n"
  },
  {
    "path": "chaperone/cutil/proc.py",
    "content": "import os\nfrom chaperone.cutil.misc import get_signal_name\n\nclass ProcStatus(int):\n\n    _other_error = None\n    _errno = None\n\n    def __new__(cls, val):\n        try:\n            intval = int(val)\n        except ValueError:\n            rval = int.__new__(cls, 0)\n            rval._other_error = str(val)\n            return rval\n\n        return int.__new__(cls, intval)\n\n    @property\n    def exited(self):\n        return os.WIFEXITED(self)\n\n    @property\n    def signaled(self):\n        return os.WIFSIGNALED(self)\n\n    @property\n    def stopped(self):\n        return os.WIFSTOPPED(self)\n\n    @property\n    def continued(self):\n        return os.WIFCONTINUED(self)\n\n    @property\n    def exit_status(self):\n        status = (os.WIFEXITED(self) or None) and os.WEXITSTATUS(self)\n        if not status and self._errno:\n            return 1            # default to exit_status = 1 in the case of an errno value\n        return status\n\n    @property\n    def normal_exit(self):\n        return self.exit_status == 0 and not self._other_error\n\n    @property\n    def errno(self):\n        \"Map situation to an errno, even if contrived, unless one was provided.\"\n        if self._errno is not None:\n            return self._errno\n        if self.signal:\n            return 4  #EINTR\n        return 8      #ENOEXEC\n    @errno.setter\n    def errno(self, val):\n        self._errno = val\n\n    @property\n    def exit_message(self):\n        es = self.exit_status\n        if es is not None:\n            return os.strerror(es)\n        return None\n        \n    @property\n    def signal(self):\n        if os.WIFSTOPPED(self):\n            return os.WSTOPSIG(self)\n        if os.WIFSIGNALED(self):\n            return os.WTERMSIG(self)\n        return None\n\n    @property\n    def briefly(self):\n        if self.signaled or self.stopped:\n            return get_signal_name(self.signal)\n        if self.exited:\n            return 
\"exit({0})\".format(self.exit_status)\n        return '?'\n\n    def __format__(self, spec):\n        if spec:\n            return int.__format__(self, spec)\n        msg = \"<ProcStatus\"\n        if self._errno:\n            msg += \" errno={0}\".format(self._errno)\n        if self.exited:\n            msg += \" exit_status={0}\".format(self.exit_status)\n        if self.signaled:\n            msg += \" signal=%d\" % self.signal\n        if self.stopped:\n            msg += \" stopped=%d\" % self.signal\n        return msg + \">\"\n"
  },
  {
    "path": "chaperone/cutil/servers.py",
    "content": "import asyncio\nfrom functools import partial\nfrom chaperone.cutil.events import EventSource\n\nclass ServerProtocol(asyncio.Protocol):\n\n    @classmethod\n    def buildProtocol(cls, owner, **kwargs):\n        return partial(cls, owner, **kwargs)\n\n    def __init__(self, owner, **kwargs):\n        \"\"\"\n        Copy keywords directly into attributes when each protocol is created.\n        This creates flexibility so that various servers can pass information to protocols.\n        \"\"\"\n        \n        super().__init__()\n\n        self.owner = owner\n        self.events = self.owner.events\n\n        for k,v in kwargs.items():\n            setattr(self, k, v)\n\n    def connection_made(self, transport):\n        self.transport = transport\n        self.events.onConnection(self.owner)\n\n    def error_received(self, exc):\n        self.events.onError(self.owner, exc)\n        self.events.onClose(self.owner, exc)\n\n    def connection_lost(self, exc):\n        self.events.onClose(self.owner, exc)\n\nclass Server:\n\n    server = None\n\n    def __init__(self, **kwargs):\n        self.events = EventSource(**kwargs)\n\n    @asyncio.coroutine\n    def run(self):\n        self.loop = asyncio.get_event_loop()\n        self.server = yield from self._create_server()\n        yield from self.server_running()\n\n    @asyncio.coroutine\n    def server_running(self):\n        pass\n\n    def close(self):\n        s = self.server\n        if s:\n            if isinstance(s, tuple):\n                s = s[0]\n            s.close()\n"
  },
  {
    "path": "chaperone/cutil/syslog.py",
"content": "import asyncio\nimport socket\nimport os\nimport re\nimport sys\nimport logging\n\nfrom time import strftime\nfrom functools import partial\n\nfrom chaperone.cutil.logging import info, warn, debug, set_custom_handler\nfrom chaperone.cutil.misc import lazydict, maybe_remove, remove_for_recreate\nfrom chaperone.cutil.servers import ServerProtocol, Server\nfrom chaperone.cutil.syslog_handlers import LogOutput\n\nimport chaperone.cutil.syslog_info as syslog_info\n\n_RE_SPEC = re.compile(r'^(?P<fpfx>!?)(?:/(?P<regex>.+)/|\\[(?P<prog>.+)\\]|(?P<fac>[,*0-9a-zA-Z]+))\\.(?P<pfx>!?=?)(?P<pri>[*a-zA-Z]+)$')\n_RE_SPECSEP = re.compile(r' *; *')\n\n# The following is based on RFC3164 with some tweaks to deal with anomalies.\n# One anomaly worth mentioning is that some log sources append newlines (or whitespace) to their messages,\n# or include embedded newlines.  Here is a good JIRA discussion about how Apache dealt with this, including some background:\n#   https://issues.apache.org/jira/browse/LOG4NET-370\n# At present we merely DISCARD whitespace from the end of messages, but don't attempt to break multiple\n# messages into separate lines so that UDP syslog destinations don't have to deal with packet reordering,\n# which is a real pain for some people, with an example here:\n#  https://redmine.pfsense.org/issues/1938\n\n_RE_RFC3164 = re.compile(r'^<(?P<pri>\\d+)>(?P<date>\\w{3} [ 0-9][0-9] \\d\\d:\\d\\d:\\d\\d) (?:(?P<host>[^ :\\[]+) )?(?P<tag>[^ :\\[]+)(?P<rest>[:\\[ ].+?)\\s*$', re.DOTALL)\n\n\nclass _syslog_spec_matcher:\n    \"\"\"\n    This class supports matching a classic syslog.conf spec:\n       <facility>.<priority>\n    where:\n        facility is a list of comma-separated facilities, or '*'\n        priority is a priority (meaning >=priority) or =priority (meaning exactly that priority)\n    either may be preceded by '!'
 to invert the match.\n\n    And the extensions:\n       /regex/.<priority>\n       where regex will match the entire message\n\n       [prog].<priority>\n       where prog will match the program specifier, if any\n\n    One or more of the above can be combined, separated by semicolons.\n\n    Note that the syslogd semantics are hard to actually figure out, even if you scour the web.  So, here are\n    some rules.\n\n    The semicolon \"joins\" constraints by combining all negative constraints (those which omit facilities or priorities)\n    and positive constraints separately.   The result will be logged ONLY if all the positive constraints are true\n    and all of the negative constraints are false!\n\n    So,\n       *.!emerg               LOGS NOTHING (missing inclusions)\n       *.*;*.!emerg           logs everything but .emerg\n       *.info;![cron].*       logs all info or higher, but omits everything from program \"cron\"\n       *.*;![cron].!=info     Omits the info messages from any program BUT cron\n       [cron].*;*.!info       includes all cron messages except those of info and above\n\n    More specifically:\n       *.info                 Includes info through emergency (6->0) but not Debug\n       *.!info                Excludes info through emergency but does not exclude debug\n       *.=info                Includes just info itself\n       *.!=info               Excludes everything BUT info\n       !f.!=info              Excludes everything BUT info from everything BUT f\n\n    Why all this bother?\n    1.  Basic cases are pretty easy to read and understand.\n    2.  Negations can be understood if documented, and are useful.\n    3.  I don't want to introduce a completely new syntax.\n    4.
  Somewhere out there, there is some nerdy OCD guy who will say \"But wait, your selector format is so CLOSE\n        to the syslog format that you MUST support it with the same semantics or you're going to alienate [me].\"  \n        Just nipping that in the bud.\n    \"\"\"\n\n    __slots__ = ('_regexes', '_match', 'debugexpr', 'selector')\n\n    def __init__(self, selector, minimum_priority = None):\n        self.selector = selector\n        self._compile(minimum_priority)\n\n    def reset_minimum_priority(self, minimum_priority = None):\n        \"\"\"\n        Recompile the spec using a new minimum priority.  minimum_priority may be None to remove\n        any minimum and revert to the exact selectors.\n        \"\"\"\n        self._compile(minimum_priority)\n\n    def _compile(self, minimum_priority):\n        self._regexes = []\n\n        pieces = _RE_SPECSEP.split(self.selector)\n\n        # Build the lists of negative and positive expressions\n        neg = list()\n        pos = list()\n        for p in pieces:\n            self._init_spec(p, neg, pos, minimum_priority)\n\n        if not pos:\n            self._buildex(\"False\")\n        elif not neg:\n            self._buildex(\" or \".join(pos))\n        else:\n            self._buildex(\"(\" + (\" and \".join(neg)) + \") and (\" + (\" or \".join(pos)) + \")\")\n\n    def _buildex(self, expr):\n        # Perform some quick peephole optimization, then compile\n        nexpr = expr.replace(\"True and \", \"\").replace(\" and True\", \"\")\n        nexpr = nexpr.replace(\"not True\", \"False\").replace(\" and ((True))\", \"\")\n        nexpr = nexpr.replace(\"False or \", \"\").replace(\" or False\", \"\")\n        self.debugexpr = nexpr\n        self._match = eval(\"lambda s,p,f,g,buf: \" + nexpr)\n\n    def _init_spec(self, spec, neg, pos, minpri):\n        match = _RE_SPEC.match(spec)\n\n        if not match:\n            raise Exception(\"Invalid log spec syntax: \" + 
spec)\n\n        # Compile an expression to match\n\n        gdict = match.groupdict()\n\n        if gdict['regex'] is not None:\n            self._regexes.append(re.compile(gdict['regex'], re.IGNORECASE))\n            c1 = 'bool(s._regexes[%d].search(buf))' % (len(self._regexes) - 1)\n        elif gdict['prog'] is not None:\n            c1 = '(g and \"%s\" == g.lower())' % gdict['prog'].lower()\n        elif gdict['fac'] != '*':\n            faclist = [syslog_info.FACILITY_DICT.get(f) for f in gdict.get('fac', '').lower().split(',')]\n            if None in faclist:\n                raise Exception(\"Invalid logging facility code, %s: %s\" % (gdict['fac'], spec))\n            c1 = '(' + ' or '.join(['f==%d' % f for f in faclist]) + ')'\n        else:\n            c1 = 'True'\n\n        pri = gdict['pri']\n        pfx = gdict.get('pfx', '')\n\n        if pri == '*':\n            c2 = 'True'\n        else:\n            prival = syslog_info.PRIORITY_DICT.get(pri.lower())\n            if prival is None:\n                raise Exception(\"Invalid logging priority, %s: %s\" % (pri, spec))\n            if minpri is not None and minpri > prival:\n                prival = minpri\n            if '=' in pfx:\n                c2 = \"p==%d\" % prival\n            else:\n                c2 = \"p<=%d\" % prival\n\n        fpfx = gdict.get('fpfx', '')\n\n        # Assess negatives and positives.\n        # neg will contain \"EXCLUDE IF\" and pos will contain \"INCLUDE IF\"\n\n        if '!' in fpfx:\n            # Double exclusion means to exclude everything except the given priority from\n            # everything except the given facility\n            if '!' in pfx:\n                neg.append(\"(not %s and not %s)\" % (c1, c2))\n            else:\n                neg.append(\"not (%s and %s)\" % (c1, c2))\n        elif '!' 
in pfx:\n            neg.append(\"(not %s or not %s)\" % (c1, c2))\n        else:\n            pos.append(\"(%s and %s)\" % (c1, c2))\n            \n    def match(self, msg, prog = None, priority = syslog_info.LOG_ERR, facility = syslog_info.LOG_SYSLOG):\n        result = self._match(self, priority, facility, prog, msg)\n        #print('MATCH', prog, result, self.debugexpr)\n        return result\n\n        \nclass SyslogServerProtocol(ServerProtocol):\n\n    def datagram_received(self, data, addr):\n        self.data_received(data)\n\n    def data_received(self, data):\n        try:\n            message = data.decode('ascii', 'ignore')\n        except Exception:\n            warn(\"Could not decode SYSLOG record data\")\n            sys.stdout.flush()\n            return\n\n        messages = message.split(\"\\0\")\n\n        for m in messages:\n            if m:\n                self.owner.parse_to_output(m)\n        sys.stdout.flush()\n\nclass SyslogServer(Server):\n\n    _loglist = list()\n    _server = None\n    _log_socket = None\n\n    _capture_handler = None     # our capture handler to redirect python logs\n\n    def __init__(self, logsock = \"/dev/log\", datagram = True, **kwargs):\n        super().__init__(**kwargs)\n\n        self._datagram = datagram\n        self._log_socket = logsock\n\n        try:\n            os.remove(logsock)\n        except Exception:\n            pass\n\n    def _create_server(self):\n        if not self._datagram:\n            return self.loop.create_unix_server(\n                SyslogServerProtocol.buildProtocol(self), path=self._log_socket)\n\n        # Assure we will be able to bind later\n        remove_for_recreate(self._log_socket)\n\n        return self.loop.create_datagram_endpoint(\n            SyslogServerProtocol.buildProtocol(self), family=socket.AF_UNIX)\n\n    @asyncio.coroutine\n    def server_running(self):\n        # Bind the socket if it's a datagram\n        if self._datagram:\n
            transport = self.server[0]\n            transport._sock.bind(self._log_socket)\n        os.chmod(self._log_socket, 0o777)\n\n    def close(self):\n        self.capture_python_logging(False)\n        for logitem in self._loglist:\n            for m in logitem[1]:\n                m.close()\n        super().close()\n        maybe_remove(self._log_socket)\n\n    def configure(self, config, minimum_priority = None):\n        loglist = self._loglist = list()\n        lc = config.get_logconfigs()\n        for k,v in lc.items():\n            matcher = _syslog_spec_matcher(v.selector or '*.*', minimum_priority)\n            loglist.append( (matcher, LogOutput.getOutputHandlers(v)) )\n\n    def reset_minimum_priority(self, minimum_priority = None):\n        \"\"\"\n        Specifies a new minimum priority for logging.  Recompiles all selectors, so it's best\n        to provide this when configuration is done, if possible.\n        \"\"\"\n        for m in self._loglist:\n            m[0].reset_minimum_priority(minimum_priority)\n\n    def capture_python_logging(self, enable = True):\n        if enable:\n            if not self._capture_handler:\n                self._capture_handler = CustomSysLog(self)\n                set_custom_handler(self._capture_handler)\n        elif self._capture_handler:\n            set_custom_handler(self._capture_handler, False)\n            self._capture_handler = None\n\n    def parse_to_output(self, msg):\n        # For a description of what a valid syslog line can look like, see:\n        # http://www.rsyslog.com/doc/syslog_parsing.html\n\n        match = _RE_RFC3164.match(msg)\n        if not match:\n            pri = syslog_info.LOG_SYSLOG * 8 + syslog_info.LOG_ERR\n            logattrs = { 'tag': '?', 'format_error': True, 'host' : None }\n        else:\n            logattrs = match.groupdict()\n            pri = int(logattrs['pri'])\n            if logattrs['tag'][0] == '/':\n                logattrs['tag'] = 
os.path.basename(logattrs['tag'])\n\n        logattrs['raw'] = msg\n\n        self.writeLog(logattrs, priority = pri & 7, facility = pri // 8)\n\n    def writeLog(self, logattrs, priority, facility):\n        #print(\"\\nWRITELOG\", priority, facility, logattrs)\n        for m in self._loglist:\n            if m[0].match(logattrs['raw'], logattrs['tag'], priority, facility):\n                for logger in m[1]:\n                    logger.writeLog(logattrs, priority, facility)\n\n    \nclass SysLogFormatter(logging.Formatter):\n    \"\"\"\n    Handles formatting Python output in the same format as normal syslog daemons.\n    \"\"\"\n\n    def __init__(self, program, pid):\n\n        self.default_program = program\n        self.default_pid = pid\n\n        super().__init__('{asctime} {program_name}[{program_pid}]: {message}', style='{')\n\n    def format(self, record):\n        if not hasattr(record, 'program_name'):\n            setattr(record, 'program_name', self.default_program)\n        if not hasattr(record, 'program_pid'):\n            setattr(record, 'program_pid', self.default_pid)\n        return super().format(record)\n\n    def formatTime(self, record, datefmt=None):\n        timestr = strftime('%b %d %H:%M:%S', self.converter(record.created))\n        # this may be picky, but people parse syslogs, let's not annoy them\n        if timestr[3:5] == ' 0':\n            return timestr.replace(' 0', '  ', 1)\n        return timestr\n\n        \nclass CustomSysLog(logging.Handler):\n    \"\"\"\n    A custom Python logging class that makes it easy to redirect Python output to our\n    internal syslog capture handler.\n    \"\"\"\n\n    PRIORITY_NAMES = {\n        \"ALERT\":    syslog_info.LOG_ALERT,\n        \"CRIT\":     syslog_info.LOG_CRIT,\n        \"CRITICAL\": syslog_info.LOG_CRIT,\n        \"DEBUG\":    syslog_info.LOG_DEBUG,\n        \"EMERG\":    syslog_info.LOG_EMERG,\n        \"ERR\":      syslog_info.LOG_ERR,\n        \"ERROR\":    
syslog_info.LOG_ERR,        #  DEPRECATED\n        \"INFO\":     syslog_info.LOG_INFO,\n        \"NOTICE\":   syslog_info.LOG_NOTICE,\n        \"PANIC\":    syslog_info.LOG_EMERG,      #  DEPRECATED\n        \"WARN\":     syslog_info.LOG_WARNING,    #  DEPRECATED\n        \"WARNING\":  syslog_info.LOG_WARNING,\n        }\n\n    def __init__(self, owner):\n        super().__init__(logging.DEBUG) # enable all levels since we manage filtering ourselves\n        self._owner = owner\n        self.setFormatter(SysLogFormatter(sys.argv[0] or '-', os.getpid()))\n\n    def emit(self, record):\n        facility = getattr(record, '_facility', syslog_info.LOG_LOCAL5)\n        priority = self.PRIORITY_NAMES.get(record.levelname, syslog_info.LOG_ERR)\n\n        self._owner.parse_to_output(\"<{0}>\".format(facility << 3 | priority) + self.format(record))\n"
  },
  {
    "path": "chaperone/cutil/syslog_handlers.py",
    "content": "import sys\nimport os\nimport socket\nimport asyncio\n\nfrom time import time, localtime, strftime\n\nfrom chaperone.cutil.misc import lazydict, open_foruser\nfrom chaperone.cutil.syslog_info import get_syslog_info\n\n_our_hostname = socket.gethostname()\n\nclass LogOutput:\n    name = None\n    config_match = lambda c: False\n\n    _cls_handlers = lazydict()\n    _cls_reghandlers = list()\n\n    @classmethod\n    def register(cls, handlercls):\n        cls._cls_reghandlers.append(handlercls)\n\n    @classmethod\n    def getOutputHandlers(cls, config):\n        return list(filter(None, [h.getHandler(config) for h in cls._cls_reghandlers]))\n\n    @classmethod\n    def getName(cls, config):\n        return cls.name\n\n    @classmethod\n    def matchesConfig(cls, config):\n        return config.enabled and cls.config_match(config)\n\n    @classmethod\n    def getHandler(cls, config):\n        if not cls.matchesConfig(config):\n            return None\n        name = cls.getName(config)\n        if name is None:\n            return None\n        return cls._cls_handlers.setdefault(name, lambda: cls(config))\n\n    def __init__(self, config):\n        self.name = config.name\n        self.config = config\n\n    def close(self):\n        pass\n\n    def writeLog(self, logattrs, priority, facility):\n        if logattrs.get('format_error'):\n            msg = \"??\" + logattrs['raw']\n        else:\n            # Note that 'rest' always starts with a ':', '[' or ' '.\n            msg = (logattrs['date'] + ' ' + \n                   (self.config.logrec_hostname or logattrs['host'] or _our_hostname) + ' ' + \n                   logattrs['tag'] + logattrs['rest'])\n        if self.config.extended:\n            msg = get_syslog_info(facility, priority) + \" \" + msg\n        self.write(msg)\n\n    def write(self, data):\n        h = self.handle\n        h.write(data)\n        h.write(\"\\n\")\n        h.flush()\n                         \n\nclass 
StdoutHandler(LogOutput):\n\n    name = \"sys:stdout\"\n    handle = sys.stdout\n    config_match = lambda c: c.stdout\n\nLogOutput.register(StdoutHandler)\n\n\nclass StderrHandler(LogOutput):\n\n    name = \"sys:stderr\"\n    handle = sys.stderr\n    config_match = lambda c: c.stderr\n\nLogOutput.register(StderrHandler)\n\n\nclass RemoteClientProtocol:\n    def __init__(self, loop):\n        self.loop = loop\n        self.transport = None\n\n    def connection_made(self, transport):\n        self.transport = transport\n\n    def send(self, message):\n        self.transport.sendto(message.encode())\n\n    def datagram_received(self, data, addr):\n        pass\n\n    def error_received(self, exc):\n        pass\n\n    def connection_lost(self, exc):\n        self.transport = None\n\n    def close(self):\n        if self.transport:\n            self.transport.close()\n\n\nclass RemoteHandler(LogOutput):\n\n    config_match = lambda c: c.syslog_host is not None\n\n    _pending = None             # a pending future to setup this handler\n    _protocol = None            # protocol for this logger\n\n    @classmethod\n    def getName(cls, config):\n        return \"syslog_host:\" + config.syslog_host\n\n    @asyncio.coroutine\n    def setup_handler(self):\n        loop = asyncio.get_event_loop()\n        connect = loop.create_datagram_endpoint(lambda: RemoteClientProtocol(loop),\n                                                remote_addr=(self.config.syslog_host, 514))\n        (transport, protocol) = yield from connect\n        self._pending = None\n        self._protocol = protocol\n\n    def __init__(self, config):\n        super().__init__(config)\n        self._pending = asyncio.async(self.setup_handler())\n\n    def write(self, data):\n        if self._protocol:\n            self._protocol.send(data)\n\n    def close(self):\n        if self._pending:\n            if not self._pending.cancelled():\n                self._pending.cancel()\n            self._pending = 
None\n        if self._protocol:\n            self._protocol.close()\n            self._protocol = None\n\nLogOutput.register(RemoteHandler)\n\n\nclass FileHandler(LogOutput):\n\n    config_match = lambda c: c.file is not None\n\n    CHECK_INTERVAL = 60\n\n    _orig_filename = None\n    _cur_filename = None\n    _next_check = 0\n    _stat = None\n\n    @classmethod\n    def getName(cls, config):\n        return 'file:' + config.file\n\n    def __init__(self, config):\n        super().__init__(config)\n        self._orig_filename = os.path.abspath(config.file)\n        self._maybe_reopen()\n\n    def _maybe_reopen(self):\n        new_filename = strftime(self.config.file, localtime())\n        if new_filename != self._cur_filename or not self._stat:\n            reopen = True\n        else:\n            try:\n                newstat = os.stat(new_filename)\n            except FileNotFoundError:\n                newstat = None\n            reopen = not newstat or (newstat.st_dev != self._stat.st_dev or\n                                     newstat.st_ino != self._stat.st_ino)\n\n        if not reopen:\n            return\n\n        if self._stat:\n            self.handle.flush()\n            self.handle.close()\n            self.handle = self._stat = None\n\n        env = self.config.environment\n        self._cur_filename = new_filename\n\n        self.handle = open_foruser(new_filename, 'w' if self.config.overwrite else 'a', env.uid, env.gid)\n        self._stat = os.fstat(self.handle.fileno())\n\n    def close(self):\n        if self._stat:\n            self.handle.close()\n            self._stat = None\n            self._next_check = 0\n            self._cur_filename = None\n\n    def write(self, data):\n        if self._next_check <= time():\n            self._maybe_reopen()\n            self._next_check = time() + self.CHECK_INTERVAL\n        super().write(data)\n\nLogOutput.register(FileHandler)\n"
  },
  {
    "path": "chaperone/cutil/syslog_info.py",
"content": "import logging\nfrom logging.handlers import SysLogHandler\n\n# Copy all syslog levels\nfor k,v in SysLogHandler.__dict__.items():\n    if k.startswith('LOG_'):\n        globals()[k] = v\n    \nFACILITY = ('kern', 'user', 'mail', 'daemon', 'auth', 'syslog', 'lpr', 'news', 'uucp', 'cron', 'authpriv',\n             'ftp', 'ntp', 'audit', 'alert', 'altcron', 'local0', 'local1', 'local2', 'local3', 'local4',\n             'local5', 'local6', 'local7')\nFACILITY_DICT = {FACILITY[i]:i for i in range(len(FACILITY))}\n\nPRIORITY = ('emerg', 'alert', 'crit', 'err', 'warn', 'notice', 'info', 'debug')\nPRIORITY_DICT = {PRIORITY[i]:i for i in range(len(PRIORITY))}\n\nPRIORITY_DICT['warning'] = PRIORITY_DICT['warn']\nPRIORITY_DICT['error'] = PRIORITY_DICT['err']\n\n# Python equivalent for PRIORITY settings\nPRIORITY_PYTHON = (logging.CRITICAL, logging.CRITICAL, logging.CRITICAL, logging.ERROR,\n                   logging.WARNING, logging.INFO, logging.INFO, logging.DEBUG)\n\ndef get_syslog_info(facility, priority):\n    try:\n        f = FACILITY[facility]\n    except IndexError:\n        f = '?'\n    try:\n        return f + '.' + PRIORITY[priority]\n    except IndexError:\n        return f + '.?'\n\n    \ndef syslog_to_python_lev(lev):\n    if lev < 0 or lev >= len(PRIORITY):\n        return logging.DEBUG\n    return PRIORITY_PYTHON[lev]\n"
  },
  {
    "path": "chaperone/exec/__init__.py",
    "content": "# Placeholder\n"
  },
  {
    "path": "chaperone/exec/chaperone.py",
"content": "\"\"\"\nLightweight process and service manager\n\nUsage:\n    chaperone [--config=<file_or_dir>]\n              [--user=<name> | --create-user=<newuser>] [--default-home=<dir>]\n              [--exitkills | --no-exitkills] [--ignore-failures] [--log-level=<level>] [--no-console-log]\n              [--debug] [--force] [--disable-services] [--no-defaults] [--no-syslog]\n              [--version] [--show-dependencies]\n              [--task]\n              [<command> [<args> ...]]\n\nOptions:\n    --config=<file_or_dir>   Specifies file or directory for configuration (default is /etc/chaperone.d)\n    --create-user=<newuser>  Create a new user with an optional UID (name or name/uid), \n                             then run as if --user was specified.\n    --default-home=<dir>     If the --create-user home directory does not exist, then use this\n                             directory as the default home directory for the new user instead.\n    --debug                  Turn on debugging features (same as --log-level=DEBUG)\n    --disable-services       Does not run any services, only the given command (troubleshooting)\n    --exitkills              When given command exits, kill the system (default if container running interactive)\n    --force                  If chaperone normally refuses, do it anyway and take the risk.\n    --ignore-failures        Assumes that \"ignore_failures:true\" was specified on all services (troubleshooting)\n    --log-level=<level>      Specify log level filtering, such as INFO, DEBUG, etc.\n    --no-console-log         Disable all logging to stdout and stderr (useful when the container produces non-log output)\n    --no-exitkills           When given command exits, don't kill the system (default if container running daemon)\n    --no-defaults            Ignores any default options in the CHAPERONE_OPTIONS environment variable\n    --no-syslog              The internal syslog server will not be started (useful when a 
separate syslog\n                             daemon is started later).\n    --user=<name>            Start first process as user (else root)\n    --show-dependencies      Shows a list of service dependencies then exits\n    --task                   Run in task mode (see below).\n    --version                Display version and exit\n\nNotes:\n  * If a user is specified, then the --config is relative to the user's home directory.\n  * Chaperone makes the assumption that an interactive command should shut down the system upon exit,\n    but a non-interactive command should not.  You can reverse this assumption with options.\n  * --task is used in cases where you wish to execute a script in the container environment\n    for utility purposes, such as a script to extract data from the container, etc.  This switch\n    is equivalent to \"--log err --exitkills --disable-services\" and also requires a command\n    to be specified as usual.\n\"\"\"\n\n# perform any patches first\nimport chaperone.cutil.patches\n\n# regular code begins\nimport sys\nimport shlex\nimport os\nimport re\nimport asyncio\nimport subprocess\n\nfrom functools import partial\nfrom docopt import docopt\n\nfrom chaperone.cproc import TopLevelProcess\nfrom chaperone.cproc.version import VERSION_MESSAGE\nfrom chaperone.cutil.config import Configuration, ServiceConfig\nfrom chaperone.cutil.env import ENV_INTERACTIVE, ENV_TASK_MODE, ENV_CHAP_OPTIONS\nfrom chaperone.cutil.misc import maybe_create_user\nfrom chaperone.cutil.logging import warn, info, debug, error\n\nMSG_PID1 = \"\"\"Normally, chaperone expects to run as PID 1 in the 'init' role.\nIf you want to go ahead anyway, use --force.\"\"\"\n\nMSG_NOTHING_TO_DO = \"\"\"There are no services configured to run, nor is there a command specified\non the command line to run as an application.  You need to do one or the other.\"\"\"\n\n# We require usernames to start with a letter or underscore.  This is consistent with default Linux\n# rules.  
Yeah I know, regexes can get complicated, but they can also do a lot of work to make the\n# rest of the code simpler.  Note that <file> matches strings like /foo:bar as a path of \"/foo\" with a\n# groupname of bar, but the colon can be escaped if you actually have a filename that contains\n# a colon like \"/foo\\:bar\".\n\nRE_CREATEUSER = re.compile(\n   r'''(?P<user>[a-z_][a-z0-9_-]*)           # ALWAYS start with the username\n       (?::(?P<file>/(?:\\\\:|[^:])+))?        # File is next if it's :/path\n       (?::(?P<uid>\\d*))?                    # Either /uid or :uid introduces a uid (number may be missing)\n       (?::(?P<gid>[a-z_][a-z0-9_-]*|\\d+)?)? # followed by an optional GID\n       $''',\n   re.IGNORECASE | re.X)\n\ndef main_entry():\n\n   # parse these first since we may disable the environment check\n   options = docopt(__doc__, options_first=True, version=VERSION_MESSAGE)\n\n   if options['--task']:\n      options['--disable-services'] = True\n      options['--no-console-log'] = not options['--debug']\n      options['--exitkills'] = True\n      os.environ[ENV_TASK_MODE] = '1'\n\n   if not options['--no-defaults']:\n      envopts = os.environ.get(ENV_CHAP_OPTIONS)\n      if envopts:\n         try:\n            defaults = docopt(__doc__, argv=(shlex.split(envopts)), options_first=True)\n         except SystemExit as ex:\n            print(\"Error occurred in {0} environment variable: {1}\".format(ENV_CHAP_OPTIONS, envopts))\n            raise\n         # Replace any \"false\" command option with the default version.\n         options.update({k:defaults[k] for k in options.keys() if not options[k]})\n\n   if options['--config'] is None:\n      options['--config'] = '/etc/chaperone.d'\n\n   if options['--debug']:\n      options['--log-level'] = \"DEBUG\"\n      print('COMMAND OPTIONS', options)\n\n   force = options['--force']\n\n   if not force and os.getpid() != 1:\n      print(MSG_PID1)\n      exit(1)\n\n   tty = sys.stdin.isatty()\n   
os.environ[ENV_INTERACTIVE] = \"1\" if tty else \"0\"\n\n   kill_switch = options['--exitkills'] or (False if options['--no-exitkills'] else tty)\n\n   cmd = options['<command>']\n\n   if options['--task'] and not cmd:\n      error(\"--task can only be used if a shell command is specified as an argument\")\n      exit(1)\n\n   # It's possible that BOTH --create-user and --user exist due to the way _CHAP_OPTIONS is overlaid\n   # with command line options.  So, in such a case, note that we ignore --user.\n\n   create = options['--create-user']\n\n   if create is None:\n      user = options['--user']\n   else:\n     match = RE_CREATEUSER.match(create)\n     if not match:\n        print(\"Invalid format for --create-user argument: {0}\".format(create))\n        exit(1)\n     udata = match.groupdict()\n     try:\n        maybe_create_user(udata['user'], udata['uid'] or None, udata['gid'] or None, \n                          udata['file'] and udata['file'].replace(r'\\:', ':'),\n                          options['--default-home'])\n     except Exception as ex:\n        print(\"--create-user failure: {0}\".format(ex))\n        exit(1)\n     user = udata['user']\n\n   extras = dict()\n   if options['--ignore-failures']:\n      extras['ignore_failures'] =  True\n   if options['--no-syslog']:\n      extras['enable_syslog'] = False\n      \n   try:\n      config = Configuration.configFromCommandSpec(options['--config'], user=user, extra_settings=extras,\n                                                   disable_console_log=options['--no-console-log'])\n      services = config.get_services()\n   except Exception as ex:\n      error(ex, \"Configuration Error: {0}\", ex)\n      exit(1)\n\n   if not (services or cmd):\n      print(MSG_NOTHING_TO_DO)\n      exit(1)\n\n   if options['--show-dependencies']:\n      dg = services.get_dependency_graph()\n      print(\"\\n\".join(dg))\n      exit(0)\n\n   if not cmd and options['--disable-services']:\n      error(\"--disable-services 
not valid without specifying a command to run\")\n      exit(1)\n\n   # Now, create the tlp and proceed\n\n   tlp = TopLevelProcess(config)\n\n   if options['--log-level']:\n      tlp.force_log_level(options['--log-level'])\n\n   if tlp.debug:\n      config.dump()\n\n   # Set proctitle and go\n\n   proctitle = \"[\" + os.path.basename(sys.argv[0]) + \"]\"\n   if cmd:\n      proctitle += \" \" + cmd\n\n   try:\n      from setproctitle import setproctitle\n      setproctitle(proctitle)\n   except ImportError:\n      pass\n\n   # Define here so we can share scope\n\n   @asyncio.coroutine\n   def startup_done():\n\n      if options['--ignore-failures']:\n         warn(\"ignoring failures on all service startups due to --ignore-failures\")\n\n      if options['--disable-services'] and services:\n         warn(\"services will not be configured due to --disable-services\")\n\n      extra_services = None\n      if cmd:\n         cmdsvc = ServiceConfig.createConfig(config=config,\n                                             name=\"CONSOLE\",\n                                             exec_args=[cmd] + options['<args>'],\n                                             uid=user,\n                                             kill_signal=(\"SIGHUP\" if tty else None),\n                                             setpgrp=not tty,\n                                             exit_kills=kill_switch,\n                                             service_groups=\"IDLE\",\n                                             ignore_failures=not options['--task'],\n                                             stderr='inherit', stdout='inherit')\n         extra_services = [cmdsvc]\n\n      yield from tlp.run_services(extra_services, disable_others = options['--disable-services'])\n\n      tlp.signal_ready()\n\n   tlp.run_event_loop(startup_done())\n"
  },
  {
    "path": "chaperone/exec/envcp.py",
    "content": "\"\"\"\nCopy text files and expand environment variables as you copy.\n\nUsage:\n    envcp [options] FILE ...\n\nOptions:\n        --strip suffix      If the destination is a directory, strip \"suffix\"\n                            off source files.\n        --overwrite         Overwrite destination files rather than exiting\n                            with an error.\n        -v --verbose        Display progress.\n        -a --archive        Preserve permissions when copying.\n        --shell-enable      Enable shell escapes using backticks, as in $(`ls`)\n        --xprefix char      The leading string to identify a variable.  Defaults to '$'\n        --xgrouping chars   Grouping types which are recognized, defaults to '({'\n\nCopies a file to a destination file (two arguments), or any number of files to a destination\ndirectory.  As files are copied, environment variables will be expanded.   If the destination\nis a directory, then --strip can be used to specify a file suffix to be stripped off.\n\nFormats allowed are $(ENV) or ${ENV}.  The bareword $ENV is not recognized.\n\"\"\"\n\n# perform any patches first\nimport chaperone.cutil.patches\n\n# regular code begins\nimport sys\nimport os\nimport asyncio\nimport shlex\nfrom docopt import docopt\n\nfrom chaperone.cproc.version import VERSION_MESSAGE\nfrom chaperone.cutil.env import Environment\n\ndef check_canwrite(flist, overok):\n    for f in flist:\n        if os.path.exists(f) and not overok:\n            print(\"error: file {0} exists, won't overwrite\".format(f))\n            exit(1)\n\ndef main_entry():\n    options = docopt(__doc__, version=VERSION_MESSAGE)\n\n    files = options['FILE']\n\n    start = options['--xprefix']\n    braces = options['--xgrouping']\n\n    if braces:\n        if any([b not in '{([' for b in braces]):\n            print(\"error: --xgrouping can accept one or more of '{{', '[', or '(' only.  
Not this: '{0}'.\".format(braces))\n            exit(1)\n    \n    # Enable or disable, but don't cache them if enabled\n    Environment.set_backtick_expansion(bool(options['--shell-enable']), False)\n\n    Environment.set_parse_parameters(start, braces)\n\n    env = Environment()\n\n    # Support stdin/stdout behavior if '-' is the only file specified on the command line\n\n    if '-' in files:\n        if len(files) > 1:\n            print(\"error: '-' for stdin/stdout cannot be combined with other filename arguments\")\n            exit(1)\n        sys.stdout.write(env.expand(sys.stdin.read()))\n        sys.stdout.flush()\n        exit(0)\n        \n    if len(files) < 2:\n        print(\"error: must include two or more filename arguments\")\n        exit(1)\n\n    destdir = os.path.abspath(files[-1])\n    destfile = None\n\n    if os.path.isdir(destdir):\n        if not os.access(destdir, os.W_OK|os.X_OK):\n            print(\"error: directory {0} exists but is not writable\".format(destdir))\n            exit(1)\n        st = options['--strip']\n        if st:\n            # strip the suffix only when present ('rstrip' would remove a character set, not a suffix)\n            files = [(f, os.path.basename(f)[:-len(st)] if f.endswith(st) else os.path.basename(f)) for f in files[:-1]]\n        else:\n            files = [(f, os.path.basename(f)) for f in files[:-1]]\n        check_canwrite([os.path.join(destdir, p[1]) for p in files], options['--overwrite'])\n    elif len(files) != 2:\n        print(\"error: destination is not a directory and more than 2 files specified\")\n        exit(1)\n    else:\n        destfile = files[1]\n        files = [(files[0], files[0])]\n        check_canwrite([destfile], options['--overwrite'])\n\n    # files is now a list of pairs [(source, dest-basename), ...]\n\n    for curpair in files:\n        if not os.path.exists(curpair[0]):\n            print(\"error: file does not exist, {0}\".format(curpair[0]))\n            exit(1)\n        if not os.access(curpair[0], os.R_OK):\n            print(\"error: file is not readable, {0}\".format(curpair[0]))\n            exit(1)\n\n    for curpair in 
files:\n        if not destfile:\n            destfile = os.path.join(destdir, curpair[1])\n        try:\n            oldstat = os.stat(curpair[0])\n            oldf = open(curpair[0], 'r')\n        except Exception as ex:\n            print(\"error: cannot open input file {0}: {1}\".format(curpair[0], ex))\n            exit(1)\n        try:\n            newf = open(destfile, 'w')\n        except Exception as ex:\n            print(\"error: cannot open output file {0}: {1}\".format(destfile, ex))\n            exit(1)\n\n        newf.write(env.expand(oldf.read()))\n        oldf.close()\n        newf.close()\n\n        if options['--archive']:\n            # ATTEMPT to retain permissions\n            try:\n                os.chown(destfile, oldstat.st_uid, oldstat.st_gid);\n            except PermissionError:\n                # Try them separately.  User first, then group.\n                try:\n                    os.chown(destfile, oldstat.st_uid, -1);\n                except PermissionError:\n                    pass\n                try:\n                    os.chown(destfile, -1, oldstat.st_gid);\n                except PermissionError:\n                    pass\n            try:\n                os.chmod(destfile, oldstat.st_mode);\n            except PermissionError:\n                pass\n            try:\n                os.utime(destfile, times=(oldstat.st_atime, oldstat.st_mtime))\n            except PermissionError:\n                pass\n\n        if options['--verbose']:\n            print(\"envcp {0} {1}\".format(curpair[0], destfile))\n\n        destfile = None\n"
  },
  {
    "path": "chaperone/exec/sdnotify.py",
    "content": "\"\"\"\nSystemd notify tool (compatible with systemd-notify)\n\nUsage:\n    sdnotify [options] [VARIABLE=VALUE ...]\n\nOptions:\n    --pid PID        Inform chaperone/systemd of MAINPID\n                     (must say --pid=self if you want the program's PID)\n    --status=STATUS  Inform chaperone/systemd of status information\n    --ready          Send the ready signal (READY=1)\n    --booted         Indicate whether we were booted with systemd.\n                     (Note: Always indicates 'no', exit status 1.)\n    --ignore         Silently ignore inability to send notifications.\n                     (Always ignored if NOTIFY_SOCKET is not set.)\n\nAll of the options specified above will be sent in the order given above, then\nany VARIABLE=VALUE pairs will be sent.\n\nThis is provided by Chaperone as an alternative to systemd-notify for distros\nwhich may not have one.\n\"\"\"\n\n# perform any patches first\nimport chaperone.cutil.patches\n\n# regular code begins\nimport sys\nimport os\nimport socket\nfrom docopt import docopt\n\nfrom chaperone.cproc.version import VERSION_MESSAGE\n\ndef _mkabstract(socket_name):\n    if socket_name.startswith('@'):\n        socket_name = '\\0%s' % socket_name[1:]\n    return socket_name\n\n\ndef do_notify(msg):\n    notify_socket = os.getenv('NOTIFY_SOCKET')\n    if notify_socket:\n        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)\n        try:\n            sock.connect(_mkabstract(notify_socket))\n            sock.sendall(msg.encode())\n        except EnvironmentError as ex:\n            raise Exception(\"Systemd notification failed: \" + str(ex))\n        finally:\n            sock.close()\n\ndef main_entry():\n    options = docopt(__doc__, version=VERSION_MESSAGE)\n\n    mlist = list()\n\n    if options['--pid']:\n        pid = options['--pid']\n        if pid == 'self':\n            mlist.append(\"MAINPID=\"+str(os.getpid()))\n        else:\n            try:\n                pidval = int(pid)\n         
   except ValueError:\n                print(\"error: not a valid PID '{0}'\".format(pid))\n                exit(1)\n            mlist.append(\"MAINPID=\"+str(pidval))\n    \n    if options['--status']:\n        mlist.append(\"STATUS=\" + options['--status'])\n\n    if options['--ready']:\n        mlist.append(\"READY=1\")\n\n    for vv in options['VARIABLE=VALUE']:\n        # split only on the first '=' so values may themselves contain '='\n        vvs = vv.split('=', 1)\n        if len(vvs) != 2:\n            print(\"error: not a valid format for VARIABLE=VALUE, '{0}'\".format(vv))\n            exit(1)\n        mlist.append(\"{0}={1}\".format(vvs[0].upper(), vvs[1]))\n\n    for msg in mlist:\n        try:\n            do_notify(msg)\n        except Exception as ex:\n            if not options['--ignore']:\n                print(\"error: could not send sd_notify message, \" + str(ex))\n                exit(1)\n    \n    if options['--booted']:\n        exit(1)\n"
  },
  {
    "path": "chaperone/exec/sdnotify_exec.py",
    "content": "\"\"\"\nSystemd notify exec shell (compatible with systemd-notify)\nRuns a program and either proxies or simulates sd-notify functionality.\n\nUsage:\n    sdnotify-exec [options] COMMAND [ARGS ...]\n\nOptions:\n    --noproxy             Ignores NOTIFY_SOCKET if inherited in the environment\n                          and does not proxy messages.  Useful with --wait-xxx options.\n    --wait-ready          If COMMAND exits normally, wait until either READY=1 or ERRNO=n\n                          is sent to the notify socket, then return the exit\n                          value from the command.\n    --wait-stop           Will continue running even if COMMAND exits, continuing \n                          proxy services until ERRNO=n or STOPPING=1 is detected.\n                          MAINPID notifications will be blocked, since the proxy\n                          will continue to be the main program.  Overrides --wait-ready.\n    --timeout secs        Specifies the timeout before the lack of response triggers\n                          an error exit.  COMMAND may continue to run.\n                          (no effect without --wait-ready or --wait-stop)\n    --socket name         Name of socket file created.  By default, a unique\n                          socket name will be chosen automatically.\n    --template value      Sets %{SOCKET_ARGS} template to 'value'.\n    --verbose             Provide information about activity\n\nEnvironment variables (one of which is SOCKET_ARGS) can be used anywhere in the\ncommand by using the syntax %{VAR}.   
The default SOCKET_ARGS template is designed \nfor Docker and is set to:\n  '--env NOTIFY_SOCKET=/tmp/notify-%{PID}.sock -v %{NOTIFY_SOCKET}:/tmp/notify-%{PID}.sock'\n\nThus, you can easily use \"docker run\" like this:\n\n  sdnotify-exec docker run %{SOCKET_ARGS} some-image\n\nEnvironment variables that can be useful:\n\n  NOTIFY_SOCKET       Newly created notification socket\n  ORIG_NOTIFY_SOCKET  Original notify socket (if any) passed to this program\n  PID                 PID of the running sdnotify-exec program\n  SOCKET_ARGS         Argument template\n\nOnly \"NOTIFY_SOCKET\" itself is passed to the created process, though all are available\nfor command and argument expansion.\n\n\"\"\"\n\n# perform any patches first\nimport chaperone.cutil.patches\n\n# regular code begins\nimport sys\nimport os\nimport re\nimport signal\nimport asyncio\nimport shlex\nfrom functools import partial\nfrom docopt import docopt\n\nfrom chaperone.cproc.version import VERSION_MESSAGE\nfrom chaperone.cutil.notify import NotifyListener, NotifyClient\nfrom chaperone.cutil.env import Environment\n\nDEFAULT_TEMPLATE='--env NOTIFY_SOCKET=/tmp/notify-%{PID}.sock -v %{NOTIFY_SOCKET}:/tmp/notify-%{PID}.sock'\n\nloop = asyncio.get_event_loop()\nparent_socket = os.environ.get(\"NOTIFY_SOCKET\")\n\nRE_FIND_UNSAFE = re.compile(r'[^{}\\w@%+=:,./-]', re.ASCII).search\n\ndef maybe_quote(s):\n    if RE_FIND_UNSAFE(s) is None:\n        return s\n    return shlex.quote(s)\n\nclass SDNotifyExec:\n\n    exitcode = 0\n    sockname = None\n    listener = None\n    parent = None\n    timeout = None\n    wait_mode = None\n    verbose = False\n\n    parent_client = None\n    proxy_enabled = True\n\n    INFO_MESSAGE = {\n        'READY': \"READY={1}{2}\",\n        'MAINPID': \"Process PID (={1}) notification{2}\",\n        'ERRNO': \"Process ERROR (={1}) notification{2}\",\n        'STATUS': \"Status message = '{1}'{2}\",\n        'default': \"{0}={1}{2}\",\n    }\n\n    def __init__(self, options):\n        
self.sockname = options['--socket']\n        if not self.sockname:\n            self.sockname = \"/tmp/sdnotify-proxy-{0}.sock\".format(os.getpid())\n\n        self.proxy_enabled = parent_socket and not options['--noproxy']\n        if options['--wait-stop']:\n            self.wait_mode = 'stop'\n        elif options['--wait-ready']:\n            self.wait_mode = 'ready'\n\n        if options['--timeout'] and self.wait_mode:\n            self.timeout = float(options['--timeout'])\n\n        self.verbose = options['--verbose']\n\n        # Modify original environment\n\n        os.environ['NOTIFY_SOCKET'] = self.sockname\n\n        # Set up the environment, reparse the options, build the final command\n        Environment.set_parse_parameters('%', '{')\n        env = Environment()\n\n        env['PID'] = str(os.getpid())\n        env['SOCKET_ARGS'] = options['--template'] or DEFAULT_TEMPLATE\n        if parent_socket:\n            env['ORIG_NOTIFY_SOCKET'] = parent_socket\n        \n        env = env.expanded()\n\n        self.proc_args = shlex.split(env.expand(' '.join(maybe_quote(arg) \n                                                         for arg in [options['COMMAND']] + options['ARGS'])))\n\n        self.listener = NotifyListener(self.sockname, \n                                       onNotify = self.notify_received,\n                                       onClose = self._parent_closed)\n        loop.add_signal_handler(signal.SIGTERM, self._got_sig)\n        loop.add_signal_handler(signal.SIGINT, self._got_sig)\n\n        proctitle = '[sdnotify-exec]'\n\n        try:\n            from setproctitle import setproctitle\n            setproctitle(proctitle)\n        except ImportError:\n            pass\n\n    def info(self, msg):\n        if self.verbose:\n            print(\"info: \" + msg)\n\n    def _got_sig(self):\n        self.kill_program()\n\n    def kill_program(self, exitcode = None):\n        if exitcode is not None:\n            self.exitcode = 
exitcode\n        loop.call_soon(self._really_kill)\n\n    def _really_kill(self):\n        self.listener.close()\n        loop.stop()\n\n    def _parent_closed(self, which, ex):\n        if which == self.parent_client:\n            self.proxy_enabled = False\n            self.parent_client = None\n\n    @asyncio.coroutine\n    def _do_proxy_send(self, name, value):\n        if not (parent_socket and self.proxy_enabled):\n            return\n\n        if not self.parent_client:\n            self.parent_client = NotifyClient(parent_socket, onClose = self._parent_closed)\n            yield from self.parent_client.run()\n\n        yield from self.parent_client.send(\"{0}={1}\".format(name, value))\n\n    def send_to_proxy(self, name, value):\n        asyncio.async(self._do_proxy_send(name, value))\n\n    def notify_received(self, which, name, value):\n        self.send_to_proxy(name, value)\n\n        sent_info = False\n\n        if self.wait_mode:\n            if name == \"READY\" and value == \"1\":\n                if self.wait_mode == 'ready':\n                    sent_info = True\n                    self.info(\"ready notification received (will exit)\")\n                    self.kill_program(0)\n            elif name == \"ERRNO\":\n                sent_info = True\n                self.info(\"error notification ({0}) received from {1}\".format(value, self.proc_args[0]))\n                self.kill_program(int(value))\n            elif name == \"STOPPING\" and value == \"1\":\n                sent_info = True\n                self.info(\"STOP notification received from {0} (will exit)\".format(self.proc_args[0]))\n                self.kill_program()\n\n        if not sent_info:\n            self.info(self.INFO_MESSAGE.get(name, self.INFO_MESSAGE['default']).\n                      format(name, value, ' (ignored but passed on)' if self.proxy_enabled else ' (ignored)'))\n                                                                                  \n    
@asyncio.coroutine\n    def _notify_timeout(self):\n        self.info(\"waiting {0} seconds for notification\".format(self.timeout))\n        yield from asyncio.sleep(self.timeout)\n        print(\"ERROR: Timeout exceeded while waiting for notification from '{0}'\".format(self.proc_args[0]))\n        self.kill_program(1)\n\n    @asyncio.coroutine\n    def _run_process(self):\n\n        self.info('running: {0}'.format(self.proc_args[0]))\n\n        create = asyncio.create_subprocess_exec(*self.proc_args, start_new_session=bool(self.wait_mode))\n        proc = yield from create\n\n        if self.timeout:\n            asyncio.async(self._notify_timeout())\n\n        exitcode = yield from proc.wait()\n        if not self.exitcode:   # may have arrived from ERRNO\n            self.exitcode = exitcode\n\n    @asyncio.coroutine\n    def run(self):\n\n        try:\n            yield from self.listener.run()\n        except ValueError as ex:\n            print(\"Error while trying to create socket: \" + str(ex))\n            self.kill_program()\n        else:\n            try:\n                yield from self._run_process()\n            except Exception as ex:\n                print(\"Error running command: \" + str(ex))\n                self.kill_program()\n\n        # Command has executed, now determine our exit and proxy disposition\n\n        if not self.wait_mode:\n            self.info(\"program {0} exit({1}), terminating since --wait not specified\".format(self.proc_args[0], self.exitcode))\n            self.kill_program()\n\ndef main_entry():\n    options = docopt(__doc__, options_first=True, version=VERSION_MESSAGE)\n\n    mainclass = SDNotifyExec(options)\n    asyncio.async(mainclass.run())\n\n    loop.run_forever()\n    loop.close()\n\n    exit(mainclass.exitcode)\n"
  },
  {
    "path": "chaperone/exec/telchap.py",
    "content": "\"\"\"\nInteractive command tool for chaperone\n\nUsage:\n    telchap <command> [<args> ...]\n\"\"\"\n\n# perform any patches first\nimport chaperone.cutil.patches\n\n# regular code begins\nimport sys\nimport os\nimport asyncio\nimport shlex\nfrom docopt import docopt\n\nfrom chaperone.cproc.client import CommandClient\nfrom chaperone.cproc.version import VERSION_MESSAGE\n\ndef main_entry():\n    options = docopt(__doc__, options_first=True, version=VERSION_MESSAGE)\n    try:\n        result = CommandClient.sendCommand(options['<command>'] + \" \" + \" \".join([shlex.quote(a) for a in options['<args>']]))\n    except (ConnectionRefusedError, FileNotFoundError) as ex:\n        result = \"chaperone does not seem to be listening, is it running?\\n(Error is: {0})\".format(ex)\n\n    print(result)\n"
  },
  {
    "path": "doc/.gitignore",
    "content": "build/*\ndocserver/var\n"
  },
  {
    "path": "doc/Makefile",
    "content": "# Makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHINXBUILD   = sphinx-build\nPAPER         =\nBUILDDIR      = build\n\n# Internal variables.\nPAPEROPT_a4     = -D latex_paper_size=a4\nPAPEROPT_letter = -D latex_paper_size=letter\nALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source\n# the i18n builder cannot share the environment and doctrees with the others\nI18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source\n\n.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext\n\nhelp:\n\t@echo \"Please use \\`make <target>' where <target> is one of\"\n\t@echo \"  html       to make standalone HTML files\"\n\t@echo \"  dirhtml    to make HTML files named index.html in directories\"\n\t@echo \"  singlehtml to make a single large HTML file\"\n\t@echo \"  pickle     to make pickle files\"\n\t@echo \"  json       to make JSON files\"\n\t@echo \"  htmlhelp   to make HTML files and a HTML help project\"\n\t@echo \"  qthelp     to make HTML files and a qthelp project\"\n\t@echo \"  devhelp    to make HTML files and a Devhelp project\"\n\t@echo \"  epub       to make an epub\"\n\t@echo \"  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\"\n\t@echo \"  latexpdf   to make LaTeX files and run them through pdflatex\"\n\t@echo \"  text       to make text files\"\n\t@echo \"  man        to make manual pages\"\n\t@echo \"  texinfo    to make Texinfo files\"\n\t@echo \"  info       to make Texinfo files and run them through makeinfo\"\n\t@echo \"  gettext    to make PO message catalogs\"\n\t@echo \"  changes    to make an overview of all changed/added/deprecated items\"\n\t@echo \"  linkcheck  to check all external links for integrity\"\n\t@echo \"  doctest    to run all doctests embedded in the documentation (if enabled)\"\n\nclean:\n\t-rm -rf 
$(BUILDDIR)/*\n\nhtml:\n\t$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/html.\"\n\ndirhtml:\n\t$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/dirhtml.\"\n\nsinglehtml:\n\t$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml\n\t@echo\n\t@echo \"Build finished. The HTML page is in $(BUILDDIR)/singlehtml.\"\n\npickle:\n\t$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle\n\t@echo\n\t@echo \"Build finished; now you can process the pickle files.\"\n\njson:\n\t$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json\n\t@echo\n\t@echo \"Build finished; now you can process the JSON files.\"\n\nhtmlhelp:\n\t$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp\n\t@echo\n\t@echo \"Build finished; now you can run HTML Help Workshop with the\" \\\n\t      \".hhp project file in $(BUILDDIR)/htmlhelp.\"\n\nqthelp:\n\t$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp\n\t@echo\n\t@echo \"Build finished; now you can run \"qcollectiongenerator\" with the\" \\\n\t      \".qhcp project file in $(BUILDDIR)/qthelp, like this:\"\n\t@echo \"# qcollectiongenerator $(BUILDDIR)/qthelp/chaperone.qhcp\"\n\t@echo \"To view the help file:\"\n\t@echo \"# assistant -collectionFile $(BUILDDIR)/qthelp/chaperone.qhc\"\n\ndevhelp:\n\t$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp\n\t@echo\n\t@echo \"Build finished.\"\n\t@echo \"To view the help file:\"\n\t@echo \"# mkdir -p $$HOME/.local/share/devhelp/chaperone\"\n\t@echo \"# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/chaperone\"\n\t@echo \"# devhelp\"\n\nepub:\n\t$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub\n\t@echo\n\t@echo \"Build finished. 
The epub file is in $(BUILDDIR)/epub.\"\n\nlatex:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo\n\t@echo \"Build finished; the LaTeX files are in $(BUILDDIR)/latex.\"\n\t@echo \"Run \\`make' in that directory to run these through (pdf)latex\" \\\n\t      \"(use \\`make latexpdf' here to do that automatically).\"\n\nlatexpdf:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through pdflatex...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\ntext:\n\t$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text\n\t@echo\n\t@echo \"Build finished. The text files are in $(BUILDDIR)/text.\"\n\nman:\n\t$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man\n\t@echo\n\t@echo \"Build finished. The manual pages are in $(BUILDDIR)/man.\"\n\ntexinfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo\n\t@echo \"Build finished. The Texinfo files are in $(BUILDDIR)/texinfo.\"\n\t@echo \"Run \\`make' in that directory to run these through makeinfo\" \\\n\t      \"(use \\`make info' here to do that automatically).\"\n\ninfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo \"Running Texinfo files through makeinfo...\"\n\tmake -C $(BUILDDIR)/texinfo info\n\t@echo \"makeinfo finished; the Info files are in $(BUILDDIR)/texinfo.\"\n\ngettext:\n\t$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale\n\t@echo\n\t@echo \"Build finished. 
The message catalogs are in $(BUILDDIR)/locale.\"\n\nchanges:\n\t$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes\n\t@echo\n\t@echo \"The overview file is in $(BUILDDIR)/changes.\"\n\nlinkcheck:\n\t$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck\n\t@echo\n\t@echo \"Link check complete; look for any errors in the above output \" \\\n\t      \"or in $(BUILDDIR)/linkcheck/output.txt.\"\n\ndoctest:\n\t$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest\n\t@echo \"Testing of doctests in the sources finished, look at the \" \\\n\t      \"results in $(BUILDDIR)/doctest/output.txt.\"\n"
  },
  {
    "path": "doc/docserver/README",
    "content": "This is a basic documentation webserver that runs on port 8088 and points to the Sphinx\ndocumentation located in ../build/html.\n\nBuilt with chaperone-lamp, of course, in just a few minutes.\n\n"
  },
  {
    "path": "doc/docserver/build/Dockerfile",
    "content": "FROM chapdev/chaperone-lamp:latest\nADD . /setup/\nRUN /setup/build/install.sh\n"
  },
  {
    "path": "doc/docserver/build/install.sh",
    "content": "#!/bin/sh\ncd /setup\n# remove existing chaperone.d and init.d from /apps so none linger\nrm -rf /apps/chaperone.d /apps/init.d\n# copy everything from setup to the root /apps\ntar cvf - --exclude 'build*' --exclude 'run.sh' . | (cd /apps; tar xf -)\n# Add additional setup commands for your production image here, if any.\nrm -rf /setup\n"
  },
  {
    "path": "doc/docserver/build.sh",
    "content": "#!/bin/bash\n#Created by chaplocal on `date`\n# the cd trick assures this works even if the script is invoked from another directory.\ncd ${0%/*}\nif [ $# != 1 ]; then\n  echo \"Usage: ./build.sh <production-image-name>\"\n  exit 1\nfi\nprodimage=\"$1\"\nif [ ! -f build/Dockerfile ]; then\n  echo \"Expecting to find Dockerfile in ./build ... not found!\"\n  exit 1\nfi\ntar czh --exclude '*~' --exclude 'var/*' . | docker build -t $prodimage -f build/Dockerfile -\n"
  },
  {
    "path": "doc/docserver/chaperone.d/010-start.conf",
    "content": "# 010-start.conf\n#\n# This is the first start-up file for the chaperone base images.  Note that start-up files\n# are processed in order alphabetically, so settings in later files can override those in\n# earlier files.\n\n# General environmental settings.  These settings apply to all services and logging entries.\n# There should be only one \"settings\" directive in each configuration file.  But, any\n# settings encountered in subsequent configuration files can override or augment these.\n# Note that variables are expanded as late as possible.  So, there can be variables\n# defined here which depend upon variables which will be defined later (such as _CHAP_SERVICE),\n# which is defined implicitly for each service.\n\nsettings: {\n\n  env_set: {\n\n  'LANG': 'en_US.UTF-8',\n  'LC_CTYPE': '$(LANG)',\n  'PATH': '$(APPS_DIR)/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin',\n\n  # Uncomment the below to tell init.sh to lock-down the root account after the first\n  # successful start.\n  #'SECURE_ROOT': '1',\n\n  # Variables starting with _CHAP are internal and won't be exported to services,\n  # so we derive public environment variables if needed...\n  'APPS_DIR': '$(_CHAP_CONFIG_DIR:-/)',\n  'CHAP_SERVICE_NAME': '$(_CHAP_SERVICE:-)',\n  'CHAP_TASK_MODE': '$(_CHAP_TASK_MODE:-)',\n  },\n\n}\n\n# This is the startup script which manages the contents of $(APPS_DIR)/init.d.  It will\n# run each of the init.d scripts in sequence.  Because this is part of the special \"INIT\"\n# group, it will be run before any other service which is not in the group.  
This makes\n# it unnecessary to worry about 'before:' and 'after:' settings for init scripts.\n\ninit.service: {\n  type: oneshot,\n  command: '/bin/bash $(APPS_DIR)/etc/init.sh',\n  before: 'default,database,application',\n  service_groups: 'INIT',\n}\n\n# We select all messages from the \"chaperone\" program itself, which will include\n# all messages which originate from the chaperone daemon.  We put these in a single\n# log file which will be appended to on each run, so that if these log files\n# are on a mounted user volume, they will accumulate for historical purposes.\n\nchaperone.logging: {\n  enabled: true,\n  selector: '[chaperone].*',\n  file: '$(APPS_DIR)/var/log/chaperone.log',\n}\n\n# The rest, except for chaperone, goes to the syslog\n\nsyslog.logging: {\n  enabled: true,\n  selector: '*.info;![chaperone].*',\n  file: '$(APPS_DIR)/var/log/syslog.log',\n}\n\n# For the console, we include everything which is a warning except authentication\n# messages and daemon messages which are not errors.\n\nconsole.logging: {\n  enabled: true,\n  stdout: true,\n  selector: '*.warn;authpriv,auth.!*;daemon.!warn',\n}\n"
  },
  {
    "path": "doc/docserver/chaperone.d/120-apache2.conf",
    "content": "# 120-apache2.conf\n#\n# Start up apache.  This is a \"simple\" service, so chaperone will monitor Apache and restart\n# it if necessary.  Note that apache2.conf refers to MYSQL_UNIX_PORT (set by 105-mysql.conf)\n# to tell PHP where MySQL is running.\n#\n# In the case where no USER variable is specified, we run as the www-data user.\n\napache2.service: {\n  command: \"/usr/sbin/apache2 -f $(APPS_DIR)/etc/apache2.conf -DFOREGROUND\",\n  restart: true,\n  uid: \"$(USER:-www-data)\",\n  env_set: {\n    APACHE_LOCK_DIR: /tmp,\n    APACHE_PID_FILE: /tmp/apache2.pid,\n    APACHE_RUN_USER: www-data,\n    APACHE_RUN_GROUP: www-data,\n    APACHE_LOG_DIR: \"$(APPS_DIR)/var/log/apache2\",\n    APACHE_SITES_DIR: \"$(APPS_DIR)/www\",\n    MYSQL_SOCKET: \"$(APPS_DIR)/var/run/mysqld.sock\",\n  },\n  # If Apache2 does not require a database, you can leave this out.\n  after: database,\n}\n\n# Use daily logging (the %d) so that log rotation isn't so important.  Logs\n# will be created automatically for each day where they are required.\n# See 300-logrotate.conf if you want to enable log rotation as a periodic\n# job.  Note that chaperone watches for logs which are rotated and will\n# automatically open a new file if the old one is rotated.\n#\n# Write logs either as the USER= user, or as www-data.\n\napache2.logging: {\n  enabled: true,\n  selector: 'local1.*;*.!err',\n  file: '$(APPS_DIR)/var/log/apache2/apache-%d.log',\n  uid: \"$(USER:-www-data)\",\n}\n\napache2.logging: {\n  enabled: true,\n  selector: 'local1.err',\n  stderr: true,\n  file: '$(APPS_DIR)/var/log/apache2/error-%d.log',\n  uid: \"$(USER:-www-data)\",\n}\n"
  },
  {
    "path": "doc/docserver/etc/apache2.conf",
    "content": "# This is the main Apache server configuration file.  It contains the\n# configuration directives that give the server its instructions.\n# See http://httpd.apache.org/docs/2.4/ for detailed information about\n# the directives and /usr/share/doc/apache2/README.Debian about Debian specific\n# hints.\n\n# This is a CHAPERONE-specific configuration designed to keep things lean.  It is based loosely\n# on Ubuntu 14.04 /etc/apache2/apache2.conf, and every attempt has been made to assure that\n# system-installed modules and configurations will work.\n\n# The chaperone configuration is designed to work within a self-contained application directory\n# defined by APPS_DIR.  Note that it may be a user directory, and thus chaperone allows\n# Apache to run entirely under any user account, along with a MySQL server that is also\n# sequestered in the same way.   This means that you can have containers \"point\" to apps\n# directories on your host server and manage per-container resources consistently in\n# those directories during development, until you move the entire apps directory into\n# a production container environment or image.\n\n#\n# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.\n#\nMutex file:${APACHE_LOCK_DIR} default\n\nPidFile ${APACHE_PID_FILE}\n\n# Timeout: The number of seconds before receives and sends time out.\nTimeout 300\nKeepAlive On\nMaxKeepAliveRequests 100\nKeepAliveTimeout 5\n\n# Note that the user and group are defined in chaperone.d/120-apache.conf\n#User ${APACHE_RUN_USER}\n#Group ${APACHE_RUN_GROUP}\n\n# The default is off because it'd be overall better for the net if people\n# had to knowingly turn this feature on, since enabling it means that\n# each client request will result in AT LEAST one lookup request to the\n# nameserver.\nHostnameLookups Off\n\n# ErrorLog: The location of the error log file.\n# We dump errors to syslog so that we can easily duplicate it to the container stderr if we want.\nErrorLog 
syslog:local1\n\n# Available values: trace8, ..., trace1, debug, info, notice, warn,\n# error, crit, alert, emerg.\nLogLevel warn\n\n# Include standard Debian/Ubuntu module configuration:\nInclude /etc/apache2/mods-enabled/*.load\nInclude /etc/apache2/mods-enabled/*.conf\n\n# CHAPERONE: Override to listen on 8080 and 8443\nListen 8080\n\n<IfModule ssl_module>\n\tListen 8443\n</IfModule>\n<IfModule mod_gnutls.c>\n\tListen 8443\n</IfModule>\n\n# Sets the default security model of the Apache2 HTTPD server. It does\n# not allow access to the root filesystem outside of /usr/share and /var/www.\n# The former is used by web applications packaged in Debian,\n# the latter may be used for local directories served by the web server. If\n# your system is serving content from a sub-directory in /srv you must allow\n# access here, or in any related virtual host.\n<Directory />\n\tOptions FollowSymLinks\n\tAllowOverride None\n\tRequire all denied\n</Directory>\n\n<Directory /usr/share>\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\n<Directory \"/home\">\n\tOptions Indexes FollowSymLinks\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\nAccessFileName .htaccess\n\n# The following lines prevent .htaccess and .htpasswd files from being\n# viewed by Web clients.\n<FilesMatch \"^\\.ht\">\n\tRequire all denied\n</FilesMatch>\n\n\n# The following directives define some format nicknames for use with\n# a CustomLog directive.\nLogFormat \"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" vhost_combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O\" common\nLogFormat \"%{Referer}i -> %U\" referer\nLogFormat \"%{User-agent}i\" agent\n\n# Include of directories ignores editors' and dpkg's backup files,\n# see README.Debian for details.\n\n# Include generic snippets of statements\nIncludeOptional /etc/apache2/conf-enabled/*.conf\n\n##\n## 
CHAPERONE SPECIFICS\n##\n\n# Point MySQL socket to the right spot\n#php_admin_value mysql.default_socket ${MYSQL_UNIX_PORT}\n#php_admin_value mysqli.default_socket ${MYSQL_UNIX_PORT}\n\n# Site definition added here\n\n<VirtualHost *:8080>\n\n\t# The ServerName directive sets the request scheme, hostname and port that\n\t# the server uses to identify itself. \n\t#ServerName www.example.com\n\n\tServerAdmin webmaster@localhost\n\tDocumentRoot ${APPS_DIR}/../build/html\n\n\t# Errors go to the syslog so they can be duplicated to the console easily\n\tErrorLog syslog:local1\n\tCustomLog ${APACHE_LOG_DIR}/default-access.log combined\n\n</VirtualHost>\n"
  },
  {
    "path": "doc/docserver/etc/init.sh",
    "content": "#!/bin/bash\n# A quick script to initialize the system\n\n# We publish two variables for use in startup scripts:\n#\n#   CONTAINER_INIT=1   if we are initializing the container for the first time\n#   APPS_INIT=1        if we are initializing the $APPS_DIR for the first time\n#\n# Both may be relevant, since it's possible that the $APPS_DIR may be on a mount point\n# so it can be reused when starting up containers which refer to it.\n\nfunction dolog() { logger -t init.sh -p info $*; }\n\napps_init_file=\"$APPS_DIR/var/run/apps_init.done\"\ncont_init_file=\"/container_init.done\"\n\nexport CONTAINER_INIT=0\nexport APPS_INIT=0\n\nif [ ! -f $cont_init_file ]; then\n    dolog \"initializing container for the first time\"\n    CONTAINER_INIT=1\n    su -c \"date >$cont_init_file\"\nfi\n\nif [ ! -f $apps_init_file ]; then\n    dolog \"initializing $APPS_DIR for the first time\"\n    APPS_INIT=1\n    mkdir -p $APPS_DIR/var/run $APPS_DIR/var/log\n    chmod 777 $APPS_DIR/var/run $APPS_DIR/var/log\n    date >$apps_init_file\nfi\n\nif [ -d $APPS_DIR/init.d ]; then\n  for initf in $( find $APPS_DIR/init.d -type f -executable \\! -name '*~' ); do\n    dolog \"running $initf...\"\n    $initf\n  done\nfi\n\nif [ \"$SECURE_ROOT\" == \"1\" -a $CONTAINER_INIT == 1 ]; then\n  dolog locking down root account\n  su -c 'passwd -l root'\nfi\n"
  },
  {
    "path": "doc/docserver/run.sh",
    "content": "#!/bin/bash\n#Created by chaplocal on Wed Jun 10 16:08:42 EST 2015\n\ncd ${0%/*} # go to directory of this file\nAPPS=$PWD\ncd ..\n\noptions=\"-t -i -e TERM=$TERM --rm=true\"\nshellopt=\"/bin/bash\"\nif [ \"$1\" == '-d' ]; then\n  shift\n  options=\"-d\"\n  shellopt=\"\"\nfi\n\nif [ \"$1\" == \"-h\" ]; then\n  echo \"Usage: run.sh [-d] [-h] [extra-chaperone-options]\"\n  echo \"       Run chapdev/chaperone-lamp:latest as a daemon or interactively (the default).\"\n  exit\nfi\n\n# Extract our local UID/GID\nmyuid=`id -u`\nmygid=`id -g`\n\n# Run the image with this directory as our local apps dir.\n# Create a user with uid=$myuid inside the container so the mountpoint permissions\n# are correct.\n\ndocker run $options -v /home:/home -p 8088:8080 chapdev/chaperone-lamp:latest \\\n   --create $USER:$myuid --config $APPS/chaperone.d $* $shellopt\n"
  },
  {
    "path": "doc/source/_static/custom.css",
    "content": ".wy-table-responsive table td, .wy-table-responsive table th {\n  white-space: normal !important;\n}\n\n.wy-table-responsive {\n  overflow: visible !important;\n}\n\ntable .caption-number:after {\n    content: \": \"\n}\n\n.rst-content p.caption {\n    font-size: 80%;\n    padding-top: 5px;\n}\n\n.rst-content code.kbd {\n    color: #E74C3C;\n}\n\n"
  },
  {
    "path": "doc/source/_templates/layout.html",
    "content": "{% extends \"!layout.html\" %}\n\n{% block footer %}\n{{ super() }}\n<script>\n  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){\n  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),\n  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)\n  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');\n\n  ga('create', 'UA-59042532-2', 'auto');\n  ga('send', 'pageview');\n\n</script>\n</div>\n{% endblock %}\n"
  },
  {
    "path": "doc/source/conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# chaperone documentation build configuration file, created by\n# sphinx-quickstart on Mon May  6 17:19:12 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys, os\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath('.'))))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.intersphinx']\n\n#\nintersphinx_mapping = {'python': ('http://docs.python.org/2.7', None)}\n\n# Autodoc settings\nautodoc_member_order = 'groupwise'\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'chaperone'\ncopyright = u'2015, Gary J. 
Wisniewski'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.3.0'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.3.0'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['includes/*']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further.  
For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents.  If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar.  Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. 
Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it.  The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'chaperonedoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n  ('index', 'chaperone.tex', u'Chaperone Documentation',\n   u'Gary J. Wisniewski', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n    ('index', 'chaperone', u'Chaperone Documentation',\n     [u'Gary J. Wisniewski'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [\n  ('index', 'chaperone', u'Chaperone Documentation',\n   u'Gary J. Wisniewski', 'chaperone', 'One line description of project.',\n   'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# --------------------------------------------------------------------------------\n# Add custom CSS (garyw did this)\n# --------------------------------------------------------------------------------\n\ntrim_footnote_reference_space = True\n\nnumfig = True\nnumfig_secnum = 1\n\ndef setup(app):\n    #app.add_javascript(\"custom.js\")\n    app.add_stylesheet(\"custom.css\")\n"
  },
  {
    "path": "doc/source/guide/chap-docker-simple.rst",
    "content": "\n.. _chap.example-docker:\n\nA Simple Docker Example\n=======================\n\nThe following example creates a simple Docker container running an Apache daemon and an SSH server, both\nmanaged by Chaperone. \n\nIn this example, we'll use Chaperone to run both processes as ``root``, configured to work exactly\nas they were configured in the Ubuntu distribution.  This example is based upon a \n`similar example from docker.com <https://docs.docker.com/articles/using_supervisord/>`_ which \nuses `Supervisor <http://supervisord.org>`_ as its process manager.  Chaperone provides a far\nmore powerful feature set than 'supervisor' with a much smaller container footprint.\n\nCreating a Dockerfile\n---------------------\n\nWe'll start by creating a basic ``Dockerfile`` for our new image::\n\n    FROM ubuntu:14.04\n    MAINTAINER garyw@blueseastech.com\n\nNow, we can install ``openssh-server``, ``apache2``, and ``python3-pip``, then use\n``pip3`` to install Chaperone itself.  We also need to create a few directories\nthat will be needed by the installed software::\n\n    RUN apt-get update && \\\n\tapt-get install -y openssh-server apache2 python3-pip && \\\n\tpip3 install chaperone\n    RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /etc/chaperone.d\n\nAdding Chaperone's Configuration File\n-------------------------------------\n\nNow, let's add a configuration file for Chaperone.   
Chaperone looks in\n``/etc/chaperone.d`` by default and will read any configuration files it finds there.\nSo, we'll copy our single configuration there so Chaperone reads it upon startup::\n\n    COPY chaperone.conf /etc/chaperone.d/chaperone.conf\n\nLet's take a look at what's inside ``chaperone.conf``::\n\n    sshd.service: { \n      command: \"/usr/sbin/sshd -D\"\n    }\n\n    apache2.service: {\n      command: \"bash -c 'source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND'\",\n    }\n\n    console.logging: {\n      selector: '*.warn',\n      stdout: true,\n    }\n\nThe above is a complete configuration file with three sections.  The first two start up\nboth ``sshd`` and ``apache2``.  The third section tells Chaperone to intercept all ``syslog``\nmessages and redirect them to ``stdout``.  That way, we'll be able to use the ``docker logs``\ncommand to inspect the status of the running container.\n\nThe above is really a simple configuration, but you can use the complete :ref:`set of service directives <service>`\nto control how each service behaves.\n\nExposing Ports and Running Chaperone\n------------------------------------\n\nLet's finish our ``Dockerfile`` by exposing some required ports and specifying Chaperone\nas the ``ENTRYPOINT`` so that Chaperone will start first and manage our container::\n\n    EXPOSE 22 80\n    ENTRYPOINT [\"/usr/local/bin/chaperone\"]\n\nHere, we've exposed ports 22 and 80 on the container and we're running the\n``/usr/local/bin/chaperone`` binary when the container launches.\n\nBuilding the Image\n------------------\n\nWe can now build our new image::\n\n   $ docker build -t <yourname>/chap-sample .\n\nRunning the Container\n---------------------\n\nOnce you've built an image, you can launch a container from it::\n\n  $ docker run -p 22 -p 80 -t -i <yourname>/chap-sample\n\n  Jul 21 04:08:19 6d3e4eee4265 apache2[6]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 
172.17.0.90. Set the 'ServerName' directive globally to suppress this message\n\nAnd when you want to stop it, just use ``Ctrl-C``::\n\n  C-c C-c^C\n  Ctrl-C ... killing chaperone.\n  Jul 21 04:08:23 6d3e4eee4265 chaperone[1]: Request made to kill system. (forced)\n  Jul 21 04:08:23 6d3e4eee4265 chaperone[1]: sshd.service terminated abnormally with <ProcStatus signal=2>\n\nWhat's Next?\n------------\n\nYou can build upon the above simple example if you want.  That gives you maximum flexibility to design\nyour container service environment exactly as you want.  If so, we recommend you scan the \n:ref:`reference` section so you know what features are available.\n\nIf you want, you can also use the complete set of pre-built Chaperone images\n`available here on Docker Hub <https://registry.hub.docker.com/repos/chapdev/>`_.  These images\nare excellent examples of complete Chaperone-managed development and production environments.\nYou can learn more by reading the introduction to these \nimages `on their GitHub page <https://github.com/garywiz/chaperone-docker>`_.\n"
  },
  {
    "path": "doc/source/guide/chap-docker-smaller.rst",
    "content": "\n.. _chap.small-docker:\n\nCreating Small Docker Images\n============================\n\nThe default official Docker images are not always very compact.  For example, the official Ubuntu image\nis about 180MB, and the official Java image is a whopping 810MB!\n\nThis is made worse by some distributions (like Ubuntu and Debian) whose defaults don't cater\nto small image sizes and prefer to assure that things you *might* need are installed.   So, for example,\ninstalling Python's package manager ``pip`` will cause about 200MB of extra packages to be installed just\n\"in case\" some package requires the full compiler toolchain (which most Python packages, including Chaperone, do not).\n\nChaperone, including all its dependencies, needs no more than 35-40MB, including Python3.\n\nSo, here is a quick guide to creating small Chaperone-based images with a minimum of effort.\n\n\nEliminating Ubuntu/Debian Recommended Packages\n----------------------------------------------\n\nThe simplest thing you can do when installing packages under Ubuntu or Debian is use the ``--no-install-recommends`` switch\nwhen you run ``apt-get``.  
For example, the :ref:`Simple Docker Example <chap.example-docker>` section recommends you install Chaperone, Apache and SSH like this::\n\n    RUN apt-get update && \\\n\tapt-get install -y openssh-server apache2 python3-pip && \\\n\tpip3 install chaperone\n    RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /etc/chaperone.d\n\nIf you do, you end up with a docker image which is 451MB::\n\n    $ docker images\n    REPOSITORY           TAG        IMAGE ID        CREATED          VIRTUAL SIZE\n    sample-simple        latest     328d42703323    34 minutes ago   451.8 MB\n    $\n\nHowever, if you change the install commands to::\n\n    RUN apt-get update && \\\n\tapt-get install -y --no-install-recommends openssh-server apache2 python3-pip && \\\n\tpip3 install chaperone\n\nThe functionally equivalent image is only 242MB::\n\n    sample-simple        latest     8839acc1e4ef    24 minutes ago   242 MB\n\nA Small Ubuntu Base Image with Chaperone\n----------------------------------------\n\nThe sample image above contains both SSH as well as Apache.  However, let's assume that you want\nto create the simplest Chaperone base image possible.   
Here is the ``Dockerfile`` to start with::\n\n    FROM ubuntu:14.04\n    RUN apt-get update && \\\n\tapt-get install -y --no-install-recommends python3-pip && \\\n\tpip3 install chaperone\n    RUN mkdir -p /etc/chaperone.d\n    COPY chaperone.conf /etc/chaperone.d/chaperone.conf\n    ENTRYPOINT [\"/usr/local/bin/chaperone\"]\n\nThe following ``chaperone.conf`` can serve as your starting point::\n\n    your.service: {\n      command: \"logger -p warn 'Replace this with your service'\",\n    }\n\n    console.logging: {\n      selector: '*.warn',\n      stdout: true,\n    }\n\nIf you build the above image, it will be just 226MB, only 38MB larger than the Ubuntu image::\n\n    $ docker images\n    REPOSITORY           TAG        IMAGE ID        CREATED            VIRTUAL SIZE\n    base-ubuntu          latest     182521cfa43e    About an hour ago  226 MB\n\n\nA 53MB Alpine Image with Chaperone\n----------------------------------\n\nIf you really care about keeping your images as minimal as possible, consider using \n`Alpine Linux <http://www.alpinelinux.org/>`_ as your base image.   
Alpine is a simple,\nstripped-down distribution that is ideal for creating lean, mean containers.\n\nHere's a ``Dockerfile`` that will create a small Alpine Linux image, complete with both\nChaperone and Python3::\n\n    FROM alpine:3.2\n    RUN apk add --update python3 && pip3 install chaperone\n    RUN mkdir -p /etc/chaperone.d\n    COPY chaperone.conf /etc/chaperone.d/chaperone.conf\n    ENTRYPOINT [\"/usr/bin/chaperone\"]\n\nThe resulting image is less than 53MB::\n\n    $ docker images\n    REPOSITORY           TAG        IMAGE ID        CREATED            VIRTUAL SIZE\n    base-alpine          latest     1c9d85d9bb67    About an hour ago  52.59 MB\n\n\nPre-Built Images\n----------------\n\nWhen building our official Chaperone base images (`located here on Docker Hub <https://hub.docker.com/u/chapdev/dashboard/>`_),\nwe used the techniques above to create versatile images with reasonably sophisticated start-ups.  They may be\noverkill for most applications, but they may also serve as good configuration examples.\n\nNotably, the `chaperone-alpinejava <https://hub.docker.com/r/chapdev/chaperone-alpinejava/>`_ image is a good\nexample of what's possible.   It contains a complete Oracle Java 8 production environment, Python 3, Chaperone, and\nit's a remarkably small 216MB!\n\nHopefully the above is a useful starting point for streamlining your images.\n\n\n"
  },
  {
    "path": "doc/source/guide/chap-docker.rst",
    "content": ".. _chap.docker:\n\nUsing Chaperone with Docker\n===========================\n\nWhile Chaperone is a general-purpose program that can be used to manage any small hierarchy of\nprocesses, it was designed specifically to solve problems encountered when creating containers.\n\nWhile the goal is to keep containers streamlined and small, ideally containing only one\nprocess, the reality is that in many real-world applications, existing daemons may need\nto be exploited for use within a container to save time or provide commonly-available\nfunctionality.  Some applications also benefit from greater modularity by breaking up\nfunctionality into multiple processes to better exploit CPU resources.\n\nThe moment a container contains even two cooperating processes, the problem of management\narises, and ``chaperone`` was designed to make multi-process management simple\nand well-contained.\n\n.. toctree::\n   :maxdepth: 2\n\n   chap-docker-simple.rst\n   chap-docker-smaller.rst\n"
  },
  {
    "path": "doc/source/guide/chap-intro.rst",
    "content": "\n.. _intro:\n\nIntroduction to Chaperone\n=========================\n\nOverview\n--------\n\nContainer technologies like Docker and Rocket have dramatically changed the way\nwe bundle and distribute applications. While many containers are built with\na single contained process in mind, other applications require a small suite\nof processes bundled into the \"black box\" that containers provide.  When this\nhappens, the need arises for a container control system, but the available\ntechnologies such as ``systemd`` or ``upstart`` are both too modular and\ntoo heavy, resulting in \"fat containers\" which introduce the very kinds of\noverhead container technologies are designed to eliminate.\n\nChaperone is designed to solve this problem by providing a single, self-contained\n\"caretaker\" process which provides the following capabilities within the container:\n\n* Dependency-based parallel start-up of services.\n* A robust process manager with forking, oneshot, simple, and notify\n  service types modelled after systemd.\n* Port-triggered services inside the container using the inetd service type.\n* A \"cron\" service type to schedule periodic tasks.\n* A built-in, highly configurable syslog service which can direct syslog\n  messages to multiple output files and duplicate selected streams or severities\n  to the container stdout as well.\n* Control capabilities so that services can be stopped, started, or restarted easily\n  at the command line or within application programs.\n* Emulation of systemd's ``sd_notify`` capability, allocating notify sockets\n  for each service so that cgroups and other privileges are not needed\n  within the container.  
Chaperone also recognizes a passed-in ``NOTIFY_SOCKET``\n  and will inform the host systemd of final container readiness and status.\n* Features to support the creation of \"mini-systems\" within a single directory\n  so that system services can run in userspace, or be mounted on host shares\n  to keep development processes and production processes as close to identical\n  as possible (see ``chaperone-lamp`` for an example of how this can be realized).\n  \nIn addition, many incidental features are present, such as process monitoring and\nzombie clean-up, clean shutdown and container restarts, and interactive console\nprocess detection so that applications know when they are being run interactively.\n\n"
  },
  {
    "path": "doc/source/guide/chap-other.rst",
    "content": "\n.. include:: /includes/incomplete.rst\n\n.. _chap.other:\n\nOther Uses for Chaperone\n========================\n\nChaperone was designed for use in container scenarios such as Docker containers.  However,\nit has also been designed to operate as a non-root process manager, though this has not\nbeen tested very well.\n\nIf running as a non-root user, observe the following:\n\n* The :ref:`--force <option.force>` switch will need to be used at startup.\n* Chaperone will not create its ``syslog`` service at ``/dev/log``.\n* Chaperone will not create the ``telchap`` command socket at ``/dev/chaperone.sock``.\n* Process cleanup will not occur if processes are reparented, since they will be\n  reparented to PID 1.\n\nOther than these notes, Chaperone *should* work as a process manager within userspace\nfor managing small groups of related processes.  If you find use cases outside\nof container management, let me know.\n"
  },
  {
    "path": "doc/source/guide/chap-using.rst",
    "content": ".. _chap.using:\n\nUsing Chaperone\n===============\n\nChaperone is a simple but full-featured process manager.  It is designed to be as flexible\nas possible.\n\n.. toctree::\n   :maxdepth: 2\n\n   chap-docker.rst\n   chap-other.rst\n"
  },
  {
    "path": "doc/source/includes/defs.rst",
    "content": ".. |ENV| replace:: :kbd:`$ENV`\n"
  },
  {
    "path": "doc/source/includes/incomplete.rst",
    "content": ".. note:: \n\n   This section is being worked on and is not yet complete.  The :ref:`reference` is currently complete and ready to use.\n\n   For status information about Chaperone and documentation, see :ref:`status`.\n\n     \n"
  },
  {
    "path": "doc/source/index.rst",
    "content": ".. chaperone documentation master file, created by\n   sphinx-quickstart on Mon May  6 17:19:12 2013.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\nChaperone: A lightweight, all-in-one process manager for lean containers\n========================================================================\n\nChaperone is a lightweight alternative to process environment managers\nlike ``systemd`` or ``upstart``.   While chaperone provides an extensive\nfeature set, including dependency-based startup, syslog logging, zombie harvesting,\nand job scheduling, it does all of this in a single self-contained process that can\nrun as a \"system init\" daemon or can run in userspace.   \n\nThis makes Chaperone an ideal tool for managing \"small\" process spaces like Docker\ncontainers while still providing the system services many daemons expect.\n\nIf you are using Chaperone with Docker, we suggest reading the :ref:`intro`, then trying out\nthe ``chaperone-lamp`` Docker image from the\n`chaperone-docker GitHub page <https://github.com/garywiz/chaperone-docker#try-it-out>`_.\n\nAny bugs should be reported as issues at https://github.com/garywiz/chaperone/issues.\n\nCurrent status of Chaperone and related repositories is located on the\n:ref:`Project Status <status>` page.\n\nContents\n--------\n\n.. 
toctree::\n   :maxdepth: 2\n\n   guide/chap-intro.rst\n   guide/chap-using.rst\n   ref/index.rst\n\nDownloading and Installing\n--------------------------\n\nThe easiest way to install ``chaperone`` is using ``pip`` from the https://pypi.python.org/pypi/chaperone package::\n\n    # Ubuntu or debian prerequisites...\n    apt-get install python3-pip\n\n    # chaperone (may be all you need)\n    pip3 install chaperone\n\nIf you're interested in the source code, or contributing, you can find the ``chaperone`` source code \nat https://github.com/garywiz/chaperone.\n    \n\nLicense\n-------\n\nCopyright (c) 2015, Gary J. Wisniewski <garyw@blueseastech.com>\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "doc/source/ref/command-line.rst",
    "content": ".. chaperone documentation\n   command line documentation\n\n.. _ref.chaperone:\n\nChaperone Command Reference\n===========================\n\nCommand Quick Reference\n-----------------------\n\nChaperone is usually executed as a container entrypoint and has the following syntax::\n\n  chaperone [options] [initial-command [args...]]\n\nThe initial command is optional.  If provided, it will be run as an \"IDLE\" oneshot service, running after all\nother services have been started.\n\nOptions are described in the table below, followed by more extensive reference information.\n\n=============================================================  =================================================================================\ncommand-line switch                \t       \t\t       function\n=============================================================  =================================================================================\n:ref:`--config=config-location <option.config>`                Specifies a file or directory where configuration information is found.\n                                   \t       \t\t       Default is ``/etc/chaperone.d``.\n:ref:`--debug <option.debug>`\t\t\t\t       Turns on debugging features.  (Implies ``--log-level=debug`` as well)\n:ref:`--disable-services <option.disable-services>`\t       No services will be started.  Only the command-line command will execute.\n:ref:`--exitkills <option.exitkills>`\t\t\t       When the command specified on the command line terminates, chaperone\n                                   \t       \t\t       will execute a normal shutdown operation.\n:ref:`--no-exitkills <option.no-exitkills>`\t\t       Reverses the effect of ``--exitkills``.  
Useful when ``--exitkills`` is\n                                   \t       \t\t       implied or specified as a default.\n:ref:`--force <option.force>`\t\t\t\t       If chaperone refuses to do something, tell it to try anyway.\n--help                             \t       \t\t       Displays command and option help.\n:ref:`--ignore-failures <option.ignore-failures>`\t       Run as if :ref:`ignore_failures <service.ignore_failures>` were true for all\n                                   \t       \t\t       services.\n:ref:`--log-level=level <option.log-level>`\t\t       Force the syslog log output level to this value.  (one of 'emerg', 'alert', 'crit',\n                                   \t       \t\t       'err', 'warn', 'notice', 'info', or 'debug').\n:ref:`--no-console-log <option.no-console-log>`                Forces 'stderr' and 'stdout' to *false* for all logging services.\n:ref:`--no-defaults <option.no-defaults>`\t\t       Ignore the :ref:`_CHAP_OPTIONS <env._CHAP_OPTIONS>` environment variable,\n                                   \t       \t\t       if present.\n:ref:`--no-syslog <option.no-syslog>`\t\t\t       Disable the syslog service at start-up and do not create ``/dev/log``.\n:ref:`--user=username <option.user>`\t\t\t       Run all processes as ``user`` (uid number or name).  The user must exist.\n                                   \t       \t\t       By default, all processes run as ``root``.\n:ref:`--create-user=newuser[:uid:gid] <option.create-user>`    Create a new user upon start-up with optional ``uid`` and ``gid``.  
Then\n                                   \t       \t\t       run as if ``--user=<user>`` was specified.\n:ref:`--default-home=directory <option.default-home>`          If :ref:`--create-user <option.create-user>` specifies a user whose\n\t\t\t       \t\t\t\t       home directory does not exist, then create the new user account with this\n\t\t\t\t\t\t\t       directory as the user's home directory.\n:ref:`--show-dependencies <option.show-dependencies>`\t       Display service dependency graph, then exit.\n:ref:`--task <option.task>`\t\t\t\t       Run in \"task mode\".  This implies ``--log-level=err``, ``--disable-services``,\n                                   \t       \t\t       and ``--exitkills``.  This switch is useful when the container publishes\n                                   \t       \t\t       commands which must run in isolation, such as displaying container internal\n                                   \t       \t\t       information like version numbers.\n--version                          \t       \t\t       Displays the chaperone version number.\n=============================================================  =================================================================================\n                                                 \nChaperone Command Execution\n---------------------------\n\nChaperone goes through a set of startup phases in order to establish a working environment.\n\n1.  Chaperone first examines the environment looking for the :ref:`_CHAP_OPTIONS <env._CHAP_OPTIONS>` variable.\n    If found, Chaperone uses it to establish default values.  The remaining environment variables will be passed to\n    running services depending upon both global and per-service settings.\n\n2.  Command line options are read and combined with any default options to form the final command option set.\n    Configuration information is optional, and if no configuration is found, it is not considered an error.\n\n3.  
Once configuration information is present, chaperone proceeds to start its internal ``syslog`` service,\n    creating sockets such as ``/dev/log``, and starts its internal command processor which accepts\n    commands at ``/dev/chaperone`` or interactive commands (via :ref:`telchap <ref.telchap>`) at\n    ``/dev/chaperone.sock``.  Chaperone also sets up utility environment variables such as\n    :ref:`_CHAP_INTERACTIVE <env._CHAP_INTERACTIVE>` so that they can be used in service configurations.\n\n4.  If a command and arguments are provided on the command line, an \"IDLE\" oneshot service is configured\n    so that it runs after all other services are started.  If chaperone is running interactively,\n    :option:`--exitkills <chaperone --exitkills>` is implied; otherwise, termination of this service\n    will leave the system running just as if any other oneshot service exited normally.\n\n5.  Services in the \"INIT\" service group (if any) are executed and must start successfully before other services\n    are started.\n\n6.  All other services are started in dependency order.  Failures during startup constitute a system\n    failure unless :option:`--ignore-failures <chaperone --ignore-failures>` is used on the command line, or\n    the service is declared with :ref:`ignore_failures <service.ignore_failures>` set to \"true\".\n\n7.  Services in the \"IDLE\" service group (if any) are executed (which includes any command specified on the\n    command line).\n\nOnce started, Chaperone monitors all services, performs logging, and cleans up zombie processes when\nthey exit.   When it receives a ``SIGTERM`` it will shut down all processes in an orderly fashion.\n\n\nNote that when a command is specified on the chaperone command line, chaperone starts a ``CONSOLE`` service internally.\nThis service can be managed just like any other service, and shows up in service listings when using the :ref:`telchap <ref.telchap>`\ncommand.   
If chaperone is started in an interactive environment (has a pseudo-tty as ``stdin``), it uses\n``SIGHUP`` to terminate the process. Otherwise, it uses ``SIGTERM`` as usual.   This is to accommodate login\nshells such as ``bash`` and ``sh``, which expect this behavior.\n\n\nOption Reference Information\n----------------------------\n\n.. program:: chaperone\n\n.. _option.config:\n\n.. option:: --config <file-or-directory>\n\n   Specifies the full or relative path to Chaperone's configuration directory or configuration\n   file.   For example, assume that ``chaperone.conf`` is a file and ``chaperone.d`` is the name\n   of a directory::\n\n     chaperone --config /home/wwwuser/chaperone.conf\n\n   will tell Chaperone to read all configuration directives from the single self-contained\n   configuration file specified.  No other directives will be read.  Or::\n\n     chaperone --config /home/wwwuser/chaperone.d\n\n   specifies that the contents of the directory ``chaperone.d`` should be scanned and any file\n   ending with ``.conf`` or ``.yaml`` will be read (in alphabetic order) to create the final\n   configuration.   To understand how Chaperone handles directives which occur in multiple\n   files, see :ref:`config.file-format`.\n\n   If not specified, defaults to ``/etc/chaperone.d``, or uses the default option set in\n   the ``_CHAP_OPTIONS`` (see :ref:`ch.env`) environment variable.\n\n.. _option.debug:\n\n.. option:: --debug\n\n   Enables debugging features.   When debugging is enabled:\n\n   * chaperone will print out a raw dump of all command line options (including those derived from defaults),\n     as well as configuration information.\n   * Internal debugging messages will be turned on, describing service start-up in more detail.\n   * Tracebacks for internal errors will be enabled, making it easier to report bugs.\n   * syslog logging will be forced to output all log levels (the same as using ``filter: '*.debug'`` in all\n     logging entries).\n\n.. 
_option.disable-services:\n\n.. option:: --disable-services\n\n   When specified, no services will be started or configured, though dependencies and configuration\n   syntax will be checked normally.\n\n   This switch can be useful in cases where services do not start correctly, or you want to enter a fresh\n   container for inspection or other purposes.  For example::\n\n     chaperone --disable-services /bin/bash\n\n   will run ``bash`` alone as a child of chaperone, or in the case of using chaperone-enabled Docker images::\n\n     docker run -t -i chapdev/chaperone-lamp --disable-services /bin/bash\n\n   creates a fresh LAMP container running only ``bash`` so you can inspect the contents of the container without\n   enabling any of the services.\n\n.. _option.exitkills:\n\n.. option:: --exitkills\n\n   This option works in conjunction with an ``initial-command`` specified on the command line, and will cause\n   the entire container to shut down when the command completes.\n\n   Chaperone attempts to anticipate what is needed automatically: if run in an interactive container,\n   it defaults to ``--exitkills``; when run as a daemon, it defaults to ``--no-exitkills``.  For example,\n   the following docker command will cause an exit after ``bash`` completes::\n\n     docker run -t -i --rm=true chapdev/chaperone-baseimage /bin/bash\n\n   whereas the following command will not exit upon bash's completion::\n\n     docker run -d chapdev/chaperone-baseimage /bin/bash\n\n   Both this option and :ref:`--no-exitkills <option.no-exitkills>` are provided for cases when Chaperone's\n   default behavior is not desired.\n\n.. _option.no-exitkills:\n\n.. option:: --no-exitkills\n\n   Will not shut down the system when the ``initial-command`` exits.  See :ref:`--exitkills <option.exitkills>`.\n\n.. _option.force:\n\n.. option:: --force\n\n   This option can be used to force Chaperone to attempt an operation even though it typically\n   would refuse.  
At present, there are not many situations where this switch is useful, but that may\n   change.  In cases where it can be used, Chaperone will display an alert, for example::\n\n     wheezy:~$ chaperone\n     Normally, chaperone expects to run as PID 1 in the 'init' role.\n     If you want to go ahead anyway, use --force.\n     wheezy:~$\n\n.. _option.ignore-failures:\n\n.. option:: --ignore-failures\n\n   Running with this option causes Chaperone to run as if the global setting :ref:`ignore_failures <settings.ignore_failures>` were\n   set to \"true\".\n\n   This can be useful when a service is failing on startup and causes system failure (as described in the :ref:`table.service-types` table).\n   In such situations, troubleshooting can be difficult since the container may be transient and failure information may be lost.\n\n   For example, to run a shell in a container even if it is failing on startup::\n\n     docker run -t -i --rm=true chapdev/chaperone-lamp --ignore-failures /bin/bash\n\n \n.. _option.log-level:\n\n.. option:: --log-level level-name\n\n   Normally, Chaperone should be configured to do logging with :ref:`logging directives <logging>`.  However, at times, more\n   detail is needed in the logs for troubleshooting purposes.  \n\n   This option should be followed by one of the log levels: **emerg**, **alert**, **crit**, **err**, **warn**, **notice**,\n   **info**, or **debug**.  When specified, it forces the logging system to behave as if *all* log definitions have a minimum\n   severity of ``level-name``.\n\n   For example, ``--log-level info`` assures that all types of messages except debugging messages will be displayed in all logs;\n   ``--log-level debug`` assures that all types of messages are displayed.\n\n   Note that logging still must be configured so that syslog messages have some destination.  By default, log messages\n   are captured but not directed to 'stdout' or a file.  
Most configurations include at least a simple logging directive like this::\n\n     console.logging: {\n       selector: '*.warn',\n       stdout: true,\n     }\n\n   which tells Chaperone to direct any messages of warning level or greater severity to 'stdout'.  Including ``--log-level info``,\n   for example, would cause Chaperone to behave as if the declaration looked like this::\n\n     console.logging: {\n       selector: '*.info',\n       stdout: true,\n     }\n\n   Note also that using the :ref:`--debug <option.debug>` switch automatically sets the log level to 'debug', so use of this\n   switch in such cases is redundant.\n\n.. _option.no-console-log:\n\n.. option:: --no-console-log\n\n   This switch unsets any :ref:`stdout <logging.stdout>` and :ref:`stderr <logging.stderr>` logging directives, thus disabling\n   any logging to the console.\n\n   Disabling console output can be useful in special-case situations, such as when a command-line command wishes to dump\n   container internals to ``stdout`` in some format (such as ``gzip``) which may be corrupted if inadvertent console\n   messages are produced.\n\n.. _option.no-syslog:\n\n.. option:: --no-syslog\n\n   This switch tells Chaperone to disable the normal creation of ``/dev/log`` and to perform all of its own logging to the\n   console.  Chaperone defaults to automatically starting its own internal logging service.   Disabling syslog can be useful in cases\n   where a container has some other method of logging, or wants to start a standard\n   syslog daemon itself.   \n\n   This switch is equivalent to setting the global setting :ref:`enable_syslog <settings.enable_syslog>` to ``false`` and will\n   override any settings in Chaperone's configuration files.\n\n.. _option.no-defaults:\n\n.. option:: --no-defaults\n\n   Using this switch causes Chaperone to ignore any configuration defaults set in the :ref:`_CHAP_OPTIONS <env._CHAP_OPTIONS>`\n   environment variable.  
Only the options provided on the command line itself will be recognized when this switch is used.\n\n.. _option.user:\n\n.. option:: --user name-or-number\n\n   Normally, when Chaperone is started, it runs as the same user which executed the ``chaperone`` command (usually ``root``).\n   However, in many cases, it is desirable to have Chaperone spawn all services and use the permissions of a different user. \n   This switch specifies the user account under which Chaperone will start all processes and logging services.  For example, \n   assume you have an account within a container called ``appuser`` and all services should run under that user account.\n   You would simply do this::\n\n     docker run -d my_chaperone_image --user appuser\n\n   Chaperone will automatically assure that ``HOME``, ``LOGIN`` and ``LOGNAME`` are set correctly so that the\n   application can locate all of its files relative to the application home directory.\n\n   Typically, a production container would be built with this switch incorporated into the built image itself.\n   (Such as by using Docker's ``CMD`` or ``ENTRYPOINT`` directives in a `Dockerfile <https://docs.docker.com/reference/builder/>`_.)\n\n   Note that the user *must exist* already inside the container's configuration.  If not, you can \n   use :ref:`--create-user <option.create-user>` to dynamically create a new user inside the container upon startup.\n\n.. _option.create-user:\n\n.. option:: --create-user name[:uid[:gid]] or --create-user name:/path/to/file[:uid[:gid]]\n\n   Often, a generic container can be designed to allow userspace mount points, isolating persistent data\n   outside the container so that the container becomes entirely transient.   
Because containers have a\n   set of isolated user credentials, sharing files and permissions with the host volumes can often\n   lead to difficulties.\n\n   The ``--create-user`` switch allows you to \"match\" the host user (and optionally group) to the running\n   process tree within the container so that file permissions are consistent.  \n\n   This switch accepts the following:\n\n   * A ``name`` parameter which should be the name of a user that will be created the first time\n     the container runs.\n   * An optional ``uid`` which must be the numeric user ID of the user to be created.  If omitted,\n     a new user ID will be assigned.\n   * An optional ``gid`` which can be the name or number of an existing group, or the number\n     of a new group to be created specifically for the new user.\n   * An optional format where the name is followed by the path to an *existing* file on the system\n     whose ``uid`` and ``gid`` will be used to create the new user.\n\n   The final alternative form is specified by including the path as follows::\n\n     --create-user name:/path/to/file\n\n   When ``uid`` and ``gid`` or the file option are omitted, Chaperone will use the container's installed OS policy\n   to determine how to assign user credentials.\n\n   This feature can be used to create generic start-up scripts for containers so that they\n   share the credentials of whatever user created them.  Here is an example::\n\n     #!/bin/bash\n     # Extract host user UID/GID\n     myuid=`id -u`\n     mygid=`id -g`\n     # Run the daemon\n     docker run -d -v /home:/home my-app-image --create-user $USER:$myuid:$mygid\n\n   Once started, the image can now be stopped and restarted while retaining\n   the credential relationship with the host.\n\n   .. 
note::\n      Because containers are often *not* transient, and can be restarted, Chaperone is a bit\n      smart about interpreting this switch, which will usually be present both when the container\n      is first started and when it is started again.  So, if the user name specified by\n      ``--create-user`` already exists, Chaperone will check to assure that any\n      ``uid`` or ``gid`` are correct, and proceed silently.\n\n      If the user credentials are defined differently, then an error will occur.\n\n\n.. _option.default-home:\n\n.. option:: --default-home directory\n\n   This option is meaningful only when used in combination with :ref:`--create-user <option.create-user>`\n   and specifies the home directory to use if the user's home directory does not exist.\n\n   This switch can be useful if a user's home directory may optionally be mounted as part\n   of a volume mount; if no such mount is provided, the user directory can default to an\n   alternate location within the container itself.\n\n   For example, assume that a container normally accepts a mount-point for ``/home``, where\n   the specified user (in this case ``joebloggs``) has a pre-existing home directory,\n   as follows::\n\n     docker run -v /home:/home myimage --create-user joebloggs --config apps/chaperone.conf\n\n   In this case, chaperone would find its configuration in ``/home/joebloggs/apps/chaperone.conf``.\n\n   But, if you wanted the container to be more versatile, you may want to create an\n   application directory *inside* the container as well so that the container could run\n   with either an internal configuration, or an external configuration to simplify\n   development.\n\n   So, the following could be used to provide a default home::\n\n     docker run -v /home:/home myimage --create-user joebloggs --default-home /defhome \\\n         --config apps/chaperone.conf\n\n   The above command would instead find chaperone's configuration in ``/defhome/apps/chaperone.conf``,\n   provided 
that no directory ``/home/joebloggs`` exists inside the container.\n\n   Typically, when a container is first built, this switch is included in the\n   :ref:`_CHAP_OPTIONS <env._CHAP_OPTIONS>` environment variable.  Doing so allows the container\n   to be executed with a home directory mountpoint, or without.\n\n\n.. _option.show-dependencies:\n\n.. option:: --show-dependencies\n\n   More complex service scenarios which use service directives :ref:`before <service.before>`,\n   :ref:`after <service.after>` and :ref:`service_groups <service.service_groups>` can sometimes\n   require debugging to assure the startup sequence is correct.\n\n   This switch provides some assistance by creating an ASCII dependency graph which\n   shows the relationship between services after Chaperone analyzes service\n   dependencies.\n\n   Here is how you can see a sample::\n\n     $ docker run -i --rm=true chapdev/chaperone-lamp --show-dependencies\n                 init | mysql | apache2 | logrotate | sample\n     init      | ====\n     mysql     |     ========\n     apache2   |             ==========\n     logrotate |             ======================\n     sample    |                                   =========\n     ----------> depends on...\n     init      | \n     mysql     | init\n     apache2   | mysql, init\n     logrotate | mysql, init\n     sample    | logrotate, apache2, mysql, init\n\n   The output consists of two sections.  The top section shows the earliest\n   start time for each service, relative to other defined services, roughly\n   in the order Chaperone will start them.  
The lower section contains\n   the explicit dependencies after they have been resolved.\n\n   You can also obtain this information from inside the container using\n   the \":ref:`telchap dependencies <telchap.dependencies>`\" command::\n\n      rbunion@69c0e692d78c:~$ telchap dependencies\n      telchap dependencies\n                  init | mysql | apache2 | logrotate | sample | CONSOLE\n      init      | ====\n      mysql     |     ========\n      apache2   |             ==========\n      logrotate |             ======================\n      sample    |                                   =========\n      CONSOLE   |                                            ==========\n      ----------> depends on...\n      init      | \n      mysql     | init\n      apache2   | init, mysql\n      logrotate | init, mysql\n      sample    | apache2, logrotate, init, mysql\n      CONSOLE   | apache2, logrotate, init, mysql, sample\n\n   If the container is running with a command-line command (such as ``bash``)\n   you will also see the ``CONSOLE`` service listed, which is the service\n   which was created internally to manage the interactive console.  Because\n   the console is part of the :ref:`IDLE group <service.service_groups>`,\n   you can see that it depends upon all other services before it will\n   start.\n\n.. _option.task:\n\n.. 
option:: --task\n\n   This is a convenience switch which is presently equivalent to combining:\n\n     * :ref:`--no-console-log <option.no-console-log>`,\n     * :ref:`--disable-services <option.disable-services>`, and\n     * :ref:`--exitkills <option.exitkills>`.\n\n   It is useful when the command provided on the command line does\n   some utility task which circumvents the normal operation of the\n   container.\n\n   For example, imagine that you create a complex container with\n   several internal components, and want to provide an easy way\n   to report on the versions of software inside the container.\n   You could write a simple script, perhaps called ``/app/bin/report-versions``,\n   then run it like this::\n\n     $ docker run -i --rm=true my-app-image --task /app/bin/report-versions\n     nginx: 1.9.1\n     cluster-supervisor: git tag = 'production-1.22'\n     replicator: 0.1\n     $\n\n   The ``--task`` switch attempts to silence any other output\n   and to assure that the container does nothing except start the command-line\n   command (using the configured Chaperone environment), then exit.\n\n   See the :ref:`get-chaplocal <get-chaplocal>` task for an example\n   of how this switch has been used in practice.\n"
  },
  {
    "path": "doc/source/ref/config-format.rst",
"content": ".. chaperone documentation\n   configuration directives\n\n.. _config.file-format:\n\nConfiguration File Format\n=========================\n\nChaperone's configuration is contained either in a single file or in a directory of configuration files.\nYou specify the configuration with the :ref:`--config <option.config>` switch on the command line.\nIf none is specified, the default `/etc/chaperone.d` is used.  If a directory is chosen, then only the\ntop-level of the directory will be searched, and only files ending in ``.conf`` or ``.yaml`` will be\nrecognized and read in alphabetic order.\n\nConfiguration files are written using `YAML 1.2 <http://www.yaml.org/spec/1.2/spec.html>`_.  For example, you can\ndefine two chaperone services like this::\n\n  mysql.service:\n    command: \"/etc/init.d/mysql start\"\n\n  apache2.service:\n    command: \"/etc/init.d/apache2 start\"\n    after: mysql.service\n    \nWhile the above works perfectly fine, we prefer to use the `YAML \"flow style\" <http://yaml.org/spec/1.2/spec.html#Flow>`_ which\nlooks very similar to JSON.  In flow format, the above looks like this::\n\n  mysql.service: {\n    command: \"/etc/init.d/mysql start\"\n  }\n\n  apache2.service: {\n    command: \"/etc/init.d/apache2 start\",\n    after: mysql.service,\n  }\n\nThe flow style is easy to read and works better as configurations become more complex.  So, throughout\nthe chaperone documentation, we'll stick to the flow format.\n\nComments can be included both between lines and at the end of lines using the hash symbol (``#``).  Here is a complete, well-commented\nconfiguration section for a sample service that's included with the ``chaperone-baseimage`` docker image::\n\n  # This is a sample oneshot service that runs at IDLE time, just before \n  # the console app, if present. 
It will output something so at least\n  # something appears on the screen.\n\n  sample.service: {\n    # This is a oneshot service, but most likely a real application will be another type\n    # such as 'simple' or 'forking'.\n    type: oneshot,\n    enabled: true,   # CHANGE TO 'false' so this app doesn't run any more\n\n    # Command output goes directly to stdout instead of to the syslog.\n    # Note that you normally want to have services output to the syslog, because\n    # chaperone's logging directives allow you to echo syslog data to stdout.  That's\n    # a better place to control things (see 010-start.conf).\n    command: \"$(APPS_DIR)/bin/sample_app\",\n    stdout: inherit,\n\n    # Because we're in the IDLE group, we will run only after all system services have\n    # started.  However, if there is a command line program, like /bin/bash, we want to\n    # run before that one.  All upper-case group names have special meanings.  However,\n    # you can define your own service groups, then use them to declare startup\n    # dependencies.\n    service_groups: \"IDLE\",\n    before: \"CONSOLE\",\n\n    # These environment variables will be added only for your service\n    env_set: {\n      'INTERACTIVE': '$(_CHAP_INTERACTIVE)',\n    }\n  }\n"
  },
  {
    "path": "doc/source/ref/config-global.rst",
    "content": ".. chaperone documentation\n   configuration directives\n\n.. include:: /includes/defs.rst\n\n.. _config.settings:\n\nConfiguration: Global Settings\n==============================\n\nSettings Quick Reference\n------------------------\n\nGlobal settings are identified by a configuration file section titled settings, for example::\n\n  settings: {\n    ignore_failures: true,\n    env_set: {\n      'LANG': 'en_US.UTF-8',\n      'LC_CTYPE': '$(LANG)',\n      'PATH': '$(APPS_DIR)/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin',\n    },\n  }\n\nDirectives applied in the setting section apply globally and some define defaults to be inherited by\nlogging or service declarations.\n\nEntries below marked with |ENV| support :ref:`environment variable expansion <env.expansion>`.\n\n.. _table.settings-quick:\n\n.. table:: Global Settings Quick Reference\n\n   =================================================== =============================================================================\n   settings variable                                   meaning\n   =================================================== =============================================================================\n   :ref:`env_inherit <settings.env_inherit>`           An array of patterns which can match one or more\n\t\t\t\t\t\t       environment variables.  Environment variables which\n\t\t\t\t\t\t       do not match any pattern will be excluded.  Default is ``['*']``.\n   :ref:`env_set <settings.env_set>`                   Additional environment variables to be set.\n   :ref:`env_unset <settings.env_unset>`               Environment variables to be removed.\n   :ref:`idle_delay <settings.idle_delay>`             The \"grace period\" after all services have started before\n\t\t\t\t\t\t       services in the \"IDLE\" group will begin running.  
Default is 1.0 seconds.\n   :ref:`ignore_failures <settings.ignore_failures>`   Specifies the ``ignore_failures`` default for services.\n   :ref:`process_timeout <settings.process_timeout>`   Specifies the amount of time Chaperone will wait for a service to start.\n\t\t\t\t\t\t       The default varies for each type of service.\n\t\t\t\t\t\t       See :ref:`service process_timeout <service.process_timeout>` for more\n\t\t\t\t\t\t       information.\n   :ref:`shutdown_timeout <settings.shutdown_timeout>` The amount of time Chaperone will wait for services to complete shutdown\n\t\t\t\t\t\t       before forcing a kill with SIGKILL.  Default is 8 seconds.\n   :ref:`startup_pause <settings.startup_pause>`       Specifies the ``startup_pause`` default for services.\n   :ref:`enable_syslog <settings.enable_syslog>`       Specifies whether Chaperone will start its own internal syslog service\n\t\t       \t\t\t\t       at start-up.  Defaults to ``true``.\n   :ref:`detect_exit <settings.detect_exit>`           If true (the default), then Chaperone tries to intelligently detect\n   \t\t     \t\t\t\t       when all processes have exited and none are scheduled, then terminates.\n   :ref:`uid <settings.uid>`                           The default uid (name or number) for all services and logging tasks.\n\t\t\t\t\t\t       Overrides the value specified by :ref:`--user <option.user>` or\n\t\t\t\t\t\t       :ref:`--create-user <option.create-user>`. |ENV|\n   :ref:`gid <settings.gid>`                           The default gid (name or number) for all services and logging tasks.\n   \t     \t\t\t\t\t       |ENV|\n   =================================================== =============================================================================\n\nSettings Reference\n------------------\n\n.. _settings.env_inherit:\n\n.. describe:: env_inherit [ 'pattern', 'pattern', ... 
]\n\nSpecifies a list of patterns which define what will be inherited from the environment passed to Chaperone when it\nwas executed.  Patterns are standard filename \"glob\" patterns.   By default, all environment variables will be\ninherited.\n\nFor example::\n\n  settings: {\n    env_inherit: [ 'PATH', 'TERM', 'HOST', 'SSH_*' ],\n  }\n\n.. _settings.env_set:\n\n.. describe:: env_set { 'NAME': 'value', ... }\n\nProvides a list of name/value pairs for setting or overriding environment variables.  The values may contain\n:ref:`variable expansions <env.expansion>`.  Note that variables are not expanded immediately, so you can\nrefer to variables which may be defined later in services.  For example::\n\n  settings: {\n    env_set: {\n      'SHELL': '/bin/ksh',\n      'PATH': '/services/$(_CHAP_SERVICE)/bin:$(PATH)',\n    },\n  }\n\nIn the above, while the value of ``SHELL`` is known, the value of ``_CHAP_SERVICE`` will not be valid\nuntil a service executes.   However, because variables use \"late expansion\", you can define variables\nsuch as the above as templates so that they will be available to all services.\n\n.. _settings.env_unset:\n\n.. describe:: env_unset [ 'pattern', 'pattern', ... ]\n\nRemoves the environment variables which match any of the given patterns from the environment.  These variables\nwill not be passed down to services or logging directives.  Patterns are standard filename 'glob' patterns.\n\n.. _settings.idle_delay:\n\n.. describe:: idle_delay seconds\n\nSpecifies the number of seconds Chaperone will pause before tasks in the :ref:`IDLE service group <service.service_groups>`\nwill be started.  May contain fractional values such as \"0.1\".  Defaults to 1 second.\n\nThis delay is useful in at least two common situations:\n\n1. 
When service startup may cause log messages to appear at the console,\n   the console program (usually a shell) may have its prompt interleaved with console messages.\n   This delay decreases the likelihood of this happening.\n\n2. When services of type :ref:`simple <service.sect.type>` are used, there is no real way to determine\n   if services have fully started.  However, the idle delay does nothing except add a \"fudge factor\",\n   which, while useful, would be better implemented using proper 'notify' or 'forking' services.\n\n\n.. _settings.ignore_failures:\n\n.. describe:: ignore_failures ( false | true )\n\n   If set to 'true', then the default for the service's :ref:`ignore_failures <service.ignore_failures>` will be\n   'true' rather than the normal 'false' default.   Any setting by a service overrides this value.\n\n   Primarily, this is useful for debugging and has similar utility to the command-line switch\n   :ref:`--ignore-failures <option.ignore-failures>` since it allows you to bypass normal system failure\n   checks and allow services to start even though dependencies may have failed.\n\n.. _settings.process_timeout:\n\n.. describe:: process_timeout seconds\n\n   This allows you to set the global default for service :ref:`process_timeout <service.process_timeout>`.\n   Normally the process timeout value is determined by the :ref:`service type <service.sect.type>`.  Setting\n   this value globally will cause *all* processes to use the same process timeout as their defaults.\n\n   If a service specifies its own value, it will always take precedence over this default.\n\n.. _settings.shutdown_timeout:\n\n.. describe:: shutdown_timeout\n\n   When Chaperone receives a shutdown request (usually ``SIGTERM``), it goes through an orderly shutdown,\n   telling each service to stop.  If there are still services running after the shutdown timeout, \n   Chaperone will force all processes to quit using ``SIGKILL``.  
The default for this value is\n   10 seconds.\n\n.. _settings.startup_pause:\n\n.. describe:: startup_pause\n\n   This allows you to set the global default for the service :ref:`startup_pause <service.startup_pause>` value.\n   If not specified, the service default will be used.\n\n   If a service specifies its own value, it will always take precedence over this default.\n\n.. _settings.enable_syslog:\n\n.. describe:: enable_syslog\n\n   This setting allows you to enable or disable Chaperone's internal syslog service.  If set to ``false``, then\n   the ``/dev/log`` file will not be created, and Chaperone will not intercept and redirect logging from running\n   applications.  Note that applications which write to ``stdout`` and ``stderr`` will still be intercepted\n   and processed by Chaperone's logging directives.\n\n   If omitted, this setting defaults to ``true``.\n\n   Syslog can also be disabled by using the Chaperone command-line option :ref:`--no-syslog <option.no-syslog>`.\n\n.. _settings.detect_exit:\n\n.. describe:: detect_exit\n\n   When 'true' (the default), Chaperone intelligently watches the process environment to determine\n   whether it should automatically exit.   Chaperone will exit when:\n\n   * All processes have exited, and ...\n   * There are no pending ``inetd`` or ``cron`` services which are configured and active.\n\n   Generally, this behavior is desirable, but there are situations where disabling this can be useful.\n   For example, if a container contains a set of dormant (disabled) services, and they are manually\n   enabled or disabled during runtime, setting this to 'false' will cause Chaperone to remain running\n   even if there are no active services and all work has completed.\n\n   If set to 'false', then Chaperone will only exit when it is explicitly killed with ``SIGTERM``,\n   or when a service exits whose :ref:`exit_kills <service.exit_kills>` configuration value is set to 'true'.\n\n.. _settings.uid:\n\n.. 
describe:: uid user-name-or-number\n\n   This sets the default user account which will be used by services and logging directives.\n   If the ``uid`` setting is not specified, the default will be the user specified on the command\n   line with :option:`--user <chaperone --user>` or :option:`--create-user <chaperone --create-user>`.\n\n   If none of the above is specified, Chaperone runs the service under its own account\n   without specifying a new user.\n\n   Services and logging are affected differently by user credentials.  See\n   :ref:`service uid <service.uid>` and :ref:`logging uid <logging.uid>` for more details.\n\n.. _settings.gid:\n\n.. describe:: gid group-name-or-number\n\n   When :ref:`uid <settings.uid>` is specified (either explicitly or implicitly inherited), the ``gid``\n   directive can be used to specify an alternate group to be used for logging or services.  \n"
  },
  {
    "path": "doc/source/ref/config-logging.rst",
"content": ".. chaperone documentation\n   configuration directives\n\n.. include:: /includes/defs.rst\n\n.. _logging:\n\nConfiguration: Logging Declarations\n===================================\n\nLogging Quick Reference\n-----------------------\n\nChaperone has its own internal ``syslog`` service which listens on the ``/dev/log`` socket.  However, by default,\nnone of the messages sent to the syslog will be stored or output unless logging declarations are made.\n\nThe simplest logging directive tells chaperone what to do with log entries using a superset of the familiar\n`syslogd configuration format <http://linux.die.net/man/5/syslog.conf>`_.  For example, the following will\ndirect all messages at the warning level (or greater) to ``stdout``::\n\n  console.logging: {\n    selector: '*.warn',\n    stdout: true,\n  }\n\nYou can define as many different logging entries as you like, and all will be respected as individual output targets.  If you\nhave services which do significant syslog output, you can decide on a per-service basis which logs go where,\nwhich aspects are sent to ``stdout`` and which go to log files.\n\nAn overview of logging directives follows, then detailed reference information.  Entries below\nmarked with |ENV| support :ref:`environment variable expansion <env.expansion>`.\n\n\n.. _table.logging-quick:\n\n.. table:: Logging Directives Quick Reference\n\n   ================================================= =============================================================================\n   logging keyword                        \t     meaning\n   ================================================= =============================================================================\n   :ref:`selector <logging.selector>`     \t     Specifies the syslog-compatible selection filter for this logging entry.\n\t\t  \t\t\t  \t     |ENV|\n   :ref:`file <logging.file>`             \t     Specifies an optional file for output. 
|ENV|\n   :ref:`stderr <logging.stderr>`         \t     Directs output to ``stderr`` (can be used with ``file``).\n   :ref:`stdout <logging.stdout>`         \t     Directs output to ``stdout`` (can be used with ``file``).\n   :ref:`syslog_host <logging.syslog_host>`          Directs output to the host or IP address specified (can be used in\n   \t\t     \t\t\t\t     combination with ``file``, ``stderr``, and ``stdout``).\n   :ref:`enabled <logging.enabled>`       \t     Can be set to ``false`` to disable this logging entry. |ENV|\n   :ref:`logrec_hostname <logging.logrec_hostname>`  Overrides the normal hostname inserted in syslog output records.\n   :ref:`overwrite <logging.overwrite>`   \t     If ``file`` is provided, then setting this to ``true`` will overwrite\n                                          \t     the file upon opening.  By default, log files operate in append mode.\n   :ref:`extended <logging.extended>`     \t     Prefixes log entries with their facility and priority (useful primarily\n                                          \t     for debugging).\n   :ref:`uid <logging.uid>`               \t     The uid (name or number) for permissions on created files and directories. \n   \t     \t\t\t\t  \t     |ENV|\n   :ref:`gid <logging.gid>`               \t     The gid (name or number) for permissions on created files and directories.\n      \t     \t\t\t\t  \t     |ENV|\n   ================================================= =============================================================================\n\n.. _logging.sect.selectors:\n\nSyslog Selectors\n----------------\n\nThe method for selecting which log entries are sent to which logging services is specified using a selector\nformat similar to the one used by the standard ``syslogd`` daemon. 
[#f1]_  Chaperone includes some extensions to\nthe standard format to introduce greater flexibility without deviating too far from the well-known syntax.\n\nIn the absence of a selector, Chaperone will direct all syslog output to the given location, so this entry\nechoes literally every ``syslog`` message to the container's ``stdout``::\n\n  everything.logging: { stdout: true }\n\nWhile this may be alright for simple applications, or for debugging, most applications require more nuanced\ncontrol of what goes where.  This is done by using *selectors*.  For example, the following includes\na selector which echoes only messages which have 'err' severity or greater to ``stdout``::\n\n  badstuff.logging: { stdout: true, selector: '*.err' }\n\nSelector Format\n***************\n\nThe general format for selectors is:\n\n   [!] *<facility>* . [!][=] *<priority>* ; ...\n\nwhere\n\n*<facility>*\n   Describes the subsystem where the syslog message originated.  It is a comma-separated list of one or more of\n   the following, with the last two options being Chaperone extensions:\n\n   1. An asterisk (``*``) indicating all facilities.\n   2. One of the keywords **kern**, **user**, **mail**, **daemon**, **auth**, **syslog**, **lpr**, **news**,\n      **uucp**, **clock**, **authpriv**, **ftp**, **ntp**, **audit**, **alert**, **cron**, or **local0**\n      through **local7**.\n   3. A program identifier enclosed in brackets, such as ``[httpd]`` or ``[chaperone]``.\n   4. A regular expression which will match any text within the message, such as ``/error/`` or ``/seg.*fault/``.\n\n*<priority>*\n   Describes the priority of the message, and is either an asterisk (``*``) or\n   one of the following keywords in ascending order\n   of severity: **debug**, **info**, **notice**, **warn** (or **warning**), **err** (or **error**),\n   **crit**, **alert**, **emerg**.\n\nSelectors including an exclamation mark are *negative* selectors, omitting otherwise included log entries.  
A selector\n*must* include at least one positive selector or no log entries will be selected.  For example::\n\n  # Select all errors (or more severe) except those sent to the auth subsystem\n  selector: '*.err;auth,authpriv.!*'\n\nHowever, the following selector will select nothing because there is no positive component::\n\n  # Does nothing\n  selector: 'auth,authpriv.!*'\n\nFacility Selection\n******************\n\nChaperone includes a more versatile set of options for selecting the facility where the message\noriginated.  You can include the classic ``syslog`` facility indication, a program name (in brackets),\nor even a regular expression to match.  \n\nFor example, assume a syslog message from ``sshd``::\n\n  Jun  3 19:40:16 weevil sshd[1642]: Accepted publickey for root from ::1 port 48488 ssh2: RSA 24:2d:95:ec:09:fb:49:fa:e9:ff:e0:9e:c2:4d:13:42\n\nSince ``sshd`` defaults to logging to the ``auth`` subsystem, the following would select the above message::\n\n  selector: 'auth,authpriv.*'\n\nYou could also specify the program name::\n\n  selector: '[sshd].*'\n\nYou could even use a regular expression to match arbitrary strings to select the message (assuming the above message is written\nat priority 'info' or greater)::\n\n  selector: '/publickey/.info'\n\nYou could also select all info messages which did not contain the word \"publickey\" like this::\n\n  selector: '*.info;!/publickey/.*'\n\nPriority Selection\n******************\n\nPriority selection is simpler, but it's important to notice that choosing a priority means that messages\nof that level *or greater severity* are selected::\n\n  selector: '*.err'\n\nwill select messages of **err**, **crit**, **alert**, or **emerg**, whereas::\n\n  selector: '*.*;*.!err'\n\nwill select messages of **debug**, **info**, **notice** or **warn**.   If you want to specify a priority\nwhich is exact (either for exclusion or inclusion), use the ``=`` prefix.  
The following selector\nincludes log entries *only* if they are at level 'debug'::\n\n  selector: '*.=debug'\n\n\nLogging Config Reference\n------------------------\n\n.. _logging.selector:\n\n.. describe:: selector: \"selector; [selector; ...]\"\n\n   Specifies the logging entries which will be selected for reporting by this service.  Multiple selectors can be provided,\n   separated by semicolons.  If no selector option is provided, Chaperone assumes a selector of ``*.*``.\n\n   See the separate section above :ref:`on syslog selectors <logging.sect.selectors>` for more details.\n\n.. _logging.file:\n\n.. describe:: file: \"filepath\"\n\n   Indicates that output should be written to ``filepath``, which must be a full pathname or a pathname relative\n   to the home directory of the logging user (implicitly defined, or defined by the :ref:`uid <logging.uid>` directive).\n\n   *Note*: this should be an actual file, not a system file such as ``/dev/stdout``.  You can use the :ref:`stdout <logging.stdout>`\n   directive to cause syslog output to be directed to ``stdout``.\n\n   Chaperone supports two special features for logging filenames:\n\n   1.  You can include substitutions within a log filename using the '%' substitution set compatible \n       with `strftime <http://man7.org/linux/man-pages/man3/strftime.3.html>`_.  If so, Chaperone will close and\n       reopen the log file whenever the name changes.  For example::\n\n\t file: \"$(APPS_DIR)/var/log/app-messages-%a.log\"\n\n       would create log files for each day of the week with names ``app-messages-sun.log``, ``app-messages-mon.log``, and so on. \n\n       Sometimes, this allows you to eliminate the need for log rotation.\n\n   2.  If Chaperone notices that the file's 'inode' or mountpoint has changed, it will close and reopen the file\n       automatically.  
This means you can create jobs to do log-rotation, or manually rename or move the existing logfile\n       and Chaperone will take notice and assure a new log file is opened.\n\n   Note that you can combine this directive with :ref:`stdout <logging.stdout>`, :ref:`stderr <logging.stderr>`, and\n   :ref:`syslog_host <logging.syslog_host>`.  Output will be simultaneously written to all chosen locations.\n\n.. _logging.stdout:\n\n.. describe:: stdout ( false | true )\n\n   If this is 'true', then all selected syslog records will be copied to the 'stdout' of the container.  Defaults to 'false'.\n\n   Note that you can combine this directive with :ref:`stderr <logging.stderr>`, :ref:`file <logging.file>`, and\n   :ref:`syslog_host <logging.syslog_host>`.  Output will be simultaneously written to all chosen locations.\n\n.. _logging.stderr:\n\n.. describe:: stderr ( false | true )\n\n   If this is 'true', then all selected syslog records will be copied to the 'stderr' of the container.  Defaults to 'false'.\n\n   Note that you can combine this directive with :ref:`stdout <logging.stdout>`, :ref:`file <logging.file>`, and\n   :ref:`syslog_host <logging.syslog_host>`.  Output will be simultaneously written to all chosen locations.\n\n.. _logging.syslog_host:\n\n.. describe:: syslog_host hostname-or-ip\n\n   When set, chaperone will send all matching log records to the remote host specified by ``hostname-or-ip``.  The remote\n   host should be running a ``syslog`` daemon on UDP port 514.\n\n   Since UDP is a connectionless protocol, no error will be given if the remote host is unreachable, or is\n   not running the ``syslog`` daemon.  Packets will silently be sent and ignored.\n\n   Note that you can combine this directive with :ref:`stdout <logging.stdout>`, :ref:`stderr <logging.stderr>`, and \n   :ref:`file <logging.file>`. Output will be simultaneously written to all chosen locations.\n\n.. _logging.logrec_hostname:\n\n.. 
describe:: logrec_hostname hostname-string\n\n   Normally, syslog records include the hostname of the current host.  For example::\n\n      Jul 16 02:53:54 813703fb4021 sudo : pam_unix(sudo:session): session closed for user root\n\n   Note that in the above line, the string ``813703fb4021`` is the hostname of the current machine, which, in the\n   case of Docker, is a randomly generated string.\n\n   You can use this directive to force the hostname to a particular string.  For example, you could set \n   ``logrec_hostname`` to ``dirserv-1``, which would cause the above sample line to\n   instead be written like this::\n\n      Jul 16 02:53:54 dirserv-1 sudo : pam_unix(sudo:session): session closed for user root\n\n   This can be useful when logs are being consolidated using remote logging, and some consistent means of identifying\n   the log source is desirable.\n\n.. _logging.enabled:\n\n.. describe:: enabled ( true | false )\n\n   Set this to 'false' to disable all logging to this logging service.\n\n.. _logging.overwrite:\n\n.. describe:: overwrite ( false | true )\n\n   By default, Chaperone will append logs to any existing log file which matches the :ref:`file <logging.file>` directive.\n   Setting this to 'true' will overwrite any log file.  Note that log files are opened when Chaperone starts running, so\n   any overwrite will be immediate.\n\n.. _logging.extended:\n\n.. 
describe:: extended ( false | true )\n\n   This option prefixes every output syslog line with the facility and priority which were used to write to the syslog.\n   Normally, this is not desirable, since people often rely upon the format of a log file line, which typically\n   looks like this::\n\n     Jun 15 02:09:33 su [27]: pam_unix(su:session): session opened for user root by (uid=1000)\n\n   If you set ``extended=true``, then log output lines will look like this::\n\n     authpriv.info Jun 15 02:09:33 su [27]: pam_unix(su:session): session opened for user root by (uid=1000)\n\n   Note that ``authpriv.info`` is at the beginning of the line, and indicates the facility and priority.\n\n   This is primarily useful for debugging and fine-tuning logging output, as there is no good way to determine\n   the exact facility and priority used by some daemons if they do not clearly document it.\n\n.. _logging.uid:\n\n.. describe:: uid user-name-or-number\n\n   Chaperone will create and manage log files as the user specified by ``uid``.  If ``uid`` is not specified,\n   the :ref:`settings uid <settings.uid>` will be used, or failing that, the user specified on the command\n   line with :option:`--user <chaperone --user>` or :option:`--create-user <chaperone --create-user>`.\n\n   If none of the above is specified, Chaperone runs the service under its own account\n   without specifying a new user.\n\n   Specifying a user requires root privileges.  
Within containers like Docker, chaperone usually runs\n   as root, so service configurations can specify alternate users even if they are run under a\n   different user account.\n\n   For example, if Chaperone were run from \n   docker using the `chaperone-baseimage <https://hub.docker.com/r/chapdev/chaperone-baseimage/>`_ image like this::\n\n     docker run -d chapdev/chaperone-baseimage \\\n                 --user wwwuser --config /home/wwwuser/chaperone.conf\n      \n   there is no reason that ``chaperone.conf`` could not contain the following logging definitions::\n\n     mysql.logging: {\n       uid: root,\n       selector: \"[mysql].*\",\n       file: \"/var/log/mysql-%d.log\",\n     }\n\n   In this case, the \"mysql.logging\" log file would be written as 'root', regardless of the user configuration\n   for other services.\n\n   Typically, when using a :ref:`userspace development model <guide.UDM>`, you want daemon log\n   files to be written under the development user's ID for easy management.\n\n.. _logging.gid:\n\n.. describe:: gid group-name-or-number\n\n   When :ref:`uid <logging.uid>` is specified (either explicitly or implicitly inherited), the ``gid``\n   directive can be used to specify an alternate group to be used for logging.  If not specified,\n   then the user's primary group will be used.\n\n   As with :ref:`uid <logging.uid>`, specifying a group requires root privileges.\n\n.. rubric:: Notes\n\n.. [#f1]\n\n   The \"standard\" ``syslogd``, for our purposes, is the one authored by `Wettstein and Schulze <http://linux.die.net/man/5/syslog.conf>`_.\n   While it has been in use for decades, there are also many variations and some inconsistencies in the way selectors are\n   interpreted.\n"
  },
  {
    "path": "doc/source/ref/config-service.rst",
"content": ".. chaperone documentation\n   configuration directives\n\n.. include:: /includes/defs.rst\n\n.. _service:\n\nConfiguration: Service Declarations\n===================================\n\nService Quick Reference\n-----------------------\n\nService configurations are identified by user-defined names and end with the suffix ``.service``.  So,\nfor example, the following defines a registration script called ``register_my_app`` which runs when all other\nservices have been launched::\n\n  myreg.service: {\n    type: oneshot,\n    command: \"/usr/local/bin/register_my_app --host central-registry.example.com\",\n    service_groups: IDLE,\n  }\n\nMultiple services can be declared in a single file.  Order within a configuration file is not important.\nHowever, if several configuration files are involved, services in subsequent files (alphabetically) will\nreplace earlier services defined with the same name.\n\nEach service inherits the environment defined by the :ref:`settings directive <config.settings>` and\ncan be tailored separately for the needs of each service.  Entries below marked with |ENV| support\n:ref:`environment variable expansion <env.expansion>`.\n\n.. _table.service-quick:\n\n.. table::  Service Directives Quick Reference\n\n   ================================================  =============================================================================\n   service variable                                  meaning\n   ================================================  =============================================================================\n   :ref:`type <service.type>`                        Defines the service type: 'oneshot', 'simple', 'forking', 'notify', 'inetd',\n                                                     or 'cron'.  Default is 'simple'.\n   :ref:`command <service.command>`                  Specifies the command to execute.  
The command is not processed by a shell,\n                                                     but environment variable expansion is supported. |ENV|\n   :ref:`enabled <service.enabled>`                  If 'false', the service will not be started, nor will it be required by\n                                                     any dependents.  Default is 'true'. |ENV|\n   :ref:`stderr <service.stderr>`                    Either 'log' to write stderr to the syslog, or 'inherit' to write stderr\n                                                     to the container's stderr file handle.   Default is 'log'. |ENV|\n   :ref:`stdout <service.stdout>`                    Either 'log' to write stdout to the syslog, or 'inherit' to write stdout\n                                                     to the container's stdout file handle.   Default is 'log'. |ENV|\n   :ref:`port <service.port>`\t\t\t     For service type 'inetd', specifies the dynamic port number for \n   \t      \t\t\t\t\t     connections.  There is no default. |ENV|\n   :ref:`after <service.after>`                      A comma-separated list of services or service groups which must start\n                                                     before this service is allowed to start (dependencies).\n   :ref:`before <service.before>`                    A comma-separated list of services or service groups which cannot be\n                                                     started until this service starts successfully (dependents).\n   :ref:`directory <service.directory>`              The directory where the command will be executed.  If omitted, the account\n                                                     home directory will be used. |ENV|\n   :ref:`env_inherit <service.env_inherit>`          An array of patterns which can match one or more\n                                                     environment variables.  
Environment variables which\n                                                     do not match any pattern will be excluded.  Default is ``['*']``.\n   :ref:`env_set <service.env_set>`                  Additional environment variables to be set.\n   :ref:`env_unset <service.env_unset>`              Environment variables to be removed.\n   :ref:`exit_kills <service.exit_kills>`            If 'true', the entire system should be shut down when this service stops.\n                                                     Default is 'false'.\n   :ref:`ignore_failures <service.ignore_failures>`  If 'true', failures of this service will be ignored but logged.\n                                                     Dependent services are still allowed to start.\n   :ref:`interval <service.interval>`                For `type=cron` services, specifies the crontab-compatible interval\n                                                     in standard ``M H DOM MON DOW`` format. |ENV|\n   :ref:`kill_signal <service.kill_signal>`          The signal used to kill this process.  Default is ``SIGTERM``.\n   :ref:`optional <service.optional>`                If 'true', then if the command file is not present on the system,\n                                                     the service will act as if it were not enabled.\n   :ref:`pidfile <service.pidfile>`                  The full path to the file which will contain the process 'pid'\n                                                     upon startup. 
('forking' and 'simple' types only) |ENV|\n   :ref:`process_timeout <service.process_timeout>`  Specifies the amount of time Chaperone will wait for a service to start.\n                                                     The default varies for each type of service.\n                                                     See :ref:`service types <service.sect.type>` for more\n                                                     information.\n   :ref:`restart <service.restart>`                  If 'true', then chaperone will restart this service if it fails (but\n                                                     not if it terminates normally).  Default is 'false'.\n   :ref:`restart_delay <service.restart_delay>`      The number of seconds to pause between restarts.  Default is 3 seconds.\n   :ref:`restart_limit <service.restart_limit>`      The maximum number of restart attempts.  Default is 5.\n   :ref:`service_groups <service.service_groups>`    A comma-separated list of service groups this service belongs to.\n                                                     All-uppercase group names are reserved by the system.\n   :ref:`setpgrp <service.setpgrp>`                  If 'true', then the service will be isolated in its own process\n                                                     group upon startup.  This is the default.\n   :ref:`startup_pause <service.startup_pause>`      The amount of time Chaperone will wait to see if a service fails\n                                                     immediately upon startup.  Default is 0.5 seconds.\n   :ref:`uid <service.uid>`                          The uid (name or number) of the user for this service. |ENV|\n   :ref:`gid <service.gid>`                          The gid (name or number) of the group for this service. |ENV|\n   ================================================  =============================================================================\n\n.. 
_service.sect.type:\n\nService Types\n-------------\n\nThe ``type`` option defines how the service will be treated, when it is considered active, and what happens\nwhen the service terminates either normally or abnormally.\n\nValid service types are: *simple* (the default), *oneshot*, *forking*, *notify*, *inetd*, and *cron*.   These service types\nare patterned loosely after service types defined by `systemd <http://www.freedesktop.org/software/systemd/man/systemd.service.html>`_,\nbut there are important differences [#f1]_ , so this section should be read carefully before making any assumptions.\n\nAs shown in :numref:`table.service-types`, each service type has a different behavior.   In the event the service's process reports\nan error, it is either a *system failure* or a *service failure*.  A system failure results in an immediate, orderly shutdown of\nany services which have been started, along with logging an error report and termination of the system.  A service failure is\nan isolated situation affecting only the service itself.\n\n.. _table.service-types:\n\n.. table::  Service Types\n\n   ================  ==========================================================  ========================= =========================\n   type              behavior                                                    system failure            service failure\n   ================  ==========================================================  ========================= =========================\n   simple            This is the default type.  
Chaperone considers a service    Service terminates        Service terminates\n                     \"started\" as soon as the startup grace period               abnormally during grace   abnormally later despite\n                     (defined by :ref:`startup_pause <service.startup_pause>`)   period or pidfile not     retries.\n                     elapses.                                                    found (if specified)\n                     If the service terminates normally at any time, the         before process timeout.\n                     service is considered \"started\" until reset.\n   forking           A forking service is expected to set up all                 Service terminates        Service terminates\n                     communications channels and assure that the service         abnormally during the     abnormally later despite\n                     is ready for application use, then exit normally            process timeout, or       retries (only if pidfile\n                     before the                                                  the pidfile cannot be     specified).  Otherwise,\n                     :ref:`process_timeout <service.process_timeout>`            found (if specified)      never. [#f2]_\n                     expires.  *Note*: The default process timeout for           during the timeout\n                     forking services is 300 seconds.                            period.\n   oneshot           A oneshot service is designed to execute scripts which      Service terminates        Service terminates\n                     complete an operation and are considered started once       abnormally during         abnormally during a\n                     they run successfully.  *Note*: The default process         the process timeout.      manual \"start\"\n                     timeout for oneshot services is 60 seconds.                                           
operation.\n   notify            A notify service is expected to establish communication     Service terminates        Service sends a\n                     with chaperone using the *sd_notify* protocol.  The         abnormally during the     failure notification.\n                     :ref:`NOTIFY_SOCKET <env.NOTIFY_SOCKET>`                    process timeout.\n                     environment variable will be set, and chaperone will\n                     consider the service started only when notified\n                     appropriately. *Note*: The default process timeout\n                     for a notify service is 30 seconds.\n   inetd             The \"inetd\" type listens for TCP connections on the port    Service executable        Never.  Services\n                     specified by the                                            is missing or invalid,    which fail are logged\n                     :ref:`port <service.port>` parameter.  When a connection    or TCP port is invalid    but new connections\n                     is received, chaperone will start a service connecting      or already in use.        will still be\n                     `stdin`, `stdout` and `stderr` of the inbound socket                                  accepted.\n                     to the specified command.\n   cron              The cron type schedules a script or program for periodic    Service executable        Never.  Failures of\n                     execution.  The service is considered started once          is missing or invalid     isolated executions\n                     successfully scheduled.  Both scheduling parameters         but not optional.         do not constitute\n                     (specified using :ref:`interval <service.interval>`)                                  a permanent service\n                     as well as the presence of the executable specified                                   failure.\n                     in :ref:`command <service.command>` will be checked\n                     before scheduling is considered successful.  
Cron\n                     services which are declared as\n                     :ref:`optional <service.optional>` will not be\n                     scheduled and will be treated as if they were disabled.\n   ================  ==========================================================  ========================= =========================\n\nNote: Unlike ``systemd``, Chaperone does not have an \"idle\" service type.  This is accomplished instead using a special\nsystem-defined service group called \"IDLE\", thereby permitting any service type to be activated when startup is\ncomplete.   See :ref:`service_groups <service.service_groups>` for more information.\n\n\nService Config Reference\n------------------------\n\n.. _service.type:\n\n.. describe:: type: ( simple | forking | oneshot | notify | inetd | cron )\n\n   The ``type`` option defines how the service will be treated, when it is considered active, and what happens\n   when the service terminates either normally or abnormally.  See the :ref:`separate section on service types <service.sect.type>` for\n   a full description of what chaperone service types are and how they behave.\n\n   This setting is optional.  If omitted, the default is \"simple\".\n\n.. _service.command:\n\n.. describe:: command: \"executable args ...\"\n\n   The ``command`` option defines the command and arguments which will be executed when the service is started.  Both\n   :ref:`environment variable expansion <env.expansion>` and \"tilde\" expansion for user names are supported, though\n   \"tilde\" expansion is supported only on the command name itself, not on arguments.\n\n   Note that the command line is *not* passed to a shell, so other shell meta-characters or shell environment variable\n   syntax is not supported.\n\n   The first token on the command line must be an executable program available in the ``PATH``.  If it is not found,\n   it will be considered an error.  
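\n\n   For example, the following oneshot declaration (a purely illustrative service)\n   relies on ``mkdir`` being found in the service's ``PATH``::\n\n     prep.service: {\n       type: oneshot,\n       command: \"mkdir -p /var/run/myapp\",\n     }\n\n   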
However, if :ref:`optional <service.optional>`\n   is set to 'true', then the service will be disabled in such cases.  This makes it easy to define configurations\n   for programs which may or may not be installed.  *Note*: If the executable is present, but permissions deny\n   access, it is considered an error regardless of whether the service is declared optional.\n\n   In all cases, the environment that is used for ``PATH`` and expansions is the same environment that would be\n   passed to the service.  If the executable is not available in the service's ``PATH``, then a fully qualified\n   pathname should be used.\n\n.. _service.enabled:\n\n.. describe:: enabled: ( true | false )\n\n   If enabled is 'true' (the default), then the service will start normally as per its type.  If it is\n   set to 'false', then the service will be ignored upon start-up, and any dependencies will\n   be considered satisfied.\n\n   Services can be enabled and disabled dynamically while Chaperone is running using the\n   :ref:`telchap command <ref.telchap>`.\n\n   Since you can use environment variable expansions, it can be useful to make service startup conditional\n   based upon some environment variable setting, such as::\n\n     mysql.service: {\n       type: simple,\n       enabled: \"$(ENABLE_MYSQL:+true)\",\n       ...\n     }\n\n.. _service.env_inherit:\n\n.. describe:: env_inherit: [ 'pattern', 'pattern', ... ]\n\n   Specifies a list of patterns which define what will be inherited from the environment defined by the\n   :ref:`global settings <config.settings>`.  Patterns are standard filename \"glob\" patterns.\n   By default, all environment variables will be inherited from the settings environment.\n\n   For example::\n\n     sample.service: {\n       command: '/opt/app/bin/do_the_stuff',\n       env_inherit: [ 'PATH', 'TERM', 'HOST', 'SSH_*' ],\n     }\n\n.. _service.env_set:\n\n.. describe:: env_set: { 'NAME': 'value', ... 
}\n\n   Provides a list of name/value pairs for setting or overriding environment variables.  The values may contain\n   :ref:`variable expansions <env.expansion>`.  The inherited environment will be the one configured\n   using similar settings directives such as :ref:`settings env_set <settings.env_set>`.\n\n.. _service.env_unset:\n\n.. describe:: env_unset: [ 'pattern', 'pattern', ... ]\n\n   Removes the environment variables which match any of the given patterns from the environment.\n   Patterns are standard filename 'glob' patterns.\n\n.. _service.stdout:\n\n.. describe:: stdout: ( 'log' | 'inherit' )\n\n   Can be set to 'log' to output service `stdout` to syslog (the default) or 'inherit' to output service messages\n   directly to the container's stdout.   While it may be tempting to use 'inherit', we suggest you use the syslog\n   service instead, then tailor :ref:`logging <logging>` entries accordingly if console output is desired.\n   This will provide much more flexibility.\n\n   Messages from the process `stdout` will be logged with syslog facility and severity `daemon.info`. [#f3]_\n\n.. _service.stderr:\n\n.. describe:: stderr: ( 'log' | 'inherit' )\n\n   Can be set to 'log' to output service `stderr` to syslog (the default) or 'inherit' to output service messages\n   directly to the container's stderr.   While it may be tempting to use 'inherit', we suggest you use the syslog\n   service instead, then tailor :ref:`logging <logging>` entries accordingly if console output is desired.\n   This will provide much more flexibility.\n\n   Messages from the process `stderr` will be logged with syslog facility and severity `daemon.warn`. [#f3]_\n\n.. _service.port:\n\n.. describe:: port: tcp-port-number\n\n   Specifies the TCP port number associated with an 'inetd' service, and must be specified when the\n   type is 'inetd'.   When this service is started, Chaperone will bind to the specified TCP port and\n   listen for incoming connections.  
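\n\n   A minimal 'inetd' declaration might look like this (the port number and command\n   path are purely illustrative)::\n\n     echo.service: {\n       type: inetd,\n       port: 7777,\n       command: \"/usr/local/bin/simple_echo\",\n     }\n\n   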
When a connection is received, Chaperone will start the service\n   specified by the given :ref:`command <service.command>` parameter.\n\n   The service will be started with `stdin`, `stdout`, and `stderr` connected to the started process.\n   For example, the following script would initiate a simple \"echo\" service which would terminate\n   when a blank line is sent::\n\n     #!/usr/bin/python3\n     import sys\n     while True:\n        result = input(\"echo:\")\n        if not result or result.strip() == \"\":\n           exit(0)\n        print(\"echoed ->\", result)\n        sys.stdout.flush()\n\n   Note the ``sys.stdout.flush()`` command.  Generally, such a command (or equivalent) will be necessary\n   to assure that the program flushes its output buffer.\n\n   Commands can be simple informational services, or long-running servers.  If Chaperone receives multiple\n   socket connections, it will start up as many processes as are needed to satisfy each request.  In other words,\n   a single command invocation is responsible for a single client connection.\n\n   If the script needs to do logging, it will need to do so via ``/dev/log``, or an equivalent syslog facility\n   within the language, since `stderr` also is connected to the remote socket.\n\n   There are many use-cases for creating simple port-triggerable services, especially in environments\n   like Docker where containers contain only one or two processes, but auxiliary features may be\n   desired without committing a long-running daemon to the task.\n\n   For example, here is a blog post which\n   describes `Service Monitoring with xinetd <http://www.softwareprojects.com/resources/programming/t-monitoring-services-with-xinetd-2082.html>`_.\n   The same type of scripts work identically with Chaperone.\n\n.. _service.after:\n\n.. 
describe:: after: \"service-or-group, ...\"\n\n   Specifies one or more services or service groups which must be started successfully before this service\n   will start.\n\n   The value specified is a comma-separated list of services or service groups.  Services are always\n   identified with a ``.service`` suffix.  Otherwise, the reference is to a service group.  Thus::\n\n     some.service: { after: \"one.service, setup\", command: \"echo some\" }\n\n   defines a service which will start only after the service \"one.service\" and all services which\n   are members of the \"setup\" group have started.\n\n   For more information see :ref:`service_groups <service.service_groups>`.\n\n.. _service.before:\n\n.. describe:: before: \"service-or-group, ...\"\n\n   Specifies one or more services or service groups which will not be started until this service starts\n   successfully.\n\n   The value specified is a comma-separated list of services or service groups.  Services are always\n   identified with a ``.service`` suffix.  Otherwise, the reference is to a service group.  Thus::\n\n     some.service: { before: \"one.service, application\", command: \"echo some\" }\n\n   defines a service which will start before \"one.service\" and any services which\n   are members of the \"application\" group.\n\n   For more information see :ref:`service_groups <service.service_groups>`.\n\n.. _service.directory:\n\n.. describe:: directory: \"directory-path\"\n\n   Specifies the start-up directory for this service.  If not provided, then the start-up directory is\n   the home directory for the user under which the service will run.\n\n.. _service.exit_kills:\n\n.. describe:: exit_kills: ( false | true )\n\n   If set to 'true', then when this service terminates, Chaperone will initiate an orderly system shutdown.\n   This is useful in cases where the lifetime of a controlling service, such as a shell or main application,\n   should dictate the lifetime of the container.\n\n.. 
_service.ignore_failures:\n\n.. describe:: ignore_failures: ( false | true )\n\n   If set to 'true', then any failure by the service will be logged but ignored.  Service failures are logged\n   using syslog facility `local5.info` (`local5` is the facility used for all messages that originate from\n   Chaperone itself).\n\n.. _service.interval:\n\n.. describe:: interval: \"cron-interval-spec\"\n\n   This is required for service ``type=cron`` and contains the cron specification which indicates the interval\n   for periodic execution.  Nearly all features documented in `this crontab man page <http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5>`_\n   are supported, including extensions for ranges and special keywords such as ``@hourly`` which can be specified\n   with or without the leading ``@``.  So, a simple hourly cron service can be defined like this::\n\n     cleanup_cookies.service: {\n       type: cron,\n       interval: hourly,\n       command: \"/opt/superapp/bin/clean_temp_cookies --silent\",\n     }\n\n   which is equivalent to::\n\n     cleanup_cookies.service: {\n       type: cron,\n       interval: \"0 * * * *\",\n       command: \"/opt/superapp/bin/clean_temp_cookies --silent\",\n     }\n\n   Chaperone also supports an optional sixth field [#f4]_ for seconds, so the following runs\n   every 15 seconds::\n\n     pingit.service: {\n       type: cron,\n       interval: \"* * * * * */15\",\n       command: \"/opt/superapp/bin/ping_central_hub\",\n     }\n\n   Note that the ``@reboot`` special nickname is not supported, since Chaperone provides similar features using\n   the ``INIT`` service group.\n\n.. _service.kill_signal:\n\n.. describe:: kill_signal: ( name | number )\n\n   Specifies the signal which is sent to the process for normal termination.  By default, Chaperone sends ``SIGTERM``.\n\n.. _service.optional:\n\n.. 
describe:: optional: ( false | true )\n\n   If 'true', then this service is considered optional and will be disabled upon start-up if the executable is not\n   found.   Only a \"file not found\" error triggers optional service behavior.  If the executable file exists,\n   but permissions are incorrect, it is still considered a failure.\n\n   Optional services may be started manually later if, for example, the executable should become available after\n   system start-up.\n\n.. _service.pidfile:\n\n.. describe:: pidfile: file-path\n\n   This setting specifies the \"PID file\" which the service will create upon startup to indicate its controlling\n   process ID.   This is valid only for 'simple' and 'forking' services.  The appearance of the pidfile is an\n   indication that the service has been activated.\n\n   When the ``pidfile`` directive exists:\n\n   1. Chaperone starts the service command normally.\n   2. If the executable runs without error, Chaperone will watch for the appearance\n      of the file specified in the ``pidfile`` directive.\n   3. If the PID file does not appear within the timeframe given by the :ref:`process_timeout <service.process_timeout>`,\n      then it is considered a failure.\n\n   If the ``pidfile`` is seen, and contains a valid integer process ID *which denotes a running process*, then\n   Chaperone will monitor the status of that process for failures to determine the disposition of the service.\n\n   For 'simple' service types, it is possible (and likely) that the PID value will be the same as the PID of the\n   originally running process, since 'simple' types are not expected to exit for the duration of their activity.\n\n.. _service.process_timeout:\n\n.. describe:: process_timeout: seconds\n\n   When Chaperone is waiting for a service to start, it will wait for this number of seconds before it considers that\n   the service has failed.   
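\n\n   For example, a slow-starting forking service can be given extra time explicitly\n   (the command shown here is hypothetical)::\n\n     db.service: {\n       type: forking,\n       command: \"/etc/init.d/mydb start\",\n       process_timeout: 120,\n     }\n\n   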
This value is meaningful for process types `oneshot`, `forking`, and `notify` only\n   and is ignored for other types:\n\n   For `oneshot` services:\n      Chaperone assumes that a oneshot service is only started once it completes its task successfully, and\n      therefore waits up to ``process_timeout`` seconds before allowing dependent services to start.  For oneshot\n      services the default process timeout is *60 seconds*.\n\n   For `forking` services:\n      Chaperone assumes a forking service does set-up, then proceeds to launch subprocesses to provide\n      services.   The default process timeout for a forking service is *300 seconds*.\n\n   For `notify` services:\n      Since a notify service has an explicit means to tell chaperone about its status, the process timeout\n      defaults to *30 seconds*.\n\n.. _service.restart:\n\n.. describe:: restart: ( false | true )\n\n   By default, chaperone will not restart a service once it has failed.  Setting this to 'true' will tell chaperone\n   to wait :ref:`restart_delay <service.restart_delay>` seconds after a failure, then restart the service until the\n   :ref:`restart_limit <service.restart_limit>` is reached.   If all restarts fail, chaperone considers\n   the service to be failed.\n\n   Note that restarts do *not* happen during system startup.  If a service fails during system startup, the\n   failure is considered a system failure (unless :ref:`ignore_failures <service.ignore_failures>` is 'true').\n\n.. _service.restart_delay:\n\n.. describe:: restart_delay: seconds\n\n   When a service fails and is about to be restarted, chaperone delays for this interval before attempting\n   restart.   By default, this value is *0.5 seconds*.\n\n   Consider increasing the restart delay for services which may fail because of network issues, since network\n   issues may be transient (such as routers rebooting).\n\n.. _service.restart_limit:\n\n.. 
describe:: restart_limit: number-of-retries\n\n   This value indicates the number of restarts which will be performed when a service fails.  Once the service\n   starts successfully, the restart counter is reset.\n\n.. _service.service_groups:\n\n.. describe:: service_groups: \"group[,group,...]\"\n\n   This directive declares that the service has membership in one or more service groups.  If not specified,\n   all services have membership in the group \"default\".\n\n   There are also two system-defined groups which have special meaning:\n\n   ``INIT``\n     This group will be started first, before any other service that is *not a member of the INIT group* itself.\n     The order in which services will start within the INIT group is unspecified unless services make explicit\n     :ref:`before <service.before>` or :ref:`after <service.after>` declarations.\n\n   ``IDLE``\n     This group will be started after all other services that are *not a member of the IDLE group* itself.\n     The order in which services will start within the IDLE group is unspecified unless services make explicit\n     :ref:`before <service.before>` or :ref:`after <service.after>` declarations.\n\n   User-defined groups can be defined and used for any purpose, but must not have names which are all\n   uppercase, as these are reserved for system groups.\n\n   Group membership does *not* imply that the group will be started as a unit, or that the entire group\n   will complete startup before other groups start.  
For example, consider these service declarations::\n\n     one.service:    { service_groups: \"setup\", command: \"echo one\" }\n     two.service:    { service_groups: \"setup\", command: \"echo two\" }\n     three.service:  { service_groups: \"sanity_checks\", command: \"echo three\" }\n     four.service:   { service_groups: \"sanity_checks\", command: \"echo four\" }\n\n   Chaperone does not consider members of the same group to be related in any way, and will start them\n   in parallel, in no particular order, at start-up.  Assuring a sequence of start-up operations *must* be done using\n   :ref:`before <service.before>` or :ref:`after <service.after>`, as follows::\n\n     one.service:    { service_groups: \"setup\", command: \"echo one\" }\n     two.service:    { service_groups: \"setup\", command: \"echo two\" }\n     three.service:  { service_groups: \"sanity_checks\", after: \"setup\", command: \"echo three\" }\n     four.service:   { service_groups: \"sanity_checks\", command: \"echo four\" }\n\n   The \"after\" declaration assures that \"three.service\" will start only once all services in the \"setup\"\n   group have successfully started.  *But*, \"four.service\" is still independent and can start at any time.\n\n   So, for \"four.service\" there are two options.  By declaring \"four.service\" like this::\n\n     four.service:   { service_groups: \"sanity_checks\", after: \"setup\", command: \"echo four\" }\n\n   it will also wait for all \"setup\" services, *but* it will start in parallel with \"three.service\",\n   whereas the declaration::\n\n     four.service:   { service_groups: \"sanity_checks\", after: \"three.service\", command: \"echo four\" }\n\n   achieves two goals: it assures that \"four.service\" starts after \"three.service\" but also assures\n   all \"setup\" services will be completed, since \"three.service\" already expresses such a dependency.\n\n   .. 
note::\n      In all cases, references to a service group operate identically to explicit references to all\n      group members.  Group references are merely a shortcut.  Therefore::\n\n        four.service:   { service_groups: \"sanity_checks\",\n                          after: \"setup\",\n                          command: \"echo four\" }\n\n      is functionally identical to::\n\n        four.service:   { service_groups: \"sanity_checks\",\n                          after: \"one.service,two.service,three.service\",\n                          command: \"echo four\" }\n\n\n.. _service.setpgrp:\n\n.. describe:: setpgrp: ( true | false )\n\n   By default, chaperone makes each newly created service the leader of its own process group.  This has the advantage\n   of providing partial isolation for the service, and assures that if signals are sent to the group, no other processes\n   are affected.  It also provides a poor man's method of tracking service groupings. [#f5]_\n\n   While this is a reasonable default, some interactive processes (such as shells like ``/bin/bash``) should be executed with\n   ``setpgrp: false``, since they use process groups extensively themselves and will want to set up process groups\n   according to their job control strategy.\n\n\n.. _service.startup_pause:\n\n.. describe:: startup_pause: seconds\n\n   When Chaperone starts a service, it waits a short time to determine whether the service fails immediately.  This\n   is the \"startup_pause\" and defaults to 0.5 seconds.\n\n   Currently, Chaperone only uses this technique for ``type=simple`` and ``type=notify`` services, so\n   it will have no impact on other service types.  Because \"simple\" services are considered started as soon as\n   process execution begins, this short pause catches errors which occur within the first few moments of\n   process initialization (such as unexpected permission problems) rather than allowing dependent\n   services to start immediately.\n\n.. 
_service.uid:\n\n.. describe:: uid: user-name-or-number\n\n   Chaperone will run the service as the user specified by ``uid``.  If ``uid`` is not specified for the service,\n   the :ref:`settings uid <settings.uid>` will be used, and finally the user specified on the command\n   line with :option:`--user <chaperone --user>` or :option:`--create-user <chaperone --create-user>`.\n\n   When Chaperone is told to use a particular user account, it also sets the ``HOME``, ``USER``, and\n   ``LOGNAME`` environment variables to reflect those associated with the user.\n\n   If none of the above are specified, Chaperone runs the service normally under its own account\n   without specifying a new user.\n\n   Specifying a user requires root privileges.  Within containers like Docker, chaperone usually runs\n   as root, so service configurations can specify alternate users even if they are run under a\n   different user account.\n\n   For example, if Chaperone were run from docker\n   using the `chaperone-baseimage <https://hub.docker.com/r/chapdev/chaperone-baseimage/>`_ image like this::\n\n     docker run -d chapdev/chaperone-baseimage \\\n                 --user wwwuser --config /home/wwwuser/chaperone.conf\n\n   there is no reason that ``chaperone.conf`` could not contain the following service definitions::\n\n     mysql.service: {\n       uid: root, command: \"/etc/init.d/mysql start\"\n     }\n     myapp.service: {\n       command: \"~/bin/my_application\"\n     }\n\n   In this case, \"myapp.service\" would run as user \"wwwuser\" because no ``uid`` was specified.  However,\n   because Docker runs chaperone as root, it is perfectly valid for the configuration file to tell\n   Chaperone to run the \"mysql\" startup command as root.\n\n.. _service.gid:\n\n.. 
describe:: gid: group-name-or-number\n\n   When :ref:`uid <service.uid>` is specified (either explicitly or implicitly inherited), the ``gid``\n   directive can be used to specify an alternate group to be used for execution.  If not specified,\n   then the user's primary group will be used.\n\n   As with :ref:`uid <service.uid>`, specifying a group requires root privileges.\n\n.. rubric:: Notes\n\n.. [#f1]\n\n   Making chaperone's service types similar to ``systemd`` service types is a blessing and a curse.  The blessing is that ``systemd``\n   is rapidly becoming the new standard for init daemons, so over time, there will be a good general knowledge of what various\n   service types mean.  The downside is that chaperone is significantly simpler than ``systemd`` and there will be subtle\n   (and probably, to some, annoying) differences.  However, we took the risk of choosing a similar model, which we believe will\n   benefit from the standardization of important process management techniques like\n   `sd_notify <http://www.freedesktop.org/software/systemd/man/sd_notify.html>`_ as well as making it easier for those\n   familiar with ``systemd`` to use their knowledge in defining chaperone configurations.\n\n.. [#f2]\n\n   Chaperone does not attempt \"PID guessing\" as ``systemd`` and some other process managers attempt to do.  The assumption\n   is that \"notify\" will be the preferred means to determine if a service has started successfully, and to know what\n   its PID is in case of a crash or internal notification.\n\n.. [#f3] Syslog facilities and severity levels are documented `on Wikipedia <https://en.wikipedia.org/wiki/Syslog>`_.\n\n.. [#f4]\n\n   Yes, the seconds field appears at the *end*.  This is inherited from the `croniter package <https://github.com/kiorky/croniter>`_\n   which we use to parse and manage the internal cron intervals.  We considered not documenting it because it seems\n   a bit non-standard, then figured... hey, could be useful.\n\n.. 
[#f5]\n\n   There is really only one bulletproof way to manage isolated groups of processes:\n   `control groups (or cgroups) <https://en.wikipedia.org/wiki/Cgroups>`_.  Chaperone intentionally avoids using\n   control groups for a number of reasons, but mostly because they require privileges which make containers\n   less secure.  In addition, despite their power and utility, control groups have become a contentious\n   feature, used extensively, and often in incompatible ways, by\n   both `Docker <https://www.docker.com>`_ and `systemd <http://www.freedesktop.org/software/systemd/man/systemd.service.html>`_.  Chaperone\n   is intended to be lean, simple and compatible with containers.  For now, we believe that avoiding cgroups will\n   keep Chaperone a more useful and simple accessory.\n"
  },
  {
    "path": "doc/source/ref/config.rst",
    "content": ".. chaperone documentation\n   configuration directives\n\n.. _config:\n\nChaperone Configuration\n=======================\n\nChaperone has a versatile configuration language that can be quick and easy to use, or can comprise many services\nand dependencies.  For example, the following user application plus a MySQL database server, along with syslog\nredirection, can be defined in just a few lines::\n\n  mysql.service:  { command: \"/etc/init.d/mysql start\",\n                    type: forking }\n  myapp.service:  { command: \"/opt/apps/bin/my_application\",\n                    restart: true, after: mysql.service }\n  syslog.logging: { filter: \"*.info\", stdout: true }\n\nConfigurations can be as sophisticated as desired, including cron-type scheduling, multiple types of jobs, and\ncomplex job trees.   These sections provide a complete reference to the Chaperone configuration directives.\n\n.. toctree::\n\n   config-format.rst\n   config-global.rst\n   config-service.rst\n   config-logging.rst\n"
  },
  {
    "path": "doc/source/ref/env.rst",
    "content": ".. include:: /includes/defs.rst\n\n.. _ch.env:\n\nEnvironment Variables\n=====================\n\nOverview and Quick Reference\n----------------------------\n\nChaperone-specific environment variables are described here.  Because environment variables\nare an important configuration component for many applications, Chaperone tries to make\nsure any Chaperone-specific variables do not automatically pollute the environment, and\nyet are available when needed.\n\nSo, with few exceptions, Chaperone environment variables start with\nthe prefix ``_CHAP_``, but are not automatically passed down to\nservices.  If you want to make these available to services, simply\ndefine an environment variable in your configuration which expands to\none of the internal variables::\n\n  settings: {\n    env_set: {\n      # Make the relevant service name available to all processes\n      'SERVICE_NAME': '$(_CHAP_SERVICE)',\n    }\n  }\n\n.. _table.env-quick:\n\n.. table:: Environment Variable Quick Reference\n\n   ======================================================  =====================================================================\n   environment variable                              \t   meaning\n   ======================================================  =====================================================================\n   :ref:`_CHAP_CONFIG_DIR <env._CHAP_CONFIG_DIR>`    \t   Will be set to the full path to the directory containing the\n\t\t\t  \t\t\t     \t   configuration file or directory.\n   :ref:`_CHAP_INTERACTIVE <env._CHAP_INTERACTIVE>`  \t   Will be set to \"1\" if chaperone is running with a controlling tty.\n   :ref:`_CHAP_OPTIONS <env._CHAP_OPTIONS>`          \t   Recognized during start-up and contains any default command-line\n   \t\t       \t\t\t\t     \t   options.\n   :ref:`_CHAP_SERVICE <env._CHAP_SERVICE>`          \t   Contains the name of the current service.\n   :ref:`_CHAP_SERVICE_SERIAL <env._CHAP_SERVICE_SERIAL>`  Contains a 
monotonically-increasing serial number which starts at\n\t\t\t    \t\t\t\t   1 and increases each time a service command is invoked.\n   :ref:`_CHAP_SERVICE_TIME <env._CHAP_SERVICE_TIME>`\t   Contains the Unix timestamp of the start-time of the service.\n   :ref:`_CHAP_TASK_MODE <env._CHAP_TASK_MODE>`      \t   Will be set to \"1\" if chaperone was invoked with the\n   \t\t\t \t\t\t     \t   :ref:`--task <option.task>` option.\n   :ref:`NOTIFY_SOCKET <env.NOTIFY_SOCKET>`          \t   Set to the per-service systemd-compatible notify socket for\n\t\t       \t\t\t\t     \t   service started with :ref:`type=notify <service.type>`.\n   ======================================================  =====================================================================\n\n\nManaging Environment Variables\n------------------------------\n\nEnvironment Inheritance\n***********************\n\nChaperone provides extensive control over environment variables as they are passed from the parent (often the\ncontainer technology, like Docker), and eventually down to individual services.\n\n.. _figure.env:\n\n.. figure:: /images/env_inherit.svg\n   :align: center\n\n   Chaperone Environment Management\n\nAs shown in :numref:`figure.env`, Chaperone controls the environment at two levels, and with three separate directives:\n\n1. Chaperone creates a \"global\" settings environment which consists of environment variables inherited from\n   the parent environment, modified by the three environment directives :ref:`env_inherit <settings.env_inherit>`, \n   :ref:`env_set <settings.env_set>`, and :ref:`env_unset <settings.env_unset>`.\n2. Each service can further modify the resulting environment using the same directives, and the changes apply\n   only to the environment of the selected service.\n\nIn each case, Chaperone processes each set of directives in the same way:\n\n1. 
The new environment is initialized based upon the setting of :ref:`env_inherit <settings.env_inherit>`,\n   a list of patterns.  If not specified, Chaperone assumes all environment variables will be inherited.\n2. Then, Chaperone sets any new environment variables specified by :ref:`env_set <settings.env_set>`.\n3. Finally, any environment variables specified by :ref:`env_unset <settings.env_unset>` are removed\n   if they exist.\n\n.. _env.expansion:\n\nEnvironment Variable Expansion\n******************************\n\nEnvironment variable directives (as well as some others) can contain bash-inspired [#f1]_ environment variable expansions, as indicated below:\n\n``$(ENVVAR)`` or ``${ENVVAR}``\n  Expands to the specified environment variable.  If the environment variable is not defined, the expansion text\n  is not replaced and will appear as-is.\n\n``$(ENVVAR:-default)``\n  Inserts the environment variable if it is present; otherwise, expands to the string specified by ``default`` (which can\n  be blank).\n\n``$(ENVVAR:+ifpresent)``\n  Inserts ``ifpresent`` if the environment variable *is defined*, otherwise inserts the empty string.\n\n``$(ENVVAR:_default)``\n  Inserts the empty string if the environment variable *is defined*, otherwise inserts ``default``.\n  (This is the opposite of the previous ``:+`` operation.)\n\n``$(ENVVAR:?error-message)``\n  Inserts the environment variable, or stops Chaperone with the specified ``error-message`` if the variable\n  is not defined.\n\n``$(ENVVAR:|present-val|absent-val)``\n  If the environment variable is defined, then inserts the expansion of ``present-val``, otherwise\n  inserts the expansion of ``absent-val``.\n\n``$(ENVVAR:|check-val|equal|notequal)``\n  Compares the expanded value of ``ENVVAR`` to ``check-val`` using case-insensitive filename glob matching rules.  If they\n  match, then inserts ``equal``, otherwise inserts ``notequal``.  
For example, you can use a match expression of ``[ty]*`` to\n  match any value which starts with 't' or 'y'.\n\n``$(ENVVAR:/regex/repl/[i])``\n  Expands the named environment variable, then performs a regular expression substitution using ``regex`` with\n  the replacement string ``repl``.   If either contains slashes, they must be escaped using a backslash.\n  The optional flag can be set to ``i`` if case-insensitive matching is required.  Parenthesized groups\n  in ``regex`` can be referred to in the replacement as ``\\n`` where 'n' is zero to refer to the entire\n  matched string, or 1-n to specify the group number.\n\n``$(`shell-command`)``\n  Executes the specified shell command and inserts the resulting output.  Note that the shell command\n  may contain references to other environment variables.\n\nThe forms above are patterned after ``bash`` and can be useful in cases where defaults are required.  For example,\nif you wanted to specify the user for a service in the event no user was otherwise specified::\n\n  sample.service: {\n    uid: \"$(USER:-www-data)\"\n    ...\n  }\n\nThe above would expand to the value of ``USER`` if it exists, otherwise would expand to ``www-data``.  Not all directives\nsupport environment expansion.  When it is supported, it will be explicitly indicated in the reference documentation for\nthe directive (for example, the :ref:`service directory <service.directory>` directive).\n\n.. note::\n\n   Environment variables are expanded *as late as possible* so that declarations defined at the global level can, if desired,\n   be filled in automatically at lower levels.  
For example, consider this globally set environment variable declaration::\n\n     settings: {\n       env_set: {\n\t 'MY_NAME': '$(_CHAP_SERVICE)',\n\t 'HAS_NOTIFY_SOCKET': '$(NOTIFY_SOCKET:+1)',\n\t 'PATH': '/service-bins/$(MY_NAME):$(PATH)',\n       }\n     }\n\n   In the above case, note that all environment variables are dependent upon values which will *not exist*\n   until later when a service is executed.  Specifically, ``_CHAP_SERVICE`` is set to the service name, and\n   ``NOTIFY_SOCKET`` will be set only if a socket is allocated when the process is run.  However, Chaperone\n   assures that such environment variables use late-expansion so that templates such as the above can\n   be created and inherited by both logging and service declarations.\n\n.. _env.backtick:\n\nBacktick Expansion\n******************\n\nChaperone supports backtick expansion similar to most command shells.  Backtick expansion can be used wherever\nenvironment variables can be used (denoted by the |ENV| symbol in the directive documentation).   Any valid\nsystem command can be included, and the output will be substituted for the backtick expression.   For example,\nto set an environment variable to the default gateway (normally the Docker bridged network)::\n\n    settings: {\n      env_set: {\n        \"GATEWAY_IP\": \"`ip route | awk '/default/ { print $3 }'`\"\n      }\n    }\n\nBacktick expansions are not intended to be a general-purpose shell escape, but are intended for\nsituations (like the example) where some system information needs to be collected for configuration\npurposes.  Specifically, backtick expansions have the following characteristics:\n\n* Backticks will be processed *after* all dependent environment variables are expanded.\n* Expansions are done only once, even if they are present in multiple locations.  
Thus, the backtick\n  expression `\\`date\\`` will expand to the same value no matter how many times it is used.\n* The environment passed to the backtick command will be *the initial chaperone environment* before\n  any directives are processed.\n* Backtick expansions will be performed as the user specified by the ``uid`` and ``gid`` relevant to the\n  section where the backtick expansion is used.\n\nHowever, note that backtick expansions may include references to other environment variables, such as::\n\n  settings: {\n    env_set: {\n      \"LOCALDATE\": \"`TZ=${TZ} date`\",\n      \"TZ\": \"America/Los_Angeles\",\n    }\n  }\n\nNote in the above that the ``TZ`` variable will be expanded first (if necessary) before the backtick\nexpression.\n\nChaperone also supports a special syntax when backtick expansion is the only desired outcome\nof a variable insertion.  The following two methods are equivalent::\n\n  env_set: {\n    \"HOSTNAME1\": \"$(`hostname -s`)\",\n    \"HOSTNAME2\": \"`hostname -s`\",\n  }\n\nThis alternate syntax is primarily useful in :ref:`the envcp utility <ref.envcp>` since\nbackticks are not expanded in bare text as they are within Chaperone directives.\n\nVariable Reference\n------------------\n\n.. _env._CHAP_CONFIG_DIR:\n\n.. envvar:: _CHAP_CONFIG_DIR\n\n   This is the path to the directory which *contains* the target specified by\n   the :option:`--config <chaperone --config>` option.\n\n   For example, if you start Chaperone with the following command::\n\n     chaperone --config /home/appsuser/firstapp/chaperone.conf\n\n   then this environment variable will be set to ``/home/appsuser/firstapp``.  Note that\n   the method is the same *even if a configuration directory is specified*.  
Thus, this\n   command::\n\n     chaperone --config /home/appsuser/firstapp/chaperone.d\n\n   would set ``_CHAP_CONFIG_DIR`` to exactly the same value even though the target\n   is a directory rather than a file.\n\n   One very useful application of this variable is to define \"self-relative\" execution\n   environments where all application files are stored relative to the location of the\n   configuration directory.  The ``chaperone-baseimage`` does this with the following\n   declaration::\n\n     settings: {\n       env_set: {\n         'APPS_DIR': '$(_CHAP_CONFIG_DIR:-/)',\n       }\n     }\n\n   Then, all other files, commands and configurations operate relative to the ``APPS_DIR``\n   environment variable.   If this principle is observed carefully, you can easily run::\n\n     docker run chapdev/chaperone-baseimage --config /myapps/prerelease/chaperone.d\n\n   to run an isolated set of applications stored in ``/myapps/prerelease`` and another\n   set of isolated applications in the same image like this::\n\n     docker run chapdev/chaperone-baseimage --config /myapps/stable/chaperone.d\n\n.. _env._CHAP_OPTIONS:\n\n.. envvar:: _CHAP_OPTIONS\n\n   When Chaperone starts, it reads options both from the command line and from this environment\n   variable.  The environment variable provides defaults which are used if they are\n   not present on the command line.\n\n   For example, in the ``chaperone-baseimage`` image configuration, the default value\n   for ``--config`` is set::\n\n\t    ENV _CHAP_OPTIONS --config apps/chaperone.d\n\t    ENTRYPOINT [\"/usr/local/bin/chaperone\"]\n\n.. _env._CHAP_INTERACTIVE:\n\n.. envvar:: _CHAP_INTERACTIVE\n\n   This variable will always be set by Chaperone to either \"0\" or \"1\".  A \"1\" value\n   indicates that Chaperone detected a controlling terminal (pseudo-tty).  
For example::\n\n     $ docker run -t -i chapdev/chaperone-baseimage --task /bin/echo '$(_CHAP_INTERACTIVE)'\n     1\n     $ docker run -i chapdev/chaperone-baseimage --task /bin/echo '$(_CHAP_INTERACTIVE)'\n     0\n     $\n\n   Exporting this value allows services to detect interactive\n   vs. daemon containers in order to tailor their operation.\n\n.. _env._CHAP_SERVICE:\n\n.. envvar:: _CHAP_SERVICE\n\n   For each :ref:`service definition <service>`, this variable will be set to the name\n   of the service itself, including the ``.service`` suffix.  So, the service::\n\n     mydata.service: {\n       command: \"/bin/bash -c '/bin/echo $(_CHAP_SERVICE) >/tmp/service.txt'\"\n     }\n\n   will write ``mydata.service`` to the file ``/tmp/service.txt`` (not particularly useful).\n\n   Note that even the main command runs as a conventional service named \"CONSOLE\"::\n\n     $ docker run -i chapdev/chaperone-baseimage --task /bin/echo '$(_CHAP_SERVICE)'\n     CONSOLE\n     $\n\n.. _env._CHAP_SERVICE_TIME:\n\n.. envvar:: _CHAP_SERVICE_TIME\n\n   Every time a service command is executed, this variable will contain the\n   Unix time (integral number of seconds since January 1, 1970) of command invocation.\n\n.. _env._CHAP_SERVICE_SERIAL:\n\n.. envvar:: _CHAP_SERVICE_SERIAL\n\n   Every time a service command is executed, this variable will contain an integral\n   value which starts at 1 and will be incremented for each invocation of the command.\n\n   This value can be especially useful for 'inetd' or 'cron' services which may run\n   multiple times and need a unique identifier for each invocation.\n\n.. _env._CHAP_TASK_MODE:\n\n.. envvar:: _CHAP_TASK_MODE\n\n   This variable will be defined and set to \"1\" whenever Chaperone is run with the\n   :ref:`--task <option.task>` command-line option.\n\n   It can be used within scripts or applications to tailor behavior, if desired.\n\n\nNotify Socket\n-------------\n\n.. _env.NOTIFY_SOCKET:\n\n.. 
envvar:: NOTIFY_SOCKET\n\n   Chaperone attempts to emulate ``systemd`` behavior by providing a\n   :ref:`\"notify\" service type <service.sect.type>`.   Processes created by services of this type\n   will have the additional variable ``NOTIFY_SOCKET`` set in their environment,\n   which is the path to a UNIX domain socket created privately within the\n   container.  The service should use this environment variable to trigger\n   notifications compatible with ``systemd``.\n\n.. rubric:: Notes\n\n.. [#f1]\n\n   Originally, the intent was to duplicate ``bash`` environment variable expansion syntax as compatibly as possible.\n   Over time, however, it became clear that pattern matching replacements such as ``${NAME/*.jpg/something}`` relied\n   upon many arcane ``bash`` details such as arrays and filename globbing.  Therefore, while the basic environment\n   tests (such as those for defaults as in ``$(HOME:-/home)``) are compatible, a more useful set of regex-based\n   features was added to eliminate the need for many ``bash`` substitution options.\n"
  },
  {
    "path": "doc/source/ref/index.rst",
    "content": ".. chaperone documentation master file, created by\n   sphinx-quickstart on Mon May  6 17:19:12 2013.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\n.. _reference:\n\nChaperone Reference\n===================\n\nThis is the full Chaperone Reference and describes in detail how to run and configure Chaperone.\n\nHowever, if you are using Chaperone with Docker, you can save time by seeing how a pre-built container works\nand coming back here later when you need more detail.\nTo get started, we suggest reading the :ref:`intro`, then trying out the ``chaperone-lamp`` Docker\nimage by following the instructions on the `chaperone-docker github page <https://github.com/garywiz/chaperone-docker#try-it-out>`_.\n\nAny bugs should be reported as issues at https://github.com/garywiz/chaperone/issues.\n\n.. toctree::\n   :maxdepth: 2\n\n   command-line\n   config\n   env\n   utilities\n"
  },
  {
    "path": "doc/source/ref/utilities.rst",
    "content": ".. chaperone documentation\n   configuration directives\n\n.. _utilities:\n\nAdditional Utilities\n====================\n\nIn addition to the main Chaperone program (see :ref:`ref.chaperone`), several utilities are provided.  Only the\n:ref:`ref.telchap` utility requires Chaperone itself.  The others can be used independently.\n\n.. toctree::\n\n   utility-envcp.rst\n"
  },
  {
    "path": "doc/source/ref/utility-envcp.rst",
    "content": ".. chaperone documentation\n   command line documentation\n\n.. _ref.envcp:\n\nUtility: ``envcp``\n==================\n\nOverview\n--------\n\nThe ``envcp`` utility is a simple way to create templates and expand them using the contents of the environment.\n\nBasic usage is as follows::\n\n  envcp [options] FILE1 ... FILEn DESTINATION\n\nWhere:\n\n``FILE1`` ... ``FILEn``\n  A list of one or more files to be copied to the destination.  If the destination is a directory, then\n  one or more files will be copied.  If the destination does not exist, then only a single file\n  can be specified and the destination will be the name of the result file.\n\n``DESTINATION``\n  A directory to contain the result files, or a single file which should be the result of the copy.\n\nThe following options can be specified:\n\n  ============================= ================================================================================\n  Option                        Description\n  ============================= ================================================================================\n  -v / --verbose\t\tEcho file operations to ``stdout`` as files are copied.\n  -a / --archive\t\tAttempt to preserve file permissions, ownership, access and modification times.\n  --overwrite\t\t\tOverwrite destination files.  If not specified, ``envcp``\n  \t\t\t\twill terminate with an error when any destination file already exists.\n  --strip *suffix*\t\tWhen files are copied, strip off the specified filename suffix to derive\n  \t  \t\t\tthe filename that should be used in the destination directory.\n  --shell-enable\t\tEnables backtick expansion features.\n  --xprefix *char*\t\tSpecify the introductory prefix used for variable expansions.\n  \t    \t\t\tDefaults to the dollar-sign character (``$``).\n  --xgrouping *charlist*\tSpecify a list of opening brace types.  
Defaults to the left curly brace\n  \t      \t\t\tand the left parenthesis (``{(``).\n  ============================= ================================================================================\n\nThe special option character ``-`` can be used to tell ``envcp`` that input should be taken from ``stdin``\nand output should be written to ``stdout``::\n\n  $ envcp - <input.txt >output.txt\n\n\nApplications\n------------\n\nThe ``envcp`` utility is usually used as a \"poor man's macro processor\", similar to the\nway `GNU M4 <http://www.gnu.org/software/m4/m4.html>`_ is often employed, but much simpler.  Using a simple\nbash-like syntax, you can create template files and then customize them based upon the current set\nof environment variables.\n\nFor example, the `nginx <http://nginx.org>`_ web server unfortunately does not support environment\nvariables inside configuration files.  So, configuration lines like the following give an error::\n\n  ##\n  # Logging Settings\n  ##\n  access_log ${NGINX_LOG_DIR}/access.log;\n\nHowever, if the ``NGINX_LOG_DIR`` environment variable is set, the following command can be used\nto reprocess a template file and create the true ``nginx`` configuration::\n\n  $ envcp /apps/templates/nginx.conf.tpl /apps/config/nginx.conf\n\nYou can even process a complete set of templates by telling ``envcp`` to strip off the template\nsuffix when it makes the copy::\n\n  $ envcp --strip .tpl /apps/templates/*.tpl /apps/config\n\n\nAdvanced Templates\n------------------\n\nAny files copied by ``envcp`` can support the full Chaperone :ref:`environment variable expansion syntax <env.expansion>`.  
However,\nit is important to note that Chaperone environment variable expansions can span multiple lines, making it possible to\ncreate reasonably complicated conditional macro expansions.\n\nFor example, this excerpt from a `bind9 <http://www.bind9.net/>`_ configuration file demonstrates how the ``forwarders`` section can be included only\nif the ``CONFIG_FWD_HOST`` variable is set::\n\n    $(CONFIG_FWD_HOST:+\n        forwarders { \n\t  $(CONFIG_FWD_HOST); \n        };\n\n\tforward only;\n    )\n\nYou can also match the contents of variables using the *if-then-else* construct::\n\n  $(ENABLE_ADMIN_PANEL:|T*|\n     Alias /admin/ /apps/www/admin_panel_live\n  |\n     Alias /admin/ /apps/www/errors/admin_dead/\n  )\n\nIn some situations, you may be creating shell scripts which themselves are templates.  In this case, you may\nwant to customize the ``envcp`` variable prefix so that you can be sure any shell syntax is not interpreted\nby ``envcp``.  So, for example, if you have a shell script template like this::\n\n  #!/bin/bash\n  BASENAME=%%(IMAGE_BASENAME)\n  FULLNAME=${PWD:-/home}/${BASENAME}\n\nyou can tell ``envcp`` to use ``%%`` as the expansion prefix when you do the copy::\n\n  $ envcp --xprefix '%%' script.sh.tpl script.sh\n\n\nBacktick Syntax\n---------------\n\nChaperone has built-in support for shell escapes\nusing :ref:`backtick expansion syntax <env.backtick>`.   While this is normally enabled\nin Chaperone configuration files, it is *disabled* by default in ``envcp`` to minimize\nthe chance of accidental (or malicious) shell injection within template scripts.\n\nSo, for example, if you have a file ``test.txt`` which contains::\n\n  The date is ... $(`date;echo yes`)\n  `ls -l`\n\nThen, you will see the following by default using ``envcp``::\n\n  $ envcp - <test.txt\n  The date is ... 
$(`date;echo yes`)\n  `ls -l`\n  $\n\nHowever, you can enable shell expansion if desired::\n\n  $ envcp --shell-enable <test.txt\n  Tue Aug  4 01:35:07 UTC 2015 yes\n  `ls -l`\n  $\n\nNote that backticks are only expanded when they occur as part of variable expansion\nsyntax, and are never expanded elsewhere.  Since templates are often shell scripts,\nthis prevents any confusion between ``envcp`` expansions and syntax which is\npart of the script template itself.\n\nSee :ref:`env.expansion` for more information on how variables are expanded and\nhow backticks work.\n\n"
  },
  {
    "path": "doc/source/status.rst",
    "content": ".. _status:\n\nChaperone Project Status\n========================================================================\n\nThe Chaperone process manager is ready for public testing, and no\nlonger in pre-release.  It is relatively stable.\n\nIssues for Chaperone itself should be submitted on\nthe `chaperone github issues page <https://github.com/garywiz/chaperone/issues>`_.\n\nDocumentation status:\n\n* `Reference Section <ref>`_: Complete.\n  Will always be updated to reflect feature changes and clarifications.\n* Usage Section: In progress.  Will contain samples and best practices.\n* Tools Section: Not started.  Command-line tools like ``telchap``, ``envcp``, and\n  ``sdnotify`` which are bundled with Chaperone need documentation pages of\n  their own.\n* Appendices: Documentation for chaperone-based images (such as those at\n  `chaperone-docker <http://github.com/garywiz/chaperone-docker>`_) will\n  be located as appendices of the Chaperone reference.\n\nThere are several production-quality images which we are building both for\nour own use and as samples of various Chaperone use-cases.\n\nThese are separately maintained and have their own read-me pages at\n`chaperone-docker <http://github.com/garywiz/chaperone-docker>`_.\n\nHelp is always appreciated.\n"
  },
  {
    "path": "samples/README",
    "content": "These are some early samples that may still be useful.\n\nHowever, it is much better to take a look at:\n\t https://github.com/garywiz/chaperone-docker\n\nThere you will find a set of working examples of various configurations.\n\n"
  },
  {
    "path": "samples/chaperone-devbase/Dockerfile",
    "content": "FROM ubuntu:14.04\n\nADD setup-bin/* *.sh /setup-bin/\nADD apps/ /apps/\nADD chaperone/ /setup-bin/chaperone/\nRUN /setup-bin/install.sh\n\n# We use the environment variable instead of entrypoint args so that any default can be overridden by CMD\nENV CHAPERONE_OPTIONS --config apps/chaperone.d\n\nENTRYPOINT [\"/usr/local/bin/chaperone\"]\n"
  },
  {
    "path": "samples/chaperone-devbase/apps/bin/README",
    "content": "Put commands which need to be executed at the command line, or by application\nprograms here.  This directory will automatically be included in the path for all\nrunning services.\n"
  },
  {
    "path": "samples/chaperone-devbase/apps/chaperone.d/010-start.conf",
    "content": "# General environmental settings\n\nsettings: {\n  env_set: {\n  'PATH': '$(APPS_DIR)/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin',\n  'APPS_DIR': '$(_CHAP_CONFIG_DIR:-/)',\n  #'SECURE_ROOT': '1',\n  },\n}\n\ninit.service: {\n  type: oneshot,\n  command: '/bin/bash $(APPS_DIR)/etc/init.sh',\n  before: 'default,database,application',\n  process_timeout: 20,\t\t# init may take longer\n  service_group: 'init',\n}\n\nchaperone.logging: {\n  enabled: true,\n  filter: '[chaperone].*',\n  file: '$(APPS_DIR)/var/log/chaperone-%d.log',\n}\n\nsyslog.logging: {\n  enabled: true,\n  filter: '*.info;![chaperone].*',\n  file: '$(APPS_DIR)/var/log/syslog-%d.log',\n}\n\nconsole.logging: {\n  enabled: true,\n  stdout: true,\n  filter: '*.warn;authpriv,auth.!*;daemon.!warn',\n}\n"
  },
  {
    "path": "samples/chaperone-devbase/apps/etc/README",
    "content": "This is a \"mini etc\" directory which, as much as possible, is where all normal application and service configuration\nfiles are stored.   For example, in the chaperone-lamp configuration, all MySQL and Apache configurations are stored\nhere, but may make reference to other files on the system (such as modules and plugins).  However, the normal\nstartup files in /etc/apache2 and /etc/mysql are not used, as they expect a normal fully-booted system.\n\nSystem start-up is controlled by the init.sh script, which reads additional startup files from ../init.d.\n\nThis is not built into chaperone, but rather is a custom configuration defined within chaperone.d.  If you want,\nyou can completely change the way things work and invent new startup schemes.  But, this is a good place to start.\n"
  },
  {
    "path": "samples/chaperone-devbase/apps/etc/init.sh",
    "content": "#!/bin/bash\n# A quick script to initialize the system\n\n# We publish two variables for use in startup scripts:\n#\n#   CONTAINER_INIT=1   if we are initializing the container for the first time\n#   APPS_INIT=1        if we are initializing the $APPS_DIR for the first time\n#\n# Both may be relevant, since it's possible that the $APPS_DIR may be on a mount point\n# so it can be reused when starting up containers which refer to it.\n\nfunction dolog() { logger -t init.sh -p info $*; }\n\napps_init_file=\"$APPS_DIR/var/run/apps_init.done\"\ncont_init_file=\"/container_init.done\"\n\nexport CONTAINER_INIT=0\nexport APPS_INIT=0\n\nif [ ! -f $cont_init_file ]; then\n    dolog \"initializing container for the first time\"\n    CONTAINER_INIT=1\n    su -c \"date >$cont_init_file\"\nfi\n\nif [ ! -f $apps_init_file ]; then\n    dolog \"initializing $APPS_DIR for the first time\"\n    APPS_INIT=1\n    mkdir -p $APPS_DIR/var/run $APPS_DIR/var/log\n    chmod 777 $APPS_DIR/var/run $APPS_DIR/var/log\n    date >$apps_init_file\nfi\n\nif [ -d $APPS_DIR/init.d ]; then\n  for initf in $( find $APPS_DIR/init.d -type f -executable \\! -name '*~' ); do\n    dolog \"running $initf...\"\n    $initf\n  done\nfi\n\nif [ \"$SECURE_ROOT\" == \"1\" -a $CONTAINER_INIT == 1 ]; then\n  dolog locking down root account\n  su -c 'passwd -l root'\nfi\n"
  },
  {
    "path": "samples/chaperone-devbase/apps/init.d/README",
    "content": "Files in this directory are executed upon container startup by the ../etc/init.sh script.\n\nThere are two modes:\n\n1.  When the container is first set up, CONTAINER_INIT==\"1\" and the script can use 'su' without a\n    password.  This is so that any setup activities can be performed which require full access\n    to the system.\n\n2.  On subsequent boots (if the container is stopped and started), the same scripts will be\n    run with CONTAINER_INIT==\"0\".  However, root access is locked down if env var SECURE_ROOT=1.\n\nNote that SECURE_ROOT is not defined by default.\n\nIn all cases, scripts are run as either root, or the user specified by --user on the\nchaperone command line.\n"
  },
  {
    "path": "samples/chaperone-devbase/build-image.sh",
    "content": "#!/bin/bash\n\n# the cd trick assures this works no matter what the current directory is.\ncd ${0%/*}\n./setup-bin/build -x\ndocker tag chaperone-devbase chaperone-base\n"
  },
  {
    "path": "samples/chaperone-devbase/install.sh",
    "content": "#!/bin/bash\n\n# Assumes there is an \"optional\" apt-get proxy running on our HOST\n# on port 3142.  You can run one by looking here: https://github.com/sameersbn/docker-apt-cacher-ng\n# Does no harm if nothing is running on that port.\n/setup-bin/ct_setproxy\n\n# see https://github.com/docker/docker/issues/1724\napt-get update\n\n# Normal install steps\napt-get -y install python3-pip\n\n# We install from the local directory rather than pip so we can test and develop.\ncd /setup-bin/chaperone\npython3 setup.py install\n\n# Now, just so there is no confusion, create a new, empty /var/log directory so that any logs\n# written will obviously be written by the current container software.  Keep the old one around\n# for reference so we can see what the distribution did.\ncd /var\nmv log log-dist\nmkdir log\nchmod 775 log\nchown root:syslog log\n\n# Customize some system files\ncp /setup-bin/dot.bashrc /root/.bashrc\n\n# Allow unfettered root access by users. This is done so that apps/init.d scripts can\n# have root access on their first startup to configure userspace files\n# if needed (see mysql in chaperone-lamp for an example).  At the end of the first startup\n# this is then locked down by apps/etc/init.sh.\npasswd -d root\nsed -i 's/nullok_secure/nullok/' /etc/pam.d/common-auth\n"
  },
  {
    "path": "samples/chaperone-lamp/Dockerfile",
    "content": "FROM chapdev/chaperone-base:latest\n\nADD *.sh /setup-bin/\nADD apps/ /apps/\nRUN /setup-bin/install.sh\n\nEXPOSE 8080\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/chaperone.d/105-mysqld.conf",
    "content": "settings: {\n  env_set: {\n  'MYSQL_HOME': '$(APPS_DIR)/etc/mysql',\n  'MYSQL_UNIX_PORT': '$(APPS_DIR)/var/run/mysqld.sock',\n  },\n}\n\nmysql1.service: {\n  type: forking,\n  command: \"/etc/init.d/mysql start\",\n  enabled: false,\n  uid: root,\n  service_group: database,\n}\n\nmysql.service: {\n  type: simple,\n  command: \"$(APPS_DIR)/etc/mysql/start_mysql.sh\",\n  enabled: true,\n  service_group: database,\n}\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/chaperone.d/120-apache2.conf",
    "content": "apache2.service: {\n  command: \"/usr/sbin/apache2 -f $(APPS_DIR)/etc/apache2.conf -DFOREGROUND\",\n  enabled: true,\n  restart: true,\n  optional: true,\n  uid: \"$(USER:-www-data)\",\n  env_set: {\n    APACHE_LOCK_DIR: /tmp,\n    APACHE_PID_FILE: /tmp/apache2.pid,\n    APACHE_RUN_USER: www-data,\n    APACHE_RUN_GROUP: www-data,\n    APACHE_LOG_DIR: \"$(APPS_DIR)/var/log/apache2\",\n    APACHE_SITES_DIR: \"$(APPS_DIR)/www\",\n    MYSQL_SOCKET: \"$(APPS_DIR)/var/run/mysqld.sock\",\n  },\n  after: database,\n}\n\napache2.logging: {\n  enabled: true,\n  filter: 'local1.*;*.!err',\n  file: '$(APPS_DIR)/var/log/apache2/access-%d.log',\n  uid: \"$(USER:-www-data)\",\n}\n\napache2.logging: {\n  enabled: true,\n  filter: 'local1.err',\n  stderr: true,\n  file: '$(APPS_DIR)/var/log/apache2/error-%d.log',\n  uid: \"$(USER:-www-data)\",\n}\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/etc/apache2.conf",
    "content": "# This is the main Apache server configuration file.  It contains the\n# configuration directives that give the server its instructions.\n# See http://httpd.apache.org/docs/2.4/ for detailed information about\n# the directives and /usr/share/doc/apache2/README.Debian about Debian specific\n# hints.\n\n# This is a CHAPERONE-specific configuration designed to keep things lean.  It is based loosely\n# on Ubuntu 14.04 /etc/apache2/apache2.conf, and every attempt has been made to assure that\n# system-installed modules and configurations will work.\n\n# The chaperone configuration is designed to work within a self-contained application directory\n# defined by APPS_DIR.  Note that it may be a user directory, and thus chaperone allows\n# Apache to run entirely under any user account, along with a MySQL server that is also\n# sequestered in the same way.   This means that you can have containers \"point\" to apps\n# directories on your host server and manage per-container resources consistently in\n# those directories during development, until you move the entire apps directory into\n# a production container environment or image.\n\n#\n# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.\n#\nMutex file:${APACHE_LOCK_DIR} default\n\nPidFile ${APACHE_PID_FILE}\n\n# Timeout: The number of seconds before receives and sends time out.\nTimeout 300\nKeepAlive On\nMaxKeepAliveRequests 100\nKeepAliveTimeout 5\n\n# Note that the user and group are defined in chaperone.d/120-apache.conf\n#User ${APACHE_RUN_USER}\n#Group ${APACHE_RUN_GROUP}\n\n# The default is off because it'd be overall better for the net if people\n# had to knowingly turn this feature on, since enabling it means that\n# each client request will result in AT LEAST one lookup request to the\n# nameserver.\nHostnameLookups Off\n\n# ErrorLog: The location of the error log file.\n# We dump errors to syslog so that we can easily duplicate it to the container stderr if we want.\nErrorLog 
syslog:local1\n\n# Available values: trace8, ..., trace1, debug, info, notice, warn,\n# error, crit, alert, emerg.\nLogLevel warn\n\n# Include standard Debian/Ubuntu module configuration:\nInclude /etc/apache2/mods-enabled/*.load\nInclude /etc/apache2/mods-enabled/*.conf\n\n# CHAPERONE: Override to listen on 8080 and 8443\nListen 8080\n\n<IfModule ssl_module>\n\tListen 8443\n</IfModule>\n<IfModule mod_gnutls.c>\n\tListen 8443\n</IfModule>\n\n# Sets the default security model of the Apache2 HTTPD server. It does\n# not allow access to the root filesystem outside of /usr/share and /var/www.\n# The former is used by web applications packaged in Debian,\n# the latter may be used for local directories served by the web server. If\n# your system is serving content from a sub-directory in /srv you must allow\n# access here, or in any related virtual host.\n<Directory />\n\tOptions FollowSymLinks\n\tAllowOverride None\n\tRequire all denied\n</Directory>\n\n<Directory /usr/share>\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\n<Directory ${APACHE_SITES_DIR}/>\n\tOptions Indexes FollowSymLinks\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\nAccessFileName .htaccess\n\n# The following lines prevent .htaccess and .htpasswd files from being\n# viewed by Web clients.\n<FilesMatch \"^\\.ht\">\n\tRequire all denied\n</FilesMatch>\n\n\n# The following directives define some format nicknames for use with\n# a CustomLog directive.\nLogFormat \"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" vhost_combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O\" common\nLogFormat \"%{Referer}i -> %U\" referer\nLogFormat \"%{User-agent}i\" agent\n\n# Include of directories ignores editors' and dpkg's backup files,\n# see README.Debian for details.\n\n# Include generic snippets of statements\nIncludeOptional 
/etc/apache2/conf-enabled/*.conf\n\n##\n## CHAPERONE SPECIFICS\n##\n\n# Apache configuration files for chaperone sites (Note that we do NOT look in /etc/apache2/sites-enabled)\nIncludeOptional ${APACHE_SITES_DIR}/sites.d/*.conf\n\n# Point MySQL socket to the right spot\nphp_admin_value mysql.default_socket ${APPS_DIR}/var/run/mysqld.sock\nphp_admin_value mysqli.default_socket ${APPS_DIR}/var/run/mysqld.sock\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/etc/mysql/my.cnf",
    "content": "#\n# The MySQL database server configuration file.\n#\n# You can copy this to one of:\n# - \"/etc/mysql/my.cnf\" to set global options,\n# - \"~/.my.cnf\" to set user-specific options.\n# \n# One can use all long options that the program supports.\n# Run program with --help to get a list of available options and with\n# --print-defaults to see which it would actually understand and use.\n#\n# For explanations see\n# http://dev.mysql.com/doc/mysql/en/server-system-variables.html\n\n# This will be passed to all mysql clients\n# It has been reported that passwords should be enclosed with ticks/quotes\n# escpecially if they contain \"#\" chars...\n# Remember to edit /etc/mysql/debian.cnf when changing the socket location.\n[client]\nport\t\t= 3306\n#socket\t\t= /var/run/mysqld/mysqld.sock\n\n# Here is entries for some specific programs\n# The following values assume you have at least 32M ram\n\n# This was formally known as [safe_mysqld]. Both versions are currently parsed.\n[mysqld_safe]\n#socket\t\t= /var/run/mysqld/mysqld.sock\nnice\t\t= 0\n\n[mysqld]\n#\n# * Basic Settings\n#\n#pid-file\t= /var/run/mysqld/mysqld.pid\n#socket\t\t= /var/run/mysqld/mysqld.sock\nport\t\t= 3306\nbasedir\t\t= /usr\n#datadir\t\t= /var/lib/mysql\ntmpdir\t\t= /tmp\nlc-messages-dir\t= /usr/share/mysql\nskip-external-locking\n#\n# Instead of skip-networking the default is now to listen only on\n# localhost which is more compatible and is not less secure.\nbind-address\t\t= 127.0.0.1\n#\n# * Fine Tuning\n#\nkey_buffer\t\t= 16M\nmax_allowed_packet\t= 16M\nthread_stack\t\t= 192K\nthread_cache_size       = 8\n# This replaces the startup script and checks MyISAM tables if needed\n# the first time they are touched\nmyisam-recover         = BACKUP\n#max_connections        = 100\n#table_cache            = 64\n#thread_concurrency     = 10\n#\n# * Query Cache Configuration\n#\nquery_cache_limit\t= 1M\nquery_cache_size        = 16M\n#\n# * Logging and Replication\n#\n# Both location gets 
rotated by the cronjob.\n# Be aware that this log type is a performance killer.\n# As of 5.1 you can enable the log at runtime!\n#general_log_file        = /var/log/mysql/mysql.log\n#general_log             = 1\n#\n# Error log - should be very few entries.\n#\n#log_error = /var/log/mysql/error.log\n#\n# Here you can see queries with especially long duration\n#log_slow_queries\t= /var/log/mysql/mysql-slow.log\n#long_query_time = 2\n#log-queries-not-using-indexes\n#\n# The following can be used as easy to replay backup logs or for replication.\n# note: if you are setting up a replication slave, see README.Debian about\n#       other settings you may need to change.\n#server-id\t\t= 1\n#log_bin\t\t\t= /var/log/mysql/mysql-bin.log\nexpire_logs_days\t= 10\nmax_binlog_size         = 100M\n#binlog_do_db\t\t= include_database_name\n#binlog_ignore_db\t= include_database_name\n#\n# * InnoDB\n#\n# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.\n# Read the manual for more InnoDB related options. There are many!\n#\n# * Security Features\n#\n# Read the manual, too, if you want chroot!\n# chroot = /var/lib/mysql/\n#\n# For generating SSL certificates I recommend the OpenSSL GUI \"tinyca\".\n#\n# ssl-ca=/etc/mysql/cacert.pem\n# ssl-cert=/etc/mysql/server-cert.pem\n# ssl-key=/etc/mysql/server-key.pem\n\n\n\n[mysqldump]\nquick\nquote-names\nmax_allowed_packet\t= 16M\n\n[mysql]\n#no-auto-rehash\t# faster start of mysql but no tab completition\n\n[isamchk]\nkey_buffer\t\t= 16M\n\n#\n# * IMPORTANT: Additional settings that can override those from this file!\n#   The files must end with '.cnf', otherwise they'll be ignored.\n#\n#!includedir /etc/mysql/conf.d/\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/etc/mysql/start_mysql.sh",
    "content": "#!/bin/bash\n\n# For a general query log, include the following:\n#   --general-log-file=$APPS_DIR/log/mysqld-query.log\n#   --general-log=1\n\nexec /usr/sbin/mysqld \\\n   --defaults-file=$APPS_DIR/etc/mysql/my.cnf \\\n   --user ${USER:-mysql} \\\n   --datadir=$APPS_DIR/var/mysql \\\n   --socket=$APPS_DIR/var/run/mysqld.sock \\\n   --pid-file=$APPS_DIR/var/run/mysqld.pid \\\n   --log-error=$APPS_DIR/var/log/mysqld-error.log \\\n   --plugin-dir=/usr/lib/var/mysql/plugin\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/init.d/mysql.sh",
    "content": "#!/bin/bash\n\ndistdir=/var/lib/mysql\nappdbdir=$APPS_DIR/var/mysql\n\nfunction dolog() { logger -t mysql.sh -p info $*; }\n\nif [ $CONTAINER_INIT == 1 ]; then\n    dolog \"hiding distribution mysql files in /etc so no clients see them\"\n    su -c \"cd /etc; mv my.cnf my.cnf-dist; mv mysql mysql-dist; mv $distdir $distdir-dist\"\nfi\n\nif [ $APPS_INIT == 1 ]; then\n    if [ ! -d $appdbdir ]; then\n\tdolog \"copying distribution $distdir to $appdbdir\"\n\tsu -c \"cp -a $distdir-dist $appdbdir; chown -R ${USER:-mysql} $appdbdir\"\n    else\n\tdolong \"existing $appdbdir found when initializing $APPS_DIR for the first time, not changed.\"\n    fi\nfi\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/init.d/phpmyadmin.sh",
    "content": "#!/bin/bash\n\npuser=${USER:-www-data}\n\nfunction dolog() { logger -t mysql.sh -p info $*; }\n\nif [ $CONTAINER_INIT == 1 ]; then\n  dolog setting phpmyadmin user permissions for \"$puser\"\n  su -c \"chown -R $puser: /var/lib/phpmyadmin/tmp; chgrp --reference /var/lib/phpmyadmin/tmp /var/lib/phpmyadmin/*.php\"\n  su -c \"chgrp --reference /var/lib/phpmyadmin/tmp \\`find /etc/phpmyadmin -group www-data\\`\"\nfi\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/www/default/index.php",
    "content": "<?= phpinfo(); ?>\n"
  },
  {
    "path": "samples/chaperone-lamp/apps/www/sites.d/default.conf",
    "content": "<VirtualHost *:8080>\n\n\t# The ServerName directive sets the request scheme, hostname and port that\n\t# the server uses to identify itself. \n\t#ServerName www.example.com\n\n\tServerAdmin webmaster@localhost\n\tDocumentRoot ${APACHE_SITES_DIR}/default\n\n\t# Errors go to the syslog so they can be duplicated to the console easily\n\tErrorLog syslog:local1\n\tCustomLog ${APACHE_LOG_DIR}/default-access.log combined\n\n</VirtualHost>\n"
  },
  {
    "path": "samples/chaperone-lamp/build-image.sh",
    "content": "#!/bin/bash\n\n# the cd trick assures this works even if the current directory is not current.\ncd ${0%/*}\n./setup-bin/build -x\n"
  },
  {
    "path": "samples/chaperone-lamp/install.sh",
    "content": "#!/bin/bash\n\nMYSQL_ROOT_PW='ChangeMe'\n\n# Assumes there is an \"optional\" apt-get proxy running on our HOST\n# on port 3142.  You can run one by looking here: https://github.com/sameersbn/docker-apt-cacher-ng\n# Does no harm if nothing is running on that port.\n/setup-bin/ct_setproxy\n\n# Normal install steps\napt-get install -y apache2\n\ndebconf-set-selections <<< \"debconf debconf/frontend select Noninteractive\"\n\ndebconf-set-selections <<< \"mysql-server mysql-server/root_password password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/dbconfig-install boolean true\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/app-password password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/app-password-confirm password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/mysql/app-pass password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/mysql/admin-pass password $MYSQL_ROOT_PW\"\ndebconf-set-selections <<< \"phpmyadmin phpmyadmin/reconfigure-webserver multiselect apache2\"\n\napt-get install -y mysql-server\n/usr/bin/mysqld_safe &\n\n# Install phpmyadmin.  Actual setup occurs at first boot, since it depends on what user we run the container\n# as.\napt-get install -y phpmyadmin\nphp5enmod mcrypt\n\napt-get install -y php-pear\n"
  },
  {
    "path": "samples/docsample/Dockerfile",
    "content": "FROM ubuntu:14.04\nMAINTAINER garyw@blueseastech.com\n\nRUN apt-get update && \\\n    apt-get install -y openssh-server apache2 python3-pip && \\\n    pip3 install chaperone\nRUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /etc/chaperone.d\n\nCOPY chaperone.conf /etc/chaperone.d/chaperone.conf\n\nEXPOSE 22 80\n\nENTRYPOINT [\"/usr/local/bin/chaperone\"]\n"
  },
  {
    "path": "samples/docsample/README",
    "content": "This is a sample designed as a substitute for the Docker \"supervisor\" sample\nat: https://docs.docker.com/articles/using_supervisord/\n\nIt is updated for Ubuntu 14.04 as well as uses Chaperone as it's supervisor daemon.\n\n"
  },
  {
    "path": "samples/docsample/chaperone.conf",
    "content": "sshd.service: { \n  command: \"/usr/sbin/sshd -D\"\n}\n\napache2.service: {\n  command: \"bash -c 'source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND'\",\n}\n\nconsole.logging: {\n  selector: '*.warn',\n  stdout: true,\n}\n"
  },
  {
    "path": "samples/setup-bin/build",
    "content": "#!/bin/bash\n\n# This is a great little program to make it easy to share basic build components across\n# a set of docker files.   Basically, you do this:\n#    cd sandbox/someimage\n#    ln -s ../setup-bin  #if needed\n#    ./setup-bin/build\n#\n\nhelpmsg=\"\nusage: setup/build\\n\n\\n\n-n   name the image (else directoryname is used)\\n\n-x   disable the cache\\n\n-y   ask no questions and do the default\\n\n-p ? specify prefix to use for build tag (default chapdev/)\\n\n\\n\nIf you have additional arguments to docker, then include them after a --\\n\n\"\n\nif [ \"$0\" != './setup-bin/build' ] ; then\n    echo 'Sorry, I only work if executed as \"./setup-bin/build\"'\n    exit 1\nfi\n\nif [ ! -f Dockerfile ]; then\n    echo 'Hey, where is your ./Dockerfile?'\n    exit 1\nfi\n\nipfx='chapdev/'\nbuildargs=(-t ${PWD##*/})\nnoquestions=''\n\nwhile getopts \"n:hxy\" opt; do\n    case $opt in\n\tn)\n\t    buildargs[1]=$OPTARG\n\t    ;;\n\th)\n\t    echo -e $helpmsg\n\t    exit 0\n\t    ;;\n\ty)\n\t    noquestions='true'\n\t    yn='y'\n\t    ;;\n\tp)\n\t    ipfx=$OPTARG\n\t    ;;\n\tx)\n\t    buildargs+=(--no-cache)\n\t    ;;\n\t\\?)\n\t    exit 1\n\t    ;;\n    esac\ndone\n\nshift $((OPTIND-1))\n\nbuildargs[1]=$ipfx${buildargs[1]}\nimagename=${buildargs[1]}\necho Building image: $imagename\n\noldimage=`docker images -q $imagename`\n\necho docker build ${buildargs[*]} $* -\ntar czh . | docker build ${buildargs[*]} $* -\n\nnewimage=`docker images -q $imagename`\n\nif [ \"$oldimage\" -a \"$oldimage\" != \"$newimage\" ]; then\n    if [ ! \"$noquestions\" ]; then\n\tread -p \"Delete old image $oldimage? (y/n) \" yn\n    fi\n    if [ \"$yn\" = \"y\" ]; then\n\tdocker rmi $oldimage\n\techo $oldimage removed\n    fi\nfi\n"
  },
  {
    "path": "samples/setup-bin/ct_setproxy",
    "content": "#/bin/bash\n# If our host has an apt proxy container running at 3142, then use it for apt\n\ndefhost=`ip route | awk '/default/ { print $3; }'`\n\nif nc -z $defhost 3142; then\n   echo \"Acquire::http { Proxy \\\"http://$defhost:3142\\\"; };\" >/etc/apt/apt.conf.d/01proxy\n   echo ADDED PROXY FOR apt-get on $defhost\nelse\n   rm -f /etc/apt/apt.conf.d/01proxy\n   echo NO PROXY FOR apt-get\nfi\n"
  },
  {
    "path": "samples/setup-bin/dot.bashrc",
    "content": "# ~/.bashrc: executed by bash(1) for non-login shells.\n# This is a simpler, stripped-down version for containers\n\n# If not running interactively, don't do anything\n[ -z \"$PS1\" ] && return\n\n# don't put duplicate lines in the history. See bash(1) for more options\n# ... or force ignoredups and ignorespace\nHISTCONTROL=ignoredups:ignorespace\n\n# append to the history file, don't overwrite it\nshopt -s histappend\n\n# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)\nHISTSIZE=1000\nHISTFILESIZE=2000\n\n# make less more friendly for non-text input files, see lesspipe(1)\n[ -x /usr/bin/lesspipe ] && eval \"$(SHELL=/bin/sh lesspipe)\"\n\ncase \"$TERM\" in\nxterm*|rxvt*)\n    PS1=\"\\[\\u@\\h: \\w\\a\\]$PS1\"\n    ;;\n*)\n    ;;\nesac\n\n# some more ls aliases\nalias ll='ls -alF'\nalias la='ls -A'\nalias l='ls -CF'\n\n# Alias definitions.\nif [ -f ~/.bash_aliases ]; then\n    . ~/.bash_aliases\nfi\n"
  },
  {
    "path": "sandbox/.gitignore",
    "content": "apps-*\nvar-*\n\n\n"
  },
  {
    "path": "sandbox/.shinit",
    "content": "echo THIS IS THE SHELL INIT\n"
  },
  {
    "path": "sandbox/README",
    "content": "Files in this directory were created ad-hoc by me as a sandbox testing area.  Typically, I create a docker\nimage and point /home to my host's /home, then keep chaperone in a sub-directory where I work on it without\nneeding to install it each time.   I run my docker image with ./testdock and it isolates operation in this\nsandbox directory.\n\nThe \"testimage\" script is especially useful since it lets you work with a standard docker chaperone image\nwhile substituting the current chaperone source instead of using the installed version.\n"
  },
  {
    "path": "sandbox/bare_startup.sh",
    "content": "#!/bin/bash\n# Used to start up a bare chaperone test image using ubuntu:latest.  Helps for streamlining installation\n# and startup issues for new users.\n\necho Bare Ubuntu startup\n# Start up an apt-get proxy which runs on our host in another container, if it's present\n/setup/ct_setproxy\ncd $SANDBOX/../dist\npip3 install chaperone-*.tar.gz\nexec bash -i\n"
  },
  {
    "path": "sandbox/bareimage/Dockerfile",
    "content": "FROM ubuntu:14.04\n\nADD setup-bin/* *.sh /setup-bin/\nRUN /setup-bin/install-bareimage.sh\n"
  },
  {
    "path": "sandbox/bareimage/install-bareimage.sh",
    "content": "#!/bin/bash\n\n/setup-bin/ct_setproxy\napt-get update\napt-get -y install --no-install-recommends python3-pip\npip3 install setuptools\n"
  },
  {
    "path": "sandbox/bash.bashrc",
    "content": "PS1=\"image:\\W$ \"\nif [ \"$EMACS\" == \"t\" ]; then\n  stty -echo\nfi\ncd $APPS_DIR/..\nPATH=$PWD/bin:$PATH\ncd $APPS_DIR\necho \"\"\necho \"Now running inside container. Directory is: $APPS_DIR\"\necho \"\"\n"
  },
  {
    "path": "sandbox/bin/chaperone",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.chaperone import main_entry\nmain_entry()\n"
  },
  {
    "path": "sandbox/bin/cps",
    "content": "#!/bin/bash\n# Shortcut for more relevant PS for containers\n\nps --forest -weo 'user,pid,ppid,pgid,sid,%cpu,%mem,stat,command'\n"
  },
  {
    "path": "sandbox/bin/fakeentry",
    "content": "#!/bin/bash\n# Useful for testing if you want to inject a shell BEFORE chaperone starts by changing the entry point.\n\nexport ENTARGS=\"$*\"\nexec /bin/bash\n"
  },
  {
    "path": "sandbox/bin/repeat",
    "content": "#!/usr/bin/python3\n\n\"\"\"\nRepeat utility for testing\n\nUsage: repeat [--nosignals] [-n=<reps>] [-i=<interval>] [-e] <message>\n\nOptions:\n    -n=<reps>         Specify number of repetitions, or infinite if absent\n    -i=<interval>     Specify interval, or 1 second if absent\n    --nosignals       Ignore all signals if present\n    -e                Output to stderr instead of stdout.    \n\"\"\"\n\nimport signal\nfrom time import sleep, strftime, localtime\nfrom docopt import docopt\nimport sys\n\nopt = docopt(__doc__)\n\nif opt['--nosignals']:\n   signal.signal(signal.SIGTERM, lambda signum, frame: print(\"ignoring SIGTERM\"))\n   signal.signal(signal.SIGHUP, lambda signum, frame: print(\"ignoring SIGHUP\"))\n   signal.signal(signal.SIGINT, lambda signum, frame: print(\"ignoring SIGINT\"))\n   \nreps = iter(int,1) if not opt['-n'] else range(int(opt['-n']))\ndelay = 1 if not opt['-i'] else int(opt['-i'])\nhandle = sys.stderr if opt['-e'] else sys.stdout\nmsg = \" \" + opt['<message>'] + \"\\n\"\n\nfor n in reps:\n   handle.write(strftime(\"%M:%S\", localtime()) + msg)\n   handle.flush()\n   sleep(delay)\n"
  },
  {
    "path": "sandbox/centos.d/apache.conf",
    "content": "apache1.service: {\n  type: notify,\n  command: \"/usr/sbin/httpd -DFOREGROUND\",\n  enabled: true,\n  restart: true,\n  env_set: {\n    LANG: C,\n  }\n}\n\nmysql.service: {\n  command: \"/etc/init.d/mysql start\",\n  enabled: true,\n  ignore_failures: true,\n}\n\napache2.service: {\n  command: \"/etc/init.d/apache2 start\",\n  after: \"mysql.service\",\n  enabled: true,\n  ignore_failures: true,\n}\n"
  },
  {
    "path": "sandbox/centos.d/app.conf",
    "content": "main.logging: {\n  stderr: false,\n}\n"
  },
  {
    "path": "sandbox/centos.d/cron.conf",
    "content": "cron.service: {\n  bin: /usr/sbin/cron,\n  args: -f,\n  optional: true,\n  restart: true,\n}\n"
  },
  {
    "path": "sandbox/centos.d/sys1.conf",
    "content": "settings: {\n  env_inherit: ['SANDBOX', '_*'],\n  env_set: {'TERM': 'xpath-revisited',\n            'QUESTIONER': 'the-law', \n            'WITHIN-HOME': '$(HOME)/inside-home',\n            'INTERACTIVE': '$(_CHAPERONE_INTERACTIVE)',\n            'CONFIG_DIR': '$(_CHAPERONE_CONFIG_DIR)',\n\t    'PROCTOOL': '$(SANDBOX)/proctool',\n\t    'ENV': '$(SANDBOX)/.shinit',\n\t    'PATH': '$(SANDBOX):/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/bin',\n           },\n  uid: 0,\n  idle_delay: 1,\n  debug: true,\n}\n\nnotify.service: {\n  type: notify,\n  command: \"$(PROCTOOL) --wait 20 --dump --notify '--ready' 'notify process'\",\n  stdout: inherit,\n  enabled: true,\n}\n\nfake1.service: {\n  command: \"$(PROCTOOL) --hang 'fake1 process'\",\n  enabled: false,\n}\n\nfake2.service: {\n  command: \"$(PROCTOOL) --hang 'fake2 service'\",\n  enabled: false,\n  stdout: inherit,\n  uid: 1000,\n  env_inherit: ['Q*'],\n}\n\nfake3.service: {\n  command: \"$(PROCTOOL) 'oneshot service'\",\n  enabled: false,\n  type: oneshot,\n  stdout: inherit,\n  ignore_failures: true,\n  uid: garyw,\n  service_group: 'earlystuff',\n  before: \"default\",\n}\n\nexittest.service: {\n  enabled: true,\n  restart: true,\n  restart_limit: 5,\n  ignore_failures: true,\n  command: \"$(PROCTOOL) --exit 20 'Exiting with 20'\",\n}\n\nrepeat.service: {\n  command: \"$(SANDBOX)/repeat -i4 'Repeat to stdout'\",\n  enabled: false,\n}\n\nrepeat_err.service: {\n  command: \"$(SANDBOX)/repeat -i4 -e 'Repeat to stderr'\",\n  enabled: false,\n}\n\nbeforemain.service: {\n  type: \"oneshot\",\n  enabled: false,\n  command: \"sh -c 'echo START IDLE TASK; sleep 2; echo ENDING IDLE TASK'\",\n  stdout: inherit,\n  before: \"MAIN\",\n  service_group: \"IDLE\",\n}\n \nmain.logging: {\n  filter: \"[chaperone].*\",\n  file: /var/log/chaperone-%d.log,\n  enabled: true,\n}\n\nconsole.logging: {\n  stdout: true,\n  filter: '*.warn;![debian-start].*;authpriv,auth.!*;!/Repeat to std/.*',\n  extended: true,\n  
enabled: true,\n}\n\ndebian.logging: {\n  filter: '[debian-start].*',\n  file: /var/log/debian-start.log,\n  enabled: true,\n}\n\nsyslog.logging: {\n  filter: '*.info;![debian-start].*;![chaperone].*',\n  file: '/var/log/syslog-%d-%H%M',\n  enabled: true,\n}\n"
  },
  {
    "path": "sandbox/distserv/chaperone.d/005-config.conf",
    "content": "# 005-config.conf\n#\n# Put container configuration variables here.  This should strictly be for configuration\n# variables that are passed into the container.   100% of container configuraiton should\n# be possible by setting these variables here or on the 'docker run' command line.\n\nsettings: {\n\n  env_set: {\n\n    # This is the hostname of the host machine.  Generally, this is only needed\n    # by certain applications (such as those supporting SSL certiifcates, but is common\n    # enough to include as a standard option.\n\n    CONFIG_EXT_HOSTNAME: \"$(CONFIG_EXT_HOSTNAME:-localhost)\",\n\n    # HTTP ports of exported ports.  These are good policy to define in your \"docker run\"\n    # command so that internal applications know what ports the public interfaces are\n    # visible on.  Sometimes this is necessary, such as when appliations push their\n    # endpoints via API's or when webservers do redirects.  The default launchers\n    # for Chaperone containers handle this for you automatically.\n\n    CONFIG_EXT_HTTP_PORT: \"$(CONFIG_EXT_HTTP_PORT:-8080)\",\n    CONFIG_EXT_HTTPS_PORT: \"$(CONFIG_EXT_HTTPS_PORT:-8443)\",\n\n    # Configure this to enable SSL and generate snakeoil keys for the given domain\n    CONFIG_EXT_SSL_HOSTNAME: \"$(CONFIG_EXT_SSL_HOSTNAME:-)\",\n\n    # Create additional configuration variables here.  Start them with \"CONFIG_\"\n    # so they can be easily identified...\n\n  }\n\n}\n"
  },
  {
    "path": "sandbox/distserv/chaperone.d/010-start.conf",
    "content": "# 010-start.conf\n#\n# This is the first start-up file for the chaperone base images.  Note that start-up files\n# are processed in order alphabetically, so settings in later files can override those in\n# earlier files.\n\n# General environmental settings.  These settings apply to all services and logging entries.\n# There should be only one \"settings\" directive in each configuration file.  But, any\n# settings encountered in subsequent configuration files can override or augment these.\n# Note that variables are expanded as late as possile.  So, there can be variables\n# defined here which depend upon variables which will be defined later (such as _CHAP_SERVICE),\n# which is defined implicitly for each service.\n\nsettings: {\n\n  env_set: {\n\n  'LANG': 'en_US.UTF-8',\n  'LC_CTYPE': '$(LANG)',\n  'PATH': '$(APPS_DIR)/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin',\n  'RANDFILE': '/tmp/openssl.rnd',\n\n  # Uncomment the below to tell startup.sh to lock-down the root account after the first\n  # successful start.\n  #'SECURE_ROOT': '1',\n\n  # Variables starting with _CHAP are internal and won't be exported to services,\n  # so we derive public environment variables if needed...\n  'APPS_DIR': '$(_CHAP_CONFIG_DIR:-/)',\n  'CHAP_SERVICE_NAME': '$(_CHAP_SERVICE:-)',\n  'CHAP_TASK_MODE': '$(_CHAP_TASK_MODE:-)',\n\n  # The best use-cases will want to move $(VAR_DIR) out of the container to keep\n  # the container emphemeral, so all references to var should always use this\n  # environment variable.\n  'VAR_DIR': '$(APPS_DIR)/var',\n\n  CHAPERONE_ROOT: \"`bash -c 'cd $(APPS_DIR)/../..; echo $PWD'`\"\n  },\n\n}\n\n# For the console, we include everything which is a warning except authentication\n# messages and daemon messages which are not errors.\n\nconsole.logging: {\n  enabled: true,\n  stdout: true,\n  selector: '*.warn;authpriv,auth.!*;daemon.!warn',\n}\n"
  },
  {
    "path": "sandbox/distserv/chaperone.d/120-apache2.conf",
    "content": "# 120-apache2.conf\n#\n# Start up apache.  This is a \"simple\" service, so chaperone will monitor Apache and restart\n# it if necessary.  Note that apache2.conf refers to MYSQL_UNIX_PORT (set by 105-mysql.conf)\n# to tell PHP where MySQL is running.\n#\n# In the case where no USER variable is specified, we run as the www-data user.\n\nsettings: {\n  env_set: {\n    HTTPD_SERVER_NAME: apache,\n  }  \n}\n\napache2.service: {\n  command: \"/usr/sbin/apache2 -f $(APPS_DIR)/etc/apache2.conf -DFOREGROUND\",\n  restart: true,\n  stdout: inherit, stderr: inherit,\n  uid: \"$(USER:-www-data)\",\n  env_set: {\n    APACHE_LOCK_DIR: /tmp,\n    APACHE_PID_FILE: /tmp/apache2.pid,\n    APACHE_RUN_USER: www-data,\n    APACHE_RUN_GROUP: www-data,\n    APACHE_RUN_DIR: \"/tmp\",\n    APACHE_LOG_DIR: \"/tmp\",\n  },\n  # If Apache2 does not require a database, you can leave this out.\n  after: database,\n}\n\napache2.logging: {\n  enabled: true,\n  selector: 'local1.*;*.!err',\n  stderr: true,\n}\n"
  },
  {
    "path": "sandbox/distserv/etc/apache2.conf",
    "content": "# This is the main Apache server configuration file.  It contains the\n# configuration directives that give the server its instructions.\n# See http://httpd.apache.org/docs/2.4/ for detailed information about\n# the directives and /usr/share/doc/apache2/README.Debian about Debian specific\n# hints.\n\n# This is a CHAPERONE-specific configuration designed to keep things lean.  It is based loosely\n# on Ubuntu 14.04 /etc/apache2/apache2.conf, and every attempt has been made to assure that\n# system-installed modules and configurations will work.\n\n# The chaperone configuration is designed to work within a self-contained application directory\n# defined by APPS_DIR.  Note that it may be a user directory, and thus chaperone allows\n# Apache to run entirely under any user account, along with a MySQL server that is also\n# sequestered in the same way.   This means that you can have containers \"point\" to apps\n# directories on your host server and manage per-container resources consistently in\n# those directories during development, until you move the entire apps directory into\n# a production container environment or image.\n\n#\n# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.\n#\nMutex file:${APACHE_LOCK_DIR} default\n\nPidFile ${APACHE_PID_FILE}\n\n# Timeout: The number of seconds before receives and sends time out.\nTimeout 300\nKeepAlive On\nMaxKeepAliveRequests 100\nKeepAliveTimeout 5\n\n# Note that the user and group are defined in chaperone.d/120-apache.conf\n#User ${APACHE_RUN_USER}\n#Group ${APACHE_RUN_GROUP}\n\n# The default is off because it'd be overall better for the net if people\n# had to knowingly turn this feature on, since enabling it means that\n# each client request will result in AT LEAST one lookup request to the\n# nameserver.\nHostnameLookups Off\n\n# ErrorLog: The location of the error log file.\n# We dump errors to syslog so that we can easily duplicate it to the container stderr if we want.\nErrorLog 
syslog:local1\n\n# Available values: trace8, ..., trace1, debug, info, notice, warn,\n# error, crit, alert, emerg.\nLogLevel warn\n\n# Include standard Debian/Ubuntu module configuration:\nInclude /etc/apache2/mods-enabled/*.load\nInclude /etc/apache2/mods-enabled/*.conf\n\n# CHAPERONE: Override to listen on 8080 and 8443\nListen 8080\n\n<IfModule ssl_module>\n\tListen 8443\n</IfModule>\n<IfModule mod_gnutls.c>\n\tListen 8443\n</IfModule>\n\n# Sets the default security model of the Apache2 HTTPD server. It does\n# not allow access to the root filesystem outside of /usr/share and /var/www.\n# The former is used by web applications packaged in Debian,\n# the latter may be used for local directories served by the web server. If\n# your system is serving content from a sub-directory in /srv you must allow\n# access here, or in any related virtual host.\n<Directory />\n\tOptions FollowSymLinks\n\tAllowOverride None\n\tRequire all denied\n</Directory>\n\n<Directory /usr/share>\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\nDocumentRoot ${CHAPERONE_ROOT}\n\n<Directory ${CHAPERONE_ROOT}/>\n\tOptions Indexes FollowSymLinks\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\nAccessFileName .htaccess\n\n# The following lines prevent .htaccess and .htpasswd files from being\n# viewed by Web clients.\n<FilesMatch \"^\\.ht\">\n\tRequire all denied\n</FilesMatch>\n\n\n# The following directives define some format nicknames for use with\n# a CustomLog directive.\nLogFormat \"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" vhost_combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O\" common\nLogFormat \"%{Referer}i -> %U\" referer\nLogFormat \"%{User-agent}i\" agent\n\n# Include of directories ignores editors' and dpkg's backup files,\n# see README.Debian for details.\n\n# Include generic snippets of statements\nIncludeOptional 
/etc/apache2/conf-enabled/*.conf\n"
  },
  {
    "path": "sandbox/distserv/run.sh",
    "content": "#!/bin/bash\n#Developer's startup script\n#Created by chaplocal on Thu Oct 15 03:47:31 UTC 2015\n\nIMAGE=\"chapdev/chaperone-apache\"\nINTERACTIVE_SHELL=\"/bin/bash\"\n\n# You can specify the external host and ports for your webserver here.  These variables\n# are also passed into the container so that any application code which does redirects\n# can use these if need be.\n\nEXT_HOSTNAME=localhost\nEXT_HTTP_PORT=9980\nEXT_HTTPS_PORT=9943\n\n# Uncomment to enable SSL and specify the certificate hostname\n#EXT_SSL_HOSTNAME=secure.example.com\n\nPORTOPT=\"-p $EXT_HTTP_PORT:8080 -e CONFIG_EXT_HTTP_PORT=$EXT_HTTP_PORT \\\n         -p $EXT_HTTPS_PORT:8443 -e CONFIG_EXT_HTTPS_PORT=$EXT_HTTPS_PORT\"\n\nusage() {\n  echo \"Usage: run.sh [-d] [-p port#] [-h] [extra-chaperone-options]\"\n  echo \"       Run $IMAGE as a daemon or interactively (the default).\"\n  echo \"       First available port will be remapped to $EXT_HOSTNAME if possible.\"\n  exit\n}\n\nif [ \"$CHAP_SERVICE_NAME\" != \"\" ]; then\n  echo run.sh should be executed on your docker host, not inside a container.\n  exit\nfi\n\ncd ${0%/*} # go to directory of this file\nAPPS=$PWD\ncd ..\n\noptions=\"-t -i -e TERM=$TERM --rm=true\"\nshellopt=\"/bin/bash\"\n\nwhile getopts \":-dp:n:\" o; do\n  case \"$o\" in\n    d)\n      options=\"-d\"\n      shellopt=\"\"\n      ;;\n    n)\n      options=\"$options --name $OPTARG\"\n      ;;\n    p)\n      PORTOPT=\"-p $OPTARG\"\n      ;;      \n    -) # first long option terminates\n      break\n      ;;\n    *)\n      usage\n      ;;\n  esac\ndone\nshift $((OPTIND-1))\n\n# Run the image with this directory as our local apps dir.\n# Create a user with a uid/gid based upon the file permissions of the chaperone.d\n# directory.\n\nMOUNT=${PWD#/}; MOUNT=/${MOUNT%%/*} # extract user mountpoint\nSELINUX_FLAG=$(sestatus 2>/dev/null | fgrep -q enabled && echo :z)\n\ndocker run --name distserv $options -v $MOUNT:$MOUNT$SELINUX_FLAG $PORTOPT \\\n   -e 
CONFIG_EXT_HOSTNAME=\"$EXT_HOSTNAME\" \\\n   -e CONFIG_EXT_SSL_HOSTNAME=\"$EXT_SSL_HOSTNAME\" \\\n   $IMAGE \\\n   --create $USER:$APPS/chaperone.d --config $APPS/chaperone.d $* $shellopt\n"
  },
  {
    "path": "sandbox/etc/apache2.conf",
    "content": "# This is the main Apache server configuration file.  It contains the\n# configuration directives that give the server its instructions.\n# See http://httpd.apache.org/docs/2.4/ for detailed information about\n# the directives and /usr/share/doc/apache2/README.Debian about Debian specific\n# hints.\n#\n#\n# Summary of how the Apache 2 configuration works in Debian:\n# The Apache 2 web server configuration in Debian is quite different to\n# upstream's suggested way to configure the web server. This is because Debian's\n# default Apache2 installation attempts to make adding and removing modules,\n# virtual hosts, and extra configuration directives as flexible as possible, in\n# order to make automating the changes and administering the server as easy as\n# possible.\n\n# It is split into several files forming the configuration hierarchy outlined\n# below, all located in the /etc/apache2/ directory:\n#\n#\t/etc/apache2/\n#\t|-- apache2.conf\n#\t|\t`--  ports.conf\n#\t|-- mods-enabled\n#\t|\t|-- *.load\n#\t|\t`-- *.conf\n#\t|-- conf-enabled\n#\t|\t`-- *.conf\n# \t`-- sites-enabled\n#\t \t`-- *.conf\n#\n#\n# * apache2.conf is the main configuration file (this file). It puts the pieces\n#   together by including all remaining configuration files when starting up the\n#   web server.\n#\n# * ports.conf is always included from the main configuration file. It is\n#   supposed to determine listening ports for incoming connections which can be\n#   customized anytime.\n#\n# * Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/\n#   directories contain particular configuration snippets which manage modules,\n#   global configuration fragments, or virtual host configurations,\n#   respectively.\n#\n#   They are activated by symlinking available configuration files from their\n#   respective *-available/ counterparts. These should be managed by using our\n#   helpers a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf. 
See\n#   their respective man pages for detailed information.\n#\n# * The binary is called apache2. Due to the use of environment variables, in\n#   the default configuration, apache2 needs to be started/stopped with\n#   /etc/init.d/apache2 or apache2ctl. Calling /usr/bin/apache2 directly will not\n#   work with the default configuration.\n\n\n# Global configuration\n#\n\n#\n# ServerRoot: The top of the directory tree under which the server's\n# configuration, error, and log files are kept.\n#\n# NOTE!  If you intend to place this on an NFS (or otherwise network)\n# mounted filesystem then please read the Mutex documentation (available\n# at <URL:http://httpd.apache.org/docs/2.4/mod/core.html#mutex>);\n# you will save yourself a lot of trouble.\n#\n# Do NOT add a slash at the end of the directory path.\n#\n#ServerRoot \"/etc/apache2\"\n\n#\n# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.\n#\nMutex file:${APACHE_LOCK_DIR} default\n\n#\n# PidFile: The file in which the server should record its process\n# identification number when it starts.\n# This needs to be set in /etc/apache2/envvars\n#\nPidFile ${APACHE_PID_FILE}\n\n#\n# Timeout: The number of seconds before receives and sends time out.\n#\nTimeout 300\n\n#\n# KeepAlive: Whether or not to allow persistent connections (more than\n# one request per connection). Set to \"Off\" to deactivate.\n#\nKeepAlive On\n\n#\n# MaxKeepAliveRequests: The maximum number of requests to allow\n# during a persistent connection. 
Set to 0 to allow an unlimited amount.\n# We recommend you leave this number high, for maximum performance.\n#\nMaxKeepAliveRequests 100\n\n#\n# KeepAliveTimeout: Number of seconds to wait for the next request from the\n# same client on the same connection.\n#\nKeepAliveTimeout 5\n\n\n# These need to be set in /etc/apache2/envvars\nUser ${APACHE_RUN_USER}\nGroup ${APACHE_RUN_GROUP}\n\n#\n# HostnameLookups: Log the names of clients or just their IP addresses\n# e.g., www.apache.org (on) or 204.62.129.132 (off).\n# The default is off because it'd be overall better for the net if people\n# had to knowingly turn this feature on, since enabling it means that\n# each client request will result in AT LEAST one lookup request to the\n# nameserver.\n#\nHostnameLookups Off\n\n# ErrorLog: The location of the error log file.\n# If you do not specify an ErrorLog directive within a <VirtualHost>\n# container, error messages relating to that virtual host will be\n# logged here.  If you *do* define an error logfile for a <VirtualHost>\n# container, that host's errors will be logged there and not here.\n#\nErrorLog syslog:local1\n\n#\n# LogLevel: Control the severity of messages logged to the error_log.\n# Available values: trace8, ..., trace1, debug, info, notice, warn,\n# error, crit, alert, emerg.\n# It is also possible to configure the log level for particular modules, e.g.\n# \"LogLevel info ssl:warn\"\n#\nLogLevel warn\n\n# Include module configuration:\nIncludeOptional mods-enabled/*.load\nIncludeOptional mods-enabled/*.conf\n\n# Include list of ports to listen on\nInclude ports.conf\n\n\n# Sets the default security model of the Apache2 HTTPD server. It does\n# not allow access to the root filesystem outside of /usr/share and /var/www.\n# The former is used by web applications packaged in Debian,\n# the latter may be used for local directories served by the web server. 
If\n# your system is serving content from a sub-directory in /srv you must allow\n# access here, or in any related virtual host.\n<Directory />\n\tOptions FollowSymLinks\n\tAllowOverride None\n\tRequire all denied\n</Directory>\n\n<Directory /usr/share>\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\n<Directory /var/www/>\n\tOptions Indexes FollowSymLinks\n\tAllowOverride None\n\tRequire all granted\n</Directory>\n\n#<Directory /srv/>\n#\tOptions Indexes FollowSymLinks\n#\tAllowOverride None\n#\tRequire all granted\n#</Directory>\n\n\n\n\n# AccessFileName: The name of the file to look for in each directory\n# for additional configuration directives.  See also the AllowOverride\n# directive.\n#\nAccessFileName .htaccess\n\n#\n# The following lines prevent .htaccess and .htpasswd files from being\n# viewed by Web clients.\n#\n<FilesMatch \"^\\.ht\">\n\tRequire all denied\n</FilesMatch>\n\n\n#\n# The following directives define some format nicknames for use with\n# a CustomLog directive.\n#\n# These deviate from the Common Log Format definitions in that they use %O\n# (the actual bytes sent including headers) instead of %b (the size of the\n# requested file), because the latter makes it impossible to detect partial\n# requests.\n#\n# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.\n# Use mod_remoteip instead.\n#\nLogFormat \"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" vhost_combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %O\" common\nLogFormat \"%{Referer}i -> %U\" referer\nLogFormat \"%{User-agent}i\" agent\n\n# Include of directories ignores editors' and dpkg's backup files,\n# see README.Debian for details.\n\n# Include generic snippets of statements\nIncludeOptional conf-enabled/*.conf\n\n# Include the virtual host configurations:\nIncludeOptional sites-enabled/*.conf\n\n# vim: syntax=apache 
ts=4 sw=4 sts=4 sr noet\n"
  },
  {
    "path": "sandbox/etc/makezombie.conf",
    "content": "# A chaperone.d configuration which will create a zombie process\n\nzombie.service: {\n  command: \"$(APPS_DIR)/../bin/daemon $(APPS_DIR)/../bin/proctool --hang\"\n}\n"
  },
  {
    "path": "sandbox/test.d/apache.conf",
    "content": "apache1.service: {\n  command: \"/usr/sbin/apache2 -f $(SANDBOX)/etc/apache2.conf\",\n  enabled: true,\n  restart: false,\n  optional: true,\n  env_set: {\n    APACHE_LOCK_DIR: /tmp,\n    APACHE_PID_FILE: /tmp/apache2.pid,\n    APACHE_RUN_USER: www-data,\n    APACHE_RUN_GROUP: www-data,\n    APACHE_LOG_DIR: /var/log/apache2,\n  }\n}\n\nmysql.service: {\n  command: \"/etc/init.d/mysql start\",\n  enabled: false,\n}\n\napache2.service: {\n  command: \"/etc/init.d/apache2 start\",\n  after: \"mysql.service\",\n  enabled: false,\n}\n"
  },
  {
    "path": "sandbox/test.d/cron.conf",
    "content": "cron.service: {\n  command: '/usr/sbin/cron -f',\n  restart: true,\n  enabled: false,\n}\n"
  },
  {
    "path": "sandbox/test.d/sys1.conf",
    "content": "settings: {\n  env_inherit: ['SANDBOX', '_*'],\n  env_set: {'TERM': 'xpath-revisited',\n            'QUESTIONER': 'the-law', \n            'WITHIN-HOME': '$(HOME)/inside-home',\n            'INTERACTIVE': '$(_CHAP_INTERACTIVE)',\n            'CONFIG_DIR': '$(_CHAP_CONFIG_DIR)',\n\t    'PROCTOOL': '$(SANDBOX)/proctool',\n\t    'ENV': '$(SANDBOX)/.shinit',\n\t    'PATH': '$(SANDBOX):/usr/local/sbin:/usr/local/bin:/services/$(_CHAP_SERVICE)/bin:/usr/sbin:/usr/bin:/bin',\n\t    'APPS_PATH': '$(HOME:-)/apps',\n           },\n  uid: 0,\n  idle_delay: 1,\n  debug: true,\n}\ncron1.service: {\n  type: cron,\n  stdout: inherit, stderr: inherit,\n  interval: \"*/2 * * * *\",\n  command: \"proctool --wait 2 'running cron1.service'\"\n}\nhometest.service: {\n  type: oneshot,\n  command: \"$(PROCTOOL) my.$(APPS_PATH).apps-path\",\n}\n\nfake1.service: {\n  command: \"$(PROCTOOL) --dump --hang 'fake1 process'\",\n  env_set: { 'PATH': '/binno/proctool:$(PATH)' },\n  env_unset: [ '*HOME*', 'APPS_PATH' ],\n  stdout: inherit,\n  enabled: true,\n  debug: true,\n}\n\nfake2.service: {\n  command: \"$(PROCTOOL) --hang 'fake2 service'\",\n  enabled: false,\n  stdout: inherit,\n  uid: 1000,\n  env_inherit: ['Q*', 'SANDBOX', 'PROCTOOL'],\n}\n\nfake3.service: {\n  command: \"$(PROCTOOL) 'oneshot service'\",\n  enabled: false,\n  type: oneshot,\n  stdout: inherit,\n  ignore_failures: true,\n  uid: garyw,\n  service_groups: 'earlystuff',\n  before: \"default\",\n}\n\nexittest.service: {\n  enabled: true,\n  restart: true,\n  restart_limit: 3,\n  ignore_failures: true,\n  command: \"$(PROCTOOL) --exit 20 'Exiting with 20'\",\n}\n\nrepeat.service: {\n  command: \"$(SANDBOX)/repeat -i4 'Repeat to stdout'\",\n  enabled: true,\n}\n\nrepeat_err.service: {\n  command: \"$(SANDBOX)/repeat -i4 -e 'Repeat to stderr'\",\n  enabled: false,\n}\n\nbeforemain.service: {\n  type: \"oneshot\",\n  enabled: false,\n  command: \"sh -c 'echo START IDLE TASK; sleep 2; echo ENDING IDLE TASK'\",\n  
stdout: inherit,\n  before: \"MAIN\",\n  service_groups: \"IDLE\",\n}\n \nmain.logging: {\n  selector: \"[chaperone].*\",\n  file: /var/log/chaperone-%d.log,\n  enabled: true,\n}\n\nconsole.logging: {\n  stdout: true,\n  selector: '*.warn;![debian-start].*;authpriv,auth.!*;!/Repeat to std/.*',\n  extended: true,\n  enabled: true,\n}\n\ndebian.logging: {\n  selector: '[debian-start].*',\n  file: /var/log/debian-start.log,\n  enabled: true,\n}\n\nsyslog.logging: {\n  selector: '*.info;![debian-start].*;![chaperone].*',\n  file: '/var/log/syslog-%d-%H%M',\n  enabled: true,\n}\n"
  },
  {
    "path": "sandbox/testbare",
    "content": "#!/bin/bash\n\n# Used to test a bareimage, an image which was created \"as if\" chaperone was JUST installed from pip,\n# mostly to be sure that if people do a \"pip3 install chaperone\" and then run chaperone, that errors\n# and other feedback are reasonable.\n\nSANDBOX=$PWD\n\ndocker run -t -i --rm=true -e \"TERM=$TERM\" -v /home:/home -e \"SANDBOX=$SANDBOX\" chapdev/bareimage \\\n    /bin/bash /home/garyw/dev/chaperone/sandbox/bare_startup.sh\n\n\n"
  },
  {
    "path": "sandbox/testcent",
    "content": "#!/bin/bash\n\nSANDBOX=$PWD\n\ndocker run -t -i -e \"TERM=$TERM\" --rm=true -v /home:/home --entrypoint=$SANDBOX/chaperone -e \"SANDBOX=$SANDBOX\" bst/chapdev-centos \\\n    --config dev/chaperone/sandbox/centos.d $* \\\n    --user garyw /bin/bash\n"
  },
  {
    "path": "sandbox/testdock",
    "content": "#!/bin/bash\n\nSANDBOX=$PWD\n\ndocker run -t -i -e \"TERM=$TERM\" --rm=true -v /home:/home --entrypoint=$SANDBOX/chaperone -e \"SANDBOX=$SANDBOX\" bst/chapdev \\\n    --config dev/chaperone/sandbox/test.d $* \\\n    --user garyw /bin/bash\n"
  },
  {
    "path": "sandbox/testimage",
    "content": "#!/bin/bash\n# Used to create an apps directory here in the sandbox which runs a\n# standard docker image, however uses the local chaperone sources\n# and creates an app directory here in the sandbox.  This is for\n# development of chaperone itself, and allows you to duplicate the\n# environment of an image.  Especially useful for reproducing problems\n# and troubleshooting images.\n\nif [ $# == 0 ]; then\n    echo \"usage: testimage image-suffix\"\n    exit 1\nfi\n\n# the cd trick assures this works even if the current directory is not current.\ncd ${0%/*}\n\nSUFFIX=$1\nshift\t\t\t\t# remaining arguments are for chaperone\n\n# Try with chaperone- prefix first\nIMAGE=chapdev/chaperone-$SUFFIX\nif ! docker inspect $IMAGE >/dev/null 2>&1; then\n  IMAGE=chapdev/$SUFFIX\nfi\n\nSANDBOX=$PWD\nAPPSDIR=$SANDBOX/apps-$SUFFIX\n\nbashcmd=\"/bin/bash --rcfile $SANDBOX/bash.bashrc\"\nif [ \"$1\" == \"-\" ]; then\n  bashcmd=\"\"\n  shift\nfi\n\nmyuid=`id -u`\nmygid=`id -g`\n\n# Copy the apps into this sandbox directory so we can work on it.\n\nif [ ! -d $APPSDIR ]; then\n    docker run -i --rm=true -v /home:/home $IMAGE --disable --exitkills --log err --user root \\\n\t/bin/bash -c \"cp -a /apps $APPSDIR; chown -R $myuid:$mygid $APPSDIR\"\nfi\n\n# Run the lamp image using our local copy of chaperone as well as the local apps directory\n\ndocker run -t -i -e \"TERM=$TERM\" -e \"EMACS=$EMACS\" --rm=true -v /home:/home \\\n    --name run-$SUFFIX \\\n    --entrypoint $SANDBOX/bin/chaperone $IMAGE \\\n    --create $USER:$myuid \\\n    --default-home / \\\n    --config $APPSDIR/chaperone.d $* $bashcmd\n"
  },
  {
    "path": "sandbox/testvar",
    "content": "#!/bin/bash\n# Used to create an apps directory here in the sandbox which runs a\n# standard docker image, however uses the local chaperone sources.\n# Creates a data-only \"var\" directory instead of a full apps directory\n# to test things like --default-home\n\nif [ $# == 0 ]; then\n    echo \"usage: testvar image-suffix\"\n    exit 1\nfi\n\n# the cd trick assures this works even if the current directory is not current.\ncd ${0%/*}\n\nSUFFIX=$1\nshift\t\t\t\t# remaining arguments are for chaperone\n\nIMAGE=chapdev/chaperone-$SUFFIX\nSANDBOX=$PWD\nVARDIR=$SANDBOX/var-$SUFFIX\n\nbashcmd=\"/bin/bash --rcfile $SANDBOX/bash.bashrc\"\nif [ \"$1\" == \"-\" ]; then\n  bashcmd=\"\"\n  shift\nfi\n\nmyuid=`id -u`\nmygid=`id -g`\n\n# Run the lamp image using our local copy of chaperone as well as the local var-only directory\n\nmkdir -p $VARDIR\n\ndocker run -t -i -e \"TERM=$TERM\" -e \"EMACS=$EMACS\" --rm=true -v /home:/sandbox \\\n    -v $VARDIR:/apps/var \\\n    --name run-$SUFFIX \\\n    --entrypoint /sandbox${SANDBOX#/home}/bin/chaperone $IMAGE \\\n    --create $USER/$myuid \\\n    --default-home / \\\n    $* $bashcmd\n"
  },
  {
    "path": "sandbox/user.d/sys1.conf",
    "content": "settings: {\n  env_inherit: ['SANDBOX', '_*'],\n  env_set: {'TERM': 'xpath-revisited',\n            'QUESTIONER': 'the-law', \n            'WITHIN-HOME': '$(HOME)/inside-home',\n            'INTERACTIVE': '$(_CHAP_INTERACTIVE)',\n            'CONFIG_DIR': '$(_CHAP_CONFIG_DIR)',\n\t    'PROCTOOL': '$(SANDBOX)/proctool',\n\t    'ENV': '$(SANDBOX)/.shinit',\n\t    'PATH': '$(SANDBOX):/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/bin',\n\t    'APPS_PATH': '$(HOME:-)/apps',\n           },\n  #uid: 0,\n  idle_delay: 1,\n  debug: true,\n}\ncron1.service: {\n  type: cron,\n  stdout: inherit, stderr: inherit,\n  interval: '* * * * *',\n  command: \"$(PROCTOOL) --wait 20 'running cron1.service'\"\n}\nhometest.service: {\n  type: oneshot,\n  command: \"$(PROCTOOL) my.$(APPS_PATH).apps-path\",\n}\n\nfake1.service: {\n  command: \"$(PROCTOOL) --hang 'fake1 process'\",\n  enabled: false,\n}\n\nfake2.service: {\n  command: \"$(PROCTOOL) --hang 'fake2 service'\",\n  enabled: true,\n  stdout: inherit,\n  uid: 1000,\n  env_inherit: ['Q*', 'SANDBOX', 'PROCTOOL'],\n}\n\nfake3.service: {\n  command: \"$(PROCTOOL) 'oneshot service'\",\n  enabled: false,\n  type: oneshot,\n  stdout: inherit,\n  ignore_failures: true,\n  uid: garyw,\n  service_groups: 'earlystuff',\n  before: \"default\",\n}\n\nexittest.service: {\n  enabled: false,\n  restart: true,\n  restart_limit: 5,\n  ignore_failures: true,\n  command: \"$(PROCTOOL) --exit 20 'Exiting with 20'\",\n}\n\nrepeat.service: {\n  command: \"$(SANDBOX)/repeat -i4 'Repeat to stdout'\",\n  enabled: false,\n}\n\nrepeat_err.service: {\n  command: \"$(SANDBOX)/repeat -i4 -e 'Repeat to stderr'\",\n  enabled: false,\n}\n\nbeforemain.service: {\n  type: \"oneshot\",\n  enabled: false,\n  command: \"sh -c 'echo START IDLE TASK; sleep 2; echo ENDING IDLE TASK'\",\n  stdout: inherit,\n  before: \"MAIN\",\n  service_groups: \"IDLE\",\n}\n \nmain.logging: {\n  filter: \"[chaperone].*\",\n  file: \"$(HOME)/tmp/chaperone-%d.log\",\n  
enabled: true,\n}\n\nconsole.logging: {\n  stdout: true,\n  filter: '*.warn;![debian-start].*;authpriv,auth.!*;!/Repeat to std/.*',\n  extended: true,\n  enabled: true,\n}\n\ndebian.logging: {\n  filter: '[debian-start].*',\n  file: \"$(HOME)/tmp/debian-start.log\",\n  enabled: true,\n}\n\nsyslog.logging: {\n  filter: '*.info;![debian-start].*;![chaperone].*',\n  file: '$(HOME)/tmp/syslog-%d-%H%M',\n  enabled: true,\n}\n"
  },
  {
    "path": "setup.py",
    "content": "import os\nimport sys\nimport subprocess\nfrom setuptools import setup, find_packages\n\nif sys.version_info < (3,):\n    print(\"You must run setup.py with Python 3 only.  Python 2 distributions are not supported.\")\n    exit(1)\n\nourdir = os.path.dirname(__file__)\n\ndef read(fname):\n    return open(os.path.join(ourdir, fname)).read()\n\ndef get_version():\n    return subprocess.check_output([sys.executable, os.path.join(\"chaperone/cproc/version.py\")]).decode().strip()\n\ndef which(program):\n    def is_exe(fpath):\n        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n    fpath, fname = os.path.split(program)\n    if fpath:\n        if is_exe(program):\n            return program\n    else:\n        for path in os.environ[\"PATH\"].split(os.pathsep):\n            path = path.strip('\"')\n            exe_file = os.path.join(path, program)\n            if is_exe(exe_file):\n                return exe_file\n    return None\n\nrequires_list = ['docopt>=0.6.2', 'PyYAML>=3.1.1', 'voluptuous>=0.8.7', 'aiocron>=0.3']\n\nif which('gcc'):\n    requires_list += [\"setproctitle>=1.1.8\"]\n\nsetup(\n    name = \"chaperone\",\n    version = get_version(),\n    description = 'Simple system init daemon for Docker-like environments',\n    long_description = read('README'),\n    packages = find_packages(),\n    #test_suite = \"pyt_tests.tests.test_all\",\n    entry_points={\n        'console_scripts': [\n            'chaperone = chaperone.exec.chaperone:main_entry',\n            'telchap = chaperone.exec.telchap:main_entry',\n            'envcp = chaperone.exec.envcp:main_entry',\n            'sdnotify = chaperone.exec.sdnotify:main_entry',\n            'sdnotify-exec = chaperone.exec.sdnotify_exec:main_entry',\n        ],\n    },\n    license = \"Apache Software License\",\n    author = \"Gary Wisniewski\",\n    author_email = \"garyw@blueseastech.com\",\n    url = \"http://github.com/garywiz/chaperone\",\n    keywords = \"docker init systemd 
syslog\",\n\n    install_requires = requires_list,\n\n    classifiers = [\n        \"Development Status :: 5 - Production/Stable\",\n        \"Intended Audience :: Developers\",\n        \"License :: OSI Approved :: Apache Software License\",\n        \"Natural Language :: English\",\n        \"Operating System :: POSIX :: Linux\",\n        \"Programming Language :: Python :: 3\",\n        \"Topic :: System :: Logging\",\n        \"Topic :: System :: Boot :: Init\",\n        ]\n    )\n"
  },
  {
    "path": "tests/.gitignore",
    "content": "test_logs\n"
  },
  {
    "path": "tests/README.md",
    "content": "This directory contains both Chaperone unit tests as well as more complex integration tests.  The `run-all-tests.sh` script runs them all.\n\nHowever, integration tests in this directory have several requirements.  They will run both on Ubuntu as well as RHEL.   Docker 1.8.1 is required, since socket mount permissions have problems with SELinux for earlier versions.\n\nFor both, you'll need everything Chaperone itself requires, and may need to install them manually since Chaperone may not be installed on the development system:\n\n    pip3 install docopt\n    pip3 install PyYAML\n    pip3 install voluptuous\n    pip3 install croniter\n\nYou will also need a working `chapdev/chaperone-lamp` image.  This is the image that is used for all of the tests in this directory and you can simply pull it if it isn't already available.\n\nWait, there's more.\n\nFor Ubuntu, you'll then need:\n\n    apt-get install expect-lite\n    apt-get install nc # should already be there\n\nFor CentOS/RHEL, it is a bit more complicated.  You'll need:\n\n    yum install expect\n    yum install nc\n\nand then you'll need to manually install `expect-lite` using the instructions [on the developer website](http://expect-lite.sourceforge.net/expect-lite_install.html).  (It's pretty easy actually, and foolproof).\n"
  },
  {
    "path": "tests/bin/chaperone",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.chaperone import main_entry\nmain_entry()\n"
  },
  {
    "path": "tests/bin/daemon",
    "content": "#!/usr/bin/python3\n\n\"\"\"\nForks a process in a daemon-like fashion for testing.\n\nUsage:\n    daemon [--wait=seconds] [--ignore-signals] [--exit=code] COMMAND [ARGS ...]\n\"\"\"\n\nimport signal\nimport sys\nimport subprocess\nfrom time import sleep\nfrom docopt import docopt\n\nimport os\n\nfrom daemonutil import Daemon\n\noptions = docopt(__doc__, options_first=True)\n\nif options['--ignore-signals']:\n    signal.signal(signal.SIGTERM, lambda signum, frame: print(\"ignoring SIGTERM\"))\n    signal.signal(signal.SIGHUP, lambda signum, frame: print(\"ignoring SIGHUP\"))\n    signal.signal(signal.SIGINT, lambda signum, frame: print(\"ignoring SIGINT\"))\nelse:\n    signal.signal(signal.SIGTERM, lambda signum, frame: not print(\"received SIGTERM\"))\n    signal.signal(signal.SIGHUP, lambda signum, frame: not print(\"received SIGHUP\"))\n    signal.signal(signal.SIGINT, lambda signum, frame: not print(\"received SIGINT\"))\n\nif options['--wait']:\n    print(\"Waiting {0} ...\".format(options['--wait']))\n    sleep(float(options['--wait']))\n\nargs = [options['COMMAND']] + options['ARGS']\n\nprint(\"{1}:Launching {0} ...\".format(args, os.getpid()))\n\nclass mydaemon(Daemon):\n\n    def run(self):\n        subprocess.Popen(args, start_new_session=True)\n\nd = mydaemon()\n\nif options['--exit']:\n    d.start(int(options['--exit']))\nelse:\n    d.start()\n"
  },
  {
    "path": "tests/bin/daemonutil.py",
    "content": "\"\"\"\n\nGeneric linux daemon base class for python 3.x.\n\nFrom: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/\n\nThank you!\n\n\"\"\"\n\nimport sys, os, time, atexit, signal\n\nclass Daemon:\n    \"\"\"A generic daemon class.\n\n    Usage: subclass the daemon class and override the run() method.\"\"\"\n\n    def __init__(self, pidfile = None): \n        self.pidfile = pidfile\n    \n    def daemonize(self, exitwith = 0):\n        \"\"\"Deamonize class. UNIX double fork mechanism.\"\"\"\n\n        sys.stdout.flush()\n        sys.stderr.flush()\n\n        try: \n            pid = os.fork() \n            if pid > 0:\n                # exit first parent\n                sys.exit(exitwith) \n        except OSError as err: \n            sys.stderr.write('fork #1 failed: {0}\\n'.format(err))\n            sys.exit(1)\n    \n        # decouple from parent environment\n        os.chdir('/') \n        os.setsid() \n        os.umask(0) \n    \n        # do second fork\n        try: \n            pid = os.fork() \n            if pid > 0:\n\n                # exit from second parent\n                sys.exit(0) \n        except OSError as err: \n            sys.stderr.write('fork #2 failed: {0}\\n'.format(err))\n            sys.exit(1) \n    \n        # redirect standard file descriptors\n        sys.stdout.flush()\n        sys.stderr.flush()\n        si = open(os.devnull, 'r')\n        so = open(os.devnull, 'a+')\n        se = open(os.devnull, 'a+')\n\n        os.dup2(si.fileno(), sys.stdin.fileno())\n        os.dup2(so.fileno(), sys.stdout.fileno())\n        os.dup2(se.fileno(), sys.stderr.fileno())\n    \n        # write pidfile\n        if self.pidfile:\n            atexit.register(self.delpid)\n\n            pid = str(os.getpid())\n            with open(self.pidfile,'w+') as f:\n                f.write(pid + '\\n')\n    \n    def delpid(self):\n        os.remove(self.pidfile)\n\n    def start(self, exitwith = 0):\n        
\"\"\"Start the daemon.\"\"\"\n\n        # Check for a pidfile to see if the daemon already runs\n        if self.pidfile:\n            try:\n                with open(self.pidfile,'r') as pf:\n\n                    pid = int(pf.read().strip())\n            except IOError:\n                pid = None\n\n            if pid:\n                message = \"pidfile {0} already exist. \" + \\\n                        \"Daemon already running?\\n\"\n                sys.stderr.write(message.format(self.pidfile))\n                sys.exit(1)\n        \n        # Start the daemon\n        self.daemonize(exitwith)\n        self.run()\n\n    def stop(self):\n        \"\"\"Stop the daemon.\"\"\"\n\n        assert self.pidfile, \"Requires pidfile to use stop()\"\n\n        # Get the pid from the pidfile\n        try:\n            with open(self.pidfile,'r') as pf:\n                pid = int(pf.read().strip())\n        except IOError:\n            pid = None\n    \n        if not pid:\n            message = \"pidfile {0} does not exist. \" + \\\n                    \"Daemon not running?\\n\"\n            sys.stderr.write(message.format(self.pidfile))\n            return # not an error in a restart\n\n        # Try killing the daemon process    \n        try:\n            while 1:\n                os.kill(pid, signal.SIGTERM)\n                time.sleep(0.1)\n        except OSError as err:\n            e = str(err.args)\n            if e.find(\"No such process\") > 0:\n                if os.path.exists(self.pidfile):\n                    os.remove(self.pidfile)\n            else:\n                print (str(err.args))\n                sys.exit(1)\n\n    def restart(self):\n        \"\"\"Restart the daemon.\"\"\"\n        self.stop()\n        self.start()\n\n    def run(self):\n        \"\"\"You should override this method when you subclass Daemon.\n        \n        It will be called after the process has been daemonized by \n        start() or restart().\"\"\"\n"
  },
  {
    "path": "tests/bin/envcp",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.envcp import main_entry\nmain_entry()\n"
  },
  {
    "path": "tests/bin/expect-lite-command-run",
    "content": "#!/bin/bash\n\nfunction RUNTASK() { \n    expect-lite-image-run --task $*\n}\n\nfunction RUNIMAGE() { \n    export CHTEST_DOCKER_CMD=\"sdnotify-exec --noproxy --verbose --wait-stop docker run %{SOCKET_ARGS}\"\n    export CHTEST_DOCKER_OPTS=$*\n    expect-lite-image-run\n}\n\nfunction RUNIMAGE_READY() { \n    export CHTEST_DOCKER_CMD=\"sdnotify-exec --noproxy --verbose --wait-ready docker run %{SOCKET_ARGS}\"\n    export CHTEST_DOCKER_OPTS=$*\n    expect-lite-image-run\n}\n\nexport -f RUNTASK RUNIMAGE RUNIMAGE_READY\nbash -i\n"
  },
  {
    "path": "tests/bin/expect-lite-image-run",
    "content": "#!/bin/bash\n\noptions=\"\"\nif [ \"$CHTEST_CONTAINER_NAME\" != \"\" ]; then\n  options=\"--name $CHTEST_CONTAINER_NAME\"\nfi\n\nif [[ \" $CHTEST_DOCKER_OPTS \" != *\\ -d* ]]; then\n  options=\"$options -i -t --rm\"\nfi\n\nif [ \"$CHTEST_DOCKER_CMD\" == \"\" ]; then\n  CHTEST_DOCKER_CMD=\"docker run\"\nfi\n\nSELINUX_FLAG=$(sestatus 2>/dev/null | fgrep -q enabled && echo :z)\n\nexec $CHTEST_DOCKER_CMD $options \\\n    -v /home:/home$SELINUX_FLAG \\\n    -e TESTHOME=$TESTHOME \\\n    -e TESTDIR=$TESTDIR \\\n    -e CHTEST_HOME=$CHTEST_HOME \\\n    $CHTEST_DOCKER_OPTS \\\n    --entrypoint $TESTHOME/bin/chaperone \\\n    $CHTEST_IMAGE \\\n    --create $USER:$TESTHOME \\\n    --default-home $CHTEST_HOME \\\n    --config $CHTEST_HOME/../chaperone.conf \\\n    $*\n"
  },
  {
    "path": "tests/bin/expect-test-command",
    "content": "#!/bin/bash\n\nexport EL_SHELL=\"expect-lite-command-run\"\nexec expect-lite $1\n"
  },
  {
    "path": "tests/bin/expect-test-image",
    "content": "#!/bin/bash\n\nexport EL_SHELL=\"expect-lite-image-run\"\nexec expect-lite $1\n"
  },
  {
    "path": "tests/bin/get-serial",
    "content": "#!/bin/bash\n\nserfile=$CHTEST_HOME/serial.dat\nif [ ! -f $serfile ]; then\n  current=0\nelse\n  current=$(cat $serfile)\nfi\n\nlet current=current+1\n\necho $current >$serfile\necho $current\n"
  },
  {
    "path": "tests/bin/is-running",
    "content": "#!/bin/bash\n\nps -C $1 >/dev/null && exit 0\nexit 1\n"
  },
  {
    "path": "tests/bin/kill-from-pidfile",
    "content": "#!/bin/bash\n\npidfile=$1\n\nif [ -f $pidfile ]; then\n  sudo kill `cat $1`\nfi\n"
  },
  {
    "path": "tests/bin/logecho",
    "content": "#!/bin/bash\n\nif [ \"$SERVICE_NAME\" == \"\" ]; then\n  SERVICE_NAME=\"pid$$\"\nfi\n\nlogger -p info -t $SERVICE_NAME \"$*\"\n"
  },
  {
    "path": "tests/bin/proctool",
    "content": "#!/usr/bin/python3\n\n\"\"\"\nTool to create processes for various purposes.\n\nUsage:\n    proctool [--dump] [--hang] [--wait=seconds] [--ignore-signals] [--exit=code] [--notify=CMD] [MESSAGE]\n\"\"\"\n\nimport signal\nimport sys\nfrom time import sleep\nfrom docopt import docopt\n\nimport os\n\noptions = docopt(__doc__)\n\nif options['MESSAGE']:\n   sys.stdout.write('proctool says: ' + options['MESSAGE'] + \"\\n\")\n   sys.stdout.flush()\n\nif options['--notify']:\n   cmd = options['--notify']\n   os.system('sdnotify ' + cmd)\n\nif options['--dump']:\n    print(\"UID:{0} GID:{1} PID:{2} Environment:\".format(os.getuid(), os.getgid(), os.getpid()))\n    for k,v in os.environ.items():\n        print(\" {0}={1}\".format(k,v))\n\nif options['--ignore-signals']:\n    signal.signal(signal.SIGTERM, lambda signum, frame: print(\"ignoring SIGTERM\"))\n    signal.signal(signal.SIGHUP, lambda signum, frame: print(\"ignoring SIGHUP\"))\n    signal.signal(signal.SIGINT, lambda signum, frame: print(\"ignoring SIGINT\"))\nelse:\n    signal.signal(signal.SIGTERM, lambda signum, frame: print(\"received SIGTERM\") or exit() )\n    signal.signal(signal.SIGHUP, lambda signum, frame: not print(\"received SIGHUP\"))\n    signal.signal(signal.SIGINT, lambda signum, frame: not print(\"received SIGINT\"))\n\nif options['--wait']:\n    sleep(float(options['--wait']))\n\nif options['--hang']:\n    while True:\n        sleep(100)\n\nif options['--exit']:\n    exit(int(options['--exit']))\n"
  },
  {
    "path": "tests/bin/read_from_port",
    "content": "#!/bin/bash\n\nif nc --version >/dev/null 2>&1; then\n   # nmap.org accepts --version and has different syntax (lovely eh)\n   nc --recv-only $*\nelse\n   # bsd version\n   nc $*\nfi\n"
  },
  {
    "path": "tests/bin/sdnotify",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.sdnotify import main_entry\nmain_entry()\n"
  },
  {
    "path": "tests/bin/sdnotify-exec",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.sdnotify_exec import main_entry\nmain_entry()\n"
  },
  {
    "path": "tests/bin/talkback",
    "content": "#!/usr/bin/python3\n# Simple echo script to test inetd\n\nimport sys\n\nfor line in sys.stdin:\n  if \"EXIT\" in line:\n    exit(0)\n  print(\"Echoing: \", line)\n  sys.stdout.flush()\n"
  },
  {
    "path": "tests/bin/telchap",
    "content": "#!/usr/bin/python3\n\nimport sys\nimport os\n\n# Assure we use the local package for testing and development\nsys.path[0] = os.path.dirname(os.path.dirname(sys.path[0]))\n\nfrom chaperone.exec.telchap import main_entry\nmain_entry()\n"
  },
  {
    "path": "tests/bin/test-driver",
    "content": "#!/bin/bash\n# Assumes the current directory contains executable files and runs them all.\n\nfunction relpath() { python -c \"import os,sys;print(os.path.relpath(*(sys.argv[1:])))\" \"$@\"; }\n\nfunction extract_title() {\n    script=$1\n    title=`sed -n 's/^#TITLE: *//p' $script`\n    [ \"$title\" == \"\" ] && title=$script\n    echo $title\n}\n\nexport CHTEST_CONTAINER_NAME=CHAP-TEST-CONTAINER-$$\n\nfunction kill_test_container() {\n    sleep 1  # Sometimes it takes docker a while to actually kill the container.\n    if docker inspect $CHTEST_CONTAINER_NAME >/dev/null 2>&1; then\n      echo Container still running: Forcing removal\n      docker kill $CHTEST_CONTAINER_NAME >/dev/null\n      docker rm -v $CHTEST_CONTAINER_NAME >/dev/null\n    fi\n}\n\nshellmode=0\nif [ \"$1\" == '--shell' ]; then\n  shellmode=1\n  shift\nfi\n\nexport TESTDIR=$(readlink -f $1)\nexport TESTHOME=$PWD\nexport CHTEST_HOME=$TESTDIR/_temp-$$_\n\nif [ \"$CHTEST_LOGDIR\" == \"\" ]; then\n  export CHTEST_LOGDIR=$TESTHOME/test_logs\nfi\n\nif [ \"$2\" == \"\" ]; then\n  IMAGE_NAME=chapdev/chaperone-lamp\nelse\n  IMAGE_NAME=$2\nfi\n\nexport CHTEST_IMAGE=$IMAGE_NAME\n\nif [ ! -d $TESTDIR ]; then\n   exit\nfi\n\nif [ -e $CHTEST_HOME ]; then\n   echo \"Can't continue... $CHTEST_HOME already exists.\"\n   exit 1\nfi\n\nif [ \"`which expect-lite`\" == \"\" ]; then\n   echo \"expect-lite must be installed for tests to run\"\n   exit 1\nfi\n\nmkdir -p $CHTEST_LOGDIR\n\nif [ $shellmode == 1 ]; then\n  mkdir $CHTEST_HOME\n  expect-lite-image-run --disable-services /bin/bash\n  rm -rf $CHTEST_HOME\n  exit\nfi\n\n(\n  exitcode=0\n  for sf in $( find $TESTDIR -type f -executable \\! 
-name '*~' ); do\n    if [ \"$CHTEST_ONLY_ENDSWITH\" != \"\" -a \"${sf%*/$CHTEST_ONLY_ENDSWITH}\" == \"$sf\" ]; then\n\tcontinue\n    fi\n    mkdir $CHTEST_HOME; cd $CHTEST_HOME\n    logfile=$CHTEST_LOGDIR/$(basename $TESTDIR)_${sf/*\\//}.log\n    rm -f $logfile.err\n    title=$(extract_title $sf)\n    echo \"RUNNING TEST: $title\"\n    echo \"\" >>$logfile.err\n    echo \"##\" >>$logfile.err\n    echo \"## RUNNING TEST: $title\" >>$logfile.err\n    echo \"##               $sf\" >>$logfile.err\n    echo \"##\" >>$logfile.err\n    if ! $sf >>$logfile.err 2>&1; then\n      echo \"TEST FAILED: $sf (see $(relpath $logfile.err $TESTHOME))\"\n      exitcode=2\n    else\n      mv $logfile.err $logfile\n    fi\n    kill_test_container\n    cd $TESTDIR; [ ! -f keep.tempdir ] && rm -rf $CHTEST_HOME\n  done\n  if [ $exitcode != 0 ]; then\n      echo \"Some tests failed in: $TESTDIR\"\n  fi\n  exit $exitcode\n)\n"
  },
  {
    "path": "tests/el-tests/basic-1/chaperone.conf",
    "content": "settings: {\n  env_set: {\n    PATH: \"$(TESTHOME)/bin:$(PATH)\",\n  }\n}\n\necho.service: {\n  command: \"echo first output\",\n  stdout: inherit,\n}\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/basic-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Test simplest possible commmand service\n>RUNIMAGE\n<first output\n<queueing 'READY=1'\n<info: STOP notification\n"
  },
  {
    "path": "tests/el-tests/basic-1/test-002.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Test simplest possible task\n\n>RUNTASK proctool testing-123\n<proctool says: testing-123\n"
  },
  {
    "path": "tests/el-tests/cron-1/chaperone.conf",
    "content": "settings: {\n   env_set: { PATH: \"$(TESTHOME)/bin:$(PATH)\" }\n}\n\ncron1-echo.service: {\n  type: cron,\n  enabled: \"$(ENABLE_CRON1:-false)\",\n  interval: \"* * * * * */10\",\n  command: \"echo from cron1\",\n}\n\n# ENABLE_APACHE4: Complex timing.  Apache running in the foreground, but has untracked\n# processes.  Cron job simulates what logrotate does, stopping and starting it.  The key\n# here is that Chaperone shouldn't shut the system down inadvertently as processes terminate\n# and restart.\n\ntest4-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest4-simrotate.service: {\n  type: cron,\n  command: \"bash $(_CHAP_CONFIG_DIR)/simulate-rotate.sh test4-apache\",\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  interval:  \"* * * * * */10\",\n  service_groups: IDLE,\n}\n\n# ENABLE_APACHE5: Same deal, but this time Chaperone knows about apache and telchap can do its job\n\ntest5-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE5:-false)\",\n  command: \"service apache2 start\",\n  pidfile: \"/run/apache2/apache2.pid\",\n  uid: root,\n}\n\ntest5-simrotate.service: {\n  type: cron,\n  command: \"bash $(_CHAP_CONFIG_DIR)/simulate-rotate.sh test5-apache telchap\",\n  enabled: \"$(ENABLE_APACHE5:-false)\",\n  interval:  \"* * * * * */10\",\n  service_groups: IDLE,\n}\n\n# ENABLE_APACHE6: Just a test to be sure we can kill Apache AND chaperone with a non-scheduled background job.\n\ntest6-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE6:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest6-simrotate.service: {\n  type: oneshot,\n  command: \"bash -c 'sleep 5; kill-from-pidfile /run/apache2/apache2.pid'\",\n  enabled: \"$(ENABLE_APACHE6:-false)\",\n  interval:  \"* * * * * */10\",\n  service_groups: IDLE,\n}\n\n# ENABLE_APACHE7: Just a test to be sure we can kill Apache AND chaperone with a non-scheduled background 
job.\n\ntest7-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE7:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest7-simrotate.service: {\n  type: cron,\n  command: \"bash -c 'sleep 5; kill-from-pidfile /run/apache2/apache2.pid'\",\n  enabled: \"$(ENABLE_APACHE7:-false)\",\n  interval:  \"* * * * * */8\",\n  service_groups: IDLE,\n}\n\n# ENABLE_APACHE8: Cron job kills apache and disables self, container should die\n\ntest8-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE8:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest8-simrotate.service: {\n  type: cron,\n  command: \"bash -c 'sleep 5; kill-from-pidfile /run/apache2/apache2.pid; telchap stop test8-simrotate'\",\n  enabled: \"$(ENABLE_APACHE8:-false)\",\n  interval:  \"* * * * * */8\",\n  service_groups: IDLE,\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/cron-1/simulate-rotate.sh",
    "content": "echo simulating rotation\necho SIMULATE-ROTATE SERIAL NUMBER: $(get-serial)\nservice=$1\ntelchap=$2\n$(is-running apache2) && echo apache is running || echo apache is NOT running\nps axf\nif [ \"$telchap\" != \"telchap\" ]; then\n  sudo kill `cat /run/apache2/apache2.pid`  # chaperone doesn't know this\n  echo DIRECT KILL of $service\nelse\n  echo Use TELCHAP to tell Chaperone to kill $service\nfi\ntelchap reset $service\nsleep 1\n$(is-running apache2) && echo apache is running || echo apache is NOT running\nps axf\ntelchap start $service\nsleep 1\n$(is-running apache2) && echo apache is running || echo apache is NOT running\nps axf\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Cron services - Simple echo\n\n>(sleep 15; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_CRON1=true\n@30\n<info: READY=1\n<system will remain active\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-004.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Complex Apache and background restart - timing tests for process termination\n\n@30\n>(sleep 25; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE4=true\n<Starting web server apache2\n<info: READY=1\n<ROTATE SERIAL NUMBER: 1\n<apache is running\n<services reset\n<apache is NOT running\n<Starting web server apache2\n<ROTATE SERIAL NUMBER: 2\n<apache is running\n<KILL ME NOW\n>^C\n<<info: error notification (4)\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-005.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Complex Apache and background restart - process termination with PIDFILE\n\n@40\n>(sleep 25; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE5=true\n<Starting web server apache2\n<info: READY=1\n<ROTATE SERIAL NUMBER: 1\n<apache is running\n<services reset\n<apache is NOT running\n<Starting web server apache2\n<ROTATE SERIAL NUMBER: 2\n<apache is running\n<KILL ME NOW\n>^C\n<<info: error notification (4)\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-006.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Oneshot job kills Apache - be sure Chaperone terminates\n\n>RUNIMAGE -e ENABLE_APACHE6=true\n<test6-apache.service successfully started\n<info: READY=1\n<Caught subprocess termination from unknown pid\n<queueing 'STOPPING=1'\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-007.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Cron job killing apache keeps container running (cron scheduled)\n\n@30\n>(sleep 20; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE7=true\n<test7-apache.service successfully started\n<Caught subprocess termination from unknown pid\n<system will remain active since there are scheduled services\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/cron-1/test-008.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Cron job disables self plus kills Apache, container should die\n\n@30\n>RUNIMAGE -e ENABLE_APACHE8=true\n<test8-apache.service successfully started\n<Caught subprocess termination from unknown pid\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/exitkills-1/chaperone.conf",
    "content": "settings: {\n  env_set: { PATH: \"$(TESTHOME)/bin:$(PATH)\" }\n}\n\ntest1-keeper.service: {\n  command: \"bash -c 'logecho lagging task sleeping for 5 minutes... ; sleep 600'\",\n}\n\ntest1-kills.service: {\n  type: forking,\n  enabled: true,\n  command: \"daemon bash -c 'echo $$ >/tmp/kid.pid; logecho daemon running; sleep 10; logecho wait completed: daemon exiting'\",\n  pidfile: \"/tmp/kid.pid\",\n  exit_kills: true,\n  service_groups: IDLE,\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/exitkills-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Forking service - combined with exit_kills\n\n@20\n>RUNIMAGE\n<: daemon running\n<kills.service changing PID to\n<: daemon exiting\n<terminated with exit_kills enabled\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/fork-1/chaperone.conf",
    "content": "settings: {\n  env_set: { PATH: \"$(TESTHOME)/bin:$(PATH)\" }\n}\n\ntest1-exit1.service: {\n  type: forking,\n  enabled: \"$(ENABLE_EXIT1:-false)\",\n  command: \"daemon bash -c 'logecho daemon running; sleep 5; logecho daemon exiting'\",\n}\n\ntest1-exit1b.service: {\n  type: forking,\n  enabled: \"$(ENABLE_EXIT1B:-false)\",\n  command: \"daemon --exit 3 bash -c 'logecho daemon running; sleep 5; logecho daemon exiting'\",\n}\n\n# The test3 apache service is simple, but forks, so Chaperone is technically unaware of\n# its children.\n\ntest3-apache.service: {\n  type: forking,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest3-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test3-apache; ps ax'\",\n  service_groups: \"IDLE\",\n}\n\n# The test4 apache service uses a pidfile when it forks so Chaperone is aware of its pid.\n\ntest4-apache.service: {\n  type: forking,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  pidfile: /run/apache2/apache2.pid,\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest4-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test4-apache; sleep 1; ps ax | ps -C apache2 || echo apache not running'\",\n  service_groups: \"IDLE\",\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/fork-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Forking service - spawn daemon normally\n\n>RUNIMAGE -e ENABLE_EXIT1=true\n<test1-exit1.service successfully started\n<: daemon exiting\n<Final termination phase\n\n"
  },
  {
    "path": "tests/el-tests/fork-1/test-001b.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Forking service - spawn daemon - error on spawn\n\n>RUNIMAGE -e ENABLE_EXIT1B=true\n<echo daemon running\n<test1-exit1b.service received exception during attempted start\n<system startup cancelled\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/fork-1/test-003.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Forking services - no kill for untracked processes (using apache)\n\n>(sleep 8; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE3=true\n<test3-apache.service successfully started\n<test3-apache.service received reset\n</usr/sbin/apache2 -k start\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/fork-1/test-004.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Forking services - track processes using pidfile (using apache)\n\n>RUNIMAGE -e ENABLE_APACHE4=true\n<test4-apache.service changing PID to\n<test4-apache.service successfully started\n<test4-apache.service received reset\n<test4-apache.service got explicit return code\n<: apache not running\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/inetd-1/chaperone.conf",
    "content": "settings: {\n  env_set: { PATH: \"$(TESTHOME)/bin:$(PATH)\" }\n}\n\n# INETD1: simple test\n\ninetd1.service: {\n  enabled: \"$(ENABLE_INETD1:-false)\",\n  type: inetd,\n  port: 8080,\n  command: \"echo hello from port 8080\",\n}\n\n# INETD2: disables both this service and inetd1 which will cause container exit\n\ninetd2.service: {\n  enabled: \"$(ENABLE_INETD2:-false)\",\n  type: inetd,\n  port: 8443,\n  command: \"bash -c 'telchap stop inetd1 inetd2; echo disabled both'\",\n  after: inetd1.service,  # so log entries are in the right order\n}\n  \n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/inetd-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: inetd - simple exit service, keeps running\n\n# Start the image and capture the container ID and port\n# (Note: the initial \"echo\" below is needed to take care of a timing issue with 'docker run')\n\n>echo running...; echo CID:`RUNIMAGE_READY -d -P -e ENABLE_INETD1=true`\n+$cid=CID:([0-9a-f]{32,})\n>docker port $cid\n+$ourport=8080/tcp -> 0.0.0.0:([0-9]+)\n\n# Fire up an inetd connection inside the container and verify it works (ready assured)\n\n>read_from_port localhost $ourport\n<hello from port 8080\n\n# Kill and inspect logs\n\n>sleep 3\n>docker stop $cid\n>docker logs $cid\n<inetd1.service listening on port 8080\n<received connection on port 8080\n<system will remain active since there are scheduled services: inetd1.service\n<Request made to kill\n<Final termination phase\n\n# Clean up\n\n>docker rm -v $cid\n"
  },
  {
    "path": "tests/el-tests/inetd-1/test-002.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: inetd - second service disables both\n\n# Start the image and capture the container ID and port\n# (Note: the initial \"echo\" below is needed to take care of a timing issue with 'docker run')\n\n>echo running...; echo CID:`RUNIMAGE_READY -d -P -e ENABLE_INETD1=true -e ENABLE_INETD2=true`\n+$cid=CID:([0-9a-f]{32,})\n>docker port $cid\n+$ourport=8080/tcp -> 0.0.0.0:([0-9]+)\n>docker port $cid\n+$otherport=8443/tcp -> 0.0.0.0:([0-9]+)\n\n# Fire up an inetd connection inside the container and verify it works (ready assured)\n\n>read_from_port localhost $ourport\n<hello from port 8080\n\n# Second call causes container termination\n>sleep 2\n>read_from_port localhost $otherport\n\n# Kill and inspect logs\n\n>sleep 3\n>docker logs $cid\n<inetd1.service listening on port 8080\n<inetd2.service listening on port 8443\n<received connection on port 8080\n<system will remain active since there are scheduled services\n<received connection on port 8443\n<no child processes present\n<Final termination phase\n\n# Clean up\n\n>docker rm -v $cid\n"
  },
  {
    "path": "tests/el-tests/notify-1/chaperone.conf",
    "content": "settings: {\n  env_set: { PATH: \"$(TESTHOME)/bin:$(PATH)\", SERVICE_NAME: \"$(_CHAP_SERVICE)\" },\n  process_timeout: 5,\n}\n\ntest1-exit1.service: {\n  type: notify,\n  enabled: \"$(ENABLE_EXIT1:-false)\",\n  command: \"daemon bash -c 'logecho daemon running; sleep 3; logecho daemon exiting'\",\n}\n\ntest1-exit1b.service: {\n  type: notify,\n  enabled: \"$(ENABLE_EXIT1B:-false)\",\n  command: \"daemon --exit 3 bash -c 'logecho daemon running; sleep 3; logecho daemon exiting'\",\n}\n\ntest1-exit1c.service: {\n  type: notify,\n  enabled: \"$(ENABLE_EXIT1C:-false)\",\n  command: \"daemon --exit 3 --wait 8 bash -c 'logecho daemon running; sleep 3; logecho daemon exiting'\",\n}\n\ntest1-exit1d.service: {\n  type: notify,\n  enabled: \"$(ENABLE_EXIT1D:-false)\",\n  command: \"daemon bash -c 'logecho daemon running; sleep 3; sdnotify ERRNO=55'\",\n}\n\ntest1-exit1e.service: {\n  type: notify,\n  enabled: \"$(ENABLE_EXIT1E:-false)\",\n  process_timeout: 15,\n  command: \"daemon bash -c 'logecho daemon running; sleep 3; sdnotify --ready --pid $$; sleep 2'\",\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/notify-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Notify service - spawn daemon normally - never gets notified\n\n>RUNIMAGE -e ENABLE_EXIT1=true\n<test1-exit1.service attempting start\n<notify service 'test1-exit1.service' did not receive ready notification\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/notify-1/test-001b.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Notify service - spawn daemon - error during grace period\n\n>RUNIMAGE -e ENABLE_EXIT1B=true\n<echo daemon running\n<test1-exit1b.service failed on start-up with result '<ProcStatus exit_status=3>'\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/notify-1/test-001c.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Notify service - spawn daemon - error while waiting for notify\n\n>RUNIMAGE -e ENABLE_EXIT1C=true\n<echo daemon running\n<'test1-exit1c.service' did not receive ready notification\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/notify-1/test-001d.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Notify service - spawn daemon - error from notifying process\n\n>RUNIMAGE -e ENABLE_EXIT1D=true\n<: daemon running\n<test1-exit1d.service failed with reported error <ProcStatus exit_status=55>\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/notify-1/test-001e.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Notify service - spawn daemon - normal ready notification\n\n>RUNIMAGE -e ENABLE_EXIT1E=true\n<: daemon running\n<test1-exit1e.service changing PID to\n@15\n<test1-exit1e.service got explicit return code\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-1/chaperone.conf",
    "content": "test1-exit1.service: {\n  type: simple,\n  enabled: \"$(ENABLE_EXIT1:-false)\",\n  command: \"echo exit immediately\",\n}\n\n# The test3 apache service is simple, but forks, so Chaperone is technically unaware of\n# its children.\n\ntest3-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest3-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test3-apache; ps ax'\",\n  service_groups: \"IDLE\",\n}\n\n# The test4 apache service uses a pidfile when it forks so Chaperone is aware of its pid.\n\ntest4-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  pidfile: /run/apache2/apache2.pid,\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest4-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test4-apache; sleep 1; ps ax | ps -C apache2 || echo apache not running'\",\n  service_groups: \"IDLE\",\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/simple-1/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - exit immediately\n\n>RUNIMAGE -e ENABLE_EXIT1=true\n<test1-exit1.service successfully started\n<test1-exit1.service exit status\n<no child processes present\n<Final termination phase\n\n"
  },
  {
    "path": "tests/el-tests/simple-1/test-002.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - all processes disabled\n\n>RUNIMAGE\n<test1-exit1.service not enabled\n<No service startups attempted\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-1/test-003.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - no kill for untracked processes (using apache)\n\n>(sleep 8; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE3=true\n<test3-apache.service successfully started\n<test3-apache.service exit status\n<test3-apache.service received reset\n</usr/sbin/apache2 -k start\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-1/test-004.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - track processes using pidfile (using apache)\n\n>RUNIMAGE -e ENABLE_APACHE4=true\n<test4-apache.service changing PID to\n<test4-apache.service successfully started\n<test4-apache.service exit status\n<test4-apache.service received reset\n<: apache not running\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-2/chaperone.conf",
    "content": "settings: {\n  detect_exit: false,\n}\n\ntest1-exit1.service: {\n  type: simple,\n  enabled: \"$(ENABLE_EXIT1:-false)\",\n  command: \"echo exit immediately\",\n}\n\n# The test3 apache service is simple, but forks, so Chaperone is technically unaware of\n# its children.\n\ntest3-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest3-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE3:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test3-apache; ps ax'\",\n  service_groups: \"IDLE\",\n}\n\n# The test4 apache service uses a pidfile when it forks so Chaperone is aware of its pid.\n\ntest4-apache.service: {\n  type: simple,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  pidfile: /run/apache2/apache2.pid,\n  command: \"service apache2 start\",\n  uid: root,\n}\n\ntest4-apache-verify.service: {\n  type: oneshot,\n  enabled: \"$(ENABLE_APACHE4:-false)\",\n  command: \"bash -c 'sleep 2; telchap stop test4-apache; sleep 1; ps ax | ps -C apache2 || echo apache not running'\",\n  service_groups: \"IDLE\",\n}\n\n# Debugging output for all\n\ndefault.logging: {\n  selector: \"*.debug\",\n  stdout: true,\n}\n"
  },
  {
    "path": "tests/el-tests/simple-2/test-001.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - exit immediately - no exit detection\n\n>(sleep 5; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_EXIT1=true\n<test1-exit1.service successfully started\n<test1-exit1.service exit status\n<KILL ME NOW\n>^C\n<No processes remain when attempting to kill system\n<Final termination phase\n\n"
  },
  {
    "path": "tests/el-tests/simple-2/test-002.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - all processes disabled - no exit detection\n\n>(sleep 5; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE\n<test1-exit1.service not enabled\n<KILL ME NOW\n>^C\n<No processes remain when attempting to kill system\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-2/test-003.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - no kill for untracked processes (using apache) - no exit detection\n\n>(sleep 8; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE3=true\n<test3-apache.service successfully started\n<test3-apache.service exit status\n<test3-apache.service received reset\n</usr/sbin/apache2 -k start\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/el-tests/simple-2/test-004.elt",
    "content": "#!/usr/bin/env expect-test-command\n#TITLE: Simple services - track processes using pidfile (using apache) - no exit detection\n\n>(sleep 8; echo \"K\"\"ILL ME NOW\")&\n>RUNIMAGE -e ENABLE_APACHE4=true\n<test4-apache.service changing PID to\n<test4-apache.service successfully started\n<test4-apache.service exit status\n<test4-apache.service received reset\n<: apache not running\n<KILL ME NOW\n>^C\n<Final termination phase\n"
  },
  {
    "path": "tests/env_expand.py",
    "content": "from prefix import *\n\nfrom chaperone.cutil.env import Environment\n\nENV1 = {\n    \"HOME\": '/usr/garyw',\n    \"APPS-DIR\": '$(HOME)/apps',\n    \"ANOTHER\": '$(APPS-DIR)/theap',\n    \"RECUR2\": '$(RECUR1)..$(APPS-DIR)',\n    \"RECUR1\": 'two-$(RECUR3)-$(HOME)',\n    \"REF-RECUR\": \"$(ANOTHER) BUT NOT $(RECUR1)\",\n    \"RECUR3\": 'three:$(RECUR9)',\n    \"BTEST1\": '$(`echo \"12\" \"34${HOME}\"`)',\n    \"BTEST2\": '$(`echo \"12\" \"34${HOME}\"` x)',\n    \"BTEST3\": '$(`echo HO`ME)',\n}\n\nRESULT1 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('BTEST1', '12 34/usr/garyw'), ('BTEST2', '$(12 34/home/garyw x)'), ('BTEST3', '$(HOME)'), ('HOME', '/usr/garyw'), ('RECUR1', 'two-three:$(RECUR9)-/usr/garyw'), ('RECUR2', 'two-three:$(RECUR9)-/usr/garyw../usr/garyw/apps'), ('RECUR3', 'three:$(RECUR9)'), ('REF-RECUR', '/usr/garyw/apps/theap BUT NOT two-three:$(RECUR9)-/usr/garyw')]\"\n\nENV2 = {\n    \"HOME\": '/usr/garyw',\n    \"APPS-DIR\": '$(HOME)/apps',\n    \"ANOTHER\": '$(APPS-DIR)/theap',\n    \"RECUR2\": '$(RECUR1)..$(APPS-DIR)',\n    \"RECUR1\": 'two-$(RECUR3)-$(HOME)',\n    \"REF-RECUR\": \"$(ANOTHER) BUT NOT $(RECUR1)\",\n    \"RECUR3\": 'three:$(RECUR2)',\n}\n\n#?RESULT2 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('HOME', '/usr/garyw'), ('RECUR1', 'two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw'), ('RECUR2', 'two-$(RECUR3)-$(HOME)../usr/garyw/apps'), ('RECUR3', 'three:two-$(RECUR3)-$(HOME)../usr/garyw/apps'), ('REF-RECUR', '/usr/garyw/apps/theap BUT NOT two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw')]\"\n\nRESULT2 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('HOME', '/usr/garyw'), ('RECUR1', 'two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw'), ('RECUR2', 'two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw../usr/garyw/apps'), ('RECUR3', 
'three:two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw../usr/garyw/apps'), ('REF-RECUR', '/usr/garyw/apps/theap BUT NOT two-three:two-$(RECUR3)-$(HOME)../usr/garyw/apps-/usr/garyw')]\"\n\nENV3 = {\n    \"HOME\": '/usr/garyw',\n    \"APPS-DIR\": '$(HOME)/apps',\n    \"ANOTHER\": '$(APPS-DIR)/theap',\n    \"TWO\": '$(HOME) and $(APPS-DIR)',\n    \"MAYBE1\": '$(HOAX)/foo',\n    \"MAYBE10\": '$(HOAX:-$(MAYBE11:-11here))/foo',\n    \"MAYBE11\": 'to-$(MAYBE10:-10gone)',              # will trigger recursion\n    \"MAYBE12\": 'circA-$(MAYBE13:-10gone)',           # will trigger recursion\n    \"MAYBE13\": 'circB-$(MAYBE12:-10gone)',\n    \"MAYBE2\": '$(HOAX:-blach)/footwo',\n    \"MAYBE3\": '$(HOAX:+blach)/foo',\n    \"MAYBE4\": '$(HOME:-blach)/foo',\n    \"MAYBE5\": '$(HOME:+blach)/foo',\n    \"MAYBE6\": '$(HOAX:-$(MAYBE2))/foo',\n    \"MAYBE7\": '$(HOME:+blach.${MAYBE8:+8here})/foo',\n    \"MAYBE8\": '$(HOAX:-${MAYBE7:-7here})/foo',\n    \"MAYBE9\": '$(HOME:+blach.${MAYBE10:-10here})/foo',\n    \"HASNL\": \"Line One\\nLine Two\",\n    \"EXPNL\": \"$(HOME:+$(HASNL)\\nAnd more to go)\",\n}\n\nRESULT3 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('EXPNL', 'Line One\\\\nLine Two\\\\nAnd more to go'), ('HASNL', 'Line One\\\\nLine Two'), ('HOME', '/usr/garyw'), ('MAYBE1', '$(HOAX)/foo'), ('MAYBE10', 'to-$(HOAX:-$(MAYBE11:-11here))/foo/foo'), ('MAYBE11', 'to-to-$(HOAX:-$(MAYBE11:-11here))/foo/foo'), ('MAYBE12', 'circA-circB-circA-$(MAYBE13:-10gone)'), ('MAYBE13', 'circB-circA-circB-circA-$(MAYBE13:-10gone)'), ('MAYBE2', 'blach/footwo'), ('MAYBE3', '/foo'), ('MAYBE4', '/usr/garyw/foo'), ('MAYBE5', 'blach/foo'), ('MAYBE6', 'blach/footwo/foo'), ('MAYBE7', 'blach.8here/foo'), ('MAYBE8', 'blach.8here/foo/foo'), ('MAYBE9', 'blach.to-$(HOAX:-$(MAYBE11:-11here))/foo/foo/foo'), ('TWO', '/usr/garyw and /usr/garyw/apps')]\"\n\n#RESULT3 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('EXPNL', 'Line One\\\\nLine 
Two\\\\nAnd more to go'), ('HASNL', 'Line One\\\\nLine Two'), ('HOME', '/usr/garyw'), ('MAYBE1', '$(HOAX)/foo'), ('MAYBE10', 'to-$(HOAX:-$(MAYBE11:-11here))/foo/foo'), ('MAYBE11', 'to-$(HOAX:-$(MAYBE11:-11here))/foo'), ('MAYBE12', 'circA-circB-circA-$(MAYBE13:-10gone)'), ('MAYBE13', 'circB-circA-$(MAYBE13:-10gone)'), ('MAYBE2', 'blach/footwo'), ('MAYBE3', '/foo'), ('MAYBE4', '/usr/garyw/foo'), ('MAYBE5', 'blach/foo'), ('MAYBE6', 'blach/footwo/foo'), ('MAYBE7', 'blach.8here/foo'), ('MAYBE8', '$(HOME:+blach.${MAYBE8:+8here})/foo/foo'), ('MAYBE9', 'blach.to-$(HOAX:-$(MAYBE11:-11here))/foo/foo/foo'), ('TWO', '/usr/garyw and /usr/garyw/apps')]\"\n\nENV4 = {\n    \"HOME\": '/usr/garyw',\n    \"APPS-DIR\": '$(HOME)/apps',\n    \"ANOTHER\": '$(APPS-DIR)/theap',\n    \"TWO\": '$(HOME) and $(APPS-DIR)',\n    \"MAYBE1\": '$(HOAX)/foo',\n    \"MAYBE10\": '$(HOAX:-$(MAYBE11:-11here))/foo',\n    \"MAYBE11\": 'to-$(MAYBE10:+10gone)',             # breaks recursion\n    \"MAYBE12\": 'circA-$(MAYBE13:+10gone)',          # breaks recursion\n    \"MAYBE13\": 'circB-$(MAYBE12:-10gone)',\n    \"MAYBE2\": '$(HOAX:-blach)/footwo',\n    \"MAYBE3\": '$(HOAX:+blach)/foo',\n    \"MAYBE4\": '$(HOME:-blach)/foo',\n    \"MAYBE4B\": '$(HOME:_blach)/foo and $(HUME:_bleech)',\n    \"MAYBE5\": '$(HOME:+blach)/foo',\n    \"MAYBE6\": '$(HOAX:-$(MAYBE2))/foo',\n    \"MAYBE7\": '$(HOME:+blach.${MAYBE8:+8here})/foo',\n    \"MAYBE8\": '$(HOAX:-${MAYBE7:-7here})/foo',\n    \"MAYBE9\": '$(HOME:+blach.${MAYBE10:-10here})/foo',\n    \"UBERNEST\": 'nest:$(HOME:+$(HOAX:-inside${TWO})) and:$(ANOTHER)',\n    \"UBERNEST-DEEP\": 'nest:$(HOME:+$(HOAX:-inside$(TWO))) and:$(ANOTHER)',\n}\n\nRESULT4 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('HOME', '/usr/garyw'), ('MAYBE1', '$(HOAX)/foo'), ('MAYBE10', 'to-10gone/foo'), ('MAYBE11', 'to-10gone'), ('MAYBE12', 'circA-10gone'), ('MAYBE13', 'circB-circA-10gone'), ('MAYBE2', 'blach/footwo'), ('MAYBE3', '/foo'), ('MAYBE4', '/usr/garyw/foo'), 
('MAYBE4B', '/foo and bleech'), ('MAYBE5', 'blach/foo'), ('MAYBE6', 'blach/footwo/foo'), ('MAYBE7', 'blach.8here/foo'), ('MAYBE8', 'blach.8here/foo/foo'), ('MAYBE9', 'blach.to-10gone/foo/foo'), ('TWO', '/usr/garyw and /usr/garyw/apps'), ('UBERNEST', 'nest:inside/usr/garyw and /usr/garyw/apps and:/usr/garyw/apps/theap'), ('UBERNEST-NOT', 'nest:$(HOAX:-inside/usr/garyw and /usr/garyw/apps) and:/usr/garyw/apps/theap')]\"\n\nRESULT4 = \"[('ANOTHER', '/usr/garyw/apps/theap'), ('APPS-DIR', '/usr/garyw/apps'), ('HOME', '/usr/garyw'), ('MAYBE1', '$(HOAX)/foo'), ('MAYBE10', 'to-10gone/foo'), ('MAYBE11', 'to-10gone'), ('MAYBE12', 'circA-10gone'), ('MAYBE13', 'circB-circA-10gone'), ('MAYBE2', 'blach/footwo'), ('MAYBE3', '/foo'), ('MAYBE4', '/usr/garyw/foo'), ('MAYBE4B', '/foo and bleech'), ('MAYBE5', 'blach/foo'), ('MAYBE6', 'blach/footwo/foo'), ('MAYBE7', 'blach.8here/foo'), ('MAYBE8', 'blach.8here/foo/foo'), ('MAYBE9', 'blach.to-10gone/foo/foo'), ('TWO', '/usr/garyw and /usr/garyw/apps'), ('UBERNEST', 'nest:inside/usr/garyw and /usr/garyw/apps and:/usr/garyw/apps/theap'), ('UBERNEST-DEEP', 'nest:inside/usr/garyw and /usr/garyw/apps and:/usr/garyw/apps/theap')]\"\n\nENV4a = {\n    'PATH': '/bin',\n    'THEREPATH': '/there',\n    'ADMINVAR1': 'user',\n    'ADMINVAR2': 'none',\n}\n\nCONFIG4a = {\n    'env_set': {\n        'PATH': '/usr/local/bin:$(PATH)',\n        'ADMINVAR1': '$(ADMINVAR1:|NONE||$(ADMINVAR1:-admin))',\n        'ADMINVAR2': '$(ADMINVAR2:|NONE||$(ADMINVAR2:-admin))',\n        'ADMINVAR3': '$(ADMINVAR3:|NONE||$(ADMINVAR3:-admin))',\n    }\n}\n\nCONFIG4c = {\n    'env_set': {\n        'PATH': '/usr/python/bin:$(PATH)',\n        'PYPATH': '/pythonlibs:$(PYPATH)',\n        'MISCPATH': '/mislibs$(MISCPATH:+:)$(MISCPATH)',\n        'PYAGAIN': '/mislibs$(PYPATH:+:)$(PYPATH)',\n        'THEREPATH': '/mislibs$(THEREPATH:+:)$(THEREPATH)',\n    }\n}\n\nENV7 = {\n    \"FIRSTPORT\": '999',\n    \"ALTPORT\": \"777\",\n}\n\nCONFIG7a = {\n    'env_set': {\n        
\"FIRSTPORT\": '$(FIRSTPORT:-443)',\n        \"SECONDPORT\": '$(SECONDPORT:-443)',\n        \"THIRDPORT\": '$(FIRSTPORT:-443)',\n        \"FOURTHPORT\": '$(ALTPORT:-443)',\n    }\n}\n\nRESULT7 = \"[('ALTPORT', '777'), ('FIRSTPORT', '999'), ('FOURTHPORT', '777'), ('SECONDPORT', '443'), ('THIRDPORT', '999')]\"\n\nENV8 = {\n    \"ONE\": \"number-1\",\n    \"TWO\": \"number-2\",\n    \"THREE\": \"number-3\",\n    \"IFONE\": \"set $(ONE:|onlyifone)\",\n    \"IFNOTONE\": \"ONE: $(ONE:|is_set|not_set)\",\n    \"IFNOTXXX\": \"XXX: $(XXX:|is_set|not_set)\",\n    \"IFNOTYYY\": \"XXX: $(XXX:|is_set|$(IFNOTYYY))\",\n    \"IFNOTZZZ\": \"XXX: $(XXX:|is_set|$(IFNOTXXX))\",\n    \"TRIO1\": \"T1-ONE: $(ONE:|number-1|It^s ^number-1^|It is not ^number-1^)\",\n    \"TRIO2\": \"T2-ONE: $(ONE:|number-2|It^s ^number-2^|It is not ^number-2^)\",\n    \"TRIO3a\": \"T3a-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3b)^ correctly|Does not match correctly)\",\n    \"TRIO3b\": \"T3b-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3a)^ correctly|Does not match correctly)\",\n    \"TRIO3c\": \"T3c-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(IFNOTXXX)^ correctly|Does not match correctly)\",\n    \"TRIO3d\": \"T3d-IFONE: $(IFNOTZZZ:|XXX: XXY: not_set|matches ^$(IFNOTXXX)^ correctly|Does not match correctly with $(IFNOTZZZ))\",\n    \"MUSTBE\": \"$(ONE:?Variable ONE is required)\",\n    \"SUB1\": \"$(ONE:/umb/oob/)\",\n    \"SUB2\": \"$(ONE:/umb/oob-$(TWO)-/)\",\n    \"SUB3\": r'$(ONE:/umb/oob\\/meyer/)',\n    \"SUB4\": r'$(SUB3:/\\//\\/(slash)/)',\n    \"SUB5\": r'$(ONE:/umB/oob\\/meyer/i)',\n    \"SUB6\": r'$(ONE:/umB/oob\\/meyer/)',\n    \"SUB7\": r'$(IFNOTONE:/ONE: (.+)/MODONE: \\1/)',\n}\n\nRESULT8 = \"[('IFNOTONE', 'ONE: is_set'), ('IFNOTXXX', 'XXX: not_set'), ('IFNOTYYY', 'XXX: '), ('IFNOTZZZ', 'XXX: XXX: not_set'), ('IFONE', 'set onlyifone'), ('MUSTBE', 'number-1'), ('ONE', 'number-1'), ('SUB1', 'noober-1'), ('SUB2', 'noob-number-2-er-1'), ('SUB3', 'noob/meyerer-1'), ('SUB4', 
'noob/(slash)meyerer-1'), ('SUB5', 'noob/meyerer-1'), ('SUB6', 'number-1'), ('SUB7', 'MODONE: is_set'), ('THREE', 'number-3'), ('TRIO1', 'T1-ONE: It^s ^number-1^'), ('TRIO2', 'T2-ONE: It is not ^number-2^'), ('TRIO3a', 'T3a-IFONE: matches ^T3b-IFONE: matches ^T3a-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3b)^ correctly|Does not match correctly)^ correctly^ correctly'), ('TRIO3b', 'T3b-IFONE: matches ^T3a-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3b)^ correctly|Does not match correctly)^ correctly'), ('TRIO3c', 'T3c-IFONE: matches ^XXX: not_set^ correctly'), ('TRIO3d', 'T3d-IFONE: Does not match correctly with XXX: XXX: not_set'), ('TWO', 'number-2')]\"\n\nRESULT8 = \"[('IFNOTONE', 'ONE: is_set'), ('IFNOTXXX', 'XXX: not_set'), ('IFNOTYYY', 'XXX: '), ('IFNOTZZZ', 'XXX: XXX: not_set'), ('IFONE', 'set onlyifone'), ('MUSTBE', 'number-1'), ('ONE', 'number-1'), ('SUB1', 'noober-1'), ('SUB2', 'noob-number-2-er-1'), ('SUB3', 'noob/meyerer-1'), ('SUB4', 'noob/(slash)meyerer-1'), ('SUB5', 'noob/meyerer-1'), ('SUB6', 'number-1'), ('SUB7', 'MODONE: is_set'), ('THREE', 'number-3'), ('TRIO1', 'T1-ONE: It^s ^number-1^'), ('TRIO2', 'T2-ONE: It is not ^number-2^'), ('TRIO3a', 'T3a-IFONE: matches ^T3b-IFONE: matches ^T3a-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3b)^ correctly|Does not match correctly)^ correctly^ correctly'), ('TRIO3b', 'T3b-IFONE: matches ^T3a-IFONE: matches ^T3b-IFONE: matches ^T3a-IFONE: $(IFNOTZZZ:|XXX: XXX: not_set|matches ^$(TRIO3b)^ correctly|Does not match correctly)^ correctly^ correctly^ correctly'), ('TRIO3c', 'T3c-IFONE: matches ^XXX: not_set^ correctly'), ('TRIO3d', 'T3d-IFONE: Does not match correctly with XXX: XXX: not_set'), ('TWO', 'number-2')]\"\n\ndef printdict(d, legend = \"Dict:\", compare = None):\n    if compare and isinstance(compare, str):\n        compare = dict(eval(compare))\n    print(legend)\n    for k in sorted(d.keys()):\n        print(\"{0} = {1}\".format(k,d[k]))\n        if compare and k in compare and 
d[k] != compare[k]:\n            print(\"  >> {0}\".format(compare[k]))\n\ndef canonical(d, nl = False):\n    if not nl:\n        return str([(k,d[k]) for k in sorted(d.keys())])\n    result = list()\n    for k in sorted(d.keys()):\n        result.append(\"('{0}', '{1}')\".format(k, d[k].replace(\"\\n\", \"\\\\\\\\n\")))\n    return \"[\" + (', '.join(result)) + \"]\";\n\nclass TestEnvOrder(unittest.TestCase):\n\n    maxDiff = 5000\n\n    def test_expand1(self):\n        env = Environment(from_env = ENV1).expanded()\n        #printdict(env)\n        envstr = canonical(env)\n        #print('RESULT1 = \"' + envstr + '\"')\n        self.assertEqual(envstr, RESULT1)\n\n    def test_expand2(self):\n        env = Environment(from_env = ENV2).expanded()\n        #printdict(env)\n        envstr = canonical(env)\n        #print('RESULT2 = \"' + envstr + '\"')\n        self.assertEqual(envstr, RESULT2)\n\n    def test_expand3(self):\n        env = Environment(from_env = ENV3).expanded()\n        #printdict(env, compare = RESULT3)\n        envstr = canonical(env)\n        #print('RESULT3 = \"' + canonical(env, True) + '\"')\n        self.assertEqual(envstr, RESULT3)\n\n    def test_expand4(self):\n        env = Environment(from_env = ENV4).expanded()\n        #printdict(env)\n        envstr = canonical(env)\n        #print('RESULT4 = \"' + envstr + '\"')\n        self.assertEqual(envstr, RESULT4)\n\n    def test_expand5(self):\n        \"Try simple expansion\"\n        env = Environment(from_env = ENV4).expanded()\n        self.assertEqual(env.expand(\"hello $(UBERNEST)\"), \n                         \"hello nest:inside/usr/garyw and /usr/garyw/apps and:/usr/garyw/apps/theap\")\n        self.assertEqual(env.expand(\"hello $(MAYBE5) and $(MAYBE4)\"), \"hello blach/foo and /usr/garyw/foo\")\n        self.assertEqual(env.expand(\"hello $(MAYBE5:+$(MAYBE5)b) and $(MAYBE41)\"), \"hello blach/foob and $(MAYBE41)\")\n        self.assertEqual(env.expand(\"hello $(MAYBE5:+$(MAYBE5)b) 
and $(MAYBE41:-gone$(MAYBE4))\"), \n                         \"hello blach/foob and gone/usr/garyw/foo\")\n\n    def test_expand6(self):\n        \"Try self-referential expansions\"\n        enva = Environment(ENV4a, CONFIG4a)\n        self.assertEqual(canonical(enva.expanded()),\n\"[('ADMINVAR1', 'user'), ('ADMINVAR2', ''), ('ADMINVAR3', 'admin'), ('PATH', '/usr/local/bin:/bin'), ('THEREPATH', '/there')]\")\n        envb = Environment(enva)\n        self.assertEqual(canonical(envb.expanded()),\n\"[('ADMINVAR1', 'user'), ('ADMINVAR2', ''), ('ADMINVAR3', 'admin'), ('PATH', '/usr/local/bin:/bin'), ('THEREPATH', '/there')]\")\n        envc = Environment(envb, CONFIG4c)\n        self.assertEqual(canonical(envc.expanded()),\n\"[('ADMINVAR1', 'user'), ('ADMINVAR2', ''), ('ADMINVAR3', 'admin'), ('MISCPATH', '/mislibs'), ('PATH', '/usr/python/bin:/usr/local/bin:/bin'), ('PYAGAIN', '/mislibs:/pythonlibs:'), ('PYPATH', '/pythonlibs:'), ('THEREPATH', '/mislibs:/there')]\")\n\n    def test_expand7(self):\n        \"Test some self-referential anomalies\"\n        env = Environment(ENV7, CONFIG7a).expanded()\n        envstr = canonical(env)\n        #print('RESULT7 = \"' + envstr + '\"')\n        self.assertEqual(envstr, RESULT7)\n\n    def test_expand8(self):\n        \"Test conditional expansion\"\n        env = Environment(from_env = ENV8).expanded()\n        #printdict(env, compare = RESULT8)\n        envstr = canonical(env)\n        #print('RESULT8 = \"' + envstr + '\"')\n        self.assertEqual(envstr, RESULT8)\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/env_parse.py",
    "content": "from prefix import *\n\nfrom chaperone.cutil.env import EnvScanner\n\n# NOTE: This input-only table is shadowed by the input/expected table below; the\n# commented print() in ScanTester.run() can be used to regenerate the pairs.\nTEST1 = (\n    ('Nothing',),\n    ('A normal $(expansion) is here',),\n    ('An unterminated $(expansion is here',),\n    ('Two $(expansions) are $(also) here',),\n    ('Nested $(expansions are $(also) here) too.',),\n    ('Nested $(expansions are \"$(also\" here) too.',),\n    ('Nested $(expansions are [\"$(also\" here),$(next)] finally) too.',),\n    ('Ignore $(stuff))) like this.',),\n    ('escape \\\\$(stuff) like this.',),\n    ('exp $(stuff) but \\$(do not $(except [{$(foo)}] this) but \\${not} like this.',),\n    ('Nested ${expansions are [\"$(also\" here),$(next)] finally} too.',),\n)\n\nTEST1 = (\n    ('Nothing', 'Nothing'),\n    ('A normal $(expansion) is here', 'A normal <expansion> is here'),\n    ('An unterminated $(expansion is here', 'An unterminated $(expansion is here'),\n    ('Two $(expansions) are $(also) here', 'Two <expansions> are <also> here'),\n    ('Nested $(expansions are $(also) here) too.', 'Nested <expansions are $(also) here> too.'),\n    ('Nested $(expansions are \"$(also\" here) too.', 'Nested <expansions are \"$(also\" here> too.'),\n    ('Nested $(expansions are [\"$(also\" here),$(next)] finally) too.', 'Nested <expansions are [\"$(also\" here),$(next)] finally> too.'),\n    ('Ignore $(stuff))) like this.', 'Ignore <stuff>)) like this.'),\n    ('escape \\$(stuff) like this.', 'escape $(stuff) like this.'),\n    ('exp $(stuff) but \\$(do not $(except [{$(foo)}] this) but \\${not} like this.', 'exp <stuff> but $(do not <except [{$(foo)}] this> but ${not} like this.'),\n    ('Nested ${expansions are [\"$(also\" here),$(next)] finally} too.', 'Nested <expansions are [\"$(also\" here),$(next)] finally> too.'),\n)\n\nclass ScanTester:\n    \n    def __init__(self, test):\n        self._test = test\n        self._scanner = EnvScanner()\n\n    def run(self, tc):\n        for t in self._test:\n            r = self._scanner.parse(t[0], self.callback)\n            #print(\"    ('{0}', '{1}'),\".format(t[0], r))\n            tc.assertEqual(t[1], r)\n\n    def callback(self, buf, whole):\n        return \"<\"+buf+\">\"\n\n\nclass TestScanner(unittest.TestCase):\n\n    def test_parse1(self):\n        t = ScanTester(TEST1)\n        t.run(self)\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/events.py",
    "content": "from prefix import *\n\nfrom chaperone.cutil.events import EventSource\n\nclass handlers:\n    \n    def __init__(self):\n        self.results = list()\n\n    def handler1(self, val):\n        self.results.append(\"handler1:\" + val)\n\n    def handler2(self, val):\n        self.results.append(\"handler2:\" + val)\n\n    def handler3(self, val):\n        self.results.append(\"handler3:\" + val)\n\nclass TestEvents(unittest.TestCase):\n\n    def setUp(self):\n        self.h = handlers()\n        self.e = EventSource()\n\n    def test_event1(self):\n        self.e.add(onH1 = self.h.handler1)\n        self.e.add(onH1 = self.h.handler1)\n        self.e.onH1(\"First trigger\")\n        self.e.onH1(\"Second trigger\")\n        self.assertEqual(self.h.results,\n                         ['handler1:First trigger', 'handler1:First trigger', 'handler1:Second trigger', 'handler1:Second trigger'])\n        self.e.remove(onH1 = self.h.handler1)\n        self.e.onH1(\"Third trigger\")\n        self.e.remove(onH1 = self.h.handler1)\n        self.e.onH1(\"Fourth trigger\")\n        self.assertEqual(self.h.results,\n                         ['handler1:First trigger', 'handler1:First trigger', 'handler1:Second trigger', 'handler1:Second trigger', 'handler1:Third trigger'])\n\n    def test_event2(self):\n        self.e.add(onH1 = self.h.handler1)\n        self.assertRaisesRegex(TypeError, 'but 3 were given', lambda: self.e.onH1(\"arg1\", \"arg2\"))\n\n    def test_event3(self):\n        self.e.add(onMulti = self.h.handler1)\n        self.e.add(onMulti = self.h.handler2)\n        self.e.onMulti(\"TWO\")\n        self.e.add(onMulti = self.h.handler3)\n        self.e.onMulti(\"THREE\")\n        self.assertEqual(self.h.results,\n                         ['handler1:TWO', 'handler2:TWO', 'handler1:THREE', 'handler2:THREE', 'handler3:THREE'])\n        self.e.remove(onMulti = self.h.handler2)\n        self.e.onMulti(\"AFTER-REMOVE\")\n        self.assertEqual(self.h.results,\n 
                        ['handler1:TWO', 'handler2:TWO', 'handler1:THREE', 'handler2:THREE', 'handler3:THREE', 'handler1:AFTER-REMOVE', 'handler3:AFTER-REMOVE'])\n        self.e.remove(onMulti = self.h.handler1)\n        self.e.remove(onMulti = self.h.handler2)\n        self.e.remove(onMulti = self.h.handler3)\n        self.e.onMulti(\"EMPTY\")\n        self.assertEqual(self.h.results,\n                         ['handler1:TWO', 'handler2:TWO', 'handler1:THREE', 'handler2:THREE', 'handler3:THREE', 'handler1:AFTER-REMOVE', 'handler3:AFTER-REMOVE'])\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/prefix.py",
    "content": "import sys\nimport os\nimport unittest\n\nif sys.version_info < (3,):\n    print(\"You must run tests with Python 3 only.  Python 2 distributions are not supported.\")\n    exit(1)\n\n# Ensure that packages in the same directory as ours (tests) can be imported regardless of where\n# we are installed.\nsys.path[0] = os.path.dirname(os.path.dirname(__file__))\n"
  },
  {
    "path": "tests/run-all-tests.sh",
    "content": "#!/bin/bash\n# Runs both the unit tests and the process integration tests\n\npython3 env_expand.py\npython3 env_parse.py\npython3 events.py\npython3 service_order.py\npython3 syslog_spec.py\n\n./run-el.sh\n"
  },
  {
    "path": "tests/run-el.sh",
    "content": "#!/bin/bash\n\nfunction relpath() { python3 -c \"import os,sys;print(os.path.relpath(*(sys.argv[1:])))\" \"$@\"; }\n\nexport PATH=$PWD/bin:$PATH\n\nif [ \"$1\" == '-n' ]; then\n  counter=$2\n  shift 2\n  for (( i=1; $i<=$counter; i++ )); do\n    export CHTEST_LOGDIR=$PWD/test_logs/n$i\n    $0 \"$@\" &\n  done\n  wait\n  exit\nfi\n\nif [ \"$1\" != \"\" ]; then\n  export CHTEST_ONLY_ENDSWITH=$1\nfi\n\ntest-driver el-tests/basic-1\ntest-driver el-tests/simple-1\ntest-driver el-tests/simple-2\ntest-driver el-tests/cron-1\ntest-driver el-tests/fork-1\ntest-driver el-tests/inetd-1\ntest-driver el-tests/notify-1\ntest-driver el-tests/exitkills-1\n"
  },
  {
    "path": "tests/run-shell.sh",
    "content": "#!/bin/bash\n\nif [ \"$1\" == \"\" ]; then\n  echo 'usage: run-shell.sh <relative-test-subdir-path>'\n  exit 1\nfi\n\nexport PATH=$PWD/bin:$PATH\nexport CHTEST_DOCKER_CMD=\"sdnotify-exec --noproxy --verbose --wait-stop docker run %{SOCKET_ARGS}\"\n\ntest-driver --shell el-tests/$1\n"
  },
  {
    "path": "tests/service_order.py",
    "content": "from prefix import *\n\nfrom chaperone.cutil.config import ServiceDict\n\nOT1 = {\n    'one.service': { },\n    'two.service': { 'service_groups': 'foobar', 'after': 'default' },\n    'three.service': { 'service_groups': 'system', 'before': 'four.service' },\n    'four.service': { 'service_groups': 'system', 'before': 'default' },\n    'five.service': { },\n    'six.service': { 'after': 'seven.service' },\n    'seven.service': { },\n    'eight.service': { 'service_groups': 'system', 'before': 'default' },\n}\n\nOT2 = {\n    'one.service': { },\n    'two.service': { 'service_groups': 'foobar', 'after': 'default' },\n    'three.service': { 'service_groups': 'system', 'before': 'two.service' },\n    'four.service': { 'service_groups': 'system', 'before': 'three.service' },\n    'five.service': { },\n    'six.service': { },\n    'seven.service': { }\n}\n\nOT3 = {\n    'one.service': { },\n    'two.service': { 'before': 'default' },\n    'three.service': { 'service_groups': 'system', 'before': 'four.service' },\n    'four.service': { 'service_groups': 'system', 'before': 'default' },\n    'five.service': { 'before': 'two.service' },\n    'six.service': { 'after': 'seven.service' },\n    'seven.service': { }\n}\n\ndef printlist(title, d):\n    return\n    print(title)\n    for item in d:\n        print(\"  \", item)\n\ndef checkorder(result, *series):\n    \"\"\"\n    Checks to be sure that the items listed in 'series' are in order in the result set.\n    \"\"\"\n    results = [r.name for r in result]\n    indexes = list(map(lambda item: results.index(item+\".service\"), series))\n    for n in range(len(indexes)-1):\n        if indexes[n] > indexes[n+1]:\n            return False\n    return True\n\nclass TestServiceOrder(unittest.TestCase):\n\n    def test_order1(self):\n        sc = ServiceDict(OT1.items())\n        slist = sc.get_startup_list()\n        printlist(\"startup list: \", slist)\n        self.assertTrue(checkorder(slist, 'three', 'four', 
'seven', 'six', 'two'))\n        self.assertTrue(checkorder(slist, 'three', 'one', 'two'))\n        self.assertTrue(checkorder(slist, 'eight', 'one', 'two'))\n\n    def test_order2(self):\n        sc = ServiceDict(OT2.items())\n        slist = sc.get_startup_list()\n        printlist(\"startup list: \", slist)\n        self.assertTrue(checkorder(slist, 'four', 'three', 'two'))\n\n    def test_order3(self):\n        sc = ServiceDict(OT3.items())\n        self.assertRaisesRegex(Exception, '^circular', lambda: sc.get_startup_list())\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/syslog_spec.py",
    "content": "from prefix import *\n\nfrom chaperone.cutil.syslog import _syslog_spec_matcher\n\nSPECS = (\n    ('*.*',                                    '(True)'),\n    ('[crond].*',                              '((g and \"crond\" == g.lower()))'),\n    ('.*',                                     'Invalid log spec syntax: .*'),\n    ('kern.*;kern.!=crit',                     '((not (f==0) or not p==2)) and (((f==0)))'),\n    ('KERN.*;kern.!crit',                      '((not (f==0) or not p<=2)) and (((f==0)))'),\n    ('kern.crit',                              '((f==0) and p<=2)'),\n    ('*.=emerg;*.=crit',                       '(p==0) or (p==2)'),\n    ('/not and\\/or able/.*',                   '(bool(s._regexes[0].search(buf)))'),\n    ('*.*;![debian-start].*;authpriv,auth.!*', '(not ((g and \"debian-start\" == g.lower())) and (not (f==10 or f==4)))'),\n    ('*.*;![debian-start].*;!authpriv,auth.*', '(not ((g and \"debian-start\" == g.lower())) and not ((f==10 or f==4)))'),\n    ('*.*;![debian-start].*;!authpriv,auth.!crit', '(not ((g and \"debian-start\" == g.lower())) and (not (f==10 or f==4) and not p<=2))'),\n    ('kern.*',                                 '((f==0))'),\n    ('*.*;*.!*',                               '((False))'),\n    ('*.*;![chaperone].*',                     '(not ((g and \"chaperone\" == g.lower())))'),\n    ('kern.*;!auth,authpriv.*',                '(not ((f==4 or f==10))) and (((f==0)))'),\n    ('[cron].*;[daemon-tools].crit;/password/.!err', '((not bool(s._regexes[0].search(buf)) or not p<=3)) and (((g and \"cron\" == g.lower())) or ((g and \"daemon-tools\" == g.lower()) and p<=2))'),\n    ('kern.*;![cron].!err',                    '((not (g and \"cron\" == g.lower()) and not p<=3)) and (((f==0)))'),\n    ('[chaperone].err;[logrotate].err;!kern.*', '(not ((f==0))) and (((g and \"chaperone\" == g.lower()) and p<=3) or ((g and \"logrotate\" == g.lower()) and p<=3))'),\n    ('/panic/.*;/segfault/.*;*.!=debug',       '((not p==7)) and 
((bool(s._regexes[0].search(buf))) or (bool(s._regexes[1].search(buf))))'),\n)\n\n\nclass TestSyslogSpec(unittest.TestCase):\n\n    def test_specs(self):\n        for s in SPECS:\n            try:\n                sm = _syslog_spec_matcher(s[0]).debugexpr\n            except Exception as ex:\n                sm = ex\n                if 'unexpected' in str(sm):\n                    raise\n            #Uncomment to generate the test table, but CHECK IT carefully!\n            #print(\"('{0:40} '{1}'),\".format(s[0]+\"',\", sm))\n            self.assertEqual(str(sm), s[1])\n\nif __name__ == '__main__':\n    unittest.main()\n"
  }
]