[
  {
    "path": ".gitbook.yaml",
    "content": "root: ./docs\n\nstructure:\n  readme: ../README.md\n  summary: README.md\n"
  },
  {
    "path": ".github/CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as\ncontributors and maintainers pledge to making participation in our project and\nour community a harassment-free experience for everyone, regardless of age, body\nsize, disability, ethnicity, gender identity and expression, level of experience,\neducation, socio-economic status, nationality, personal appearance, race,\nreligion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment\ninclude:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or\nadvances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic\naddress, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\nprofessional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable\nbehavior and are expected to take appropriate and fair corrective action in\nresponse to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or\nreject comments, commits, code, wiki edits, issues, and other contributions\nthat are not aligned to this Code of Conduct, or to ban temporarily or\npermanently any contributor for other behaviors that they deem inappropriate,\nthreatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies both within project spaces and in 
public spaces\nwhen an individual is representing the project or its community. Examples of\nrepresenting a project or community include using an official project e-mail\naddress, posting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event. Representation of a project may be\nfurther defined and clarified by project maintainers.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported by contacting the project team at conduct@sourced.tech. All\ncomplaints will be reviewed and investigated and will result in a response that\nis deemed necessary and appropriate to the circumstances. The project team is\nobligated to maintain confidentiality with regard to the reporter of an incident.\nFurther details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good\nfaith may face temporary or permanent repercussions as determined by other\nmembers of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,\navailable at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html\n\n[homepage]: https://www.contributor-covenant.org\n"
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "\n\n\n---\n\n<!-- Please leave this template at the end of your description, checking the option that applies -->\n\n* [ ] I have updated the CHANGELOG file according to the conventions in [keepachangelog.com](https://keepachangelog.com)\n* [ ] This PR contains changes that do not require a mention in the CHANGELOG file"
  },
  {
    "path": ".gitignore",
    "content": ".ci/\nbuild/\n"
  },
  {
    "path": ".travis.yml",
    "content": "branches:\n  only:\n    - master\n    - /^v\\d+\\.\\d+(\\.\\d+)?(-\\S*)?$/\n\ndist: xenial\nsudo: required\n\nlanguage: go\ngo_import_path: github.com/src-d/sourced-ce\ngo:\n  - 1.13.x\nenv:\n  global:\n    - SOURCED_GITHUB_TOKEN=$GITHUB_TOKEN\n\nmatrix:\n  fast_finish: true\n\nservices:\n  - docker\n\nstages:\n  - name: tests\n  - name: release\n    if: tag IS present\n\njobs:\n  include:\n    - stage: tests\n      name: 'Go Unit Tests'\n      script:\n        - make packages\n        - make test-coverage codecov\n\n    - stage: tests\n      name: 'Integration Tests Linux'\n      script:\n        # cannot use 'make test-integration' because 'make clean' fails with\n        # GO111MODULE, see https://github.com/golang/go/issues/31002\n        - make build\n        - make test-integration-no-build\n\n    - stage: release\n      name: 'Release to GitHub'\n      script:\n        - make packages\n      deploy:\n        provider: releases\n        api_key: $GITHUB_TOKEN\n        file_glob: true\n        file:\n          - build/*.tar.gz\n        skip_cleanup: true\n        on:\n          all_branches: true\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\n\nThe changes listed under the `Unreleased` section have landed in master but are not yet released.\n\n\n## [Unreleased]\n\n### Components\n\n- `bblfsh/bblfshd` has been updated to [v2.15.0](https://github.com/bblfsh/bblfshd/releases/tag/v2.15.0).\n- `bblfsh/web` has been updated to [v0.11.4](https://github.com/bblfsh/web/releases/tag/v0.11.4).\n\t- Use the same logging level as the other components by reading the `LOG_LEVEL` environment value (default: `info`) ([#263](https://github.com/src-d/sourced-ce/pull/263)).\n- `srcd/sourced-ui` has been updated to [v0.8.1](https://github.com/src-d/sourced-ui/releases/tag/v0.8.1).\n\n\n### Fixed\n\n- Identify and show errors for old, unsupported versions of docker/docker-compose ([#253](https://github.com/src-d/sourced-ce/issues/253)).\n\n## [v0.17.0](https://github.com/src-d/sourced-ce/releases/tag/v0.17.0) - 2019-10-01\n\n### Components\n\n- `srcd/sourced-ui` has been updated to [v0.7.0](https://github.com/src-d/sourced-ui/releases/tag/v0.7.0).\n- `srcd/gitcollector` has been updated to [v0.0.4](https://github.com/src-d/gitcollector/releases/tag/v0.0.4).\n\n### Fixed\n\n- More detailed error messages for file downloads ([#245](https://github.com/src-d/sourced-ce/pull/245)).\n\n### Changed\n\n- Make `sourced-ui` Superset Celery workers run as separate containers ([#269](https://github.com/src-d/sourced-ui/issues/269)).\n- Remove the need for `docker-compose.override.yml` ([#252](https://github.com/src-d/sourced-ui/issues/252)).\n\n### Internal\n\n- Development and building of source{d} CE now requires `go 1.13` ([#242](https://github.com/src-d/sourced-ce/pull/242)).\n\n### Upgrading\n\nInstall the new `v0.17.0` binary, then run `sourced compose download`. 
Because of a change in the `docker-compose.yml` file version, you must delete the file `~/.sourced/compose-files/__active__/docker-compose.override.yml` manually.\n\nIf you had a deployment running, you must re-deploy the containers with `sourced restart`. All your existing data will continue to work after the upgrade.\n\n```shell\n$ sourced version\nsourced version v0.17.0\n\n$ rm ~/.sourced/compose-files/__active__/docker-compose.override.yml\n\n$ sourced compose download\nDocker compose file successfully downloaded to your ~/.sourced/compose-files directory. It is now the active compose file.\nTo update your current installation use `sourced restart`\n\n$ sourced restart\n```\n\n\n## [v0.16.0](https://github.com/src-d/sourced-ce/releases/tag/v0.16.0) - 2019-09-16\n\n### Components\n\n- `srcd/sourced-ui` has been updated to [v0.6.0](https://github.com/src-d/sourced-ui/releases/tag/v0.6.0).\n- `bblfsh/web` has been updated to [v0.11.3](https://github.com/bblfsh/web/releases/tag/v0.11.3).\n\n### Fixed\n\n- Increase the timeout for the `start` command ([#219](https://github.com/src-d/sourced-ce/pull/219)).\n\n### Changed\n\n- `sourced compose list` shows an index number for each compose entry, and `sourced compose set` now accepts either the name or the index number (@cmbahadir) ([#199](https://github.com/src-d/sourced-ce/issues/199)).\n\n### Upgrading\n\nInstall the new `v0.16.0` binary, then run `sourced compose download`. If you had a deployment running, you can re-deploy the containers with `sourced restart`.\n\nPlease note: `sourced-ui` contains changes to the color palettes for the default dashboard charts, and these changes will only be visible when you run `sourced init local/org` with a new path or organization. 
This is a cosmetic improvement that you can safely ignore.\n\nIf you want to apply the new default dashboards over your existing deployment, you will need to run `sourced prune` (or `sourced prune --all`) and `sourced init local/org` again.\n\nImportant: running `prune` will delete all your current data and customizations, including charts or dashboards. You can choose not to `prune` your existing deployments, keeping your previous default dashboards and charts.\n\n```shell\n$ sourced version\nsourced version v0.16.0\n\n$ sourced compose download\nDocker compose file successfully downloaded to your ~/.sourced/compose-files directory. It is now the active compose file.\nTo update your current installation use `sourced restart`\n\n$ sourced status workdirs\n  bblfsh\n* src-d\n\n$ sourced prune --all\n$ sourced init orgs src-d\n$ sourced init orgs bblfsh\n```\n\n## [v0.15.1](https://github.com/src-d/sourced-ce/releases/tag/v0.15.1) - 2019-08-27\n\n### Fixed\n\n- Fix incompatibility of empty resource limits ([#227](https://github.com/src-d/sourced-ce/issues/227)).\n- Fix incorrect value for `GITCOLLECTOR_LIMIT_CPU` in some cases ([#225](https://github.com/src-d/sourced-ce/issues/225)).\n- Fix gitbase `LOG_LEVEL` environment variable in the compose file ([#228](https://github.com/src-d/sourced-ce/issues/228)).\n\n### Removed\n\n- Remove the `completion` sub-command on Windows, as it only works for bash ([#169](https://github.com/src-d/sourced-ce/issues/169)).\n\n### Upgrading\n\nInstall the new `v0.15.1` binary, then run `sourced compose download`.\n\nFor an upgrade from `v0.15.0`, you just need to run `sourced restart` to re-deploy the containers.\n\nFor an upgrade from `v0.14.0`, please see the upgrade instructions in the release notes for `v0.15.0`.\n\n\n## [v0.15.0](https://github.com/src-d/sourced-ce/releases/tag/v0.15.0) - 2019-08-21\n\n### Components\n\n- `srcd/sourced-ui` has been updated to [v0.5.0](https://github.com/src-d/sourced-ui/releases/tag/v0.5.0).\n- 
`srcd/ghsync` has been updated to [v0.2.0](https://github.com/src-d/ghsync/releases/tag/v0.2.0).\n\n### Added\n\n- Add monitoring of the containers' state while waiting for the web UI to open during initialization ([#147](https://github.com/src-d/sourced-ce/issues/147)).\n- Exclude forks by default in `sourced init orgs`, adding a new flag `--with-forks` to include them if needed ([#109](https://github.com/src-d/sourced-ce/issues/109)).\n\n### Changed\n\n- Refactor the `status` command ([#203](https://github.com/src-d/sourced-ce/issues/203)):\n  - `sourced status components` shows the previous output of `sourced status`\n  - `sourced status workdirs` replaces `sourced workdirs`\n  - `sourced status config` shows the contents of the Docker Compose environment variables. This is useful, for example, to check if the active working directory was configured to include or skip forks when downloading the data from GitHub\n  - `sourced status all` shows all of the above\n\n### Upgrading\n\nInstall the new `v0.15.0` binary, then run `sourced compose download`. If you had a deployment running, you can re-deploy the containers with `sourced restart`.\n\nPlease note: `sourced-ui` contains fixes for the default dashboard charts that will only be visible when you run `sourced init local/org` with a new path or organization.\nIf you want to apply the new default dashboards over your existing deployment, you will need to run `sourced prune` (or `sourced prune --all`) and `sourced init local/org` again.\n\nImportant: running `prune` will delete all your current data and customizations, including charts or dashboards. You can choose not to `prune` your existing deployments, keeping your previous default dashboards and charts.\n\n```shell\n$ sourced version\nsourced version v0.15.0 build 08-21-2019_08_30_24\n\n$ sourced compose download\nDocker compose file successfully downloaded to your ~/.sourced/compose-files directory. 
It is now the active compose file.\nTo update your current installation use `sourced restart`\n\n$ sourced status workdirs\n  bblfsh\n* src-d\n\n$ sourced prune --all\n$ sourced init orgs src-d\n$ sourced init orgs bblfsh\n```\n\n## [v0.14.0](https://github.com/src-d/sourced-ce/releases/tag/v0.14.0) - 2019-08-07\n\nInitial release of **source{d} Community Edition (CE)**, the data platform for your software development life cycle.\n\nThe `sourced` binary is a wrapper for Docker Compose that downloads the `docker-compose.yml` file from this repository, and includes the following subcommands:\n\n- `init`: Initialize source{d} to work on local or GitHub organization datasets\n  - `local`: Initialize source{d} to analyze local repositories\n  - `orgs`: Initialize source{d} to analyze GitHub organizations\n- `status`: Show the status of all components\n- `stop`: Stop any running components\n- `start`: Start any stopped components\n- `logs`: Show logs from components\n- `web`: Open the web interface in your browser\n- `sql`: Open a MySQL client connected to a SQL interface for Git\n- `prune`: Stop and remove components and resources\n- `workdirs`: List all working directories\n- `compose`: Manage source{d} docker compose files\n  - `download`: Download docker compose files\n  - `list`: List the downloaded docker compose files\n  - `set`: Set the active docker compose file\n- `restart`: Update current installation according to the active docker compose file\n\n### Known Issues\n\n- On Windows, if you use `sourced init local` on a directory with a long path, you may encounter the following error:\n  ```\n  Can't find a suitable configuration file in this directory or any\n  parent. Are you in the right directory?\n  ```\n\n  This is caused by the [`MAX_PATH` limitation on Windows](https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file#maximum-path-length-limitation). 
The only workaround is to move the target directory to a shorter path, closer to the root of your drive ([#191](https://github.com/src-d/sourced-ce/issues/191)).\n\n- Linux only: Docker installed from snap packages is not supported; please install it following [the official documentation](https://docs.docker.com/install/) ([#78](https://github.com/src-d/sourced-ce/issues/78)).\n\n### Upgrading\n\nWe don't support upgrading from internal releases. If you have a previous `sourced-ce` pre-release version installed, clean up all your data **before** downloading this release. This will delete everything, including the UI data for dashboards, charts, users, etc.:\n\n```shell\nsourced prune --all\nrm -rf ~/.sourced\n```\n"
  },
  {
    "path": "DCO",
    "content": "Developer Certificate of Origin\nVersion 1.1\n\nCopyright (C) 2004, 2006 The Linux Foundation and its contributors.\n660 York Street, Suite 102,\nSan Francisco, CA 94110 USA\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\n\nDeveloper's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n    have the right to submit it under the open source license\n    indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n    of my knowledge, is covered under an appropriate open source\n    license and I have the right under that license to submit that\n    work with modifications, whether created in whole or in part\n    by me, under the same open source license (unless I am\n    permitted to submit under a different license), as indicated\n    in the file; or\n\n(c) The contribution was provided directly to me by some other\n    person who certified (a), (b) or (c) and I have not modified\n    it.\n\n(d) I understand and agree that this project and the contribution\n    are public and that a record of the contribution (including all\n    personal information I submit with it, including my sign-off) is\n    maintained indefinitely and may be redistributed consistent with\n    this project or the open source license(s) involved.\n"
  },
  {
    "path": "LICENSE.md",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                    Version 3, 29 June 2007\n\nCopyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>\nEveryone is permitted to copy and distribute verbatim copies\nof this license document, but changing it is not allowed.\n\n                    Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                    TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                    END OF TERMS AND CONDITIONS\n\nHow to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n      <one line to give the program's name and a brief idea of what it does.>\n      Copyright (C) <year>  <name of author>\n\n      This program is free software: you can redistribute it and/or modify\n      it under the terms of the GNU General Public License as published by\n      the Free Software Foundation, either version 3 of the License, or\n      (at your option) any later version.\n\n      This program is distributed in the hope that it will be useful,\n      but WITHOUT ANY WARRANTY; without even the implied warranty of\n      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n      GNU General Public License for more details.\n\n      You should have received a copy of the GNU General Public License\n      along with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n      <program>  Copyright (C) 2017 Sourced Technologies, S.L.\n      This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n      This is free software, and you are welcome to redistribute it\n      under certain conditions; type `show c' for details.\n\n  The hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<http://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<http://www.gnu.org/philosophy/why-not-lgpl.html>.\n"
  },
  {
    "path": "MAINTAINERS",
    "content": "David Pordomingo <david@sourced.tech> (@dpordomingo)\nLou Marvin Caraig <marvin@sourced.tech> (@se7entyse7en)\n"
  },
  {
    "path": "Makefile",
    "content": "# Package configuration\nPROJECT = sourced-ce\nCOMMANDS = cmd/sourced\nPKG_OS ?= darwin linux windows\n\n# Including ci Makefile\nCI_REPOSITORY ?= https://github.com/src-d/ci.git\nCI_PATH ?= $(shell pwd)/.ci\nCI_VERSION ?= v1\n\nMAKEFILE := $(CI_PATH)/Makefile.main\n$(MAKEFILE):\n\tgit clone --quiet --branch $(CI_VERSION) --depth 1 $(CI_REPOSITORY) $(CI_PATH);\n\n-include $(MAKEFILE)\n\nGOTEST_BASE = go test -v -timeout 20m -parallel 1 -count 1 -ldflags \"$(LD_FLAGS)\"\nGOTEST_INTEGRATION = $(GOTEST_BASE) -tags=\"forceposix integration\"\n\nOS := $(shell uname)\n\n# override clean target from CI to avoid executing `go clean`\n# see https://github.com/src-d/sourced-ce/pull/154\nclean:\n\trm -rf $(BUILD_PATH) $(BIN_PATH) $(VENDOR_PATH)\n\nifeq ($(OS),Darwin)\ntest-integration-clean:\n\t$(eval TMPDIR_INTEGRATION_TEST := $(PWD)/integration-test-tmp)\n\t$(eval GOTEST_INTEGRATION := TMPDIR=$(TMPDIR_INTEGRATION_TEST) $(GOTEST_INTEGRATION))\n\trm -rf $(TMPDIR_INTEGRATION_TEST)\n\tmkdir $(TMPDIR_INTEGRATION_TEST)\nelse\ntest-integration-clean:\nendif\n\ntest-integration-no-build: test-integration-clean\n\t$(GOTEST_INTEGRATION) github.com/src-d/sourced-ce/test/\n\ntest-integration: clean build test-integration-no-build\n"
  },
  {
    "path": "NOTICE.md",
    "content": "sourced-ce is the data platform for your software development life cycle\n\nCopyright (C) 2019 source{d}\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "README.md",
    "content": "<a href=\"https://www.sourced.tech\">\n  <img src=\"docs/assets/sourced-community-edition.png\" alt=\"source{d} Community Edition (CE)\" height=\"120px\" />\n</a>\n\n**source{d} Community Edition (CE) is the data platform for your software development life cycle.**\n\n[![GitHub version](https://badge.fury.io/gh/src-d%2Fsourced-ce.svg)](https://github.com/src-d/sourced-ce/releases)\n[![Build Status](https://travis-ci.com/src-d/sourced-ce.svg?branch=master)](https://travis-ci.com/src-d/sourced-ce)\n![Beta](https://svg-badge.appspot.com/badge/stability/beta?color=D6604A)\n[![Go Report Card](https://goreportcard.com/badge/github.com/src-d/sourced-ce)](https://goreportcard.com/report/github.com/src-d/sourced-ce)\n[![GoDoc](https://godoc.org/github.com/src-d/sourced-ce?status.svg)](https://godoc.org/github.com/src-d/sourced-ce)\n\n[Website](https://www.sourced.tech) •\n[Documentation](https://docs.sourced.tech/community-edition) •\n[Blog](https://blog.sourced.tech) •\n[Slack](http://bit.ly/src-d-community) •\n[Twitter](https://twitter.com/sourcedtech)\n\n\n![source{d} CE dashboard](docs/assets/dashboard.png)\n\n## Introduction\n\n**source{d} Community Edition (CE)** helps you to manage all your code and engineering data in one place:\n\n- **Code Retrieval**: Retrieve and store the git history of the code of your organization as a dataset.\n- **Analysis in/for any Language**: Automatically identify languages, parse source code, and extract the pieces that matter in a language-agnostic way.\n- **History Analysis**: Extract information from the evolution, commits, and metadata of your codebase and from GitHub, generating detailed reports and insights.\n- **Familiar APIs**: Analyze your code through powerful SQL queries. 
Use tools you're familiar with to create reports and dashboards.\n\nThis repository contains the code of **source{d} Community Edition (CE)** and its project documentation, which you can also see properly rendered at [docs.sourced.tech/community-edition](https://docs.sourced.tech/community-edition).\n\n\n### Contents\n\n- [Introduction](README.md#introduction)\n- [Quick Start](README.md#quick-start)\n- [Architecture](README.md#architecture)\n- [Contributing](README.md#contributing)\n- [Community](README.md#community)\n- [Code of Conduct](README.md#code-of-conduct)\n- [License](README.md#license)\n\n## Quick Start\n\n**source{d} CE** supports Linux, macOS, and Windows.\n\nTo run it you only need to:\n\n1. Have Docker installed on your PC\n1. Download the `sourced` binary (for your OS) from [our releases](https://github.com/src-d/sourced-ce/releases)\n1. Run it:\n   ```bash\n   $ sourced init orgs --token=<github_token> <github_org_name>\n   ```\n   Then log in to http://127.0.0.1:8088 with login `admin` and password `admin`.\n\nFor more details on each step, the [**Quick Start Guide**](docs/quickstart/README.md) covers everything needed to get started with **source{d} CE**, from installing its dependencies to running SQL queries to inspect git repositories.\n\nTo learn more about **source{d} CE**, the [next steps](docs/usage/README.md) section gathers useful resources to guide you while using this tool.\n\nIf you have any problem running **source{d} CE**, take a look at our [Frequently Asked Questions](docs/learn-more/troubleshooting.md) or [Troubleshooting](docs/learn-more/troubleshooting.md) sections. You can also ask for help when using **source{d} CE** in our [source{d} Forum](https://forum.sourced.tech). 
If you spot a bug or have a feature request, please [open an issue](https://github.com/src-d/sourced-ce/issues) to let us know about it.\n\n\n## Architecture\n\n_For more details on the architecture of this project, read [docs/learn-more/architecture.md](docs/learn-more/architecture.md)._\n\n**source{d} CE** is deployed as Docker containers, using Docker Compose.\n\nThis tool is a wrapper around Docker Compose that makes it easy to manage the compose files and their containers. Moreover, `sourced` does not require a local installation of Docker Compose; if it is not found, it will be deployed inside a container.\n\nThe main entry point of **source{d} CE** is [sourced-ui](https://github.com/src-d/sourced-ui), the web interface from where you can access your data, create dashboards, run queries...\n\nThe data exposed by the web interface is prepared and processed by the following services:\n\n- [babelfish](https://doc.bblf.sh): universal code parser.\n- [gitcollector](https://github.com/src-d/gitcollector): fetches the git repositories owned by your organization.\n- [ghsync](https://github.com/src-d/ghsync): fetches metadata from GitHub (users, pull requests, issues...).\n- [gitbase](https://github.com/src-d/gitbase): SQL database interface to Git repositories.\n\n\n## Contributing\n\n[Contributions](https://github.com/src-d/sourced-ce/issues) are **welcome and very much appreciated** 🙌\nPlease refer to [our Contribution Guide](docs/CONTRIBUTING.md) for more details.\n\n\n## Community\n\nsource{d} has an amazing community of developers and contributors who are interested in Code As Data and/or Machine Learning on Code. Please join us! 
👋\n\n- [Community](https://sourced.tech/community/)\n- [Slack](http://bit.ly/src-d-community)\n- [Twitter](https://twitter.com/sourcedtech)\n- [Email](mailto:hello@sourced.tech)\n\n\n## Code of Conduct\n\nAll activities under source{d} projects are governed by the\n[source{d} code of conduct](https://github.com/src-d/guide/blob/master/.github/CODE_OF_CONDUCT.md).\n\n\n## License\n\nGPL v3.0, see [LICENSE](LICENSE.md).\n"
  },
  {
    "path": "cmd/sourced/cmd/compose.go",
    "content": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\n\tcomposefile \"github.com/src-d/sourced-ce/cmd/sourced/compose/file\"\n\n\t\"gopkg.in/src-d/go-cli.v0\"\n)\n\ntype composeCmd struct {\n\tcli.PlainCommand `name:\"compose\" short-description:\"Manage source{d} docker compose files\" long-description:\"Manage source{d} docker compose files\"`\n}\n\ntype composeDownloadCmd struct {\n\tCommand `name:\"download\" short-description:\"Download docker compose files\" long-description:\"Download docker compose files. By default the command downloads the file for this binary version.\\n\\nUse the 'version' argument to choose a specific revision from\\nthe https://github.com/src-d/sourced-ce repository, or to set a\\nURL to a docker-compose.yml file.\\n\\nExamples:\\n\\nsourced compose download\\nsourced compose download v0.0.1\\nsourced compose download master\\nsourced compose download https://raw.githubusercontent.com/src-d/sourced-ce/master/docker-compose.yml\"`\n\n\tArgs struct {\n\t\tVersion string `positional-arg-name:\"version\" description:\"Either a revision (tag, full sha1) or a URL to a docker-compose.yml file\"`\n\t} `positional-args:\"yes\"`\n}\n\nfunc (c *composeDownloadCmd) Execute(args []string) error {\n\tv := c.Args.Version\n\tif v == \"\" {\n\t\tv = version\n\t}\n\n\terr := composefile.ActivateFromRemote(v)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Println(\"Docker compose file successfully downloaded to your ~/.sourced/compose-files directory. 
It is now the active compose file.\")\n\tfmt.Println(\"To update your current installation use `sourced restart`\")\n\treturn nil\n}\n\ntype composeListCmd struct {\n\tCommand `name:\"list\" short-description:\"List the downloaded docker compose files\" long-description:\"List the downloaded docker compose files\"`\n}\n\nfunc (c *composeListCmd) Execute(args []string) error {\n\tactive, err := composefile.Active()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfiles, err := composefile.List()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor index, file := range files {\n\t\tfmt.Printf(\"[%d]\", index)\n\t\tif file == active {\n\t\t\tfmt.Printf(\"* %s\\n\", file)\n\t\t} else {\n\t\t\tfmt.Printf(\"  %s\\n\", file)\n\t\t}\n\t}\n\n\treturn nil\n}\n\ntype composeSetDefaultCmd struct {\n\tCommand `name:\"set\" short-description:\"Set the active docker compose file\" long-description:\"Set the active docker compose file\"`\n\n\tArgs struct {\n\t\tFile string `positional-arg-name:\"index/name\" description:\"Provide name or index of compose file on 'sourced compose list'\"`\n\t} `positional-args:\"yes\" required:\"yes\"`\n}\n\nfunc (c *composeSetDefaultCmd) Execute(args []string) error {\n\tfiles, err := composefile.List()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// if the argument parses as a number, treat it as an index into the list;\n\t// otherwise treat it as a compose file name\n\tindex, err := strconv.Atoi(c.Args.File)\n\tif err == nil {\n\t\tif index >= 0 && index < len(files) {\n\t\t\tactive := files[index]\n\t\t\terr = composefile.SetActive(active)\n\t\t} else {\n\t\t\treturn fmt.Errorf(\"index is out of range, please check the output of 'sourced compose list'\")\n\t\t}\n\t} else {\n\t\terr = composefile.SetActive(c.Args.File)\n\t}\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Println(\"Active docker compose file was changed successfully.\")\n\tfmt.Println(\"To update your current installation use `sourced restart`\")\n\treturn nil\n}\n\nfunc init() {\n\tc := 
rootCmd.AddCommand(&composeCmd{})\n\tc.AddCommand(&composeDownloadCmd{})\n\tc.AddCommand(&composeListCmd{})\n\tc.AddCommand(&composeSetDefaultCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/init.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/workdir\"\n\t\"gopkg.in/src-d/go-cli.v0\"\n\n\t\"github.com/pkg/errors\"\n)\n\ntype initCmd struct {\n\tcli.PlainCommand `name:\"init\" short-description:\"Initialize source{d} to work on local or GitHub orgs datasets\" long-description:\"Initialize source{d} to work on local or Github orgs datasets\"`\n}\n\ntype initLocalCmd struct {\n\tCommand `name:\"local\" short-description:\"Initialize source{d} to analyze local repositories\" long-description:\"Install, initialize, and start all the required docker containers, networks, volumes, and images.\\n\\nThe repos directory argument must point to a directory containing git repositories.\\nIf it's not provided, the current working directory will be used.\"`\n\n\tArgs struct {\n\t\tReposdir string `positional-arg-name:\"workdir\"`\n\t} `positional-args:\"yes\"`\n}\n\nfunc (c *initLocalCmd) Execute(args []string) error {\n\twdHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treposdir, err := c.reposdirArg()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\twd, err := workdir.InitLocal(reposdir)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := activate(wdHandler, wd); err != nil {\n\t\treturn err\n\t}\n\n\treturn OpenUI(60 * time.Minute)\n}\n\nfunc (c *initLocalCmd) reposdirArg() (string, error) {\n\treposdir := c.Args.Reposdir\n\treposdir = strings.TrimSpace(reposdir)\n\n\tvar err error\n\tif reposdir == \"\" {\n\t\treposdir, err = os.Getwd()\n\t} else {\n\t\treposdir, err = filepath.Abs(reposdir)\n\t}\n\n\tif err != nil {\n\t\treturn \"\", errors.Wrap(err, \"could not get directory\")\n\t}\n\n\tinfo, err := os.Stat(reposdir)\n\tif err != nil || !info.IsDir() {\n\t\treturn \"\", fmt.Errorf(\"path '%s' is not a valid directory\", 
reposdir)\n\t}\n\n\treturn reposdir, nil\n}\n\ntype initOrgsCmd struct {\n\tCommand `name:\"orgs\" short-description:\"Initialize source{d} to analyze GitHub organizations\" long-description:\"Install, initialize, and start all the required docker containers, networks, volumes, and images.\\n\\nThe orgs argument must be a comma-separated list of GitHub organization names to be analyzed.\"`\n\n\tToken     string `short:\"t\" long:\"token\" env:\"SOURCED_GITHUB_TOKEN\" description:\"GitHub token for the passed organizations. It should be granted with 'repo' and 'read:org' scopes.\" required:\"true\"`\n\tWithForks bool   `long:\"with-forks\" description:\"Download GitHub forked repositories\"`\n\tArgs      struct {\n\t\tOrgs []string `required:\"yes\"`\n\t} `positional-args:\"yes\" required:\"1\"`\n}\n\nfunc (c *initOrgsCmd) Execute(args []string) error {\n\twdHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\torgs := c.orgsList()\n\tif err := c.validate(orgs); err != nil {\n\t\treturn err\n\t}\n\n\twd, err := workdir.InitOrgs(orgs, c.Token, c.WithForks)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := activate(wdHandler, wd); err != nil {\n\t\treturn err\n\t}\n\n\treturn OpenUI(60 * time.Minute)\n}\n\n// orgsList allows organizations to be separated by commas\n// as well as by spaces\nfunc (c *initOrgsCmd) orgsList() []string {\n\torgs := c.Args.Orgs\n\tif len(c.Args.Orgs) == 1 {\n\t\torgs = strings.Split(c.Args.Orgs[0], \",\")\n\t}\n\n\tfor i, org := range orgs {\n\t\torgs[i] = strings.Trim(org, \" ,\")\n\t}\n\n\treturn orgs\n}\n\nfunc (c *initOrgsCmd) validate(orgs []string) error {\n\tclient := &http.Client{Transport: &authTransport{token: c.Token}}\n\tr, err := client.Get(\"https://api.github.com/user\")\n\tif err != nil {\n\t\treturn errors.Wrap(err, \"could not validate user token\")\n\t}\n\tdefer r.Body.Close()\n\tif r.StatusCode == http.StatusUnauthorized {\n\t\treturn fmt.Errorf(\"github token is not valid\")\n\t}\n\n\tfor _, org := range orgs 
{\n\t\tr, err := client.Get(\"https://api.github.com/orgs/\" + org)\n\t\tif err != nil {\n\t\t\treturn errors.Wrap(err, \"could not validate organization\")\n\t\t}\n\t\tr.Body.Close()\n\t\tif r.StatusCode == http.StatusNotFound {\n\t\t\treturn fmt.Errorf(\"organization '%s' was not found\", org)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc activate(wdHandler *workdir.Handler, workdir *workdir.Workdir) error {\n\t// Before setting a new workdir, stop the current containers\n\tcompose.Run(context.Background(), \"stop\")\n\n\terr := wdHandler.SetActive(workdir)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Printf(\"docker-compose working directory set to %s\\n\", workdir.Path)\n\treturn compose.Run(context.Background(), \"up\", \"--detach\")\n}\n\ntype authTransport struct {\n\ttoken string\n}\n\nfunc (t *authTransport) RoundTrip(r *http.Request) (*http.Response, error) {\n\tr.Header.Set(\"Authorization\", \"token \"+t.token)\n\treturn http.DefaultTransport.RoundTrip(r)\n}\n\nfunc init() {\n\tc := rootCmd.AddCommand(&initCmd{})\n\n\tc.AddCommand(&initOrgsCmd{})\n\tc.AddCommand(&initLocalCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/logs.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n)\n\ntype logsCmd struct {\n\tCommand `name:\"logs\" short-description:\"Fetch the logs of source{d} components\" long-description:\"Fetch the logs of source{d} components\"`\n\n\tFollow bool `short:\"f\" long:\"follow\" description:\"Follow log output\"`\n\tArgs   struct {\n\t\tComponents []string `positional-arg-name:\"component\" description:\"Component names from where to fetch logs\"`\n\t} `positional-args:\"yes\"`\n}\n\nfunc (c *logsCmd) Execute(args []string) error {\n\tcommand := []string{\"logs\"}\n\n\tif c.Follow {\n\t\tcommand = append(command, \"--follow\")\n\t}\n\n\tif components := c.Args.Components; len(components) > 0 {\n\t\tcommand = append(command, components...)\n\t}\n\n\treturn compose.Run(context.Background(), command...)\n}\n\nfunc init() {\n\trootCmd.AddCommand(&logsCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/prune.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/workdir\"\n)\n\ntype pruneCmd struct {\n\tCommand `name:\"prune\" short-description:\"Stop and remove components and resources\" long-description:\"Stops containers and removes containers, networks, volumes, and configuration created by 'init' for the current working directory.\\nTo delete resources for all working directories, pass the --all flag.\\nImages are not deleted unless you specify the --images flag.\"`\n\n\tAll    bool `short:\"a\" long:\"all\" description:\"Remove containers and resources for all working directories\"`\n\tImages bool `long:\"images\" description:\"Remove docker images\"`\n}\n\nfunc (c *pruneCmd) Execute(args []string) error {\n\tworkdirHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif !c.All {\n\t\treturn c.pruneActive(workdirHandler)\n\t}\n\n\twds, err := workdirHandler.List()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, wd := range wds {\n\t\tif err := workdirHandler.SetActive(wd); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif err = c.pruneActive(workdirHandler); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (c *pruneCmd) pruneActive(workdirHandler *workdir.Handler) error {\n\ta := []string{\"down\", \"--volumes\"}\n\tif c.Images {\n\t\ta = append(a, \"--rmi\", \"all\")\n\t}\n\n\tif err := compose.Run(context.Background(), a...); err != nil {\n\t\treturn err\n\t}\n\n\twd, err := workdirHandler.Active()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := workdirHandler.Remove(wd); err != nil {\n\t\treturn err\n\t}\n\n\treturn workdirHandler.UnsetActive()\n}\n\nfunc init() {\n\trootCmd.AddCommand(&pruneCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/restart.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n)\n\ntype restartCmd struct {\n\tCommand `name:\"restart\" short-description:\"Update current installation according to the active docker compose file\" long-description:\"Update current installation according to the active docker compose file. It only recreates the component containers, keeping all your data, such as charts, dashboards, repositories, and GitHub metadata.\"`\n}\n\nfunc (c *restartCmd) Execute(args []string) error {\n\treturn compose.Run(context.Background(), \"up\", \"--force-recreate\", \"--detach\")\n}\n\nfunc init() {\n\trootCmd.AddCommand(&restartCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/root.go",
    "content": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"runtime\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/file\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/workdir\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/format\"\n\n\t\"gopkg.in/src-d/go-cli.v0\"\n)\n\nconst name = \"sourced\"\n\nvar version = \"master\"\n\nvar rootCmd = cli.NewNoDefaults(name, \"source{d} Community Edition & Enterprise Edition CLI client\")\n\n// Init sets the version rewritten by the CI build and adds default sub commands\nfunc Init(v, build string) {\n\tversion = v\n\n\trootCmd.AddCommand(&cli.VersionCommand{\n\t\tName:    name,\n\t\tVersion: version,\n\t\tBuild:   build,\n\t})\n\n\tif runtime.GOOS != \"windows\" {\n\t\trootCmd.AddCommand(&cli.CompletionCommand{\n\t\t\tName: name,\n\t\t}, cli.InitCompletionCommand(name))\n\t}\n}\n\n// Command implements the default group flags. It is meant to be embedded into\n// other application commands to provide default behavior for logging and config.\ntype Command struct {\n\tcli.PlainCommand\n\tcli.LogOptions `group:\"Log Options\"`\n}\n\n// Execute adds all child commands to the root command and sets flags appropriately.\n// This is called by main.main(). 
It only needs to happen once to the rootCmd.\nfunc Execute() {\n\tif err := dir.Prepare(); err != nil {\n\t\tfmt.Println(err)\n\t\tlog(err)\n\t\tos.Exit(1)\n\t}\n\n\tif err := rootCmd.Run(os.Args); err != nil {\n\t\tlog(err)\n\t\tos.Exit(1)\n\t}\n}\n\nfunc log(err error) {\n\tswitch {\n\tcase workdir.ErrMalformed.Is(err) || dir.ErrNotExist.Is(err):\n\t\tprintRed(\"Cannot perform this action, source{d} needs to be initialized first with the 'init' sub command\")\n\tcase workdir.ErrInitFailed.Is(err):\n\t\tprintRed(\"Cannot perform this action, full re-initialization is needed, run 'prune' command first\")\n\tcase dir.ErrNotValid.Is(err):\n\t\tprintRed(\"Cannot perform this action, config directory is not valid\")\n\tcase compose.ErrComposeAlternative.Is(err):\n\t\tprintRed(\"docker-compose is not installed, and there was an error while trying to use the container alternative\")\n\t\tprintRed(\"  see: https://docs.sourced.tech/community-edition/quickstart/1-install-requirements#docker-compose\")\n\tcase file.ErrConfigDownload.Is(err):\n\t\tprintRed(\"The source{d} CE config file could not be set as active\")\n\tcase fmt.Sprintf(\"%T\", err) == \"*flags.Error\":\n\t\t// syntax error is already logged by go-cli\n\tdefault:\n\t\t// unknown errors have no special message\n\t}\n\n\tswitch {\n\tcase dir.ErrNetwork.Is(err):\n\t\t// TODO(dpordomingo): if we start using \"https://golang.org/pkg/errors/\",\n\t\t//\t\t\t\t\t  we could do `var myErr ErrNetwork; errors.As(err, &myErr)`\n\t\t// \t\t\t\t\t  to provide more info about the actual network error\n\t\tprintRed(\"The resource could not be downloaded from the Internet\")\n\t\tprintRed(\"  see: https://docs.sourced.tech/community-edition/quickstart/1-install-requirements#internet-connection\")\n\tcase dir.ErrWrite.Is(err):\n\t\t// TODO(dpordomingo): see todo above\n\t\tprintRed(\"Could not write file\")\n\t}\n}\n\nfunc printRed(message string) {\n\tfmt.Println(format.Colorize(format.Red, message))\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/sql.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\t\"os\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n\n\t\"golang.org/x/crypto/ssh/terminal\"\n)\n\ntype sqlCmd struct {\n\tCommand `name:\"sql\" short-description:\"Open a MySQL client connected to a SQL interface for Git\" long-description:\"Open a MySQL client connected to a SQL interface for Git\"`\n\n\tArgs struct {\n\t\tQuery string `positional-arg-name:\"query\" description:\"SQL query to be run by the SQL interface for Git\"`\n\t} `positional-args:\"yes\"`\n}\n\nfunc (c *sqlCmd) Execute(args []string) error {\n\tcommand := []string{\"exec\"}\n\tif !terminal.IsTerminal(int(os.Stdout.Fd())) || !terminal.IsTerminal(int(os.Stdin.Fd())) {\n\t\tcommand = append(command, \"-T\")\n\t}\n\tcommand = append(command, \"gitbase\", \"mysql\")\n\tif c.Args.Query != \"\" {\n\t\tcommand = append(command, \"--execute\", c.Args.Query)\n\t}\n\n\treturn compose.Run(context.Background(), command...)\n}\n\nfunc init() {\n\trootCmd.AddCommand(&sqlCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/start.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n)\n\ntype startCmd struct {\n\tCommand `name:\"start\" short-description:\"Start any stopped components\" long-description:\"Start any stopped components.\\nThe containers must first be initialized with 'init'.\"`\n}\n\nfunc (c *startCmd) Execute(args []string) error {\n\tif err := compose.Run(context.Background(), \"start\"); err != nil {\n\t\treturn err\n\t}\n\n\treturn OpenUI(30 * time.Minute)\n}\n\nfunc init() {\n\trootCmd.AddCommand(&startCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/status.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/workdir\"\n)\n\ntype statusCmd struct {\n\tCommand `name:\"status\" short-description:\"Show the list of working directories and the current deployment\" long-description:\"Show the list of working directories and the current deployment\"`\n}\n\ntype statusAllCmd struct {\n\tCommand `name:\"all\" short-description:\"Show all the available status information\" long-description:\"Show all the available status information\"`\n}\n\nfunc (c *statusAllCmd) Execute(args []string) error {\n\tfmt.Print(\"List of all working directories:\\n\")\n\n\terr := printWorkdirsCmd()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tactive, err := activeWorkdir()\n\tif isNotExist(err) {\n\t\t// skip printing the config and components when there is no active dir\n\t\treturn nil\n\t}\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Print(\"\\nConfiguration used for the active working directory:\\n\\n\")\n\n\terr = printConfigCmd(active)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Print(\"\\nStatus of all components:\\n\\n\")\n\n\terr = printComponentsCmd()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\ntype statusComponentsCmd struct {\n\tCommand `name:\"components\" short-description:\"Show the status of the components containers\" long-description:\"Show the status of the components containers\"`\n}\n\nfunc (c *statusComponentsCmd) Execute(args []string) error {\n\treturn printComponentsCmd()\n}\n\nfunc printComponentsCmd() error {\n\treturn compose.Run(context.Background(), \"ps\")\n}\n\ntype statusWorkdirsCmd struct {\n\tCommand `name:\"workdirs\" short-description:\"List all working directories\" long-description:\"List all the previously initialized working directories\"`\n}\n\nfunc (c *statusWorkdirsCmd) Execute(args []string) error 
{\n\treturn printWorkdirsCmd()\n}\n\nfunc printWorkdirsCmd() error {\n\tworkdirHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\twds, err := workdirHandler.List()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tactivePath, err := activeWorkdir()\n\t// active directory does not necessarily exist\n\tif err != nil && !isNotExist(err) {\n\t\treturn err\n\t}\n\n\tfor _, wd := range wds {\n\t\tif wd.Path == activePath {\n\t\t\tfmt.Printf(\"* %s\\n\", wd.Name)\n\t\t} else {\n\t\t\tfmt.Printf(\"  %s\\n\", wd.Name)\n\t\t}\n\t}\n\n\treturn nil\n}\n\ntype statusConfigCmd struct {\n\tCommand `name:\"config\" short-description:\"Show the configuration for the active working directory\" long-description:\"Show the docker-compose environment variables configuration for the active working directory\"`\n}\n\nfunc (c *statusConfigCmd) Execute(args []string) error {\n\tactive, err := activeWorkdir()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn printConfigCmd(active)\n}\n\nfunc printConfigCmd(path string) error {\n\tcontent, err := ioutil.ReadFile(filepath.Join(path, \".env\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfmt.Printf(\"%s\\n\", content)\n\n\treturn nil\n}\n\nfunc isNotExist(err error) bool {\n\tif os.IsNotExist(err) {\n\t\treturn true\n\t}\n\n\tif cause, ok := err.(causer); ok {\n\t\treturn isNotExist(cause.Cause())\n\t}\n\n\treturn false\n}\n\ntype causer interface {\n\tCause() error\n}\n\nfunc activeWorkdir() (string, error) {\n\tworkdirHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tactive, err := workdirHandler.Active()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn active.Path, err\n}\n\nfunc init() {\n\tc := rootCmd.AddCommand(&statusCmd{})\n\n\tc.AddCommand(&statusAllCmd{})\n\tc.AddCommand(&statusComponentsCmd{})\n\tc.AddCommand(&statusWorkdirsCmd{})\n\tc.AddCommand(&statusConfigCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/stop.go",
    "content": "package cmd\n\nimport (\n\t\"context\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n)\n\ntype stopCmd struct {\n\tCommand `name:\"stop\" short-description:\"Stop any running components\" long-description:\"Stop any running components without removing them.\\nThey can be started again with 'start'.\"`\n}\n\nfunc (c *stopCmd) Execute(args []string) error {\n\treturn compose.Run(context.Background(), \"stop\")\n}\n\nfunc init() {\n\trootCmd.AddCommand(&stopCmd{})\n}\n"
  },
  {
    "path": "cmd/sourced/cmd/web.go",
    "content": "package cmd\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"os/exec\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/pkg/browser\"\n\t\"github.com/pkg/errors\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose\"\n)\n\n// The service name used in docker-compose.yml for the srcd/sourced-ui image\nconst containerName = \"sourced-ui\"\n\ntype webCmd struct {\n\tCommand `name:\"web\" short-description:\"Open the web interface in your browser\" long-description:\"Open the web interface in your browser, by default at: http://127.0.0.1:8088 user:admin pass:admin\"`\n}\n\nfunc (c *webCmd) Execute(args []string) error {\n\treturn OpenUI(2 * time.Second)\n}\n\nfunc init() {\n\trootCmd.AddCommand(&webCmd{})\n}\n\nfunc openUI(address string) error {\n\t// docker-compose returns 0.0.0.0 which is correct for the bind address\n\t// but incorrect as the connect address\n\turl := fmt.Sprintf(\"http://%s\", strings.Replace(address, \"0.0.0.0\", \"127.0.0.1\", 1))\n\n\tfor {\n\t\tclient := http.Client{Timeout: time.Second}\n\t\tif _, err := client.Get(url); err == nil {\n\t\t\tbreak\n\t\t}\n\n\t\ttime.Sleep(1 * time.Second)\n\t}\n\n\tif err := browser.OpenURL(url); err != nil {\n\t\treturn errors.Wrap(err, \"could not open the browser\")\n\t}\n\n\treturn nil\n}\n\nvar stateExtractor = regexp.MustCompile(`(?m)^srcd-\\w+.*(Up|Exit (\\d+))`)\n\nfunc checkServiceStatus(service string) error {\n\tvar stdout bytes.Buffer\n\tif err := compose.RunWithIO(context.Background(),\n\t\tos.Stdin, &stdout, nil, \"ps\", service); err != nil {\n\t\treturn errors.Wrapf(err, \"cannot get status of service %s\", service)\n\t}\n\n\tmatches := stateExtractor.FindAllStringSubmatch(strings.TrimSpace(stdout.String()), -1)\n\tfor _, match := range matches {\n\t\tstate := match[1]\n\n\t\tif strings.HasPrefix(state, \"Exit\") {\n\t\t\tif service != \"ghsync\" && service != \"gitcollector\" {\n\t\t\t\treturn fmt.Errorf(\"service '%s' is in 
state '%s'\", service, state)\n\t\t\t}\n\n\t\t\treturnCode := state[len(\"Exit \"):]\n\t\t\tif returnCode != \"0\" {\n\t\t\t\treturn fmt.Errorf(\"service '%s' exited with return code: %s\", service, returnCode)\n\t\t\t}\n\n\t\t\tcontinue\n\t\t}\n\n\t\tif state != \"Up\" {\n\t\t\treturn fmt.Errorf(\"service '%s' is in state '%s'\", service, state)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// runMonitor checks the status of the containers in order to exit early in case\n// an unrecoverable error occurs.\n// The monitoring is performed by running `docker-compose ps <service>` for each\n// service returned by `docker-compose config --services`, and by grepping the\n// state from the stdout using a regex.\n// Getting the state of all the containers in a single pass by running `docker-compose ps`\n// and by using a multi-line regex to extract both service name and state is not reliable.\n// The reason is that the prefix of a container can be very long, especially for local\n// initialization, due to the value that we set for `COMPOSE_PROJECT_NAME` env var, and\n// docker-compose may split the name into multiple lines.\n// E.g.:\n//\n// Name                                                       Command                       State                                     Ports\n// ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n// srcd-l1vzzxjzl3nln2vudhlzztdlbi9qcm9qzwn0cy8uz28td29ya3nwywnll3nyyy9naxrodwiuy29tl3nln   /bin/bblfsh-web -addr :808 ...   
Up                      0.0.0.0:9999->8080/tcp\n// 2vudhlzztdlbg_bblfsh-web_1\nfunc runMonitor(ch chan<- error) {\n\tvar servicesBuf bytes.Buffer\n\tif err := compose.RunWithIO(context.Background(),\n\t\tos.Stdin, &servicesBuf, nil, \"config\", \"--services\"); err != nil {\n\t\tch <- errors.Wrap(err, \"cannot get list of services\")\n\t\treturn\n\t}\n\n\tservices := strings.Split(strings.TrimSpace(servicesBuf.String()), \"\\n\")\n\n\tfor _, service := range services {\n\t\tif err := checkServiceStatus(service); err != nil {\n\t\t\tch <- err\n\t\t\treturn\n\t\t}\n\t\ttime.Sleep(time.Second)\n\t}\n}\n\nfunc getContainerPublicAddress(containerName, privatePort string) (string, error) {\n\tvar stdout bytes.Buffer\n\tfor {\n\t\terr := compose.RunWithIO(context.Background(), nil, &stdout, nil, \"port\", containerName, privatePort)\n\t\tif err == nil {\n\t\t\tbreak\n\t\t}\n\t\t// skip any unsuccessful command exits\n\t\tif _, ok := err.(*exec.ExitError); !ok {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\ttime.Sleep(1 * time.Second)\n\t}\n\n\taddress := strings.TrimSpace(stdout.String())\n\tif address == \"\" {\n\t\treturn \"\", fmt.Errorf(\"could not find the public port of %s\", containerName)\n\t}\n\n\treturn address, nil\n}\n\n// OpenUI opens the browser with the UI.\nfunc OpenUI(timeout time.Duration) error {\n\tch := make(chan error)\n\n\tgo func() {\n\t\taddress, err := getContainerPublicAddress(containerName, \"8088\")\n\t\tif err != nil {\n\t\t\tch <- err\n\t\t\treturn\n\t\t}\n\n\t\tch <- openUI(address)\n\t}()\n\n\tgo runMonitor(ch)\n\n\tfmt.Println(`\nOnce source{d} is fully initialized, the UI will be available, by default at:\n  http://127.0.0.1:8088\n  user:admin\n  pass:admin\n\t`)\n\n\tif timeout > 5*time.Second {\n\t\tstopSpinner := startSpinner(\"Initializing source{d}...\")\n\t\tdefer stopSpinner()\n\t}\n\n\tselect {\n\tcase err := <-ch:\n\t\treturn err\n\tcase <-time.After(timeout):\n\t\treturn fmt.Errorf(\"error opening the UI, the container is not running 
after %v\", timeout)\n\t}\n}\n\ntype spinner struct {\n\tmsg      string\n\tcharset  []int\n\tinterval time.Duration\n\n\tstop chan bool\n}\n\nfunc startSpinner(msg string) func() {\n\tcharset := []int{'⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏'}\n\tif runtime.GOOS == \"windows\" {\n\t\tcharset = []int{'|', '/', '-', '\\\\'}\n\t}\n\n\ts := &spinner{\n\t\tmsg:      msg,\n\t\tcharset:  charset,\n\t\tinterval: 200 * time.Millisecond,\n\t\tstop:     make(chan bool),\n\t}\n\ts.Start()\n\n\treturn s.Stop\n}\n\nfunc (s *spinner) Start() {\n\tgo s.printLoop()\n}\n\nfunc (s *spinner) Stop() {\n\ts.stop <- true\n}\n\nfunc (s *spinner) printLoop() {\n\ti := 0\n\tfor {\n\t\tselect {\n\t\tcase <-s.stop:\n\t\t\tfmt.Println(s.msg)\n\t\t\treturn\n\t\tdefault:\n\t\t\tchar := string(s.charset[i%len(s.charset)])\n\t\t\tif runtime.GOOS == \"windows\" {\n\t\t\t\tfmt.Printf(\"\\r%s %s\", s.msg, char)\n\t\t\t} else {\n\t\t\t\tfmt.Printf(\"%s %s\\n\\033[A\", s.msg, char)\n\t\t\t}\n\n\t\t\ttime.Sleep(s.interval)\n\t\t}\n\n\t\ti++\n\t\tif len(s.charset) == i {\n\t\t\ti = 0\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "cmd/sourced/compose/compose.go",
    "content": "package compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/compose/workdir\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n\n\t\"github.com/pkg/errors\"\n\tgoerrors \"gopkg.in/src-d/go-errors.v1\"\n)\n\n// v1.20.0 is the first version that supports the `--compatibility` flag we rely on.\n// There is no mention of it in the changelog; the version was found by\n// downgrading until it started to error.\nvar minDockerComposeVersion = semver.Version{\n\tMajor: 1,\n\tMinor: 20,\n\tPatch: 0,\n}\n\n// this version is chosen to always be compatible with the docker-compose version:\n// docker-compose v1.20.0 introduced compose file version 3.6,\n// which requires Docker Engine 18.02.0 or above\nvar minDockerVersion = semver.Version{\n\tMajor: 18,\n\tMinor: 2,\n\tPatch: 0,\n}\n\n// dockerComposeVersion is the version of docker-compose to download\n// if docker-compose isn't already present in the system\nconst dockerComposeVersion = \"1.24.0\"\n\nvar composeContainerURL = fmt.Sprintf(\"https://github.com/docker/compose/releases/download/%s/run.sh\", dockerComposeVersion)\n\n// ErrComposeAlternative is returned when the docker-compose alternative could not be installed\nvar ErrComposeAlternative = goerrors.NewKind(\"error while trying docker-compose container alternative\")\n\n// Compose runs docker-compose commands against the active working directory\ntype Compose struct {\n\tbin            string\n\tworkdirHandler *workdir.Handler\n}\n\n// Run executes docker-compose with the given arguments\nfunc (c *Compose) Run(ctx context.Context, arg ...string) error {\n\treturn c.RunWithIO(ctx, os.Stdin, os.Stdout, os.Stderr, arg...)\n}\n\n// RunWithIO executes docker-compose with the given arguments, using the\n// given streams for stdin, stdout, and stderr\nfunc (c *Compose) RunWithIO(ctx context.Context, stdin io.Reader,\n\tstdout, stderr io.Writer, arg ...string) error {\n\targ = append([]string{\"--compatibility\"}, arg...)\n\tcmd := exec.CommandContext(ctx, c.bin, arg...)\n\n\twd, err := c.workdirHandler.Active()\n\tif err != nil 
{\n\t\treturn err\n\t}\n\n\tif err := c.workdirHandler.Validate(wd); err != nil {\n\t\treturn err\n\t}\n\n\tcmd.Dir = wd.Path\n\tcmd.Stdin = stdin\n\tcmd.Stdout = stdout\n\tcmd.Stderr = stderr\n\n\treturn cmd.Run()\n}\n\nfunc newCompose() (*Compose, error) {\n\t// check docker first and exit fast\n\tdockerVersion, err := getDockerVersion()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif !dockerVersion.GE(minDockerVersion) {\n\t\treturn nil, fmt.Errorf(\"minimal required docker version is %s but %s was found\", minDockerVersion, dockerVersion)\n\t}\n\n\tworkdirHandler, err := workdir.NewHandler()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tbin, err := getOrInstallComposeBinary()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdockerComposeVersion, err := getDockerComposeVersion(bin)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif !dockerComposeVersion.GE(minDockerComposeVersion) {\n\t\treturn nil, fmt.Errorf(\"minimal required docker-compose version is %s but %s was found\", minDockerComposeVersion, dockerComposeVersion)\n\t}\n\n\treturn &Compose{\n\t\tbin:            bin,\n\t\tworkdirHandler: workdirHandler,\n\t}, nil\n}\n\nfunc getOrInstallComposeBinary() (string, error) {\n\tpath, err := exec.LookPath(\"docker-compose\")\n\tif err == nil {\n\t\tbin := strings.TrimSpace(path)\n\t\tif bin != \"\" {\n\t\t\treturn bin, nil\n\t\t}\n\t}\n\n\tpath, err = getOrInstallComposeContainer()\n\tif err != nil {\n\t\treturn \"\", ErrComposeAlternative.Wrap(err)\n\t}\n\n\treturn path, nil\n}\n\nfunc getOrInstallComposeContainer() (altPath string, err error) {\n\tdatadir, err := dir.Path()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tdirPath := filepath.Join(datadir, \"bin\")\n\tpath := filepath.Join(dirPath, fmt.Sprintf(\"docker-compose-%s.sh\", dockerComposeVersion))\n\n\treadExecAccessMode := os.FileMode(0500)\n\n\tif info, err := os.Stat(path); err == nil {\n\t\tif info.Mode()&readExecAccessMode != readExecAccessMode {\n\t\t\treturn \"\", fmt.Errorf(\"%s cannot be 
run\", path)\n\t\t}\n\n\t\treturn path, nil\n\t} else if !os.IsNotExist(err) {\n\t\treturn \"\", err\n\t}\n\n\tif err := downloadCompose(path); err != nil {\n\t\treturn \"\", err\n\t}\n\n\tcmd := exec.CommandContext(context.Background(), \"chmod\", \"+x\", path)\n\tif err := cmd.Run(); err != nil {\n\t\treturn \"\", errors.Wrapf(err, \"cannot change permission to %s\", path)\n\t}\n\n\treturn path, nil\n}\n\nfunc downloadCompose(path string) error {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn fmt.Errorf(\"compose in container is not compatible with Windows\")\n\t}\n\n\treturn dir.DownloadURL(composeContainerURL, path)\n}\n\n// Run runs docker-compose with the given arguments against the active working directory\nfunc Run(ctx context.Context, arg ...string) error {\n\tcomp, err := newCompose()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn comp.Run(ctx, arg...)\n}\n\n// RunWithIO is like Run, but with custom stdin, stdout, and stderr\nfunc RunWithIO(ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, arg ...string) error {\n\tcomp, err := newCompose()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn comp.RunWithIO(ctx, stdin, stdout, stderr, arg...)\n}\n\nvar dockerVersionRe = regexp.MustCompile(`version (\\d+)\\.(\\d+)\\.(\\d+)`)\nvar dockerComposeVersionRe = regexp.MustCompile(`version (\\d+\\.\\d+\\.\\d+)`)\n\n// docker doesn't use semver, so a simple `semver.Parse` would fail,\n// but the semver.Version struct fits our needs for a simple comparison\nfunc getDockerVersion() (*semver.Version, error) {\n\tif _, err := exec.LookPath(\"docker\"); err != nil {\n\t\treturn nil, err\n\t}\n\n\tout, err := exec.Command(\"docker\", \"--version\").Output()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tsubmatches := dockerVersionRe.FindSubmatch(out)\n\tif len(submatches) != 4 {\n\t\treturn nil, fmt.Errorf(\"can't parse docker version\")\n\t}\n\n\tv := &semver.Version{}\n\tv.Major, err = strconv.ParseUint(string(submatches[1]), 10, 64)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"can't parse docker version\")\n\t}\n\tv.Minor, err = strconv.ParseUint(string(submatches[2]), 10, 64)\n\tif err != nil {\n\t\treturn nil, 
fmt.Errorf(\"can't parse docker version\")\n\t}\n\tv.Patch, err = strconv.ParseUint(string(submatches[3]), 10, 64)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"can't parse docker version\")\n\t}\n\n\treturn v, nil\n}\n\nfunc getDockerComposeVersion(bin string) (*semver.Version, error) {\n\tout, err := exec.Command(bin, \"--version\").Output()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tsubmatches := dockerComposeVersionRe.FindSubmatch(out)\n\tif len(submatches) != 2 {\n\t\treturn nil, fmt.Errorf(\"can't parse docker-compose version\")\n\t}\n\n\tv, err := semver.ParseTolerant(string(submatches[1]))\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"can't parse docker-compose version: %s\", err)\n\t}\n\n\treturn &v, nil\n}\n"
  },
  {
    "path": "cmd/sourced/compose/file/file.go",
    "content": "// Package file provides functions to manage docker compose files inside the\n// $HOME/.sourced/compose-files directory\npackage file\n\nimport (\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\n\tdatadir \"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n\n\t\"github.com/pkg/errors\"\n\tgoerrors \"gopkg.in/src-d/go-errors.v1\"\n)\n\n// ErrConfigDownload is returned when docker-compose.yml could not be downloaded\nvar ErrConfigDownload = goerrors.NewKind(\"docker-compose.yml config file could not be downloaded\")\n\n// ErrConfigActivation is returned when docker-compose.yml could not be set as active\nvar ErrConfigActivation = goerrors.NewKind(\"docker-compose.yml could not be set as active\")\n\nconst (\n\torgName         = \"src-d\"\n\trepoName        = \"sourced-ce\"\n\tcomposeFileTmpl = \"https://raw.githubusercontent.com/%s/%s/%s/docker-compose.yml\"\n)\n\nvar version = \"master\"\n\n// activeDir is the name of the directory containing the symlink to the\n// active docker compose file\nconst activeDir = \"__active__\"\n\n// RevOrURL is a revision (tag name, full sha1) or a valid URL to a\n// docker-compose.yml file\ntype RevOrURL = string\n\n// composeFileURL returns the URL to download the raw github docker-compose.yml\n// file for the given revision (tag or full sha1)\nfunc composeFileURL(revision string) string {\n\treturn fmt.Sprintf(composeFileTmpl, orgName, repoName, revision)\n}\n\n// SetVersion sets the version rewritten by the CI build\nfunc SetVersion(v string) {\n\tversion = v\n}\n\n// InitDefault checks if there is an active docker compose file; if there\n// isn't, the file for this release is downloaded.\n// The current build version must be set with SetVersion.\n// It returns the absolute path to the active docker-compose.yml file\nfunc InitDefault() (string, error) {\n\tactiveFilePath, err := path(activeDir)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t_, err = 
os.Stat(activeFilePath)\n\tif err == nil {\n\t\treturn activeFilePath, nil\n\t}\n\n\tif !os.IsNotExist(err) {\n\t\treturn \"\", err\n\t}\n\n\terr = ActivateFromRemote(version)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn activeFilePath, nil\n}\n\n// ActivateFromRemote downloads the docker-compose.yml file from the given revision\n// or URL, and sets it as the active compose file.\nfunc ActivateFromRemote(revOrURL RevOrURL) (err error) {\n\tvar url string\n\tif isURL(revOrURL) {\n\t\turl = revOrURL\n\t} else {\n\t\turl = composeFileURL(revOrURL)\n\t}\n\n\toutPath, err := path(revOrURL)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = datadir.DownloadURL(url, outPath)\n\tif err != nil {\n\t\treturn ErrConfigDownload.Wrap(err)\n\t}\n\n\terr = SetActive(revOrURL)\n\tif err != nil {\n\t\treturn ErrConfigActivation.Wrap(err)\n\t}\n\n\treturn nil\n}\n\n// SetActive makes a symlink from\n// $HOME/.sourced/compose-files/__active__/docker-compose.yml to the compose file\n// for the given revision or URL.\nfunc SetActive(revOrURL RevOrURL) error {\n\tfilePath, err := path(revOrURL)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif _, err := os.Stat(filePath); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn errors.Wrapf(err, \"could not find a docker-compose.yml file in `%s`\", filePath)\n\t\t}\n\n\t\treturn err\n\t}\n\n\tactiveFilePath, err := path(activeDir)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = os.MkdirAll(filepath.Dir(activeFilePath), os.ModePerm)\n\tif err != nil {\n\t\treturn errors.Wrapf(err, \"error while creating directory for %s\", activeFilePath)\n\t}\n\n\tif _, err := os.Lstat(activeFilePath); err == nil {\n\t\tif err := os.Remove(activeFilePath); err != nil {\n\t\t\treturn errors.Wrap(err, \"failed to unlink\")\n\t\t}\n\t}\n\n\treturn os.Symlink(filePath, activeFilePath)\n}\n\n// Active returns the revision (tag name, full sha1) or the URL of the active\n// docker compose file\nfunc Active() (RevOrURL, error) {\n\tactiveFilePath, 
err := path(activeDir)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tif _, err := os.Stat(activeFilePath); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn \"\", nil\n\t\t}\n\n\t\treturn \"\", err\n\t}\n\n\tdest, err := filepath.EvalSymlinks(activeFilePath)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t_, name := filepath.Split(filepath.Dir(dest))\n\treturn composeName(name), nil\n}\n\n// List returns a list of installed docker compose files. Each name is the\n// revision (tag name, full sha1) or the URL\nfunc List() ([]RevOrURL, error) {\n\tlist := []RevOrURL{}\n\n\tdir, err := dir()\n\tif err != nil {\n\t\treturn list, err\n\t}\n\n\tif _, err := os.Stat(dir); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn list, nil\n\t\t}\n\n\t\treturn list, err\n\t}\n\n\tfiles, err := ioutil.ReadDir(dir)\n\tif err != nil {\n\t\treturn list, err\n\t}\n\n\tfor _, f := range files {\n\t\tentry := f.Name()\n\t\tif entry == activeDir {\n\t\t\tcontinue\n\t\t}\n\n\t\tlist = append(list, composeName(entry))\n\t}\n\n\treturn list, nil\n}\n\nfunc composeName(rev string) string {\n\tif decoded, err := base64.URLEncoding.DecodeString(rev); err == nil {\n\t\treturn string(decoded)\n\t}\n\n\treturn rev\n}\n\nfunc isURL(revOrURL RevOrURL) bool {\n\t_, err := url.ParseRequestURI(revOrURL)\n\treturn err == nil\n}\n\n// dir returns the absolute path for $HOME/.sourced/compose-files\nfunc dir() (string, error) {\n\tpath, err := datadir.Path()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn filepath.Join(path, \"compose-files\"), nil\n}\n\n// path returns the absolute path to\n// $HOME/.sourced/compose-files/revOrURL/docker-compose.yml\nfunc path(revOrURL RevOrURL) (string, error) {\n\tcomposeDirPath, err := dir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tsubPath := revOrURL\n\tif isURL(revOrURL) {\n\t\tsubPath = base64.URLEncoding.EncodeToString([]byte(revOrURL))\n\t}\n\n\tdirPath := filepath.Join(composeDirPath, subPath)\n\n\treturn 
filepath.Join(dirPath, \"docker-compose.yml\"), nil\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/env_file_test.go",
    "content": "package workdir\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\n// default limits depend on host system and can't be used in tests\nfunc setResourceLimits(f *envFile) {\n\tf.GitcollectorLimitCPU = 2.0\n\tf.GitbaseLimitCPU = 1.5\n\tf.GitbaseLimitMem = 100\n}\n\nconst localContent = `COMPOSE_PROJECT_NAME=srcd-dir-name\nGITBASE_VOLUME_TYPE=bind\nGITBASE_VOLUME_SOURCE=repo-dir\nGITBASE_SIVA=\nGITHUB_ORGANIZATIONS=\nGITHUB_TOKEN=\nNO_FORKS=\nGITBASE_LIMIT_CPU=1.5\nGITCOLLECTOR_LIMIT_CPU=2\nGITBASE_LIMIT_MEM=100\n`\n\nconst orgsContent = `COMPOSE_PROJECT_NAME=srcd-dir-name\nGITBASE_VOLUME_TYPE=volume\nGITBASE_VOLUME_SOURCE=gitbase_repositories\nGITBASE_SIVA=true\nGITHUB_ORGANIZATIONS=org1,org2\nGITHUB_TOKEN=token\nNO_FORKS=true\nGITBASE_LIMIT_CPU=1.5\nGITCOLLECTOR_LIMIT_CPU=2\nGITBASE_LIMIT_MEM=100\n`\n\nconst emptyContent = `COMPOSE_PROJECT_NAME=\nGITBASE_VOLUME_TYPE=\nGITBASE_VOLUME_SOURCE=\nGITBASE_SIVA=\nGITHUB_ORGANIZATIONS=\nGITHUB_TOKEN=\nNO_FORKS=\nGITBASE_LIMIT_CPU=0\nGITCOLLECTOR_LIMIT_CPU=0\nGITBASE_LIMIT_MEM=0\n`\n\nfunc TestEnvMarshal(t *testing.T) {\n\tassert := assert.New(t)\n\n\tf := newLocalEnvFile(\"dir-name\", \"repo-dir\")\n\tsetResourceLimits(&f)\n\tb, err := f.MarshalEnv()\n\tassert.Nil(err)\n\tassert.Equal(localContent, strings.ReplaceAll(string(b), \"\\r\\n\", \"\\n\"))\n\n\tf = newOrgEnvFile(\"dir-name\", []string{\"org1\", \"org2\"}, \"token\", false)\n\tsetResourceLimits(&f)\n\tb, err = f.MarshalEnv()\n\tassert.Nil(err)\n\tassert.Equal(orgsContent, strings.ReplaceAll(string(b), \"\\r\\n\", \"\\n\"))\n\n\tf = envFile{}\n\tb, err = f.MarshalEnv()\n\tassert.Nil(err)\n\tassert.Equal(emptyContent, strings.ReplaceAll(string(b), \"\\r\\n\", \"\\n\"))\n}\n\nfunc TestEnvUnmarshal(t *testing.T) {\n\tassert := assert.New(t)\n\n\tb := []byte(localContent)\n\tf := envFile{}\n\tassert.Nil(f.UnmarshalEnv(b))\n\tassert.Equal(envFile{\n\t\tComposeProjectName:  \"srcd-dir-name\",\n\t\tGitbaseVolumeType: 
  \"bind\",\n\t\tGitbaseVolumeSource: \"repo-dir\",\n\n\t\tGitcollectorLimitCPU: 2.0,\n\t\tGitbaseLimitCPU:      1.5,\n\t\tGitbaseLimitMem:      100,\n\t}, f)\n\n\tb = []byte(orgsContent)\n\tf = envFile{}\n\tassert.Nil(f.UnmarshalEnv(b))\n\tassert.Equal(envFile{\n\t\tComposeProjectName:  \"srcd-dir-name\",\n\t\tGitbaseVolumeType:   \"volume\",\n\t\tGitbaseVolumeSource: \"gitbase_repositories\",\n\t\tGitbaseSiva:         true,\n\t\tGithubOrganizations: []string{\"org1\", \"org2\"},\n\t\tGithubToken:         \"token\",\n\t\tNoForks:             true,\n\n\t\tGitcollectorLimitCPU: 2.0,\n\t\tGitbaseLimitCPU:      1.5,\n\t\tGitbaseLimitMem:      100,\n\t}, f)\n\n\tb = []byte(\"\")\n\tf = envFile{}\n\tassert.Nil(f.UnmarshalEnv(b))\n\n\tb = []byte(\" COMPOSE_PROJECT_NAME=srcd-dir-name  \\n\\n  GITBASE_VOLUME_TYPE=volume  \")\n\tf = envFile{}\n\tassert.Nil(f.UnmarshalEnv(b))\n\tassert.Equal(envFile{\n\t\tComposeProjectName: \"srcd-dir-name\",\n\t\tGitbaseVolumeType:  \"volume\",\n\t}, f)\n\n\tb = []byte(\"UNKNOWN=1\")\n\tf = envFile{}\n\tassert.Nil(f.UnmarshalEnv(b))\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/factory.go",
    "content": "package workdir\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"reflect\"\n\t\"runtime\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/pbnjay/memory\"\n\t\"github.com/pkg/errors\"\n\t\"github.com/serenize/snaker\"\n\tcomposefile \"github.com/src-d/sourced-ce/cmd/sourced/compose/file\"\n)\n\n// InitLocal initializes the workdir for local path and returns the Workdir instance\nfunc InitLocal(reposdir string) (*Workdir, error) {\n\tdirName := encodeDirName(reposdir)\n\tenvf := newLocalEnvFile(dirName, reposdir)\n\n\treturn initialize(dirName, \"local\", envf)\n}\n\n// InitOrgs initializes the workdir for organizations and returns the Workdir instance\nfunc InitOrgs(orgs []string, token string, withForks bool) (*Workdir, error) {\n\t// be indifferent to the order of passed organizations\n\tsort.Strings(orgs)\n\tdirName := encodeDirName(strings.Join(orgs, \",\"))\n\n\tenvf := envFile{}\n\terr := readEnvFile(dirName, \"orgs\", &envf)\n\tif err == nil && envf.NoForks == withForks {\n\t\treturn nil, ErrInitFailed.Wrap(\n\t\t\tfmt.Errorf(\"workdir was previously initialized with a different value for forks support\"))\n\t}\n\tif err != nil && !os.IsNotExist(err) {\n\t\treturn nil, err\n\t}\n\n\t// re-create env file to make sure all fields are updated\n\tenvf = newOrgEnvFile(dirName, orgs, token, withForks)\n\n\treturn initialize(dirName, \"orgs\", envf)\n}\n\nfunc readEnvFile(dirName string, subPath string, envf *envFile) error {\n\tworkdir, err := workdirPath(dirName, subPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tenvPath := filepath.Join(workdir, \".env\")\n\tb, err := ioutil.ReadFile(envPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn envf.UnmarshalEnv(b)\n}\n\nfunc encodeDirName(dirName string) string {\n\treturn base64.URLEncoding.EncodeToString([]byte(dirName))\n}\n\nfunc workdirPath(dirName string, subPath string) (string, error) 
{\n\tpath, err := workdirsPath()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tworkdir := filepath.Join(path, subPath, dirName)\n\n\treturn workdir, nil\n}\n\nfunc initialize(dirName string, subPath string, envf envFile) (*Workdir, error) {\n\tpath, err := workdirsPath()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tworkdir, err := workdirPath(dirName, subPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = os.MkdirAll(workdir, 0755)\n\tif err != nil {\n\t\treturn nil, errors.Wrap(err, \"could not create working directory\")\n\t}\n\n\tdefaultFilePath, err := composefile.InitDefault()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tcomposePath := filepath.Join(workdir, \"docker-compose.yml\")\n\tif err := link(defaultFilePath, composePath); err != nil {\n\t\treturn nil, err\n\t}\n\n\tenvPath := filepath.Join(workdir, \".env\")\n\tcontents, err := envf.MarshalEnv()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = ioutil.WriteFile(envPath, contents, 0644)\n\tif err != nil {\n\t\treturn nil, errors.Wrap(err, \"could not write .env file\")\n\t}\n\n\tb := &builder{workdirsPath: path}\n\treturn b.Build(workdir)\n}\n\ntype envFile struct {\n\tComposeProjectName string\n\n\tGitbaseVolumeType   string\n\tGitbaseVolumeSource string\n\tGitbaseSiva         bool\n\n\tGithubOrganizations []string\n\tGithubToken         string\n\n\tNoForks bool\n\n\tGitbaseLimitCPU      float32\n\tGitcollectorLimitCPU float32\n\tGitbaseLimitMem      uint64\n}\n\nfunc newLocalEnvFile(dirName, repoDir string) envFile {\n\tf := envFile{\n\t\tComposeProjectName: fmt.Sprintf(\"srcd-%s\", dirName),\n\n\t\tGitbaseVolumeType:   \"bind\",\n\t\tGitbaseVolumeSource: repoDir,\n\t}\n\tf.addResourceLimits()\n\n\treturn f\n}\n\nfunc newOrgEnvFile(dirName string, orgs []string, token string, withForks bool) envFile {\n\tf := envFile{\n\t\tComposeProjectName: fmt.Sprintf(\"srcd-%s\", dirName),\n\n\t\tGitbaseVolumeType:   
\"volume\",\n\t\tGitbaseVolumeSource: \"gitbase_repositories\",\n\t\tGitbaseSiva:         true,\n\n\t\tGithubOrganizations: orgs,\n\t\tGithubToken:         token,\n\n\t\tNoForks: !withForks,\n\t}\n\tf.addResourceLimits()\n\n\treturn f\n}\n\nfunc (f *envFile) addResourceLimits() {\n\t// limit CPU for containers\n\tdockerCPUs, err := dockerNumCPU()\n\tif err != nil { // show warning\n\t\tfmt.Println(err)\n\t}\n\t// apply gitbase resource limits only when docker runs without any global limits,\n\t// which is the default behaviour on Linux\n\tif runtime.NumCPU() == dockerCPUs {\n\t\tf.GitbaseLimitCPU = float32(dockerCPUs) - 0.1\n\t}\n\t// always apply gitcollector limit\n\tif dockerCPUs > 0 {\n\t\thalfCPUs := float32(dockerCPUs) / 2.0\n\t\t// let the container consume more than a half if there is only one CPU available,\n\t\t// otherwise it will be too slow\n\t\tif halfCPUs < 1 {\n\t\t\thalfCPUs = 1\n\t\t}\n\t\tf.GitcollectorLimitCPU = halfCPUs - 0.1\n\t}\n\n\t// limit memory for containers\n\tdockerMem, err := dockerTotalMem()\n\tif err != nil { // show warning\n\t\tfmt.Println(err)\n\t}\n\t// apply memory limits only when docker runs without any global limits,\n\t// which is the default behaviour on Linux\n\tif dockerMem == memory.TotalMemory() {\n\t\tf.GitbaseLimitMem = uint64(float64(dockerMem) * 0.9)\n\t}\n}\n\nvar newlineChar = \"\\n\"\n\nfunc init() {\n\tif runtime.GOOS == \"windows\" {\n\t\tnewlineChar = \"\\r\\n\"\n\t}\n}\n\n// implementation can be moved to separate package if we need to marshal any other structs\n// supports only simple types\nfunc (f envFile) MarshalEnv() ([]byte, error) {\n\tvar b bytes.Buffer\n\n\tv := reflect.ValueOf(f)\n\trType := v.Type()\n\n\tfor i := 0; i < rType.NumField(); i++ {\n\t\tfield := rType.Field(i)\n\t\tfieldEl := v.Field(i)\n\t\tif field.Anonymous {\n\t\t\tpanic(\"struct composition isn't supported\")\n\t\t}\n\n\t\tname := strings.ToUpper(snaker.CamelToSnake(field.Name))\n\t\tswitch field.Type.Kind() {\n\t\tcase 
reflect.Slice:\n\t\t\tslice := make([]string, fieldEl.Len())\n\t\t\tfor i := 0; i < fieldEl.Len(); i++ {\n\t\t\t\tslice[i] = fmt.Sprintf(\"%v\", fieldEl.Index(i).Interface())\n\t\t\t}\n\t\t\tfmt.Fprintf(&b, \"%s=%v%s\", name, strings.Join(slice, \",\"), newlineChar)\n\t\tcase reflect.Bool:\n\t\t\t// marshal false value as empty string instead of \"false\" string\n\t\t\tif fieldEl.Interface().(bool) {\n\t\t\t\tfmt.Fprintf(&b, \"%s=true%s\", name, newlineChar)\n\t\t\t} else {\n\t\t\t\tfmt.Fprintf(&b, \"%s=%s\", name, newlineChar)\n\t\t\t}\n\t\tdefault:\n\t\t\tfmt.Fprintf(&b, \"%s=%v%s\", name, fieldEl.Interface(), newlineChar)\n\t\t}\n\t}\n\n\treturn b.Bytes(), nil\n}\n\n// implementation can be moved to separate package if we need to unmarshal any other structs\n// supports only simple types\nfunc (f *envFile) UnmarshalEnv(b []byte) error {\n\tv := reflect.ValueOf(f).Elem()\n\n\tr := bytes.NewReader(b)\n\tscanner := bufio.NewScanner(r)\n\tfor scanner.Scan() {\n\t\tline := strings.TrimSpace(scanner.Text())\n\t\tif line == \"\" || !strings.Contains(line, \"=\") {\n\t\t\tcontinue\n\t\t}\n\n\t\tparts := strings.SplitN(line, \"=\", 2)\n\t\tname := parts[0]\n\t\tvalue := parts[1]\n\t\tfield := v.FieldByName(snaker.SnakeToCamel(strings.ToLower(name)))\n\t\t// skip unknown values\n\t\tif !field.IsValid() {\n\t\t\tcontinue\n\t\t}\n\t\t// skip empty values\n\t\tif value == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tswitch field.Kind() {\n\t\tcase reflect.String:\n\t\t\tfield.SetString(value)\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\ti, err := strconv.ParseInt(value, 10, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"can't parse variable %s with value %s: %v\", name, value, err)\n\t\t\t}\n\t\t\tfield.SetInt(i)\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\ti, err := strconv.ParseUint(value, 10, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"can't parse variable %s with value 
%s: %v\", name, value, err)\n\t\t\t}\n\t\t\tfield.SetUint(i)\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\ti, err := strconv.ParseFloat(value, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"can't parse variable %s with value %s: %v\", name, value, err)\n\t\t\t}\n\t\t\tfield.SetFloat(i)\n\t\tcase reflect.Bool:\n\t\t\tif value == \"true\" {\n\t\t\t\tfield.SetBool(true)\n\t\t\t} else {\n\t\t\t\tfield.SetBool(false)\n\t\t\t}\n\t\tcase reflect.Slice:\n\t\t\tif field.Type().Elem().Kind() != reflect.String {\n\t\t\t\tpanic(\"only slices of strings are supported\")\n\t\t\t}\n\t\t\tvs := strings.Split(value, \",\")\n\t\t\tslice := reflect.MakeSlice(field.Type(), len(vs), len(vs))\n\t\t\tfor i, v := range vs {\n\t\t\t\tslice.Index(i).SetString(v)\n\t\t\t}\n\t\t\tfield.Set(slice)\n\t\tdefault:\n\t\t\tpanic(fmt.Sprintf(\"unsupported type: %v\", field.Kind()))\n\t\t}\n\t}\n\n\treturn scanner.Err()\n}\n\n// returns number of CPUs available to docker\nfunc dockerNumCPU() (int, error) {\n\t// use the cli instead of connecting to the docker server directly\n\t// in case the server is exposed over http or a non-default socket path\n\tinfo, err := exec.Command(\"docker\", \"info\", \"--format\", \"{{.NCPU}}\").Output()\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\tcpus, err := strconv.Atoi(strings.TrimSpace(string(info)))\n\tif err != nil || cpus == 0 {\n\t\treturn 0, fmt.Errorf(\"Couldn't get number of available CPUs in docker\")\n\t}\n\n\treturn cpus, nil\n}\n\n// returns total memory in bytes available to docker\nfunc dockerTotalMem() (uint64, error) {\n\tinfo, err := exec.Command(\"docker\", \"info\", \"--format\", \"{{.MemTotal}}\").Output()\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\tmem, err := strconv.ParseUint(strings.TrimSpace(string(info)), 10, 64)\n\tif err != nil || mem == 0 {\n\t\treturn 0, fmt.Errorf(\"Couldn't get amount of available memory in docker\")\n\t}\n\n\treturn mem, nil\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/factory_test.go",
"content": "package workdir\n\nimport (\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/suite\"\n)\n\ntype FactorySuite struct {\n\tsuite.Suite\n\n\toriginSrcdDir string\n}\n\nfunc TestFactorySuite(t *testing.T) {\n\tsuite.Run(t, &FactorySuite{})\n}\n\nfunc (s *FactorySuite) BeforeTest(suiteName, testName string) {\n\ts.originSrcdDir = os.Getenv(\"SOURCED_DIR\")\n\n\t// on macOS os.TempDir returns a symlink and tests fail\n\ttmpDir, _ := filepath.EvalSymlinks(os.TempDir())\n\tsrdPath := path.Join(tmpDir, testName)\n\terr := os.MkdirAll(srdPath, os.ModePerm)\n\ts.Nil(err)\n\n\tos.Setenv(\"SOURCED_DIR\", srdPath)\n}\n\nfunc (s *FactorySuite) AfterTest(suiteName, testName string) {\n\tos.RemoveAll(os.Getenv(\"SOURCED_DIR\"))\n\tos.Setenv(\"SOURCED_DIR\", s.originSrcdDir)\n}\n\nfunc (s *FactorySuite) TestInitLocal() {\n\treposdir := \"some-dir\"\n\twd, err := InitLocal(reposdir)\n\ts.Nil(err)\n\ts.Equal(Local, wd.Type)\n\ts.Equal(reposdir, wd.Name)\n\n\t// check docker-compose.yml exists\n\tcomposeYmlPath := path.Join(wd.Path, \"docker-compose.yml\")\n\t_, err = os.Stat(composeYmlPath)\n\ts.Nil(err)\n\n\t// check .env file\n\tenvPath := path.Join(wd.Path, \".env\")\n\t_, err = os.Stat(envPath)\n\ts.Nil(err)\n\n\tenvf := envFile{}\n\ts.Nil(readEnvFile(encodeDirName(reposdir), \"local\", &envf))\n\n\ts.Equal(reposdir, envf.GitbaseVolumeSource)\n\ts.False(envf.NoForks)\n}\n\nfunc (s *FactorySuite) TestInitOrgs() {\n\torgs := []string{\"org2\", \"org1\"}\n\tname := \"org1,org2\"\n\ttoken := \"some-token\"\n\twd, err := InitOrgs(orgs, token, true)\n\ts.Nil(err)\n\ts.Equal(Orgs, wd.Type)\n\ts.Equal(name, wd.Name)\n\n\t// check docker-compose.yml exists\n\tcomposeYmlPath := path.Join(wd.Path, \"docker-compose.yml\")\n\t_, err = os.Stat(composeYmlPath)\n\ts.Nil(err)\n\n\t// check .env file\n\tenvPath := path.Join(wd.Path, \".env\")\n\t_, err = os.Stat(envPath)\n\ts.Nil(err)\n\n\tenvf := 
envFile{}\n\ts.Nil(readEnvFile(encodeDirName(name), \"orgs\", &envf))\n\n\ts.Equal(\"gitbase_repositories\", envf.GitbaseVolumeSource)\n\ts.Equal(orgs, envf.GithubOrganizations)\n\ts.Equal(token, envf.GithubToken)\n\ts.False(envf.NoForks)\n}\n\nfunc (s *FactorySuite) TestReInitForksOrgs() {\n\torgs := []string{\"org2\", \"org1\"}\n\t_, err := InitOrgs(orgs, \"\", false)\n\ts.Nil(err)\n\n\t_, err = InitOrgs(orgs, \"\", true)\n\ts.EqualError(err, \"initialization failed: workdir was previously initialized with a different value for forks support\")\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/handler.go",
"content": "package workdir\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/pkg/errors\"\n)\n\n// Handler provides a way to interact with all the workdirs by exposing the following operations:\n//   - read/set/unset active workdir,\n//   - remove/validate a workdir,\n//   - list workdirs.\ntype Handler struct {\n\tworkdirsPath string\n\tbuilder      *builder\n}\n\n// NewHandler creates a handler that manages workdirs in the path returned by\n// the `workdirsPath` function\nfunc NewHandler() (*Handler, error) {\n\tpath, err := workdirsPath()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &Handler{\n\t\tworkdirsPath: path,\n\t\tbuilder:      &builder{workdirsPath: path},\n\t}, nil\n}\n\n// SetActive creates a symlink from the fixed active workdir path to the provided workdir\nfunc (h *Handler) SetActive(w *Workdir) error {\n\tpath := h.activeAbsolutePath()\n\n\tif err := h.UnsetActive(); err != nil {\n\t\treturn err\n\t}\n\n\terr := os.Symlink(w.Path, path)\n\tif os.IsExist(err) {\n\t\treturn nil\n\t}\n\n\treturn err\n}\n\n// UnsetActive removes the symlink for the active workdir\nfunc (h *Handler) UnsetActive() error {\n\tpath := h.activeAbsolutePath()\n\n\t_, err := os.Lstat(path)\n\tif !os.IsNotExist(err) {\n\t\terr = os.Remove(path)\n\t\tif err != nil {\n\t\t\treturn errors.Wrap(err, \"could not delete the previous active workdir directory symlink\")\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Active returns the active working directory\nfunc (h *Handler) Active() (*Workdir, error) {\n\tpath := h.activeAbsolutePath()\n\n\tresolvedPath, err := filepath.EvalSymlinks(path)\n\tif os.IsNotExist(err) {\n\t\treturn nil, ErrMalformed.Wrap(err, \"active\")\n\t}\n\n\treturn h.builder.Build(resolvedPath)\n}\n\n// List returns the array of working directories\nfunc (h *Handler) List() ([]*Workdir, error) {\n\tdirs := make([]string, 0)\n\terr := filepath.Walk(h.workdirsPath, func(path string, info os.FileInfo, err error) error {\n\t\tif err != 
nil {\n\t\t\treturn err\n\t\t}\n\t\tif !info.IsDir() {\n\t\t\treturn nil\n\t\t}\n\t\tfor _, f := range RequiredFiles {\n\t\t\tif !hasContent(path, f) {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\n\t\tdirs = append(dirs, path)\n\t\treturn nil\n\t})\n\n\tif os.IsNotExist(err) {\n\t\treturn nil, ErrMalformed.Wrap(err, h.workdirsPath)\n\t}\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\twds := make([]*Workdir, 0, len(dirs))\n\tfor _, p := range dirs {\n\t\twd, err := h.builder.Build(p)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\twds = append(wds, wd)\n\t}\n\n\treturn wds, nil\n}\n\n// Validate validates that the passed working directory is valid.\n// Its path must be a directory (or a symlink) containing docker-compose.yml and .env files\nfunc (h *Handler) Validate(w *Workdir) error {\n\tpointedDir, err := filepath.EvalSymlinks(w.Path)\n\tif err != nil {\n\t\treturn ErrMalformed.Wrap(fmt.Errorf(\"is not a directory\"), w.Path)\n\t}\n\n\tif info, err := os.Lstat(pointedDir); err != nil || !info.IsDir() {\n\t\treturn ErrMalformed.Wrap(fmt.Errorf(\"is not a directory\"), pointedDir)\n\t}\n\n\tfor _, f := range RequiredFiles {\n\t\tif !hasContent(pointedDir, f) {\n\t\t\treturn ErrMalformed.Wrap(fmt.Errorf(\"%s not found\", f), pointedDir)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Remove removes working directory by removing required and optional files,\n// and recursively removes directories up to the workdirs root as long as they are empty\nfunc (h *Handler) Remove(w *Workdir) error {\n\tpath := w.Path\n\tvar subPath string\n\tswitch w.Type {\n\tcase Local:\n\t\tsubPath = \"local\"\n\tcase Orgs:\n\t\tsubPath = \"orgs\"\n\t}\n\n\tbasePath := filepath.Join(h.workdirsPath, subPath)\n\n\tfor _, f := range RequiredFiles {\n\t\tfile := filepath.Join(path, f)\n\t\tif _, err := os.Stat(file); os.IsNotExist(err) {\n\t\t\tcontinue\n\t\t}\n\n\t\tif err := os.Remove(file); err != nil {\n\t\t\treturn errors.Wrap(err, \"could not remove from workdir 
directory\")\n\t\t}\n\t}\n\n\tfor {\n\t\tfiles, err := ioutil.ReadDir(path)\n\t\tif err != nil {\n\t\t\treturn errors.Wrap(err, \"could not read workdir directory\")\n\t\t}\n\t\tif len(files) > 0 {\n\t\t\treturn nil\n\t\t}\n\n\t\tif err := os.Remove(path); err != nil {\n\t\t\treturn errors.Wrap(err, \"could not delete workdir directory\")\n\t\t}\n\n\t\tpath = filepath.Dir(path)\n\t\tif path == basePath {\n\t\t\treturn nil\n\t\t}\n\t}\n}\n\nfunc (h *Handler) activeAbsolutePath() string {\n\treturn filepath.Join(h.workdirsPath, activeDir)\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/handler_test.go",
"content": "package workdir\n\nimport (\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/suite\"\n)\n\ntype HandlerSuite struct {\n\tsuite.Suite\n\n\th             *Handler\n\toriginSrcdDir string\n}\n\nfunc TestHandlerSuite(t *testing.T) {\n\tsuite.Run(t, &HandlerSuite{})\n}\n\nfunc (s *HandlerSuite) BeforeTest(suiteName, testName string) {\n\ts.originSrcdDir = os.Getenv(\"SOURCED_DIR\")\n\n\t// on macOS os.TempDir returns a symlink and tests fail\n\ttmpDir, _ := filepath.EvalSymlinks(os.TempDir())\n\tsrdPath := path.Join(tmpDir, testName)\n\terr := os.MkdirAll(srdPath, os.ModePerm)\n\ts.Nil(err)\n\n\tos.Setenv(\"SOURCED_DIR\", srdPath)\n\n\ts.h, err = NewHandler()\n\ts.Nil(err)\n}\n\nfunc (s *HandlerSuite) AfterTest(suiteName, testName string) {\n\tos.RemoveAll(filepath.Dir(s.h.workdirsPath))\n\tos.Setenv(\"SOURCED_DIR\", s.originSrcdDir)\n}\n\n// This tests only the public interface without checking implementation (filesystem) details\nfunc (s *HandlerSuite) TestSuccessFlow() {\n\twd := s.createWd(\"flow\")\n\n\ts.Nil(s.h.Validate(wd))\n\ts.Nil(s.h.SetActive(wd))\n\n\tactive, err := s.h.Active()\n\ts.Nil(err)\n\ts.Equal(wd, active)\n\n\ts.Nil(s.h.UnsetActive())\n\n\t_, err = s.h.Active()\n\ts.True(ErrMalformed.Is(err))\n\n\twds, err := s.h.List()\n\ts.Nil(err)\n\ts.Len(wds, 1)\n\ts.Equal(wd, wds[0])\n\n\ts.Nil(s.h.Remove(wd))\n\n\twds, err = s.h.List()\n\ts.Nil(err)\n\ts.Len(wds, 0)\n}\n\n// All tests below rely on implementation details to check error cases\n\nfunc (s *HandlerSuite) TestSetActiveOk() {\n\twd := s.createWd(\"some\")\n\n\t// non-active before\n\ts.Nil(s.h.SetActive(wd))\n\t// re-activation should also work\n\ts.Nil(s.h.SetActive(wd))\n\n\t// validate link points correctly\n\ttarget, err := filepath.EvalSymlinks(path.Join(s.h.workdirsPath, activeDir))\n\ts.Nil(err)\n\ts.Equal(wd.Path, target)\n}\n\nfunc (s *HandlerSuite) TestSetActiveError() {\n\twd := s.createWd(\"some\")\n\n\t// break active path by 
making it a dir with files\n\tactivePath := path.Join(s.h.workdirsPath, activeDir)\n\ts.Nil(os.MkdirAll(activePath, os.ModePerm))\n\t_, err := os.Create(path.Join(activePath, \"some-file\"))\n\ts.Nil(err)\n\n\ts.Error(s.h.SetActive(wd))\n}\n\nfunc (s *HandlerSuite) TestUnsetActiveOk() {\n\tactivePath := path.Join(s.h.workdirsPath, activeDir)\n\ts.Nil(os.MkdirAll(s.h.workdirsPath, os.ModePerm))\n\t_, err := os.Create(activePath)\n\ts.Nil(err)\n\n\ts.Nil(s.h.UnsetActive())\n\t// unset without active dir\n\ts.Nil(s.h.UnsetActive())\n\n\t// validate we deleted the file\n\t_, err = os.Stat(activePath)\n\ts.True(os.IsNotExist(err))\n}\n\nfunc (s *HandlerSuite) TestUnsetActiveError() {\n\t// break active path by making it a dir with files\n\tactivePath := path.Join(s.h.workdirsPath, activeDir)\n\ts.Nil(os.MkdirAll(activePath, os.ModePerm))\n\t_, err := os.Create(path.Join(activePath, \"some-file\"))\n\ts.Nil(err)\n\n\ts.Error(s.h.UnsetActive())\n}\n\nfunc (s *HandlerSuite) TestValidateError() {\n\t// dir doesn't exist\n\twd, err := s.h.builder.Build(path.Join(s.h.workdirsPath, \"local\", \"some\"))\n\ts.Nil(err)\n\terr = s.h.Validate(wd)\n\ts.True(ErrMalformed.Is(err))\n\n\t// dir is a file\n\ts.Nil(os.MkdirAll(path.Join(s.h.workdirsPath, \"local\"), os.ModePerm))\n\t_, err = os.Create(wd.Path)\n\ts.Nil(err)\n\n\terr = s.h.Validate(wd)\n\ts.True(ErrMalformed.Is(err))\n\ts.Nil(os.RemoveAll(wd.Path))\n\n\t// dir is empty\n\ts.Nil(os.MkdirAll(wd.Path, os.ModePerm))\n\terr = s.h.Validate(wd)\n\ts.True(ErrMalformed.Is(err))\n}\n\nfunc (s *HandlerSuite) TestListOk() {\n\ts.Nil(os.MkdirAll(s.h.workdirsPath, os.ModePerm))\n\n\t// empty results\n\twds, err := s.h.List()\n\ts.Nil(err)\n\ts.Len(wds, 0)\n\n\t// multiple results\n\ts.createWd(\"one\")\n\ts.createWd(\"two\")\n\n\twds, err = s.h.List()\n\ts.Nil(err)\n\ts.Len(wds, 2)\n\n\ts.Equal(\"one\", wds[0].Name)\n\ts.Equal(\"two\", wds[1].Name)\n\n\t// incorrect directory should be skipped\n\twd, err := 
s.h.builder.Build(path.Join(s.h.workdirsPath, \"local\", \"some\"))\n\ts.Nil(err)\n\ts.Nil(os.MkdirAll(wd.Path, os.ModePerm))\n\n\twds, err = s.h.List()\n\ts.Nil(err)\n\ts.Len(wds, 2)\n}\n\nfunc (s *HandlerSuite) TestListError() {\n\t// workdirs dir doesn't exist\n\t_, err := s.h.List()\n\ts.True(ErrMalformed.Is(err))\n}\n\nfunc (s *HandlerSuite) TestRemoveOk() {\n\t// local\n\twd, err := InitLocal(\"local\")\n\ts.Nil(err)\n\ts.Nil(s.h.Remove(wd))\n\t_, err = os.Stat(wd.Path)\n\ts.True(os.IsNotExist(err))\n\n\t// org\n\twd, err = InitOrgs([]string{\"some-org\"}, \"token\", false)\n\ts.Nil(err)\n\ts.Nil(s.h.Remove(wd))\n\t_, err = os.Stat(wd.Path)\n\ts.True(os.IsNotExist(err))\n\n\t// skip deleting dir with extra files\n\twd = s.createWd(\"some\")\n\t_, err = os.Create(path.Join(wd.Path, \"some-file\"))\n\ts.Nil(err)\n\n\ts.Nil(s.h.Remove(wd))\n\t_, err = os.Stat(wd.Path)\n\ts.Nil(err)\n}\n\nfunc (s *HandlerSuite) createWd(name string) *Workdir {\n\twd, err := InitLocal(name)\n\ts.Nil(err)\n\treturn wd\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/workdir.go",
"content": "package workdir\n\nimport (\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/pkg/errors\"\n\tgoerrors \"gopkg.in/src-d/go-errors.v1\"\n\n\tdatadir \"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n)\n\nconst activeDir = \"__active__\"\n\nvar (\n\t// RequiredFiles is the list of required files in a directory to treat it as a working directory\n\tRequiredFiles = []string{\".env\", \"docker-compose.yml\"}\n\n\t// ErrMalformed is the error returned when the workdir is wrong\n\tErrMalformed = goerrors.NewKind(\"workdir %s is not valid\")\n\n\t// ErrInitFailed is an error returned on workdir initialization for custom cases\n\tErrInitFailed = goerrors.NewKind(\"initialization failed\")\n)\n\n// Type defines the type of the workdir\ntype Type int\n\nconst (\n\t// None refers to a failure in identifying the type of the workdir\n\tNone Type = iota\n\t// Local refers to a workdir that has been initialized for local repos\n\tLocal\n\t// Orgs refers to a workdir that has been initialized for organizations\n\tOrgs\n)\n\n// Workdir represents a workdir associated with a local or an orgs initialization\ntype Workdir struct {\n\t// Type is the type of working directory\n\tType Type\n\t// Name is a human-friendly string to identify the workdir\n\tName string\n\t// Path is the absolute path corresponding to the workdir\n\tPath string\n}\n\ntype builder struct {\n\tworkdirsPath string\n}\n\n// Build returns the Workdir instance corresponding to the provided absolute path;\n// the path must be inside `workdirsPath`\nfunc (b *builder) Build(path string) (*Workdir, error) {\n\twdType, err := b.typeFromPath(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif wdType == None {\n\t\treturn nil, fmt.Errorf(\"invalid workdir type for path %s\", path)\n\t}\n\n\twdName, err := b.workdirName(wdType, path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &Workdir{\n\t\tType: wdType,\n\t\tName: 
wdName,\n\t\tPath: path,\n\t}, nil\n}\n\n// workdirName returns the workdir name given its type and absolute path\nfunc (b *builder) workdirName(wdType Type, path string) (string, error) {\n\tvar subPath string\n\tswitch wdType {\n\tcase Local:\n\t\tsubPath = \"local\"\n\tcase Orgs:\n\t\tsubPath = \"orgs\"\n\t}\n\n\tencoded, err := filepath.Rel(filepath.Join(b.workdirsPath, subPath), path)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tdecoded, err := base64.URLEncoding.DecodeString(encoded)\n\tif err == nil {\n\t\treturn string(decoded), nil\n\t}\n\n\treturn \"\", err\n}\n\n// typeFromPath returns the workdir type corresponding to the provided absolute path\nfunc (b *builder) typeFromPath(path string) (Type, error) {\n\tsuffix, err := filepath.Rel(b.workdirsPath, path)\n\tif err != nil {\n\t\treturn None, err\n\t}\n\n\tswitch filepath.Dir(suffix) {\n\tcase \"local\":\n\t\treturn Local, nil\n\tcase \"orgs\":\n\t\treturn Orgs, nil\n\tdefault:\n\t\treturn None, nil\n\t}\n}\n\nfunc hasContent(path, file string) bool {\n\tempty, err := isEmptyFile(filepath.Join(path, file))\n\treturn !empty && err == nil\n}\n\n// isEmptyFile returns true if the file does not exist or if it exists but\n// contains empty text\nfunc isEmptyFile(path string) (bool, error) {\n\t_, err := os.Stat(path)\n\tif err != nil {\n\t\tif !os.IsNotExist(err) {\n\t\t\treturn false, err\n\t\t}\n\n\t\treturn true, nil\n\t}\n\n\tcontents, err := ioutil.ReadFile(path)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\n\tstrContents := string(contents)\n\treturn strings.TrimSpace(strContents) == \"\", nil\n}\n\nfunc link(linkTargetPath, linkPath string) error {\n\t_, err := os.Stat(linkPath)\n\tif err == nil {\n\t\treturn nil\n\t}\n\n\tif !os.IsNotExist(err) {\n\t\treturn errors.Wrapf(err, \"could not read the existing file at %s\", linkPath)\n\t}\n\n\terr = os.Symlink(linkTargetPath, linkPath)\n\treturn errors.Wrapf(err, \"could not create symlink to %s\", linkTargetPath)\n}\n\nfunc workdirsPath() 
(string, error) {\n\tpath, err := datadir.Path()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn filepath.Join(path, \"workdirs\"), nil\n}\n"
  },
  {
    "path": "cmd/sourced/compose/workdir/workdir_test.go",
    "content": "package workdir\n\nimport (\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestBuilder(t *testing.T) {\n\tassert := assert.New(t)\n\n\tworkdirsPath := path.Join(os.TempDir(), \"builder\")\n\tdefer func() {\n\t\tos.RemoveAll(workdirsPath)\n\t}()\n\n\tb := builder{workdirsPath: workdirsPath}\n\n\t// incorrect: not in workdirsPath\n\t_, err := b.Build(\"/not/in/workdirs\")\n\tassert.EqualError(err, \"invalid workdir type for path /not/in/workdirs\")\n\n\t// incorrect: unknown type\n\tunknownDir := path.Join(workdirsPath, \"unknown\")\n\t_, err = b.Build(unknownDir)\n\tassert.EqualError(err, \"invalid workdir type for path \"+unknownDir)\n\n\t// local\n\tname := \"some\"\n\tlocalDir := path.Join(workdirsPath, \"local\", encodeDirName(name))\n\twd, err := b.Build(localDir)\n\tassert.Nil(err)\n\tassert.Equal(Local, wd.Type)\n\tassert.Equal(name, wd.Name)\n\tassert.Equal(localDir, wd.Path)\n\n\t// org\n\torgDir := path.Join(workdirsPath, \"orgs\", encodeDirName(name))\n\twd, err = b.Build(orgDir)\n\tassert.Nil(err)\n\tassert.Equal(Orgs, wd.Type)\n\tassert.Equal(name, wd.Name)\n\tassert.Equal(orgDir, wd.Path)\n}\n\nfunc TestIsEmptyFile(t *testing.T) {\n\tassert := assert.New(t)\n\n\t// not exist\n\tok, err := isEmptyFile(\"/does/not/exist\")\n\tassert.Nil(err)\n\tassert.True(ok)\n\n\t// empty\n\temptyPath := path.Join(os.TempDir(), \"empty\")\n\tdefer func() {\n\t\tos.RemoveAll(emptyPath)\n\t}()\n\tf, err := os.Create(emptyPath)\n\tassert.Nil(err)\n\tassert.Nil(f.Close())\n\n\tok, err = isEmptyFile(emptyPath)\n\tassert.Nil(err)\n\tassert.True(ok)\n\n\t// not empty\n\tnonEmptyPath := path.Join(os.TempDir(), \"non-empty\")\n\tdefer func() {\n\t\tos.RemoveAll(nonEmptyPath)\n\t}()\n\tf, err = os.Create(nonEmptyPath)\n\tassert.Nil(err)\n\t_, err = f.Write([]byte(\"some content\"))\n\tassert.Nil(err)\n\tassert.Nil(f.Close())\n\n\tok, err = isEmptyFile(nonEmptyPath)\n\tassert.Nil(err)\n\tassert.False(ok)\n}\n"
  },
  {
    "path": "cmd/sourced/dir/dir.go",
    "content": "// Package dir provides functions to manage the config directories.\npackage dir\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/pkg/errors\"\n\tgoerrors \"gopkg.in/src-d/go-errors.v1\"\n)\n\n// ErrNotExist is returned when config dir does not exists\nvar ErrNotExist = goerrors.NewKind(\"%s does not exist\")\n\n// ErrNotValid is returned when config dir is not valid\nvar ErrNotValid = goerrors.NewKind(\"%s is not a valid config directory: %s\")\n\n// ErrNetwork is returned when could not download\nvar ErrNetwork = goerrors.NewKind(\"network error downloading %s\")\n\n// ErrWrite is returned when could not write\nvar ErrWrite = goerrors.NewKind(\"write error at %s\")\n\n// Path returns the absolute path for $SOURCED_DIR, or $HOME/.sourced if unset\n// and returns an error if it does not exist or it could not be read.\nfunc Path() (string, error) {\n\tsrcdDir, err := srcdPath()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tif err := validate(srcdDir); err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn srcdDir, nil\n}\n\nfunc srcdPath() (string, error) {\n\tif d := os.Getenv(\"SOURCED_DIR\"); d != \"\" {\n\t\tabs, err := filepath.Abs(d)\n\t\tif err != nil {\n\t\t\treturn \"\", errors.Wrap(err, fmt.Sprintf(\"could not resolve SOURCED_DIR='%s'\", d))\n\t\t}\n\n\t\treturn abs, nil\n\t}\n\n\thomedir, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", errors.Wrap(err, \"could not detect home directory\")\n\t}\n\n\treturn filepath.Join(homedir, \".sourced\"), nil\n}\n\n// Prepare tries to create the config directory, returning an error if it could not\n// be created, or nil if already exist or was successfully created.\nfunc Prepare() error {\n\tsrcdDir, err := srcdPath()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = validate(srcdDir)\n\tif ErrNotExist.Is(err) {\n\t\tif err := os.MkdirAll(srcdDir, os.ModePerm); err != nil {\n\t\t\treturn ErrNotValid.New(srcdDir, err)\n\t\t}\n\n\t\treturn 
nil\n\t}\n\n\treturn err\n}\n\n// validate validates that the passed config dir path is valid\nfunc validate(path string) error {\n\tinfo, err := os.Stat(path)\n\tif os.IsNotExist(err) {\n\t\treturn ErrNotExist.New(path)\n\t}\n\n\tif err != nil {\n\t\treturn ErrNotValid.New(path, err)\n\t}\n\n\tif !info.IsDir() {\n\t\treturn ErrNotValid.New(path, \"it is not a directory\")\n\t}\n\n\treadWriteAccessMode := os.FileMode(0700)\n\tif info.Mode()&readWriteAccessMode != readWriteAccessMode {\n\t\treturn ErrNotValid.New(path, \"it has no read-write access\")\n\t}\n\n\treturn nil\n}\n\n// DownloadURL downloads the given url to a file to the\n// dst path, creating the directory if it's needed\nfunc DownloadURL(url, dst string) (err error) {\n\tresp, err := http.Get(url)\n\tif err != nil {\n\t\treturn ErrNetwork.Wrap(err, url)\n\t}\n\tdefer resp.Body.Close()\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn ErrNetwork.Wrap(fmt.Errorf(\"HTTP status %v\", resp.Status), url)\n\t}\n\n\tif err := os.MkdirAll(filepath.Dir(dst), os.ModePerm); err != nil {\n\t\treturn ErrWrite.Wrap(err, filepath.Dir(dst))\n\t}\n\n\tout, err := os.Create(dst)\n\tif err != nil {\n\t\treturn ErrWrite.Wrap(err, dst)\n\t}\n\tdefer out.Close()\n\n\t_, err = io.Copy(out, resp.Body)\n\tif err != nil {\n\t\treturn ErrWrite.Wrap(err, dst)\n\t}\n\n\treturn nil\n}\n\n// TmpPath returns the absolute path for /tmp/srcd\nfunc TmpPath() string {\n\treturn filepath.Join(os.TempDir(), \"srcd\")\n}\n"
  },
  {
    "path": "cmd/sourced/dir/dir_test.go",
    "content": "package dir\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n\n\t\"github.com/pkg/errors\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestValidate(t *testing.T) {\n\tassert := assert.New(t)\n\n\terr := validate(\"/does/not/exists\")\n\tassert.True(ErrNotExist.Is(err))\n\tassert.EqualError(err, \"/does/not/exists does not exist\")\n\n\t// with a file\n\ttmpFile := path.Join(os.TempDir(), \"tmp-file\")\n\tf, err := os.Create(tmpFile)\n\tassert.Nil(err)\n\tassert.Nil(f.Close())\n\tdefer func() {\n\t\tos.RemoveAll(tmpFile)\n\t}()\n\n\terr = validate(tmpFile)\n\tassert.True(ErrNotValid.Is(err))\n\tassert.EqualError(err, tmpFile+\" is not a valid config directory: it is not a directory\")\n\n\t// with a dir\n\ttmpDir := path.Join(os.TempDir(), \"tmp-dir\")\n\tassert.Nil(os.Mkdir(tmpDir, os.ModePerm))\n\tdefer func() {\n\t\tos.RemoveAll(tmpDir)\n\t}()\n\n\terr = validate(tmpDir)\n\tassert.Nil(err)\n\n\t// read only\n\tassert.Nil(os.Chmod(tmpDir, 0444))\n\terr = validate(tmpDir)\n\tassert.True(ErrNotValid.Is(err))\n\tassert.EqualError(err, tmpDir+\" is not a valid config directory: it has no read-write access\")\n\n\t// write only\n\tassert.Nil(os.Chmod(tmpDir, 0222))\n\terr = validate(tmpDir)\n\tassert.True(ErrNotValid.Is(err))\n\tassert.EqualError(err, tmpDir+\" is not a valid config directory: it has no read-write access\")\n}\n\nfunc TestPrepare(t *testing.T) {\n\tassert := assert.New(t)\n\n\ttmpDir := path.Join(os.TempDir(), \"tmp-dir\")\n\tassert.Nil(os.Mkdir(tmpDir, os.ModePerm))\n\tdefer func() {\n\t\tos.RemoveAll(tmpDir)\n\t}()\n\n\toriginSrcdDir := os.Getenv(\"SOURCED_DIR\")\n\tdefer func() {\n\t\tos.Setenv(\"SOURCED_DIR\", originSrcdDir)\n\t}()\n\n\tos.Setenv(\"SOURCED_DIR\", tmpDir)\n\tassert.Nil(Prepare())\n\n\ttoCreateDir := path.Join(os.TempDir(), \"to-create-dir\")\n\tdefer func() {\n\t\tos.RemoveAll(toCreateDir)\n\t}()\n\t_, err := 
os.Stat(toCreateDir)\n\tassert.True(os.IsNotExist(err))\n\n\tos.Setenv(\"SOURCED_DIR\", toCreateDir)\n\tassert.Nil(Prepare())\n\t_, err = os.Stat(toCreateDir)\n\tassert.Nil(err)\n}\n\nfunc TestDownloadURL(t *testing.T) {\n\tassert := assert.New(t)\n\n\t// success\n\tfileContext := []byte(\"hello\")\n\thandler := func(w http.ResponseWriter, r *http.Request) {\n\t\tw.WriteHeader(http.StatusOK)\n\t\tw.Write(fileContext)\n\t}\n\tserver := httptest.NewServer(http.HandlerFunc(handler))\n\tdirPath := path.Join(os.TempDir(), \"some-dir\")\n\tfilePath := path.Join(dirPath, \"file-to-download\")\n\tdefer func() {\n\t\tos.RemoveAll(dirPath)\n\t}()\n\n\tassert.Nil(DownloadURL(server.URL, filePath))\n\t_, err := os.Stat(filePath)\n\tassert.Nil(err)\n\n\tb, err := ioutil.ReadFile(filePath)\n\tassert.Nil(err)\n\tassert.Equal(fileContext, b)\n\n\t// error\n\thandler = func(w http.ResponseWriter, r *http.Request) {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t}\n\tserver = httptest.NewServer(http.HandlerFunc(handler))\n\terr = DownloadURL(server.URL, \"/dev/null\")\n\terrExpected := errors.Wrapf(\n\t\tfmt.Errorf(\"HTTP status %v\", \"404 Not Found\"),\n\t\t\"network error downloading %s\", server.URL,\n\t)\n\n\tassert.EqualError(err, errExpected.Error())\n}\n"
  },
  {
    "path": "cmd/sourced/format/colors.go",
    "content": "package format\n\nimport (\n\t\"fmt\"\n\t\"runtime\"\n)\n\n// Color represents a color code\ntype Color string\n\nconst (\n\t// Red for errors\n\tRed Color = \"31\"\n\t// Yellow for warnings\n\tYellow Color = \"33\"\n)\n\n// Colorize returns the passed string with the passed color\nfunc Colorize(color Color, s string) string {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn s\n\t}\n\n\treturn fmt.Sprintf(\"\\x1b[%sm%s\\x1b[0m\", color, s)\n}\n"
  },
  {
    "path": "cmd/sourced/main.go",
    "content": "package main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/cmd\"\n\tcomposefile \"github.com/src-d/sourced-ce/cmd/sourced/compose/file\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/format\"\n\t\"github.com/src-d/sourced-ce/cmd/sourced/release\"\n)\n\n// this variable is rewritten during the CI build step\nvar version = \"master\"\nvar build = \"dev\"\n\nfunc main() {\n\tcomposefile.SetVersion(version)\n\tcmd.Init(version, build)\n\n\tcheckUpdates()\n\n\tcmd.Execute()\n}\n\nfunc checkUpdates() {\n\tif version == \"master\" {\n\t\treturn\n\t}\n\n\tupdate, latest, err := release.FindUpdates(version)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tif update {\n\t\ts := fmt.Sprintf(\n\t\t\t`There is a newer version. Current version: %s, latest version: %s\nPlease go to https://github.com/src-d/sourced-ce/releases/latest to upgrade.\n`, version, latest)\n\n\t\tfmt.Println(format.Colorize(format.Yellow, s))\n\t}\n}\n"
  },
  {
    "path": "cmd/sourced/release/release.go",
    "content": "// Package release deals with versioning and releases\npackage release\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/google/go-github/v25/github\"\n\t\"github.com/gregjones/httpcache\"\n\t\"github.com/gregjones/httpcache/diskcache\"\n)\n\n// FindUpdates calls the GitHub API to check the latest release tag. It returns\n// true if the latest stable release is newer than the current tag, and also\n// that latest tag name.\nfunc FindUpdates(current string) (update bool, latest string, err error) {\n\tcurrentV, err := semver.ParseTolerant(current)\n\tif err != nil {\n\t\treturn false, \"\", err\n\t}\n\n\tdiskcachePath := filepath.Join(dir.TmpPath(), \"httpcache\")\n\terr = os.MkdirAll(diskcachePath, os.ModePerm)\n\tif err != nil {\n\t\treturn false, \"\", err\n\t}\n\n\tcache := diskcache.New(diskcachePath)\n\tclient := github.NewClient(&http.Client{Transport: httpcache.NewTransport(cache)})\n\n\trel, _, err := client.Repositories.GetLatestRelease(context.Background(), \"src-d\", \"sourced-ce\")\n\tif err != nil {\n\t\treturn false, \"\", err\n\t}\n\n\tlatestV, err := semver.ParseTolerant(rel.GetTagName())\n\tif err != nil {\n\t\treturn false, \"\", err\n\t}\n\n\tupdate = latestV.GT(currentV)\n\tlatest = latestV.String()\n\n\treturn update, latest, nil\n}\n"
  },
  {
    "path": "cmd/sourced/release/release_test.go",
    "content": "package release\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"github.com/src-d/sourced-ce/cmd/sourced/dir\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\ntype testCase struct {\n\tresponseTag string // tag returned by github\n\tcurrent     string\n\tupdate      bool\n\tlatest      string\n}\n\nfunc TestFindUpdatesSuccess(t *testing.T) {\n\tos.RemoveAll(filepath.Join(dir.TmpPath(), \"httpcache\"))\n\n\tcases := []testCase{\n\t\t{\n\t\t\tresponseTag: \"v0.14.0\",\n\t\t\tcurrent:     \"v0.14.0\",\n\t\t\tupdate:      false,\n\t\t\tlatest:      \"0.14.0\",\n\t\t},\n\t\t{\n\t\t\tresponseTag: \"v0.11.0\",\n\t\t\tcurrent:     \"v0.14.0\",\n\t\t\tupdate:      false,\n\t\t\tlatest:      \"0.11.0\",\n\t\t},\n\t\t{\n\t\t\tresponseTag: \"v0.14.0\",\n\t\t\tcurrent:     \"v0.13.0\",\n\t\t\tupdate:      true,\n\t\t\tlatest:      \"0.14.0\",\n\t\t},\n\t\t{\n\t\t\tresponseTag: \"v0.14.0\",\n\t\t\tcurrent:     \"v0.13.1\",\n\t\t\tupdate:      true,\n\t\t\tlatest:      \"0.14.0\",\n\t\t},\n\t}\n\n\tfor _, c := range cases {\n\t\tname := fmt.Sprintf(\"%s_to_%s\", c.current, c.responseTag)\n\t\tt.Run(name, func(t *testing.T) {\n\t\t\trestore := mockGithub(c.responseTag)\n\t\t\tdefer restore()\n\n\t\t\tupdate, latest, err := FindUpdates(c.current)\n\t\t\tassert.Nil(t, err)\n\t\t\tassert.Equal(t, c.update, update)\n\t\t\tassert.Equal(t, c.latest, latest)\n\t\t})\n\t}\n}\n\nfunc mockGithub(tag string) func() {\n\toriginalTransport := http.DefaultTransport\n\n\thttp.DefaultTransport = &ghTransport{tag: tag}\n\treturn func() {\n\t\thttp.DefaultTransport = originalTransport\n\t}\n}\n\ntype ghTransport struct {\n\ttag string\n}\n\nfunc (t *ghTransport) RoundTrip(*http.Request) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tHeader:     make(http.Header),\n\t\tBody:       ioutil.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{\"tag_name\": \"%s\"}`, 
t.tag))),\n\t}, nil\n}\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: '3.4'\n\nx-superset-env: &superset-env\n  SYNC_MODE: ${GITBASE_SIVA}\n  ADMIN_LOGIN: admin\n  ADMIN_FIRST_NAME: admin\n  ADMIN_LAST_NAME: admin\n  ADMIN_EMAIL: admin@example.com\n  ADMIN_PASSWORD: admin\n  POSTGRES_DB: superset\n  POSTGRES_USER: superset\n  POSTGRES_PASSWORD: superset\n  POSTGRES_HOST: postgres\n  POSTGRES_PORT: 5432\n  REDIS_HOST: redis\n  REDIS_PORT: 6379\n  GITBASE_DB: gitbase\n  GITBASE_USER: root\n  GITBASE_PASSWORD:\n  GITBASE_HOST: gitbase\n  GITBASE_PORT: 3306\n  METADATA_DB: metadata\n  METADATA_USER: metadata\n  METADATA_PASSWORD: metadata\n  METADATA_HOST: metadatadb\n  METADATA_PORT: 5432\n  BBLFSH_WEB_HOST: bblfsh-web\n  BBLFSH_WEB_PORT: 8080\n\nservices:\n  bblfsh:\n    image: bblfsh/bblfshd:v2.15.0-drivers\n    restart: unless-stopped\n    privileged: true\n    ports:\n      - 9432:9432\n\n  gitcollector:\n    image: srcd/gitcollector:v0.0.4\n    # wait for db\n    command: ['/bin/sh', '-c', 'sleep 10s && gitcollector download']\n    environment:\n      GITHUB_ORGANIZATIONS: ${GITHUB_ORGANIZATIONS-}\n      GITHUB_TOKEN: ${GITHUB_TOKEN-}\n      # use main db\n      GITCOLLECTOR_METRICS_DB_URI: postgresql://superset:superset@postgres:5432/superset?sslmode=disable\n      GITCOLLECTOR_NO_UPDATES: 'true'\n      GITCOLLECTOR_NO_FORKS: ${NO_FORKS-true}\n      LOG_LEVEL: ${LOG_LEVEL-info}\n    depends_on:\n      - postgres\n    volumes:\n      - type: ${GITBASE_VOLUME_TYPE}\n        source: ${GITBASE_VOLUME_SOURCE}\n        target: /library\n        consistency: delegated\n    deploy:\n      resources:\n        limits:\n          cpus: ${GITCOLLECTOR_LIMIT_CPU-0.0}\n\n  ghsync:\n    image: srcd/ghsync:v0.2.0\n    entrypoint: ['/bin/sh']\n    # wait for db to be created\n    # we need to use something like https://github.com/vishnubob/wait-for-it\n    # or implement wait in ghsync itself\n    command: ['-c', 'sleep 10s && ghsync migrate && ghsync shallow']\n    depends_on:\n      - metadatadb\n    environment:\n      
GHSYNC_ORGS: ${GITHUB_ORGANIZATIONS-}\n      GHSYNC_TOKEN: ${GITHUB_TOKEN-}\n      GHSYNC_POSTGRES_DB: metadata\n      GHSYNC_POSTGRES_USER: metadata\n      GHSYNC_POSTGRES_PASSWORD: metadata\n      GHSYNC_POSTGRES_HOST: metadatadb\n      GHSYNC_POSTGRES_PORT: 5432\n      GHSYNC_NO_FORKS: ${NO_FORKS-true}\n      LOG_LEVEL: ${LOG_LEVEL-info}\n\n  gitbase:\n    image: srcd/gitbase:v0.23.1\n    restart: unless-stopped\n    ports:\n      - 3306:3306\n    environment:\n      BBLFSH_ENDPOINT: bblfsh:9432\n      SIVA: ${GITBASE_SIVA}\n      GITBASE_LOG_LEVEL: ${LOG_LEVEL-info}\n    depends_on:\n      - bblfsh\n    volumes:\n      - type: ${GITBASE_VOLUME_TYPE}\n        source: ${GITBASE_VOLUME_SOURCE}\n        target: /opt/repos\n        read_only: true\n        consistency: delegated\n      - gitbase_indexes:/var/lib/gitbase/index\n    deploy:\n      resources:\n        limits:\n          cpus: ${GITBASE_LIMIT_CPU-0.0}\n          memory: ${GITBASE_LIMIT_MEM-0}\n\n  bblfsh-web:\n    image: bblfsh/web:v0.11.4\n    restart: unless-stopped\n    command: -bblfsh-addr bblfsh:9432\n    ports:\n      - 9999:8080\n    depends_on:\n      - bblfsh\n    environment:\n      LOG_LEVEL: ${LOG_LEVEL-info}\n\n  redis:\n    image: redis:5-alpine\n    restart: unless-stopped\n    ports:\n      - 6379:6379\n    volumes:\n      - redis:/data\n\n  postgres:\n    image: postgres:10-alpine\n    restart: unless-stopped\n    environment:\n      POSTGRES_DB: superset\n      POSTGRES_PASSWORD: superset\n      POSTGRES_USER: superset\n    ports:\n      - 5432:5432\n    volumes:\n      - postgres:/var/lib/postgresql/data\n\n  metadatadb:\n    image: postgres:10-alpine\n    restart: unless-stopped\n    environment:\n      POSTGRES_DB: metadata\n      POSTGRES_PASSWORD: metadata\n      POSTGRES_USER: metadata\n    ports:\n      - 5433:5432\n    volumes:\n      - metadata:/var/lib/postgresql/data\n\n  sourced-ui:\n    image: srcd/sourced-ui:v0.8.1\n    restart: unless-stopped\n    environment:\n      
<<: *superset-env\n      SUPERSET_ENV: production\n    ports:\n      - 8088:8088\n    depends_on:\n      - postgres\n      - metadatadb\n      - redis\n      - gitbase\n      - bblfsh-web\n\n  sourced-ui-celery:\n    image: srcd/sourced-ui:v0.8.1\n    restart: unless-stopped\n    environment:\n      <<: *superset-env\n      SUPERSET_ENV: celery\n    depends_on:\n      - postgres\n      - metadatadb\n      - redis\n      - gitbase\n      - sourced-ui\n\nvolumes:\n  gitbase_repositories:\n    external: false\n  gitbase_indexes:\n    external: false\n  metadata:\n    external: false\n  postgres:\n    external: false\n  redis:\n    external: false\n"
  },
  {
    "path": "docs/CONTRIBUTING.md",
    "content": "# Contribution Guidelines\n\nAs all source{d} projects, this project follows the\n[source{d} Contributing Guidelines](https://github.com/src-d/guide/blob/master/engineering/documents/CONTRIBUTING.md).\n\n\n# Additional Contribution Guidelines\n\nIn addition to the [source{d} Contributing Guidelines](https://github.com/src-d/guide/blob/master/engineering/documents/CONTRIBUTING.md), this project follows the following guidelines.\n\n\n## Changelog\n\nThis project lists the important changes between releases in the [`CHANGELOG.md`](../CHANGELOG.md) file.\n\nIf you open a PR, you should also add a brief summary in the `CHANGELOG.md` mentioning the new feature, change or bugfix that you proposed.\n\n\n## How To Restore Dashboards and Charts to Defaults\n\nThe official way to restore **source{d} CE** to its initial state, is to remove the running components with\n`sourced prune --all`, and then init again with `sourced init`.\n\nIn some circumstances you need to restore only the state modified from the UI (charts, dashboards, saved queries, users,\nroles, etcetera), using the default ones for the version of **source{d} CE** that you're currently using, and preserve\nthe repositories and metadata fetched from GitHub organizations.\n\nTo do so, you only need to delete the docker volume containing the PostgreSQL database, and restart **source{d} CE**.\nIt can be done following these steps if you already have [Docker Compose](https://docs.docker.com/compose/) installed:\n\n```shell\n$ cd ~/.sourced/workdirs/__active__\n$ source .env\n$ docker-compose stop postgres\n$ docker-compose rm -f postgres\n$ ENV_PREFIX=`awk '{print tolower($0)}' <<< ${COMPOSE_PROJECT_NAME}`\n$ docker volume rm ${ENV_PREFIX}_postgres\n$ docker-compose up -d postgres\n$ docker-compose exec -u superset sourced-ui bash -c 'sleep 10s && python bootstrap.py'\n```\n"
  },
  {
    "path": "docs/README.md",
    "content": "# Table of contents\n\n* [Introduction](../README.md)\n* [Quickstart](./quickstart/README.md)\n    * [Dependencies](./quickstart/1-install-requirements.md)\n    * [Install **source{d} CE**](./quickstart/2-install-sourced.md)\n    * [Run **source{d} CE**](./quickstart/3-init-sourced.md)\n    * [Explore Your Data](./quickstart/4-explore-sourced.md)\n    * [Next steps](./usage/README.md)\n\n## Usage\n* [sourced Command Reference](./usage/commands.md)\n* [Multiple Datasets](./usage/multiple-datasets.md)\n* [SQL Examples](./usage/examples.md)\n* [Babelfish UAST](./usage/bblfsh.md)\n\n## Learn More\n\n* [FAQ](./learn-more/faq.md)\n* [Troubleshooting](./learn-more/troubleshooting.md)\n* [Architecture](./learn-more/architecture.md)\n* [Contribute](./CONTRIBUTING.md)\n* [Changelog](../CHANGELOG.md)\n* [License](../LICENSE.md)\n\n## Resources\n\n* [GitHub Repository](https://github.com/src-d/sourced-ce)\n* [Product Page](https://www.sourced.tech)\n* [Book a Demo](https://go.sourced.tech/community-demo)\n* [Get in Touch With Us](http://go.sourced.tech/contact)\n* [Join Us on Slack](https://sourced-community.slack.com/join/shared_invite/enQtMjc4Njk5MzEyNzM2LTFjNzY4NjEwZGEwMzRiNTM4MzRlMzQ4MmIzZjkwZmZlM2NjODUxZmJjNDI1OTcxNDAyMmZlNmFjODZlNTg0YWM)\n* [source{d} Forum](https://forum.sourced.tech)\n"
  },
  {
    "path": "docs/learn-more/architecture.md",
    "content": "#  source{d} Community Editon Architecture\n\n**source{d} Community Editon** provides a frictionless experience for trying\nsource{d} for Code Analysis.\n\n\n## Technical Architecture\n\nThe `sourced` binary, a single CLI binary [written in Go](../../cmd/sourced/main.go),\nis the user's main interaction mechanism with **source{d} CE**.\nIt is also the only piece (other than Docker) that the user will need to explicitly\ndownload on their machine to get started.\n\nThe `sourced` binary manages the different installed environments and their\nconfigurations, acting as a wrapper of Docker Compose.\n\nThe whole architecture is based on Docker containers, orchestrated by Docker Compose\nand managed by `sourced`.\n\n\n## Components of source{d}\n\n**source{d} CE** relies on different components to handle different use cases\nand to cover different functionalities. Each component is implemented as a running\nDocker container.\n\n- `bblfsh`: parses source code into UASTs using [Babelfish](https://docs.sourced.tech/babelfish/);\nyou can learn more about it in our [Babelfish UAST guide](usage/bblfsh.md)\n- `gitbase`: runs [gitbase](https://docs.sourced.tech/gitbase), a SQL database\ninterface to Git repositories.\n- `gitcollector`: is responsible for fetching repositories from the organizations\nused to initialize **source{d} CE**. It uses [gitcollector](https://github.com/src-d/gitcollector).\n- `ghsync`: is responsible for fetching repository metadata from the organizations\nused to initialize **source{d} CE**. It uses [ghsync](https://github.com/src-d/ghsync)\n- `metadatadb`: runs the PostgreSQL database that stores the repositories\nmetadata (users, pull requests, issues...) extracted by `ghsync`.\n- `postgres`: runs the PostgreSQL database that stores the state of the UI\n(charts, dashboards, users, saved queries and such).\n- `sourced-ui`: runs the **source{d} CE** Web Interface. 
This component queries\ndata from `bblfsh`, `gitbase`, `metadatadb` and `postgres`.\n\nSome of these components can be accessed from the outside as described by\n[Docker Networking section](#docker-networking).\n\n\n## Docker Set Up\n\nIn order to make this work in the easiest way, some design decisions were made:\n\n### Isolated Environments.\n\n_Read more in [Working With Multiple Data Sets](../usage/multiple-datasets.md)_\n\nEach dataset runs in an isolated environment, and only one environment can run\nat the same time.\nEach environment is defined by one `docker-compose.yml` and one `.env`, stored\nin `~/.sourced`.\n\n### Docker Naming\n\nAll the Docker containers from the same environment share its prefix:\n`srcd-<HASH>_` followed by the name of the service running inside, e.g\n`srcd-c3jjlwq_gitbase_1` and `srcd-c3jjlwq_bblfsh_1` will contain gitbase and\nbabelfish for the same environment.\n\n### Docker Networking\n\nIn order to provide communication between the multiple containers started, all of\nthem are attached to the same single bridge network. The network name also has\nthe same prefix than the containers inside the same environment, e.g.\n`srcd-c3jjlwq_default`.\n\nSome environment services can be accessed from the outside, using their exposed\nport and connection values:\n- `bblfsh`:\n    - port: `9432`\n- `gitbase`:\n    - port: `3306`\n    - database: `gitbase`\n    - user: `root`\n- `metadatadb`:\n    - port: `5433`\n    - database: `metadata`\n    - user: `metadata`\n    - password: `metadata`\n- `sourced-ui`:\n    - port: `8088`\n\n### Persistence\n\nTo prevent losing data when restarting services, or upgrading containers, the data\nis stored in volumes. These volumes also share the same prefix with the containers\nin the same environment, e.g. 
`srcd-c3jjlwq_gitbase_repositories`.\n\nThese are the most relevant volumes:\n- `gitbase_repositories`, stores the repositories to be analyzed\n- `gitbase_indexes`, stores the gitbases indexes\n- `metadata`, stores the metadata from GitHub pull requests, issues, users...\n- `postgres`, stores the dashboards and charts used by the web interface\n"
  },
  {
    "path": "docs/learn-more/faq.md",
    "content": "# Frequently Asked Questions\n\n_For tips and advices to deal with unexpected errors, please refer to [Troubleshooting guide](./troubleshooting.md)_\n\n## Index\n\n- [Where Can I Find More Assistance to Run source{d} or Notify You About Any Issue or Suggestion?](#where-can-i-find-more-assistance-to-run-source-d-or-notify-you-about-any-issue-or-suggestion)\n- [How Can I Update My Version Of **source{d} CE**?](#how-can-i-update-my-version-of-source-d-ce)\n- [How to Update the Data from the Organizations That I'm Analyzing](#how-to-update-the-data-from-the-organizations-being-analyzed)\n- [Can I Query Gitbase or Babelfish with External Tools?](#can-i-query-gitbase-or-babelfish-with-external-tools)\n- [Where Can I Read More About the Web Interface?](#where-can-i-read-more-about-the-web-interface)\n- [I Get IOError Permission denied](#i-get-ioerror-permission-denied)\n- [Why Do I Need Internet Connection?](#why-do-i-need-internet-connection)\n\n\n## Where Can I Find More Assistance to Run source{d} or Notify You About Any Issue or Suggestion?\n\n_If you're dealing with an error or something that you think that can be caused\nby an unexpected error, please refer to our [Troubleshooting guide](./troubleshooting.md).\nWith the info that you can obtain following those steps, you could fix the problem\nor you will be able to explain it better in the following channels:_\n\n* [open an issue](https://github.com/src-d/sourced-ce/issues), if you want to\nsuggest a new feature, if you need assistance with a contribution, or if you\nfound any bug.\n* [Visit the source{d} Forum](https://forum.sourced.tech) where users and community\nmembers discuss anything source{d} related. 
You will find there some common questions\nfrom other source{d} users, or ask yours.\n* [join our community on Slack](https://sourced-community.slack.com/join/shared_invite/enQtMjc4Njk5MzEyNzM2LTFjNzY4NjEwZGEwMzRiNTM4MzRlMzQ4MmIzZjkwZmZlM2NjODUxZmJjNDI1OTcxNDAyMmZlNmFjODZlNTg0YWM),\nand talk with some of our engineers.\n\n\n## How Can I Update My Version Of source{d} CE?\n\nWhen there is a new release of **source{d} CE**, it is noticed every time a `sourced`\ncommand is called. When it happens you can download the new version from\n[src-d/sourced-ce/releases/latest](https://github.com/src-d/sourced-ce/releases/latest),\nand proceed as it follows:\n\n(You can also follow these steps if you want to update to any beta version, to\ndowngrade, or to use your own built version of **source{d} CE**)\n\n1. replace your current version of `sourced` from its current location with the\none you're installing ([see Quickstart. Install](quickstart/2-install-sourced.md)),\nand confirm it was done by running `sourced version`.\n1. run `sourced compose download` to download the new configuration.\n1. run `sourced restart` to apply the new configuration.\n\nThis process will reinstall **source{d} CE** with the new components, but it will\nkeep your current data (repositories, metadata, charts, dashboards, etc) of your\nexistent workdirs.\n\nIf you want to replace all your current customizations &mdash;including charts and\ndashboards&mdash;, with the ones from the release that you just installed, the\nofficial way to proceed is to `prune` the running workdirs, and `init` them again.\n\n_**disclaimer:** pruning a workdir will delete all its data: its saved queries\nand charts, and if you were using repositories and metadata downloaded from a\nGitHub organization, they will be deleted, and downloaded again._\n\n1. `sourced status workdirs` to get the list of your current workdirs\n1. Prune the workdirs you need, or prune all of them at once running\n`sourced prune --all`\n1. 
`sourced init [local|orgs] ...` for each workdir again, to initialize them with\nthe new configuration.\n\n\n## How to Update the Data from the Organizations Being Analyzed\n\nThere is no way to update imported data, and\n[when a scraper is restarted](./troubleshooting.md#how-can-i-restart-one-scraper),\nit procedes as it follows:\n\n### gitcollector\n\nOrganizations and repositories are downloaded independently, so if they fail,\nthe process is not stopped until all the organizations and repositories have been\niterated.\n\nIf `gitcollector` is restarted, it will download more repositories, but it won’t\nupdate any of the already existent ones. You can see the progress of the new process\nin the welcome dashboard; since already existent repositories won't be updated,\nthose will appear as `failed` in progress status.\n\n### ghsync\n\nThe way how metadata is imported by `ghsync` is a bit different, and it is done\nsequentially per each organization, so if any step fails, the whole importation\nwill fail.\n\nPull requests, issues, and users of the same organization, are imported in that\norder in separate transaction each one, and if one transaction fails, the process\nwill be stopped so the next ones won't be processed.\n\nOnce the three different entities have been imported, the organization will be\nconsidered as \"done\", and restarting `ghsync` won't cause to update its data.\n\nIf `ghsync` is restarted, it will only import data from organizations that could\nnot be finished considering the rules explained above. 
The process of `ghsync`\nwill be updated in the welcome dashboard and if an organization was already\nimported, it will appear as \"nothing imported\" in the status chart.\n\n\n## Can I Query Gitbase or Babelfish with External Tools?\n\nYes, as explained in our docs about [**source{d} CE** Architecture](./architecture.md#docker-networking),\nthese and other components are exposed to the host machine, to be used by third\nparty tools like [Jupyter Notebook](https://jupyter.org/),\n[gitbase clients](https://docs.sourced.tech/gitbase/using-gitbase/supported-clients)\nand [Babelfish clients](https://docs.sourced.tech/babelfish/using-babelfish/clients).\n\nThe connection values that you should use to connect to these components, are\ndefined in the [`docker-compose.yml`](../docker-compose.yml), and sumarized in\nthe [Architecture documentation](./architecture.md#docker-networking)\n\n\n## Where Can I Read More About the Web Interface?\n\nThe user interface is based in the open-sourced [Apache Superset](http://superset.apache.org),\nso you can also refer to [Superset tutorials](http://superset.apache.org/tutorial.html)\nfor advanced usage of the web interface.\n\n\n## I Get IOError Permission denied\n\nIf you get this error message:\n\n```\nIOError: [Errno 13] Permission denied: u'./.env'\n```\n\nThis may happen if you have installed Docker from a snap package. This installation mode is not supported, please install it following [the official documentation](./quickstart/1-install-requirements.md#install-docker) (See [#78](https://github.com/src-d/sourced-ce/issues/78)).\n\n\n## Why Do I Need Internet Connection?\n\nsource{d} CE automatically fetches some resources from the Internet when they are not found locally:\n\n- the source{d} CE configuration is fetched automatically when initializing it for the first time, using the proper version for the current version of `sourced`, e.g. 
if using `v0.16.0` it will automatically fetch `https://raw.githubusercontent.com/src-d/sourced-ce/v0.16.0/docker-compose.yml`.\n- to download the docker images of the source{d} CE components when initializing source{d} for the first time, or when initializing it after changing its configuration.\n- to download repositories and its metadata from GitHub when you initialize source{d} CE with `sourced init orgs`.\n- to download and install [Docker Compose alternative](#docker-compose) if there is no local installation of Docker Compose.\n\nIf your connection to the network does not let source{d} CE to access to Internet, you should manually provide all these dependencies.\n"
  },
  {
    "path": "docs/learn-more/troubleshooting.md",
    "content": "\n# Troubleshooting:\n\n_For commonly asked questions and their answers, you can refer to the [FAQ](./faq.md)_\n\nCurrently, **source{d} CE** does not expose nor log all errors directly into the\nUI. In the current stage of **source{d} CE**, following these steps is the\nbetter way to know if something is failing, why, and to know how to recover the\napp from some problems. The first two steps use to be always mandatory:\n\n1. **[To see if any component is broken](#how-can-i-see-the-status-of-source-d-ce-components)**\n1. **[To see the logs of the running components](#how-can-i-see-logs-of-the-running-components)**\n1. [To know if scrapers finished their job](#how-can-i-see-what-happened-with-the-scrapers)\n  - [To restart one scraper](#how-can-i-restart-one-scraper)\n1. [To restart o initialize **source{d} CE** again](#how-to-restart-source-d-ce)\n1. [To ask for help if the issue could not be solved](./faq.md#where-can-i-find-more-assistance-to-run-source-d-or-notify-you-about-any-issue-or-suggestion)\n\nOther issues that we detected, and which are strictly related to the UI are:\n\n- [When I Try to Create a Chart from a Query, Nothing Happens.](#when-i-try-to-create-a-chart-from-a-query-nothing-happens)\n- [When I Try to Export a Dashboard, Nothing Happens.](#when-i-try-to-export-a-dashboard-nothing-happens)\n- [The Dashboard Takes a Long to Load, and the UI Freezes.](#the-dashboard-takes-a-long-to-load-and-the-ui-freezes)\n\n\n## source{d} CE Fails During Its Initialization\n\nThe initialization can fail fast if there is any port conflict, or missing config\nfile, etcetera; those errors are clearly logged in the terminal when they appear.\n\nIf when initializing **source{d} CE**, all the required components appear as created,\nbut the loading spinner keeps spinning forever (more than 1 minute can be symptomatic),\nthere can be an underlying problem causing the UI not to be opened. In this\nsituation you should:\n\n1. 
**[See if any component is broken](#how-can-i-see-the-status-of-source-d-ce-components)**\n1. **[See app logs or certain component logs](#how-can-i-see-logs-of-the-running-components)**\n1. [Restart or initialize **source{d} CE** again](#how-to-restart-source-d-ce)\n1. [Ask for help if the issue could not be solved](./faq.md#where-can-i-find-more-assistance-to-run-source-d-or-notify-you-about-any-issue-or-suggestion)\n\n\n## How Can I See the Status of source{d} CE Components?\n\nTo see the status of **source{d} CE** components, just run:\n\n```\n$ sourced status\n\nName                      Command                   State         Ports\n------------------------------------------------------------------------------\nsrcd-xxx_sourced-ui_1    /entrypoint.sh             Up (healthy)  :8088->8088\nsrcd-xxx_gitbase_1       ./init.sh                  Up            :3306->3306\nsrcd-xxx_bblfsh_1        /tini -- bblfshd           Up            :9432->9432\nsrcd-xxx_bblfsh-web_1    /bin/bblfsh-web -addr ...  Up            :9999->8080\nsrcd-xxx_metadatadb_1    docker-entrypoint.sh  ...  Up            :5433->5432\nsrcd-xxx_postgres_1      docker-entrypoint.sh  ...  Up            :5432->5432\nsrcd-xxx_redis_1         docker-entrypoint.sh  ...  Up            :6379->6379\nsrcd-xxx_ghsync_1        /bin/sh -c sleep 10s  ...  Exit 0\nsrcd-xxx_gitcollector_1  /bin/dumb-init -- /bi ...  Exit 0\n\n```\n\nIt will report the status of all **source{d} CE** components. 
All components should\nbe `Up`, except the scrapers `ghsync` and `gitcollector`; these exceptions are\nexplained in [How Can I See What Happened with the Scrapers?](#how-can-i-see-what-happened-with-the-scrapers)\n\nIf any component other than the scrapers is not `Up`, here are some key points to\nunderstand what might be happening:\n\n- All the components (except the scrapers) are restarted by Docker Compose\nautomatically, a process that can take a few seconds; if a component\nenters a restart loop, something is wrong.\n- When any component is failing, or has died, you should\n[see its logs to understand what is happening](#how-can-i-see-logs-of-the-running-components)\n\nWhen one of the required components fails, it usually prints an error in the UI,\n\ne.g. `lost connection to mysql server during query` while running a query might\nmean that `gitbase` went down.\n\ne.g. `unable to establish a connection with the bblfsh server: deadline exceeded`\nin SQL Lab might mean that `bblfsh` went down.\n\nIf the failing component is not successfully restarted in a few seconds, or if it\ngoes down when running certain queries, it could be a good idea to [open an issue](https://github.com/src-d/sourced-ce/issues)\ndescribing the problem.\n\n\n## How Can I See Logs of the Running Components?\n\n```shell\n$ sourced logs [-f] [components...]\n```\n\nAdding `-f` keeps the connection open, and the logs will appear as they\ncome instead of exiting after the last logged one.\n\nYou can pass a space-separated list of component names to see only their logs\n(i.e. `sourced-ui`, `gitbase`, `bblfsh`, `gitcollector`, `ghsync`, `metadatadb`, `postgres`, `redis`).\nIf you do not pass any component name, the logs of all components will be shown.\n\nCurrently, there is no way to filter by error level, so you could try with `grep`,\ne.g. 
\n\n```shell\nsourced logs gitcollector | grep error\n```\n\nwill output only the log lines where the word `error` appears.\n\n\n## How Can I See What Happened with the Scrapers?\n\n_When **source{d} CE** is initialized with `sourced init local`, the scrapers are\nnot relevant because the repositories to analyze come from your local data, so\nthe status of `ghsync` and `gitcollector` does not matter in this case._\n\nWhen running **source{d} CE** to analyze data from a list of GitHub organizations,\nthe `gitcollector` component is in charge of fetching GitHub repositories and the `ghsync`\ncomponent is in charge of fetching GitHub metadata (issues, pull requests...)\n\nOnce the UI is opened, you can see the progress of the import in the welcome\ndashboard, which reports the data imported, skipped, failed, and completed. The process\ncan take many minutes if the organization is big, so be patient. You can manually\nrefresh both charts to confirm that the process is progressing and not stuck.\nIf you believe there is a problem during the process, the best way\nto find out what is happening is:\n\n- **[check the components status](#how-can-i-see-the-status-of-source-d-ce-components)\nwith `sourced status`**; `gitcollector` and `ghsync` should be `Up` (the process\nhasn't finished yet), or `Exit 0` (the process finished successfully). 
They are\nindependent components, so they can finish in a different order, depending on how\nmany repositories, or how much metadata, needs to be processed.\n\n- **[check the logs](#how-can-i-see-logs-of-the-running-components) of the failing component with `sourced logs [-f] {gitcollector,ghsync}`**\nto get more info about the errors found.\n\n\n## How Can I Restart One Scraper?\n\n_Restarting a scraper should be done to recover from temporary problems like\nconnectivity loss, or lack of disk space, not\n[to update the data you're analyzing](./faq.md#how-to-update-the-data-from-the-organizations-being-analyzed)_\n\n**source{d} CE** does not provide a way to start only one scraper. The recommended way\nto restart them is [to restart the whole **source{d} CE**](#how-to-restart-source-d-ce),\nwhich is fast and safe for your data. In order to restart **source{d} CE**, run:\n\n```shell\n$ sourced restart\n```\n\n_Read more about [which data will be imported after restarting a scraper](./faq.md#how-to-update-the-data-from-the-organizations-being-analyzed)_\n\nIf you feel comfortable enough with Docker Compose, you could also try restarting\neach scraper separately, running:\n\n```shell\n$ cd ~/.sourced/workdirs/__active__\n$ docker-compose run gitcollector # to restart gitcollector\n$ docker-compose run ghsync       # to restart ghsync\n```\n\n\n## How to Restart source{d} CE\n\nRestarting **source{d} CE** can fix some errors, and is also the official way to\nrestart the scrapers. It is also needed after downloading a new config (by running\n`sourced compose download`). **source{d} CE** is restarted with the command:\n\n```shell\n$ sourced restart\n```\n\nIt only recreates the component containers, keeping all your data, like charts,\ndashboards, repositories, and GitHub metadata.\n\n\n## When I Try to Create a Chart from a Query, Nothing Happens.\n\nThe charts can be created from the SQL Lab, using the `Explore` button once you\nrun a query. 
If nothing happens, the browser may be blocking the new window that\nshould be opened to edit the new chart. You should configure your browser to let\nsource{d} UI open pop-ups (e.g. in Chrome it is done by allowing `127.0.0.1:8088`\nto handle `pop-ups and redirects` from the `Site Settings` menu).\n\n\n## When I Try to Export a Dashboard, Nothing Happens.\n\nIf nothing happens when pressing the `Export` button from the dashboard list, then\nyou should configure your browser to let source{d} UI open pop-ups (e.g. in\nChrome it is done by allowing `127.0.0.1:8088` to handle `pop-ups and redirects`\nfrom the `Site Settings` menu).\n\n\n## The Dashboard Takes a Long to Load and the UI Freezes.\n\n_This is a known issue that we're trying to address, but here is more info about it._\n\nIn some circumstances, loading the data for the dashboards can take some time,\nand the UI can freeze in the meantime. On big datasets, it can happen\nthe first time you access the dashboards, or when they are refreshed.\n\nThere are some limitations with how Apache Superset handles long-running SQL\nqueries, which may affect the dashboard charts. Since most of the charts of the\nOverview dashboard load their data from gitbase, their queries can take more time\nthan the UI expects.\n\nWhen it happens, the UI can freeze, or you can get this message in some charts:\n>_Query timeout - visualization queries are set to timeout at 300 seconds.\nPerhaps your data has grown, your database is under unusual load, or you are\nsimply querying a data source that is too large to be processed within the timeout\nrange. If that is the case, we recommend that you summarize your data further._\n\nWhen it occurs, you should wait until the UI is responsive again, and separately\nrefresh each failing chart with its `force refresh` option (in its top-right corner).\nWith some big datasets, it took 3 refreshes and 15 minutes to get data for all charts.\n"
  },
  {
    "path": "docs/quickstart/1-install-requirements.md",
    "content": "# Install source{d} Community Edition Dependencies\n\n## Install Docker\n\n_Please note that Docker Toolbox is not supported. In case that you're running Docker Toolbox, please consider updating to newer Docker Desktop for Mac or Docker Desktop for Windows._\n\n_For Linux installations, using Docker installed from a snap package is not supported._\n\nFollow the instructions based on your OS:\n\n- [Docker for Ubuntu Linux](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce-1)\n- [Docker for Arch Linux](https://wiki.archlinux.org/index.php/Docker#Installation)\n- [Docker for macOS](https://store.docker.com/editions/community/docker-ce-desktop-mac)\n- [Docker Desktop for Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows) (Make sure to read the [system requirements for Docker on Windows](https://docs.docker.com/docker-for-windows/install/).)\n\nMinimal supported version 18.02.0.\n\n## Docker Compose\n\n**source{d} CE** is deployed as Docker containers, using [Docker Compose](https://docs.docker.com/compose), but it is not required to have a local installation of Docker Compose; if it is not found it will be downloaded from docker sources, and deployed inside a container.\n\nIf you prefer a local installation of Docker Compose, or you have no access to internet to download it, you can follow the [Docker Compose install guide](https://docs.docker.com/compose/install)\n\nMinimal supported version 1.20.0.\n\n## Internet Connection\n\nsource{d} CE automatically fetches some resources from the Internet when they are not found locally, so in order to use all source{d} CE capacities, Internet connection is needed.\n\nFor more details, you can refer to [Why Do I Need Internet Connection?](../learn-more/faq.md#why-do-i-need-internet-connection)\n"
  },
  {
    "path": "docs/quickstart/2-install-sourced.md",
    "content": "# Install source{d} Community Edition\n\nDownload the **[latest release](https://github.com/src-d/sourced-ce/releases/latest)** for your Linux, macOS (Darwin) or Windows.\n\n## On Linux or macOS\n\nExtract `sourced` binary from the release you downloaded, and move it into your bin folder to make it executable from any directory:\n\n```bash\n$ tar -xvf path/to/sourced-ce_REPLACE-VERSION_REPLACE-OS_amd64.tar.gz\n$ sudo mv path/to/sourced-ce_REPLACE-OS_amd64/sourced /usr/local/bin/\n```\n\n## On Windows\n\n*Please note that from now on we assume that the commands are executed in `powershell` and not in `cmd`.*\n\nCreate a directory for `sourced.exe` and add it to your `$PATH`, running these commands in a powershell as administrator:\n```powershell\nmkdir 'C:\\Program Files\\sourced'\n# Add the directory to the `%path%` to make it available from anywhere\nsetx /M PATH \"$($env:path);C:\\Program Files\\sourced\"\n# Now open a new powershell to apply the changes\n```\n\nExtract the `sourced.exe` executable from the release you downloaded, and copy it into the directory you created in the previous step:\n```powershell\nmv \\path\\to\\sourced-ce_windows_amd64\\sourced.exe 'C:\\Program Files\\sourced'\n```\n"
  },
  {
    "path": "docs/quickstart/3-init-sourced.md",
    "content": "# Initialize source{d} Community Edition\n\n_For the full list of the sub-commands offered by `sourced`, please take a look\nat [the `sourced` sub-commands inventory](../usage/commands.md)._\n\n**source{d} CE** can be initialized from 2 different data sources: GitHub organizations, or local Git repositories.\n\nPlease note that you have to choose one data source to initialize **source{d} CE**, but you can have more than one isolated environment, and they can have different sources. See the guide about [Working With Multiple Data Sets](../usage/multiple-datasets.md) for more details.\n\n**source{d} CE** will download and install Docker images on demand. Therefore, the first time you run some of these commands, they might take a bit of time to start up. Subsequent runs will be faster.\n\n\n## From GitHub Organizations\n\nWhen using GitHub organizations to populate the **source{d} CE** database you only need to provide a list of organization names and a [GitHub personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/). If no scope is granted to the user token, only public\ndata will be fetched. To let **source{d} CE** access also private repositories and hidden users, the token should\nhave the following scopes enabled:\n\n- `repo` Full control of private repositories\n- `read:org` Read org and team membership, read org projects\n\n\nUse this command to initialize, e.g.\n\n```shell\n$ sourced init orgs --token <token> src-d,bblfsh\n```\n\nIt will also download, in the background, the repositories of the passed GitHub organizations, and their metadata: pull requests, issues, users...\n\n\n## From Local Repositories\n\n```shell\n$ sourced init local </path/to/repositories>\n```\n\nIt will initialize **source{d} CE** to analyze the git repositories under the passed path, or under the current directory if no one is passed. 
The repositories will be found recursively.\n\n**Note for macOS:**\nDocker for Mac [requires enabling file sharing](https://docs.docker.com/docker-for-mac/troubleshoot/#volume-mounting-requires-file-sharing-for-any-project-directories-outside-of-users) for any path outside of `/Users`.\n\n**Note for Windows:** Docker for Windows [requires shared drives](https://docs.docker.com/docker-for-windows/#shared-drives). Other than that, it's important to use a working directory that doesn't include any sub-directory that is not readable by the user running `sourced`. For example, using `C:\\Users` as workdir will most probably not work. For more details see [this issue](https://github.com/src-d/engine/issues/250).\n\n\n## What's Next?\n\nOnce **source{d} CE** has been initialized, it will automatically open the web UI.\nIf the UI is not opened automatically, you can use the `sourced web` command, or visit http://127.0.0.1:8088.\n\nUse login: `admin` and password: `admin`, to access the web interface.\n"
  },
  {
    "path": "docs/quickstart/4-explore-sourced.md",
    "content": "# Explore source{d} CE Web Interface\n\n_If you have any problem running **source{d} CE** you can take a look to our [Troubleshooting](../learn-more/troubleshooting.md) section, and to our [source{d} Forum](https://forum.sourced.tech), where you can also ask for help when using **source{d} CE**. If you spotted a bug, or have a feature request, please [open an issue](https://github.com/src-d/sourced-ce/issues) to let us know abut it._\n\n_In some circumstances, loading the data for the dashboards can take some time, and the UI can be frozen in the meanwhile. It can happen &mdash;on big datasets&mdash;, the first time you access the dashboards, or when they are refreshed. Please, take a look to our\n[Troubleshooting](../learn-more/troubleshooting.md#the-dashboard-takes-a-long-to-load-and-the-ui-freezes)\nto get more info about this exact issue._\n\nOnce **source{d} CE** has been [initialized with `sourced init`](./3-init-sourced.md), it will automatically open the web UI. If the UI is not automatically opened, you can use `sourced web` command, or visit http://127.0.0.1:8088.\n\nUse login: `admin` and password: `admin`, to access the web interface.\n\nIf you [initialized **source{d} CE** from GitHub Organizations](./3-init-sourced.md#from-github-oganizations), its repositories and metadata will be downloaded on background, and it will be available graduatelly. 
You will find more info in the welcome dashboard once you log in.\n\n\n## Sections\n\nThe most relevant features that the **source{d} CE** Web Interface offers are:\n- **[SQL Lab](#sql-lab-querying-code-and-metadata)**, to query your repositories and their GitHub metadata.\n- **[Babelfish web](#uast-parsing-code)**, a web interface to parse files into UASTs.\n- **[Dashboards](#dashboards)**, to aggregate charts for exploring and visualizing your data.\n- **Charts**, to see your data with a rich set of data visualizations.\n- A flexible UI to manage users, data sources, export data...\n\nThe user interface is based on the open-source [Apache Superset](http://superset.incubator.apache.org), so you can also refer to their documentation for advanced usage of the web interface.\n\n\n## SQL Lab. Querying Code and Metadata\n\n_If you prefer to work within the terminal via command line, you can open a SQL REPL by running `sourced sql`_\n\nUsing the `SQL Lab` tab, from the web interface, you can analyze your dataset using SQL queries, and create charts from those queries with the `Explore` button.\n\nYou can find some sample queries in the [examples](../usage/examples.md).\n\nIf you want to know what the database schema looks like, you can use either regular `SHOW` or `DESCRIBE` queries, or you can refer to the [diagram about gitbase entities and relations](https://docs.sourced.tech/gitbase/using-gitbase/schema#database-diagram).\n\n```bash\n$ sourced sql \"SHOW tables;\"\n+--------------+\n|    TABLE     |\n+--------------+\n| blobs        |\n| commit_blobs |\n| commit_files |\n| commit_trees |\n| commits      |\n| files        |\n| ref_commits  |\n| refs         |\n| remotes      |\n| repositories |\n| tree_entries |\n+--------------+\n```\n\n```bash\n$ sourced sql \"DESCRIBE TABLE commits;\"\n+---------------------+-----------+\n|        NAME         |   TYPE    |\n+---------------------+-----------+\n| repository_id       | TEXT      |\n| commit_hash         | TEXT      |\n| 
commit_author_name  | TEXT      |\n| commit_author_email | TEXT      |\n| commit_author_when  | TIMESTAMP |\n| committer_name      | TEXT      |\n| committer_email     | TEXT      |\n| committer_when      | TIMESTAMP |\n| commit_message      | TEXT      |\n| tree_hash           | TEXT      |\n| commit_parents      | JSON      |\n+---------------------+-----------+\n```\n\n\n## UAST. Parsing Code\n\n_Please, refer to the [quick explanation about what Babelfish is](../usage/bblfsh.md) to know more about it._\n\nYou can get UASTs from the `UAST` tab (parsing files by direct input), or using the `UAST` gitbase function over blob contents on the `SQL Lab` tab.\n\n\n## Dashboards\n\n_Please, refer to [Superset Tutorial, creating your first dashboard](http://superset.incubator.apache.org/tutorial.html) for more details._\n\nThe dashboards let you aggregate custom charts to show different metrics for your repositories in the same place.\n\nYou can create them:\n- From the `Dashboard` tab, adding a new one, and then selecting new charts.\n- From any chart view, the `Save` button will let you add it to a new or existing one.\n"
  },
  {
    "path": "docs/quickstart/README.md",
    "content": "# Quickstart\n\nThis guide covers the full setup journey, from zero to populated dashboard with **source{d} Community Edition**.\n\nThis process is divided into the following steps:\n\n1. [Install **source{d} CE** Dependencies](./1-install-requirements.md)\n1. [Install **source{d} CE**](./2-install-sourced.md)\n1. [Initialize the Dataset](./3-init-sourced.md):\n    - using local git data\n    - using git repositories from your GitHub org\n1. [Explore Your Dataset](./4-explore-sourced.md)\n"
  },
  {
    "path": "docs/usage/README.md",
    "content": "# Usage of source{d} Community Edition\n\nOnce you know how to install and run **source{d} Community Edition**, you will find in this section some useful resources for guiding your first steps using this tool.\n\n- [`sourced` Command Reference](./commands.md)\n- [Using Multiple Datasets](./multiple-datasets.md) will show you how to analyze different datasets, no matter if they are stored locally or in GitHub.\n- [Some SQL Examples to Explore Your Dataset](./examples.md)\n- [Babelfish UAST](./bblfsh.md), about how to extract code features and understand code structure in a language-agnostic way\".\n\nIf you are interested in the different components of **source{d} Community Edition**, you can read more about it in the [docs about architecture](../learn-more/architecture.md)\n\nSome common questions are answered in the [FAQ](../learn-more/faq.md), and common problems and how to solve them in the  [Troubleshooting](../learn-more/troubleshooting.md) guide. If you have any question about source{d} you can ask in our [source{d} Forum](https://forum.sourced.tech). If you spotted a bug, or have a feature request, please [open an issue](https://github.com/src-d/sourced-ce/issues) to let us know about it.\n"
  },
  {
    "path": "docs/usage/bblfsh.md",
    "content": "# Babelfish UAST\n\n_In the [Babelfish documentation](https://docs.sourced.tech/babelfish/), you will\nfind detailed information about Babelfish specifications, usage, examples, etc._\n\nOne of the most important components of **source{d} CE** is the UAST, which stands for:\n[Universal Abstract Syntax Tree](https://docs.sourced.tech/babelfish/uast/uast-specification-v2).\n\nUASTs are a normalized form of a programming language's AST, annotated with language-agnostic roles and transformed with language-agnostic concepts (e.g. Functions, Imports, etc.).\n\nThese enable an advanced static analysis of code and easy feature extraction for statistics or [Machine Learning on Code](https://github.com/src-d/awesome-machine-learning-on-source-code).\n\n\n## UAST Usage\n\nFrom the web interface, you can use the `UAST` tab, to parse files by direct input, or you can also get UASTs from the `SQL Lab` tab, using the `UAST(content)` [gitbase function](https://docs.sourced.tech/gitbase/using-gitbase/functions).\n\nFor the whole syntax about how to query the UASTs, you can refer to [How To Query UASTs With Babelfish](https://docs.sourced.tech/babelfish/using-babelfish/uast-querying)\n\n\n## Supported Languages\n\nTo see which languages are available, check the table of [Babelfish supported languages](https://docs.sourced.tech/babelfish/languages).\n\n\n## Clients and Connectors\n\nThe language parsing server (Babelfish) is available from the web interface, but you can also connect to the parsing server, deployed by **source{d} CE**, with several language clients, currently supported and maintained:\n\n- [Babelfish Go Client](https://github.com/bblfsh/go-client)\n- [Babelfish Python Client](https://github.com/bblfsh/client-python)\n- [Babelfish Scala Client](https://github.com/bblfsh/client-scala)\n"
  },
  {
    "path": "docs/usage/commands.md",
    "content": "# List of `sourced` Sub-Commands\n\n`sourced` binary offers you different kinds of sub-commands:\n- [to manage their containers](#manage-containers)\n- [to manage **source{d} CE** configuration](#manage-configuration)\n- [to open interfaces to access its data](#open-interfaces)\n- [show info about the command](#others)\n\nHere is the list of all these commands and its description; you can get more info about each one\nadding `--help` when you run it.\n\n\n## Manage Containers\n\n### sourced init\n\n_There is a dedicated section to document this command in the quickstart about [how to initialize **source{d} CE**](../quickstart/3-init-sourced.md)_\n\nThis command installs and initializes **source{d} CE** docker containers, networks, and volumes, downloading its docker images if needed.\n\nIt can work over a local repository or a list of GitHub organizations.\n\n**source{d} CE** will download and install Docker images on demand. Therefore, the first time you run some of these commands, they might take a bit of time to start up. Subsequent runs will be faster.\n\nOnce **source{d} CE** has been initialized, it will automatically open the web UI.\nIf the UI is not opened automatically, you can use [`sourced web`](#sourced-web) command, or visit http://127.0.0.1:8088.\n\nUse login: `admin` and password: `admin`, to access the web interface.\n\n#### sourced init orgs\n\n```shell\n$ sourced init orgs --token=_USER_TOKEN_ [--with-forks] org1,org2...\n```\n\nInstalls and initializes **source{d} CE** for a list of GitHub organizations, downloading their repositories and\nmetadata: Users, PullRequests, Issues...\n\nThe `orgs` argument must be a comma-separated list of GitHub organizations.\n\nThe `--token` must contain a valid GitHub user token for the given organizations. 
It should be granted the\n`repo` and `read:org` scopes.\n\nIf `--with-forks` is passed, it will also fetch repositories that are marked as forks.\n\n#### sourced init local\n\n```shell\n$ sourced init local [/path/to/repos]\n```\n\nInstalls and initializes **source{d} CE** using a local directory containing the git repositories to be processed by **source{d} CE**. If the local path to the `workdir` is not provided, the current working directory will be used.\n\n### sourced start\n\nStarts all the components that were initialized with `init` and then stopped with `stop`.\n\n### sourced stop\n\nStops all running containers without removing them. They can be started again with `start`.\n\n### sourced prune\n\nStops containers and removes containers, networks, volumes, and configurations created by `init` for the current working directory.\n\nTo delete resources for all the installed working directories, add the `--all` flag.\n\nContainer images are not deleted unless you specify the `--images` flag.\n\nIf you want to completely uninstall `sourced`, you must also delete the `~/.sourced` directory.\n\n### sourced logs\n\nShows logs from source{d} components.\n\nIf `--follow` is used, the logs are shown as they are logged until you exit with `Ctrl+C`.\n\nYou can optionally pass component names to see only their logs.\n\n```shell\n$ sourced logs\n$ sourced logs --follow\n$ sourced logs --follow gitbase bblfsh\n```\n\n\n## Manage Configuration\n\n### sourced status\n\nShows the status of **source{d} CE** components, the installed working directories, and the current deployment.\n\n#### sourced status all\n\nShows all the available status information, from the `components`, `config`, and `workdirs` sub-commands below.\n\n#### sourced status components\n\nShows the status of the component containers for the running working directory.\n\n#### sourced status config\n\nShows the docker-compose environment variables configuration for the active working directory.\n\n#### sourced status 
workdirs\n\nLists all the previously initialized working directories.\n\n### sourced compose\n\nManages Docker Compose files in the `~/.sourced` directory with the following subcommands:\n\n#### sourced compose download\n\nDownloads the `docker-compose.yml` file that defines **source{d} CE** services. By default, the command downloads the file for this binary version, but you can also download another version, or any custom one, using its URL.\n\nExamples:\n```shell\n$ sourced compose download\n$ sourced compose download v0.0.1\n$ sourced compose download master\n$ sourced compose download https://raw.githubusercontent.com/src-d/sourced-ce/master/docker-compose.yml\n```\n\n#### sourced compose list\n\nLists the available `docker-compose.yml` files, and shows which one is active.\nYou can activate any other with `compose set`.\n\n#### sourced compose set\n\nSets the active `docker-compose.yml` file. Accepts either the name or the index of the compose file, as returned by `compose list`.\n\n### sourced restart\n\nUpdates the current installation according to the active Docker Compose file.\n\nIt only recreates the component containers, keeping all your data, such as charts, dashboards, repositories, and GitHub metadata.\n\n\n## Open Interfaces\n\n### sourced sql\n\nOpens a MySQL client connected to gitbase.\n\nYou can also pass a SQL query to be run by gitbase instead of opening the REPL, e.g.\n```shell\n$ sourced sql \"show databases\"\n\n+----------+\n| Database |\n+----------+\n| gitbase  |\n+----------+\n```\n\n**source{d} CE** SQL supports a [UAST](./bblfsh.md) function that returns a Universal AST for the selected source text. 
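\n\nFor instance, a minimal sketch adapted from the [SQL examples](./examples.md#queries-with-uasts) in these docs, extracting UASTs from blob contents (the `LIMIT` is only there to keep the output small):\n\n```sql\nSELECT file_path,\n       UAST(blob_content, LANGUAGE(file_path, blob_content)) AS uast\nFROM files\nLIMIT 10;\n```\n\n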
UAST values are returned as binary blobs and are best visualized in the [SQL Lab, from the web interface](../quickstart/4-explore-sourced.md#sql-lab-querying-code-and-metadata), rather than in the CLI, where they are shown as binary data.\n\n### sourced web\n\nOpens the web interface in your browser.\n\nUse login: `admin` and password: `admin`, to access the web interface.\n\n\n## Others\n\n### sourced version\n\nShows the version of the `sourced` command being used.\n\n### sourced completion\n\nPrints a bash completion script for sourced; you can place its output in\n`/etc/bash_completion.d/sourced`, or add it to your `.bashrc` by running:\n\n```shell\n$ echo \"source <(sourced completion)\" >> ~/.bashrc\n```\n"
  },
  {
    "path": "docs/usage/examples.md",
    "content": "# SQL Examples to Analyze Your Data\n\n_If you want to know what the database schema looks like, you can refer to the [diagram about gitbase entities and relations](https://docs.sourced.tech/gitbase/using-gitbase/schema#database-diagram), or just use regular `SHOW` or `DESCRIBE` queries._\n\n_In gitbase repository, you will find more [SQL examples of queries](https://docs.sourced.tech/gitbase/using-gitbase/examples)._\n\n\n## Index\n\n* [Queries For Repositories](#queries-for-repositories)\n* [Queries With Files](#queries-with-files)\n* [Queries With UASTs](#queries-with-uasts)\n* [Queries About Comitters](#queries-about-comitters)\n\n\n## Queries For Repositories\n\n**Show me the repositories I am analyzing:**\n\n```sql\nSELECT * FROM repositories;\n```\n\n**Last commit messages for HEAD for every repository**\n\n```sql\nSELECT commit_message\nFROM refs\nNATURAL JOIN commits\nWHERE ref_name = 'HEAD';\n```\n\n**Top 10 repositories by commit count from [HEAD](https://git-scm.com/book/en/v2/Git-Internals-Git-References#ref_the_ref):**\n\n```sql\nSELECT repository_id,commit_count\nFROM (\n    SELECT\n        repository_id,\n        COUNT(*) AS commit_count\n    FROM ref_commits\n    WHERE ref_name = 'HEAD'\n    GROUP BY repository_id\n) AS q\nORDER BY commit_count DESC\nLIMIT 10;\n```\n\n**10 top repos by file count at HEAD**\n\n```sql\nSELECT repository_id, num_files FROM (\n    SELECT COUNT(*) num_files, repository_id\n    FROM refs\n    NATURAL JOIN commit_files\n    WHERE ref_name = 'HEAD'\n    GROUP BY repository_id\n) AS t\nORDER BY num_files DESC\nLIMIT 10;\n```\n\n\n## Queries With Files\n\n**Query for all LICENSE & README files across history:**\n\n```sql\nSELECT file_path, repository_id, blob_size\nFROM files\nWHERE\n    file_path = 'LICENSE'\n    OR file_path = 'README.md';\n```\n\n**Query all files at [HEAD](https://git-scm.com/book/en/v2/Git-Internals-Git-References#ref_the_ref):**\n\n```sql\nSELECT cf.file_path, f.blob_size\nFROM 
ref_commits rc\nNATURAL JOIN commit_files cf\nNATURAL JOIN files f\nWHERE\n    rc.ref_name = 'HEAD'\n    AND rc.history_index = 0;\n```\n\n\n## Queries With UASTs\n\n_**Note**: UAST values are returned as binary blobs; they're best visualized in the web UI interface rather than the CLI where are seen as binary data._\n\n**Retrieve the UAST for all files at [HEAD](https://git-scm.com/book/en/v2/Git-Internals-Git-References#ref_the_ref):**\n\n```sql\nSELECT * FROM (\n    SELECT cf.file_path,\n        UAST(f.blob_content, LANGUAGE(f.file_path,  f.blob_content)) as uast\n    FROM ref_commits r\n    NATURAL JOIN commit_files cf\n    NATURAL JOIN files f\n    WHERE\n        r.ref_name = 'HEAD'\n        AND r.history_index = 0\n) t WHERE uast != '';\n```\n\n\n## Queries About Comitters\n\n**Top committers per repository**\n\n```sql\nSELECT * FROM (\n    SELECT\n        commit_author_email as author,\n        repository_id as id,\n        count(*) as num_commits\n    FROM commits\n    GROUP BY commit_author_email, repository_id\n) AS t\nORDER BY num_commits DESC;\n```\n"
  },
  {
    "path": "docs/usage/multiple-datasets.md",
    "content": "# Working With Multiple Data Sets\n\nYou can deploy more than one **source{d} CE** instance with different sets of organizations, or repositories, to analyze.\n\nFor example, you may have initially started **source{d} CE** with the repositories in the `src-d` organization, with the command:\n```shell\n$ sourced init orgs --token <token> src-d\n```\n\nAfter a while, you may want to analyze the data on another set of repositories. You can run `sourced init` again with a different organization:\n```shell\n$ sourced init orgs --token <token> bblfsh\n```\n\nThis command will then stop all the running containers used for the previous dataset, create an isolated environment for the new data, and create a new, clean deployment.\n\nPlease note that each path will have an isolated deployment. This means that for example any chart or dashboard created for the deployment for `src-d` will not be available to the new deployment for `bblfsh`.\n\nEach isolated environment is persistent (unless you run `sourced prune`). Which means that if you decide to re-deploy **source{d} CE** using the original organization:\n```shell\n$ sourced init orgs --token <token> src-d\n```\n\nYou will get back to the previous state, and things like charts and dashboards will be restored.\n\nThese isolated environments also allow you to deploy **source{d} CE** using a local set of Git repositories. For example, if we want a third deployment to analyze repositories already existing in the `~/repos` directory, we just need to run `init` again:\n\n```shell\n$ sourced init local ~/repos\n```\n\nYou can list all the installed instances, and know which one is active at any moment by running `sourced status workdirs`.\n\nIf you are familiar with Docker Compose and you want more control over the underlying resources, you can explore the contents of your `~/.sourced` directory. 
There you will find `docker-compose.yml` and `.env` files for each set of repositories used by `sourced init`.\n\n_You can read more about how the environments are isolated in the **source{d} CE**\n[architecture docs](../learn-more/architecture.md)_\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/src-d/sourced-ce\n\ngo 1.13\n\n// See https://github.com/gotestyourself/gotest.tools/issues/156\n// replace gotest.tools => gotest.tools v2.3.0\nreplace gotest.tools => gotest.tools v0.0.0-20181223230014-1083505acf35\n\nrequire (\n\tgithub.com/PuerkitoBio/goquery v1.5.0\n\tgithub.com/blang/semver v3.5.1+incompatible\n\tgithub.com/golang/protobuf v1.3.1 // indirect\n\tgithub.com/google/btree v1.0.0 // indirect\n\tgithub.com/google/go-github/v25 v25.1.1\n\tgithub.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc\n\tgithub.com/jessevdk/go-flags v1.4.0 // indirect\n\tgithub.com/kami-zh/go-capturer v0.0.0-20171211120116-e492ea43421d // indirect\n\tgithub.com/lib/pq v1.1.1\n\tgithub.com/mattn/go-colorable v0.1.1 // indirect\n\tgithub.com/mattn/go-isatty v0.0.7 // indirect\n\tgithub.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect\n\tgithub.com/onsi/ginkgo v1.8.0 // indirect\n\tgithub.com/onsi/gomega v1.5.0 // indirect\n\tgithub.com/pbnjay/memory v0.0.0-20190104145345-974d429e7ae4\n\tgithub.com/peterbourgon/diskv v2.0.1+incompatible // indirect\n\tgithub.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4\n\tgithub.com/pkg/errors v0.8.1\n\tgithub.com/serenize/snaker v0.0.0-20171204205717-a683aaf2d516\n\tgithub.com/sirupsen/logrus v1.4.1 // indirect\n\tgithub.com/src-d/envconfig v1.0.0 // indirect\n\tgithub.com/stretchr/testify v1.3.0\n\tgithub.com/x-cray/logrus-prefixed-formatter v0.5.2 // indirect\n\tgolang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734\n\tgolang.org/x/net v0.0.0-20190503192946-f4e77d36d62c\n\tgopkg.in/src-d/go-cli.v0 v0.0.0-20190422143124-3a646154da79\n\tgopkg.in/src-d/go-errors.v1 v1.0.0\n\tgopkg.in/src-d/go-log.v1 v1.0.2 // indirect\n\tgopkg.in/yaml.v2 v2.2.2 // indirect\n\tgotest.tools v0.0.0-00010101000000-000000000000\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/PuerkitoBio/goquery v1.5.0 h1:uGvmFXOA73IKluu/F84Xd1tt/z07GYm8X49XKHP7EJk=\ngithub.com/PuerkitoBio/goquery v1.5.0/go.mod h1:qD2PgZ9lccMbQlc7eEOjaeRlFQON7xY8kdmcsrnKqMg=\ngithub.com/andybalholm/cascadia v1.0.0 h1:hOCXnnZ5A+3eVDX8pvgl4kofXv2ELss0bKcqRySc45o=\ngithub.com/andybalholm/cascadia v1.0.0/go.mod h1:GsXiBklL0woXo1j/WYWtSYYC4ouU9PqHO0sqidkEA4Y=\ngithub.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=\ngithub.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=\ngithub.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=\ngithub.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=\ngithub.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=\ngithub.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=\ngithub.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=\ngithub.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=\ngithub.com/google/go-github/v25 v25.1.1 h1:6eW++i/CXcR5GKfYaaJT7oJJtHNU+/iiw55noEPNVao=\ngithub.com/google/go-github/v25 v25.1.1/go.mod h1:6z5pC69qHtrPJ0sXPsj4BLnd82b+r6sLB7qcBoRZqpw=\ngithub.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=\ngithub.com/google/go-querystring v1.0.0/go.mod 
h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=\ngithub.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc h1:f8eY6cV/x1x+HLjOp4r72s/31/V2aTUtg5oKRRPf8/Q=\ngithub.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=\ngithub.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=\ngithub.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=\ngithub.com/jessevdk/go-flags v1.4.0 h1:4IU2WS7AumrZ/40jfhf4QVDMsQwqA7VEHozFRrGARJA=\ngithub.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=\ngithub.com/kami-zh/go-capturer v0.0.0-20171211120116-e492ea43421d h1:cVtBfNW5XTHiKQe7jDaDBSh/EVM4XLPutLAGboIXuM0=\ngithub.com/kami-zh/go-capturer v0.0.0-20171211120116-e492ea43421d/go.mod h1:P2viExyCEfeWGU259JnaQ34Inuec4R38JCyBx2edgD0=\ngithub.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=\ngithub.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=\ngithub.com/lib/pq v1.1.1 h1:sJZmqHoEaY7f+NPP8pgLB/WxulyR3fewgCM2qaSlBb4=\ngithub.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=\ngithub.com/mattn/go-colorable v0.1.1 h1:G1f5SKeVxmagw/IyvzvtZE4Gybcc4Tr1tf7I8z0XgOg=\ngithub.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=\ngithub.com/mattn/go-isatty v0.0.5 h1:tHXDdz1cpzGaovsTB+TVB8q90WEokoVmfMqoVcrLUgw=\ngithub.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=\ngithub.com/mattn/go-isatty v0.0.7 h1:UvyT9uN+3r7yLEYSlJsbQGdsaB/a0DlgWP3pql6iwOc=\ngithub.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=\ngithub.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b h1:j7+1HpAFS1zy5+Q4qx1fWh90gTKwiN4QCGoY9TWyyO4=\ngithub.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod 
h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=\ngithub.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=\ngithub.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=\ngithub.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=\ngithub.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=\ngithub.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=\ngithub.com/pbnjay/memory v0.0.0-20190104145345-974d429e7ae4 h1:MfIUBZ1bz7TgvQLVa/yPJZOGeKEgs6eTKUjz3zB4B+U=\ngithub.com/pbnjay/memory v0.0.0-20190104145345-974d429e7ae4/go.mod h1:RMU2gJXhratVxBDTFeOdNhd540tG57lt9FIUV0YLvIQ=\ngithub.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=\ngithub.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=\ngithub.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4 h1:49lOXmGaUpV9Fz3gd7TFZY106KVlPVa5jcYD1gaQf98=\ngithub.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4/go.mod h1:4OwLy04Bl9Ef3GJJCoec+30X3LQs/0/m4HFRt/2LUSA=\ngithub.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=\ngithub.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/serenize/snaker v0.0.0-20171204205717-a683aaf2d516 h1:ofR1ZdrNSkiWcMsRrubK9tb2/SlZVWttAfqUjJi6QYc=\ngithub.com/serenize/snaker v0.0.0-20171204205717-a683aaf2d516/go.mod h1:Yow6lPLSAXx2ifx470yD/nUe22Dv5vBvxK/UK9UUTVs=\ngithub.com/sirupsen/logrus v1.4.1 h1:GL2rEmy6nsikmW0r8opw9JIRScdMF5hA8cOYLH7In1k=\ngithub.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=\ngithub.com/spf13/pflag v1.0.3/go.mod 
h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=\ngithub.com/src-d/envconfig v1.0.0 h1:/AJi6DtjFhZKNx3OB2qMsq7y4yT5//AeSZIe7rk+PX8=\ngithub.com/src-d/envconfig v1.0.0/go.mod h1:Q9YQZ7BKITldTBnoxsE5gOeB5y66RyPXeue/R4aaNBc=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=\ngithub.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=\ngithub.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/x-cray/logrus-prefixed-formatter v0.5.2 h1:00txxvfBM9muc0jiLIEAkAcIMJzfthRT6usrui8uGmg=\ngithub.com/x-cray/logrus-prefixed-formatter v0.5.2/go.mod h1:2duySbKsL6M18s5GU7VPsoEPHyzalCE06qoARUCeBBE=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734 h1:p/H982KKEjUnLJkM3tt/LemDnOc1GiZL5FCVlORJ5zo=\ngolang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=\ngolang.org/x/net v0.0.0-20180218175443-cbe0f9307d01/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 h1:0GoQqolDA55aaLxZyTzK/Y2ePZzZTUrRacwib7cNsYQ=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190503192946-f4e77d36d62c 
h1:uOCk1iQW6Vc18bnC13MfzScl+wdKBmM9Y9kU7Z83/lw=\ngolang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=\ngolang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/tools v0.0.0-20180810170437-e96c4e24768d/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngoogle.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=\ngopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=\ngopkg.in/src-d/go-cli.v0 v0.0.0-20190422143124-3a646154da79 h1:MBr8uUjT5gZe4udsLGNZ3nWy/+ck0LSB91mQjgpwLUI=\ngopkg.in/src-d/go-cli.v0 
v0.0.0-20190422143124-3a646154da79/go.mod h1:z+K8VcOYVYcSwSjGebuDL6176A1XskgbtNl64NSg+n8=\ngopkg.in/src-d/go-errors.v1 v1.0.0 h1:cooGdZnCjYbeS1zb1s6pVAAimTdKceRrpn7aKOnNIfc=\ngopkg.in/src-d/go-errors.v1 v1.0.0/go.mod h1:q1cBlomlw2FnDBDNGlnh6X0jPihy+QxZfMMNxPCbdYg=\ngopkg.in/src-d/go-log.v1 v1.0.2 h1:dED4100pntH4l3qOTgD1xebQR6pVU8tuPbUCmqiMsb0=\ngopkg.in/src-d/go-log.v1 v1.0.2/go.mod h1:GN34hKP0g305ysm2/hctJ0Y8nWP3zxXXJ8GFabTyABE=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=\ngopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=\ngopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngotest.tools v0.0.0-20181223230014-1083505acf35 h1:zpdCK+REwbk+rqjJmHhiCN6iBIigrZ39glqSF0P3KF0=\ngotest.tools v0.0.0-20181223230014-1083505acf35/go.mod h1:R//lfYlUuTOTfblYI3lGoAAAebUdzjvbmQsuB7Ykd90=\n"
  },
  {
    "path": "run-integration-tests.bat",
    "content": ":: we can't use makefile for windows because it depends on CI makefile which depends on shell\r\n\r\n:: compile sourced-ce\r\ngo build -tags=forceposix -o build/sourced-ce_windows_amd64/sourced.exe ./cmd/sourced\r\n\r\n:: run tests\r\nset TMPDIR_INTEGRATION_TEST=C:\\tmp\r\n:: see https://stackoverflow.com/questions/22948189/batch-getting-the-directory-is-not-empty-on-rmdir-command\r\ndel /f /s /q %TMPDIR_INTEGRATION_TEST% 1>nul\r\nrmdir /s /q %TMPDIR_INTEGRATION_TEST%\r\nmkdir %TMPDIR_INTEGRATION_TEST%\r\nset TMP=%TMPDIR_INTEGRATION_TEST%\r\ngo test -timeout 20m -parallel 1 -count 1 -tags=\"forceposix integration\" github.com/src-d/sourced-ce/test/ -v\r\n"
  },
  {
    "path": "test/commander.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"gotest.tools/icmd\"\n)\n\ntype Commander struct {\n\tbin        string\n\tsourcedDir string\n}\n\n// RunCmd runs a command with the appropriate SOURCED_DIR and returns a Result\nfunc (s *Commander) RunCmd(args []string, cmdOperators ...icmd.CmdOp) *icmd.Result {\n\tc := icmd.Command(s.bin, args...)\n\n\tnewEnv := append(os.Environ(),\n\t\tfmt.Sprintf(\"SOURCED_DIR=%s\", s.sourcedDir))\n\top := append(cmdOperators, icmd.WithEnv(newEnv...))\n\n\treturn icmd.RunCmd(c, op...)\n}\n\n// RunCommand is a convenience wrapper for RunCmd that accepts variadic arguments.\n// It runs a command with the appropriate SOURCED_DIR and returns a Result\nfunc (s *Commander) RunCommand(args ...string) *icmd.Result {\n\treturn s.RunCmd(args)\n}\n\n// RunCommandWithTimeout runs a command with the given timeout\nfunc (s *Commander) RunCommandWithTimeout(timeout time.Duration, args ...string) *icmd.Result {\n\treturn s.RunCmd(args, icmd.WithTimeout(timeout))\n}\n"
  },
  {
    "path": "test/common.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/suite\"\n)\n\n// TODO (carlosms) this could be build/bin, workaround for https://github.com/src-d/ci/issues/97\nvar srcdBin = fmt.Sprintf(\"../build/sourced-ce_%s_%s/sourced\", runtime.GOOS, runtime.GOARCH)\n\nfunc init() {\n\tif os.Getenv(\"SOURCED_BIN\") != \"\" {\n\t\tsrcdBin = os.Getenv(\"SOURCED_BIN\")\n\t}\n}\n\ntype IntegrationSuite struct {\n\tsuite.Suite\n\t*Commander\n\tTestDir string\n}\n\nfunc (s *IntegrationSuite) SetupTest() {\n\ttestDir, err := ioutil.TempDir(\"\", strings.Replace(s.T().Name(), \"/\", \"_\", -1))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tif runtime.GOOS == \"windows\" {\n\t\ttestDir, err = filepath.EvalSymlinks(testDir)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t}\n\n\ts.TestDir = testDir\n\ts.Commander = &Commander{bin: srcdBin, sourcedDir: filepath.Join(s.TestDir, \"sourced\")}\n\n\t// Instead of downloading the compose file, copy the local file\n\terr = os.MkdirAll(filepath.Join(s.sourcedDir, \"compose-files\", \"local\"), os.ModePerm)\n\ts.Require().NoError(err)\n\n\tp, _ := filepath.Abs(filepath.FromSlash(\"../docker-compose.yml\"))\n\tbytes, err := ioutil.ReadFile(p)\n\ts.Require().NoError(err)\n\terr = ioutil.WriteFile(filepath.Join(s.sourcedDir, \"compose-files\", \"local\", \"docker-compose.yml\"), bytes, 0644)\n\ts.Require().NoError(err)\n\n\tr := s.RunCommand(\"compose\", \"set\", \"local\")\n\ts.Require().NoError(r.Error, r.Combined())\n}\n\nfunc (s *IntegrationSuite) TearDownTest() {\n\t// don't run prune on failed test to help debug. But stop the containers\n\t// to avoid port conflicts in the next test\n\tif s.T().Failed() {\n\t\ts.RunCommand(\"stop\")\n\t\ts.T().Logf(\"Test failed. 
sourced data dir left in %s\", s.TestDir)\n\t\ts.T().Logf(\"Probably there are also docker volumes left untouched\")\n\t\treturn\n\t}\n\n\ts.RunCommand(\"prune\", \"--all\")\n\n\tos.RemoveAll(s.TestDir)\n}\n\nfunc (s *IntegrationSuite) testSQL() {\n\ttestCases := []string{\n\t\t\"show tables\",\n\t\t\"show tables;\",\n\t\t\" show tables ; \",\n\t\t\"/* comment */ show tables;\",\n\t\t`/* multi line\n\t\t\tcomment */\n\t\t\tshow tables;`,\n\t}\n\n\tshowTablesOutput :=\n\t\t`Table\nblobs\ncommit_blobs\ncommit_files\ncommit_trees\ncommits\nfiles\nref_commits\nrefs\nremotes\nrepositories\ntree_entries\n`\n\n\tfor _, query := range testCases {\n\t\ts.T().Run(query, func(t *testing.T) {\n\t\t\tassert := assert.New(t)\n\n\t\t\tr := s.RunCommand(\"sql\", query)\n\t\t\tassert.NoError(r.Error, r.Combined())\n\n\t\t\tassert.Contains(r.Stdout(), showTablesOutput)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "test/compose_test.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/suite\"\n)\n\ntype ComposeTestSuite struct {\n\tIntegrationSuite\n}\n\nfunc TestComposeTestSuite(t *testing.T) {\n\titt := ComposeTestSuite{}\n\tsuite.Run(t, &itt)\n}\n\nfunc (s *ComposeTestSuite) TestListComposeFiles() {\n\tr := s.RunCommand(\"compose\", \"list\")\n\ts.Contains(r.Stdout(), \"[0]* local\")\n}\n\nfunc (s *ComposeTestSuite) TestSetComposeFile() {\n\tr := s.RunCommand(\"compose\", \"set\", \"0\")\n\ts.Contains(r.Stdout(), \"Active docker compose file was changed successfully\")\n\n\tr = s.RunCommand(\"compose\", \"list\")\n\ts.Contains(r.Stdout(), \"[0]* local\")\n}\n\nfunc (s *ComposeTestSuite) TestSetComposeFilIndexOutOfRange() {\n\tr := s.RunCommand(\"compose\", \"set\", \"5\")\n\ts.Contains(r.Stderr(), \"Index is out of range, please check the output of 'sourced compose list'\")\n}\n\nfunc (s *ComposeTestSuite) TestSetComposeNotFound() {\n\tr := s.RunCommand(\"compose\", \"set\", \"NotFound\")\n\ts.Error(r.Error)\n}\n\nfunc (s *ComposeTestSuite) TestSetComposeFilesWithStringIndex() {\n\tr := s.RunCommand(\"compose\", \"set\", \"local\")\n\ts.Contains(r.Stdout(), \"Active docker compose file was changed successfully\")\n\n\tr = s.RunCommand(\"compose\", \"list\")\n\ts.Contains(r.Stdout(), \"[0]* local\")\n}\n"
  },
  {
    "path": "test/init_local_test.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"github.com/stretchr/testify/suite\"\n)\n\ntype InitLocalTestSuite struct {\n\tIntegrationSuite\n}\n\nfunc TestInitLocalTestSuite(t *testing.T) {\n\titt := InitLocalTestSuite{}\n\tsuite.Run(t, &itt)\n}\n\nfunc (s *InitLocalTestSuite) TestWithInvalidWorkdir() {\n\trequire := s.Require()\n\n\tinvalidWorkDir := filepath.Join(s.TestDir, \"invalid-workdir\")\n\n\t_, err := os.Create(invalidWorkDir)\n\tif err != nil {\n\t\ts.T().Fatal(err)\n\t}\n\n\tr := s.RunCommand(\"init\", \"local\", invalidWorkDir)\n\trequire.Error(r.Error)\n\n\trequire.Equal(\n\t\tfmt.Sprintf(\"path '%s' is not a valid directory\\n\", invalidWorkDir),\n\t\tr.Stderr(),\n\t)\n}\n\nfunc (s *InitLocalTestSuite) TestChangeWorkdir() {\n\treq := s.Require()\n\n\tr := s.RunCommand(\"status\", \"workdirs\")\n\treq.Error(r.Error)\n\n\t// Create 2 workdirs, each with a repo\n\tworkdirA := filepath.Join(s.TestDir, \"workdir_a\")\n\tworkdirB := filepath.Join(s.TestDir, \"workdir_b\")\n\tpathA := filepath.Join(workdirA, \"repo_a\")\n\tpathB := filepath.Join(workdirB, \"repo_b\")\n\n\ts.initGitRepo(pathA)\n\ts.initGitRepo(pathB)\n\n\t// init with workdir A\n\tr = s.RunCommand(\"init\", \"local\", workdirA)\n\treq.NoError(r.Error, r.Combined())\n\n\tr = s.RunCommand(\"status\", \"workdirs\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Equal(fmt.Sprintf(\"* %v\\n\", workdirA), r.Stdout())\n\n\tr = s.RunCommand(\"sql\", \"select * from repositories\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Contains(r.Stdout(),\n\t\t`repository_id\nrepo_a\n`)\n\n\t// init with workdir B\n\tr = s.RunCommand(\"init\", \"local\", workdirB)\n\treq.NoError(r.Error, r.Combined())\n\n\tr = s.RunCommand(\"status\", \"workdirs\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Equal(fmt.Sprintf(\"  %v\\n* %v\\n\", workdirA, workdirB), 
r.Stdout())\n\n\tr = s.RunCommand(\"sql\", \"select * from repositories\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Contains(r.Stdout(),\n\t\t`repository_id\nrepo_b\n`)\n\n\t// Test SQL queries. This should be a different test, but since starting\n\t// the environment takes a long time, it is bundled together here to speed up\n\t// the tests\n\ts.testSQL()\n\n\tclient, err := newSupersetClient()\n\treq.NoError(err)\n\n\t// Test the list of dashboards created in superset\n\ts.T().Run(\"dashboard-list\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\tlinks, err := client.dashboards()\n\t\treq.NoError(err)\n\n\t\ts.Equal([]string{\n\t\t\t`<a href=\"/superset/dashboard/1/\">Overview</a>`,\n\t\t\t`<a href=\"/superset/dashboard/welcome-local/\">Welcome</a>`,\n\t\t}, links)\n\t})\n\n\t// Test gitbase queries through superset\n\ts.T().Run(\"superset-gitbase\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\trows, err := client.gitbase(\"select * from repositories\")\n\t\treq.NoError(err)\n\n\t\ts.Equal([]map[string]interface{}{\n\t\t\t{\"repository_id\": \"repo_b\"},\n\t\t}, rows)\n\t})\n\n\t// Test bblfsh queries through superset\n\ts.T().Run(\"superset-bblfsh\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\tlang, err := client.bblfsh(\"hello.js\", `console.log(\"hello\");`)\n\t\treq.NoError(err)\n\t\treq.Equal(\"javascript\", lang)\n\t})\n\n\t// Test gitbase can connect to bblfsh with a SQL query that uses UAST\n\ts.T().Run(\"gitbase-bblfsh\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\trows, err := client.gitbase(\n\t\t\t`SELECT UAST('console.log(\"hello\");', 'javascript') AS uast`)\n\t\treq.NoError(err)\n\n\t\treq.Len(rows, 1)\n\t\treq.NotEmpty(rows[0][\"uast\"])\n\t})\n}\n\nfunc (s *InitLocalTestSuite) initGitRepo(path string) {\n\ts.T().Helper()\n\n\terr := os.MkdirAll(path, os.ModePerm)\n\ts.Require().NoError(err)\n\n\tcmd := exec.Command(\"git\", \"init\", path)\n\terr = cmd.Run()\n\ts.Require().NoError(err)\n}\n"
  },
  {
    "path": "test/init_orgs_test.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"database/sql\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t_ \"github.com/lib/pq\"\n\t\"github.com/stretchr/testify/require\"\n\t\"github.com/stretchr/testify/suite\"\n)\n\ntype InitOrgsTestSuite struct {\n\tIntegrationSuite\n}\n\nfunc TestInitOrgsTestSuite(t *testing.T) {\n\titt := InitOrgsTestSuite{}\n\n\tif os.Getenv(\"SOURCED_GITHUB_TOKEN\") == \"\" {\n\t\tt.Skip(\"SOURCED_GITHUB_TOKEN is not set\")\n\t\treturn\n\t}\n\n\tsuite.Run(t, &itt)\n}\n\nfunc checkGhsync(require *require.Assertions, repos int) {\n\tconnStr := \"user=metadata password=metadata dbname=metadata port=5433 sslmode=disable\"\n\tdb, err := sql.Open(\"postgres\", connStr)\n\trequire.NoError(err)\n\tdefer db.Close()\n\n\tvar id int\n\tvar org, entity string\n\tvar done, failed, total int\n\n\t// try for 2 minutes\n\tfor i := 0; i < 24; i++ {\n\t\ttime.Sleep(5 * time.Second)\n\n\t\trow := db.QueryRow(\"SELECT * FROM status\")\n\t\terr = row.Scan(&id, &org, &entity, &done, &failed, &total)\n\t\tif err == sql.ErrNoRows {\n\t\t\tcontinue\n\t\t}\n\t\trequire.NoError(err)\n\n\t\tif done == repos {\n\t\t\tbreak\n\t\t}\n\t}\n\n\trequire.Equal(repos, done,\n\t\t\"id = %v, org = %v, entity = %v, done = %v, failed = %v, total = %v\",\n\t\tid, org, entity, done, failed, total)\n}\n\nfunc checkGitcollector(require *require.Assertions, repos int) {\n\tconnStr := \"user=superset password=superset dbname=superset port=5432 sslmode=disable\"\n\tdb, err := sql.Open(\"postgres\", connStr)\n\trequire.NoError(err)\n\tdefer db.Close()\n\n\tvar org string\n\tvar discovered, downloaded, updated, failed int\n\n\t// try for 2 minutes\n\tfor i := 0; i < 24; i++ {\n\t\ttime.Sleep(5 * time.Second)\n\n\t\trow := db.QueryRow(\"SELECT * FROM gitcollector_metrics\")\n\t\terr = row.Scan(&org, &discovered, &downloaded, &updated, &failed)\n\t\tif err == sql.ErrNoRows {\n\t\t\tcontinue\n\t\t}\n\t\trequire.NoError(err)\n\n\t\tif downloaded == repos 
{\n\t\t\tbreak\n\t\t}\n\t}\n\n\trequire.Equal(repos, downloaded,\n\t\t\"org = %v, discovered = %v, downloaded = %v, updated = %v, failed = %v\",\n\t\torg, discovered, downloaded, updated, failed)\n}\n\nfunc (s *InitOrgsTestSuite) TestOneOrg() {\n\treq := s.Require()\n\n\t// TODO will need to change with https://github.com/src-d/sourced-ce/issues/144\n\tr := s.RunCommand(\"status\", \"workdirs\")\n\treq.Error(r.Error)\n\n\tr = s.RunCommand(\"init\", \"orgs\", \"golang-migrate\")\n\treq.NoError(r.Error, r.Combined())\n\n\tr = s.RunCommand(\"status\", \"workdirs\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Equal(\"* golang-migrate\\n\", r.Stdout())\n\n\tcheckGhsync(req, 1)\n\tcheckGitcollector(req, 1)\n\n\t// Check gitbase can also see the repositories\n\tr = s.RunCommand(\"sql\", \"select * from repositories where repository_id='github.com/golang-migrate/migrate'\")\n\treq.NoError(r.Error, r.Combined())\n\n\treq.Contains(r.Stdout(),\n\t\t`repository_id\ngithub.com/golang-migrate/migrate\n`)\n\n\t// Test SQL queries. 
This should be a different test, but since starting\n\t// the environment takes a long time, it is bundled together here to speed up\n\t// the tests\n\ts.testSQL()\n\n\tclient, err := newSupersetClient()\n\treq.NoError(err)\n\n\t// Test the list of dashboards created in superset\n\ts.T().Run(\"dashboard-list\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\tlinks, err := client.dashboards()\n\t\treq.NoError(err)\n\n\t\ts.Equal([]string{\n\t\t\t`<a href=\"/superset/dashboard/1/\">Overview</a>`,\n\t\t\t`<a href=\"/superset/dashboard/2/\">Welcome</a>`,\n\t\t\t`<a href=\"/superset/dashboard/3/\">Collaboration</a>`,\n\t\t}, links)\n\t})\n\n\t// Test gitbase queries through superset\n\ts.T().Run(\"superset-gitbase\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\trows, err := client.gitbase(\"select * from repositories\")\n\t\treq.NoError(err)\n\n\t\ts.Equal([]map[string]interface{}{\n\t\t\t{\"repository_id\": \"github.com/golang-migrate/migrate\"},\n\t\t}, rows)\n\t})\n\n\t// Test metadata queries through superset\n\ts.T().Run(\"superset-metadata\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\trows, err := client.metadata(\"select * from organizations\")\n\t\treq.NoError(err)\n\t\treq.Len(rows, 1)\n\n\t\ts.Equal(\"golang-migrate\", rows[0][\"login\"])\n\t})\n\n\t// Test bblfsh queries through superset\n\ts.T().Run(\"superset-bblfsh\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\tlang, err := client.bblfsh(\"hello.js\", `console.log(\"hello\");`)\n\t\treq.NoError(err)\n\t\treq.Equal(\"javascript\", lang)\n\t})\n\n\t// Test gitbase can connect to bblfsh with a SQL query that uses UAST\n\ts.T().Run(\"gitbase-bblfsh\", func(t *testing.T) {\n\t\treq := require.New(t)\n\n\t\trows, err := client.gitbase(\n\t\t\t`SELECT UAST('console.log(\"hello\");', 'javascript') AS uast`)\n\t\treq.NoError(err)\n\n\t\treq.Len(rows, 1)\n\t\treq.NotEmpty(rows[0][\"uast\"])\n\t})\n}\n"
  },
  {
    "path": "test/superset.go",
    "content": "// +build integration\n\npackage test\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"net/http\"\n\t\"net/http/cookiejar\"\n\t\"net/url\"\n\n\t\"github.com/PuerkitoBio/goquery\"\n\t\"golang.org/x/net/publicsuffix\"\n)\n\ntype supersetClient struct {\n\t*http.Client\n\tcsrf string\n}\n\nfunc newSupersetClient() (*supersetClient, error) {\n\t// Superset client needs a session cookie\n\tjar, err := cookiejar.New(&cookiejar.Options{PublicSuffixList: publicsuffix.List})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tclient := &http.Client{\n\t\tJar: jar,\n\t}\n\n\t// To POST in /login we need to first GET /login, read the hidden CSRF value\n\t// from the HTML, and send it back in the POST\n\tres, err := client.Get(\"http://127.0.0.1:8088/login/\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tdoc, err := goquery.NewDocumentFromReader(res.Body)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tcsrf, ok := doc.Find(\"input#csrf_token\").First().Attr(\"value\")\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"CSRF token was not found in the /login page\")\n\t}\n\n\tres, err = client.PostForm(\"http://127.0.0.1:8088/login/\", url.Values{\n\t\t\"csrf_token\": {csrf},\n\t\t\"username\":   {\"admin\"},\n\t\t\"password\":   {\"admin\"},\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// After a successful POST the client is authenticated and can be used to call\n\t// the API\n\treturn &supersetClient{client, csrf}, nil\n}\n\nfunc (c *supersetClient) dashboards() ([]string, error) {\n\tres, err := c.Get(\"http://127.0.0.1:8088/dashboard/api/read\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar decoded struct {\n\t\tResult []struct {\n\t\t\tLink string `json:\"dashboard_link\"`\n\t\t}\n\t}\n\n\terr = json.NewDecoder(res.Body).Decode(&decoded)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlinks := []string{}\n\tfor _, result := range decoded.Result {\n\t\tlinks = append(links, 
result.Link)\n\t}\n\n\treturn links, nil\n}\n\nfunc (c *supersetClient) sql(query, dbId, schema string) ([]map[string]interface{}, error) {\n\tres, err := c.PostForm(\"http://127.0.0.1:8088/superset/sql_json/\", url.Values{\n\t\t//\"client_id\":      {\"\"},\n\t\t\"database_id\": {dbId},\n\t\t\"json\":        {\"true\"},\n\t\t\"runAsync\":    {\"false\"},\n\t\t\"schema\":      {schema},\n\t\t\"sql\":         {query},\n\t\t//\"sql_editor_id\":  {\"jzl0KCm5Z\"},\n\t\t//\"tab\":            {\"Untitled Query 2\"},\n\t\t//\"tmp_table_name\": {\"\"},\n\t\t//\"select_as_cta\":  {\"false\"},\n\t\t//\"templateParams\": {\"{}\"},\n\t\t//\"queryLimit\": {\"1000\"},\n\n\t\t\"csrf_token\": {c.csrf},\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar decoded struct {\n\t\tStatus string\n\t\tError  string\n\t\tData   []map[string]interface{}\n\t}\n\n\terr = json.NewDecoder(res.Body).Decode(&decoded)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif decoded.Status != \"success\" {\n\t\treturn nil, fmt.Errorf(\"/sql_json endpoint returned an error: \" + decoded.Error)\n\t}\n\n\treturn decoded.Data, nil\n}\n\nfunc (c *supersetClient) gitbase(query string) ([]map[string]interface{}, error) {\n\treturn c.sql(query, \"1\", \"gitbase\")\n}\n\nfunc (c *supersetClient) metadata(query string) ([]map[string]interface{}, error) {\n\treturn c.sql(query, \"2\", \"public\")\n}\n\nfunc (c *supersetClient) bblfsh(filename, content string) (string, error) {\n\tvar jsonStr = []byte(fmt.Sprintf(\n\t\t`{\"mode\":\"semantic\", \"filename\":%q, \"content\":%q, \"query\":\"\"}`,\n\t\tfilename, content))\n\n\treq, err := http.NewRequest(\"POST\", \"http://127.0.0.1:8088/bblfsh/api/parse\", bytes.NewBuffer(jsonStr))\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treq.Header.Set(\"Content-Type\", \"application/json\")\n\n\tres, err := c.Do(req)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tvar decoded struct {\n\t\tStatus   int\n\t\tLanguage string\n\t\t// Uast     interface{}\n\t\tErrors 
[]struct{ Message string }\n\t}\n\n\terr = json.NewDecoder(res.Body).Decode(&decoded)\n\tif err != nil {\n\t\tbody := \"\"\n\t\tbytes, readErr := ioutil.ReadAll(res.Body)\n\t\tif readErr != nil {\n\t\t\tbody = string(bytes)\n\t\t}\n\n\t\treturn \"\", fmt.Errorf(\"could not decode the response body: %s, err: %s\", body, err)\n\t}\n\n\tif decoded.Status != 0 {\n\t\treturn \"\", fmt.Errorf(\"/bblfsh/api/parse endpoint returned an error: %v\", decoded.Errors)\n\t}\n\n\treturn decoded.Language, nil\n}\n"
  }
]