Repository: simonw/s3-credentials
Branch: main
Commit: 4611f333407d
Files: 29
Total size: 226.7 KB
Directory structure:
gitextract_jov9st3v/
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       ├── publish.yml
│       └── test.yml
├── .gitignore
├── .readthedocs.yaml
├── LICENSE
├── README.md
├── docs/
│   ├── .gitignore
│   ├── Makefile
│   ├── conf.py
│   ├── configuration.md
│   ├── contributing.md
│   ├── create.md
│   ├── help.md
│   ├── index.md
│   ├── localserver.md
│   ├── other-commands.md
│   ├── policy-documents.md
│   └── requirements.txt
├── pyproject.toml
├── s3_credentials/
│   ├── __init__.py
│   ├── cli.py
│   ├── localserver.py
│   └── policies.py
└── tests/
    ├── conftest.py
    ├── test_dry_run.py
    ├── test_integration.py
    ├── test_localserver.py
    └── test_s3_credentials.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
- package-ecosystem: "pip"
  directory: "/"
  schedule:
    interval: "daily"
================================================
FILE: .github/workflows/publish.yml
================================================
name: Publish Python Package
on:
  release:
    types: [created]
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
    steps:
    - uses: actions/checkout@v6
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v6
      with:
        python-version: ${{ matrix.python-version }}
        cache: pip
        cache-dependency-path: pyproject.toml
    - name: Install dependencies
      run: |
        pip install . --group dev
    - name: Run tests
      run: |
        pytest
  deploy:
    runs-on: ubuntu-latest
    needs: [test]
    environment: release
    permissions:
      id-token: write
    steps:
    - uses: actions/checkout@v6
    - name: Set up Python
      uses: actions/setup-python@v6
      with:
        python-version: "3.14"
        cache: pip
        cache-dependency-path: pyproject.toml
    - name: Install dependencies
      run: |
        pip install setuptools wheel build
    - name: Build
      run: |
        python -m build
    - name: Publish
      uses: pypa/gh-action-pypi-publish@release/v1
================================================
FILE: .github/workflows/test.yml
================================================
name: Test
on: [push, pull_request]
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
    steps:
    - uses: actions/checkout@v6
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v6
      with:
        python-version: ${{ matrix.python-version }}
        cache: pip
        cache-dependency-path: pyproject.toml
    - name: Install dependencies
      run: |
        pip install . --group dev
    - name: Run tests
      run: |
        pytest
    - name: Check if cog needs to run
      run: |
        cog --check README.md
        cog --check docs/*.md
================================================
FILE: .gitignore
================================================
.venv
__pycache__/
*.py[cod]
*$py.class
venv
.eggs
.pytest_cache
*.egg-info
.DS_Store
================================================
FILE: .readthedocs.yaml
================================================
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
sphinx:
  configuration: docs/conf.py
formats:
- pdf
- epub
python:
  install:
  - requirements: docs/requirements.txt
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# s3-credentials
[![PyPI](https://img.shields.io/pypi/v/s3-credentials.svg)](https://pypi.org/project/s3-credentials/)
[![Changelog](https://img.shields.io/github/v/release/simonw/s3-credentials?include_prereleases&label=changelog)](https://github.com/simonw/s3-credentials/releases)
[![Tests](https://github.com/simonw/s3-credentials/workflows/Test/badge.svg)](https://github.com/simonw/s3-credentials/actions?query=workflow%3ATest)
[![Documentation Status](https://readthedocs.org/projects/s3-credentials/badge/?version=latest)](https://s3-credentials.readthedocs.org/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/s3-credentials/blob/master/LICENSE)
A tool for creating credentials for accessing S3 buckets
For project background, see [s3-credentials: a tool for creating credentials for S3 buckets](https://simonwillison.net/2021/Nov/3/s3-credentials/) on my blog.
## Installation

    pip install s3-credentials

## Basic usage
To create a new S3 bucket and output credentials that can be used with only that bucket:
```
% s3-credentials create my-new-s3-bucket --create-bucket
Created bucket: my-new-s3-bucket
Created user: s3.read-write.my-new-s3-bucket with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.my-new-s3-bucket to user s3.read-write.my-new-s3-bucket
Created access key for user: s3.read-write.my-new-s3-bucket
{
  "UserName": "s3.read-write.my-new-s3-bucket",
  "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
  "Status": "Active",
  "SecretAccessKey": "...",
  "CreateDate": "2021-11-03 01:38:24+00:00"
}
```
The tool can do a lot more than this. See the [documentation](https://s3-credentials.readthedocs.io/) for details.
## Documentation
- [Full documentation](https://s3-credentials.readthedocs.io/)
- [Command help reference](https://s3-credentials.readthedocs.io/en/stable/help.html)
- [Release notes](https://github.com/simonw/s3-credentials/releases)
================================================
FILE: docs/.gitignore
================================================
_build
================================================
FILE: docs/Makefile
================================================
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = s3-credentials
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

livehtml:
	sphinx-autobuild -b html "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
================================================
FILE: docs/conf.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from subprocess import PIPE, Popen
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["myst_parser"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "s3-credentials"
copyright = "2022, Simon Willison"
author = "Simon Willison"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
pipe = Popen("git describe --tags --always", stdout=PIPE, shell=True)
git_version = pipe.stdout.read().decode("utf8")
if git_version:
    version = git_version.rsplit("-", 1)[0]
    release = git_version
else:
    version = ""
    release = ""
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "furo"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {}
html_title = "s3-credentials"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = "s3-credentials-doc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        master_doc,
        "s3-credentials.tex",
        "s3-credentials documentation",
        "Simon Willison",
        "manual",
    )
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (
        master_doc,
        "s3-credentials",
        "s3-credentials documentation",
        [author],
        1,
    )
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "s3-credentials",
        "s3-credentials documentation",
        author,
        "s3-credentials",
        "A tool for creating credentials for accessing S3 buckets",
        "Miscellaneous",
    )
]
================================================
FILE: docs/configuration.md
================================================
# Configuration
This tool uses [boto3](https://boto3.amazonaws.com/) under the hood which supports [a number of different ways](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) of providing your AWS credentials.
If you have an existing `~/.aws/config` or `~/.aws/credentials` file the tool will use that.
One way to create those files is using the `aws configure` command, available if you first run `pip install awscli`.
Alternatively, you can set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables before calling this tool.
You can also use the `--access-key`, `--secret-key`, `--session-token` and `--auth` options documented below.
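For example, a session that relies only on environment variables might look like this (the key values shown are placeholders):

```bash
# Placeholder credentials - substitute keys for your own AWS account
export AWS_ACCESS_KEY_ID=AKIAWXFXAIOZA5IR5PY4
export AWS_SECRET_ACCESS_KEY=g63...
s3-credentials list-buckets
```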
## Common command options
All of the `s3-credentials` commands also accept the following options for authenticating against AWS:
- `--access-key`: AWS access key ID
- `--secret-key`: AWS secret access key
- `--session-token`: AWS session token
- `--endpoint-url`: Custom endpoint URL
- `--auth`: file (or `-` for standard input) containing credentials to use
The file passed to `--auth` can be either a JSON file or an INI file. JSON files should contain the following:
```json
{
  "AccessKeyId": "AKIAWXFXAIOZA5IR5PY4",
  "SecretAccessKey": "g63..."
}
```
The JSON file can also optionally include a session token in a `"SessionToken"` key.
The INI format variant of this file should look like this:
```ini
[default]
aws_access_key_id=AKIAWXFXAIOZNCR2ST7S
aws_secret_access_key=g63...
```
Any section header will do - the tool will use the information from the first section it finds in the file which has an `aws_access_key_id` key.
These auth file formats are the same as those that can be created using the `create` command.
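Because the formats round-trip, one way to verify a freshly created auth file is to feed it straight back to the tool (the bucket name here is illustrative):

```bash
s3-credentials create my-existing-bucket --format ini > auth.ini
s3-credentials whoami --auth auth.ini
```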
================================================
FILE: docs/contributing.md
================================================
# Contributing
To contribute to this tool, first check out [the code](https://github.com/simonw/s3-credentials). You can run the tests locally using `pytest` and `uv`:

    cd s3-credentials
    uv run pytest

Any changes to the generated policies require an update to the docs using [Cog](https://github.com/nedbat/cog):

    uv run poe cog

To preview the documentation locally, you can use:

    uv run poe livehtml
## Integration tests
The main tests all use stubbed interfaces to AWS, so will not make any outbound API calls.
There is also a suite of integration tests in `tests/test_integration.py` which DO make API calls to AWS, using credentials from your environment variables or `~/.aws/credentials` file.
These tests are skipped by default. If you have AWS configured with an account that has permission to run the actions required by `s3-credentials` (create users, roles, buckets etc) you can run these tests using:

    uv run pytest --integration

The tests will create a number of different users and buckets and should then delete them once they finish running.
================================================
FILE: docs/create.md
================================================
# Creating S3 credentials
The `s3-credentials create` command is the core feature of this tool. Pass it one or more S3 bucket names, specify a policy (read-write, read-only or write-only) and it will return AWS credentials that can be used to access those buckets.
These credentials can be **temporary** or **permanent**.
- Temporary credentials can last for between 15 minutes and 12 hours. They are created using [STS.AssumeRole()](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
- Permanent credentials never expire. They are created by first creating a dedicated AWS user, then assigning a policy to that user and creating and returning an access key for it.
Make sure to record the `SecretAccessKey` because it will only be displayed once and cannot be recreated later on.
In this example I create permanent credentials for reading and writing files in my `static.niche-museums.com` S3 bucket:
```
% s3-credentials create static.niche-museums.com
Created user: s3.read-write.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.static.niche-museums.com to user s3.read-write.static.niche-museums.com
Created access key for user: s3.read-write.static.niche-museums.com
{
  "UserName": "s3.read-write.static.niche-museums.com",
  "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
  "Status": "Active",
  "SecretAccessKey": "...",
  "CreateDate": "2021-11-03 01:38:24+00:00"
}
```
If you add `--format ini` the credentials will be output in INI format, suitable for pasting into a `~/.aws/credentials` file:
```
% s3-credentials create static.niche-museums.com --format ini > ini.txt
Created user: s3.read-write.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.static.niche-museums.com to user s3.read-write.static.niche-museums.com
Created access key for user: s3.read-write.static.niche-museums.com
% cat ini.txt
[default]
aws_access_key_id=AKIAWXFXAIOZKGXI4PVO
aws_secret_access_key=...
```
To create temporary credentials, add `--duration 15m` (or `1h` or `1200s`). The specified duration must be between 15 minutes and 12 hours.
```
% s3-credentials create static.niche-museums.com --duration 15m
Assume role against arn:aws:iam::462092780466:role/s3-credentials.AmazonS3FullAccess for 900s
{
  "AccessKeyId": "ASIAWXFXAIOZPAHAYHUG",
  "SecretAccessKey": "Nrnoc...",
  "SessionToken": "FwoGZXIvYXd...mr9Fjs=",
  "Expiration": "2021-11-11 03:24:07+00:00"
}
```
When using temporary credentials the session token must be passed in addition to the access key and secret key.
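For example, one way to supply all three values is through the AWS CLI's standard environment variables (the values shown are placeholders):

```bash
# Placeholder temporary credentials from a `create --duration` call
export AWS_ACCESS_KEY_ID=ASIAWXFXAIOZPAHAYHUG
export AWS_SECRET_ACCESS_KEY=Nrnoc...
export AWS_SESSION_TOKEN=FwoGZXIvYXd...mr9Fjs=
aws s3 ls s3://static.niche-museums.com/
```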
The `create` command has a number of options:
- `--format TEXT`: The output format to use. Defaults to `json`, but can also be `ini`.
- `--duration 15m`: For temporary credentials, how long should they last? This can be specified in seconds, minutes or hours using a suffix of `s`, `m` or `h` - but must be between 15 minutes and 12 hours.
- `--username TEXT`: The username to use for the user that is created by the command (or the username of an existing user if you do not want to create a new one). If omitted, a default such as `s3.read-write.static.niche-museums.com` will be used.
- `-c, --create-bucket`: Create the buckets if they do not exist. Without this any missing buckets will be treated as an error.
- `--prefix my-prefix/`: Credentials should only allow access to keys in the S3 bucket that start with this prefix.
- `--public`: When creating a bucket, set it so that any file uploaded to that bucket can be downloaded by anyone who knows its filename. This attaches the {ref}`public_bucket_policy` and sets the `PublicAccessBlockConfiguration` to `false` for [every option](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PublicAccessBlockConfiguration.html).
- `--website`: Sets the bucket to public and configures it to act as a website, with `index.html` treated as an index page and `error.html` used to display custom errors. The URL for the website will be `http://<bucket-name>.s3-website.<region>.amazonaws.com/` - the region defaults to `us-east-1` unless you specify a `--bucket-region`.
- `--read-only`: The user should only be allowed to read files from the bucket.
- `--write-only`: The user should only be allowed to write files to the bucket, but not read them. This can be useful for logging and backups.
- `--policy filepath-or-string`: A custom policy document (as a file path, literal JSON string or `-` for standard input) - see below.
- `--statement json-statement`: Custom JSON statement block to be added to the generated policy.
- `--bucket-region`: If creating buckets, the region in which they should be created.
- `--silent`: Don't output details of what is happening, just output the JSON for the created access credentials at the end.
- `--dry-run`: Output details of AWS changes that would have been made without applying them.
- `--user-permissions-boundary`: Custom [permissions boundary](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) to use for users created by this tool. The default is to restrict those users to only interacting with S3, taking the `--read-only` option into account. Use `none` to create users without any permissions boundary at all.
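Many of these options can be combined. As an illustrative sketch (the bucket name is hypothetical), the following would create temporary read-only credentials restricted to keys under `uploads/`:

```bash
s3-credentials create my-s3-bucket \
  --read-only \
  --prefix uploads/ \
  --duration 1h
```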
## Changes that will be made to your AWS account
How the tool works varies depending on if you are creating temporary or permanent credentials.
For permanent credentials, the steps are as follows:
1. Confirm that each of the specified buckets exists. If a bucket does not exist and `--create-bucket` was passed, create it - otherwise exit with an error.
2. If a username was not specified, derive a username using the `s3.$permission.$buckets` format.
3. If a user with that username does not exist, create one with an S3 permissions boundary of [AmazonS3ReadOnlyAccess](https://github.com/glassechidna/trackiam/blob/master/policies/AmazonS3ReadOnlyAccess.json) for `--read-only` or [AmazonS3FullAccess](https://github.com/glassechidna/trackiam/blob/master/policies/AmazonS3FullAccess.json) otherwise - unless `--user-permissions-boundary=none` or a custom permissions boundary string was passed.
4. For each specified bucket, add an inline IAM policy to the user that gives them permission to either read-only, write-only or read-write against that bucket.
5. Create a new access key for that user and output the key and its secret to the console.
For temporary credentials:
1. Confirm or create buckets, in the same way as for permanent credentials.
2. Check if an AWS role called `s3-credentials.AmazonS3FullAccess` exists. If it does not exist create it, configured to allow the user's AWS account to assume it and with the `arn:aws:iam::aws:policy/AmazonS3FullAccess` policy attached.
3. Use `STS.AssumeRole()` to return temporary credentials that are restricted to just the specified buckets and specified read-only/read-write/write-only policy.
You can run the `create` command with the `--dry-run` option to see a summary of changes that would be applied, including details of generated policy documents, without actually applying those changes.
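For example, to preview what would happen for hypothetical read-only credentials without touching your AWS account:

```bash
s3-credentials create my-s3-bucket --read-only --dry-run
```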
## Using a custom policy
The policy documents applied by this tool [are listed here](policy-documents.md).
If you want to use a custom policy document you can do so using the `--policy` option.
First, create your policy document as a JSON file that looks something like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject*", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::$!BUCKET_NAME!$",
        "arn:aws:s3:::$!BUCKET_NAME!$/*"
      ]
    }
  ]
}
```
Note the `$!BUCKET_NAME!$` strings - these will be replaced with the name of the relevant S3 bucket before the policy is applied.
Save that as `custom-policy.json` and apply it using the following command:

    % s3-credentials create my-s3-bucket \
        --policy custom-policy.json
You can also pass `-` to read from standard input, or you can pass the literal JSON string directly to the `--policy` option:
```
% s3-credentials create my-s3-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject*", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::$!BUCKET_NAME!$",
        "arn:aws:s3:::$!BUCKET_NAME!$/*"
      ]
    }
  ]
}'
```
You can also specify one or more extra statement blocks that should be added to the generated policy, using `--statement JSON`. This example enables the AWS `textract:` APIs for the generated credentials, which is useful in combination with the [s3-ocr](https://datasette.io/tools/s3-ocr) tool:
```
% s3-credentials create my-s3-bucket --statement '{
  "Effect": "Allow",
  "Action": "textract:*",
  "Resource": "*"
}'
```
================================================
FILE: docs/help.md
================================================
# Command help
This page shows the `--help` output for all of the `s3-credentials` commands.
<!-- [[[cog
import cog
from s3_credentials import cli
from click.testing import CliRunner
runner = CliRunner()
# Get a list of all the commands
result = runner.invoke(cli.cli, ["--help"])
lines = result.output.split("Commands:")[1].strip().split("\n")
commands = [l.strip().split()[0] for l in lines if l]
for command in [""] + commands:
    result = runner.invoke(cli.cli, ([command] if command else []) + ["--help"])
    help = result.output.replace("Usage: cli", "Usage: s3-credentials")
    cog.out(
        "## s3-credentials {} --help\n\n```\n{}\n```\n".format(command, help.strip())
    )
]]] -->
## s3-credentials --help
```
Usage: s3-credentials [OPTIONS] COMMAND [ARGS]...
A tool for creating credentials for accessing S3 buckets
Documentation: https://s3-credentials.readthedocs.io/
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
create Create and return new AWS credentials for...
debug-bucket Run a bunch of diagnostics to help debug a bucket
delete-objects Delete one or more objects from an S3 bucket
delete-user Delete specified users, their access keys and...
get-bucket-policy Get bucket policy for a bucket
get-cors-policy Get CORS policy for a bucket
get-object Download an object from an S3 bucket
get-objects Download multiple objects from an S3 bucket
get-public-access-block Get the public access settings for an S3 bucket
list-bucket List contents of bucket
list-buckets List buckets
list-roles List roles
list-user-policies List inline policies for specified users
list-users List all users for this account
localserver Start a localhost server that serves S3...
policy Output generated JSON policy for one or more...
put-object Upload an object to an S3 bucket
put-objects Upload multiple objects to an S3 bucket
set-bucket-policy Set bucket policy for a bucket
set-cors-policy Set CORS policy for a bucket
set-public-access-block Configure public access settings for an S3 bucket.
whoami Identify currently authenticated user
```
## s3-credentials create --help
```
Usage: s3-credentials create [OPTIONS] BUCKETS...
Create and return new AWS credentials for specified S3 buckets - optionally
also creating the bucket if it does not yet exist.
To create a new bucket and output read-write credentials:
s3-credentials create my-new-bucket -c
To create read-only credentials for an existing bucket:
s3-credentials create my-existing-bucket --read-only
To create write-only credentials that are only valid for 15 minutes:
s3-credentials create my-existing-bucket --write-only -d 15m
Options:
-f, --format [ini|json] Output format for credentials
-d, --duration DURATION How long should these credentials work for?
Default is forever, use 3600 for 3600 seconds,
15m for 15 minutes, 1h for 1 hour
--username TEXT Username to create or existing user to use
-c, --create-bucket Create buckets if they do not already exist
--prefix TEXT Restrict to keys starting with this prefix
--public Make the created bucket public: anyone will be
able to download files if they know their name
--website Configure bucket to act as a website, using
index.html and error.html
--read-only Only allow reading from the bucket
--write-only Only allow writing to the bucket
--policy POLICY Path to a policy.json file, or literal JSON
string - $!BUCKET_NAME!$ will be replaced with
the name of the bucket
--statement STATEMENT JSON statement to add to the policy
--bucket-region TEXT Region in which to create buckets
--silent Don't show performed steps
--dry-run Show steps without executing them
--user-permissions-boundary TEXT
Custom permissions boundary to use for created
users, or 'none' to create without. Defaults
to limiting to S3 based on --read-only and
--write-only options.
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials debug-bucket --help
```
Usage: s3-credentials debug-bucket [OPTIONS] BUCKET
Run a bunch of diagnostics to help debug a bucket
s3-credentials debug-bucket my-bucket
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials delete-objects --help
```
Usage: s3-credentials delete-objects [OPTIONS] BUCKET [KEYS]...
Delete one or more objects from an S3 bucket
Pass one or more keys to delete them:
s3-credentials delete-objects my-bucket one.txt two.txt
To delete all files matching a prefix, pass --prefix:
s3-credentials delete-objects my-bucket --prefix my-folder/
Options:
--prefix TEXT Delete everything with this prefix
-s, --silent Don't show informational output
-d, --dry-run Show keys that would be deleted without deleting them
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials delete-user --help
```
Usage: s3-credentials delete-user [OPTIONS] USERNAMES...
Delete specified users, their access keys and their inline policies
s3-credentials delete-user username1 username2
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials get-bucket-policy --help
```
Usage: s3-credentials get-bucket-policy [OPTIONS] BUCKET
Get bucket policy for a bucket
s3-credentials get-bucket-policy my-bucket
Returns the bucket policy for this bucket, if set, as JSON
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials get-cors-policy --help
```
Usage: s3-credentials get-cors-policy [OPTIONS] BUCKET
Get CORS policy for a bucket
s3-credentials get-cors-policy my-bucket
Returns the CORS policy for this bucket, if set, as JSON
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials get-object --help
```
Usage: s3-credentials get-object [OPTIONS] BUCKET KEY
Download an object from an S3 bucket
To see the contents of the bucket on standard output:
s3-credentials get-object my-bucket hello.txt
To save to a file:
s3-credentials get-object my-bucket hello.txt -o hello.txt
Options:
-o, --output FILE Write to this file instead of stdout
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials get-objects --help
```
Usage: s3-credentials get-objects [OPTIONS] BUCKET [KEYS]...
Download multiple objects from an S3 bucket
To download everything, run:
s3-credentials get-objects my-bucket
Files will be saved to a directory called my-bucket. Use -o dirname to save to
a different directory.
To download specific keys, list them:
s3-credentials get-objects my-bucket one.txt path/two.txt
To download files matching a glob-style pattern, use:
s3-credentials get-objects my-bucket --pattern '*/*.js'
Options:
-o, --output DIRECTORY Write to this directory instead of one matching the
bucket name
-p, --pattern TEXT Glob patterns for files to download, e.g. '*/*.js'
-s, --silent Don't show progress bar
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials get-public-access-block --help
```
Usage: s3-credentials get-public-access-block [OPTIONS] BUCKET
Get the public access settings for an S3 bucket
Example usage:
s3-credentials get-public-access-block my-bucket
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials list-bucket --help
```
Usage: s3-credentials list-bucket [OPTIONS] BUCKET
List contents of bucket
To list the contents of a bucket as JSON:
s3-credentials list-bucket my-bucket
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-bucket my-bucket --csv
Add --urls to get an extra URL field for each key:
s3-credentials list-bucket my-bucket --urls
Options:
--prefix TEXT List keys starting with this prefix
--urls Show URLs for each key
--nl Output newline-delimited JSON
--csv Output CSV
--tsv Output TSV
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials list-buckets --help
```
Usage: s3-credentials list-buckets [OPTIONS] [BUCKETS]...
List buckets
To list all buckets and their creation time as JSON:
s3-credentials list-buckets
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-buckets --csv
For extra details per bucket (much slower) add --details
s3-credentials list-buckets --details
Options:
--details Include extra bucket details (slower)
--nl Output newline-delimited JSON
--csv Output CSV
--tsv Output TSV
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials list-roles --help
```
Usage: s3-credentials list-roles [OPTIONS] [ROLE_NAMES]...
List roles
To list all roles for this AWS account:
s3-credentials list-roles
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-roles --csv
For extra details per role (much slower) add --details
s3-credentials list-roles --details
Options:
--details Include attached policies (slower)
--nl Output newline-delimited JSON
--csv Output CSV
--tsv Output TSV
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials list-user-policies --help
```
Usage: s3-credentials list-user-policies [OPTIONS] [USERNAMES]...
List inline policies for specified users
s3-credentials list-user-policies username
Returns policies for all users if no usernames are provided.
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials list-users --help
```
Usage: s3-credentials list-users [OPTIONS]
List all users for this account
s3-credentials list-users
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-users --csv
Options:
--nl Output newline-delimited JSON
--csv Output CSV
--tsv Output TSV
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials localserver --help
```
Usage: s3-credentials localserver [OPTIONS] BUCKET
Start a localhost server that serves S3 credentials.
The server responds to GET requests on / with JSON containing temporary AWS
credentials that allow access to the specified bucket.
Credentials are cached and refreshed automatically based on the --duration
setting.
To start a server that serves read-only credentials for a bucket, with
credentials valid for 1 hour:
s3-credentials localserver my-bucket --read-only --duration 1h
To run on a different port:
s3-credentials localserver my-bucket --duration 1h --port 9000
Options:
-p, --port INTEGER Port to run the server on (default: 8094)
--host TEXT Host to bind the server to (default: localhost)
--read-only Only allow reading from the bucket
--write-only Only allow writing to the bucket
--prefix TEXT Restrict to keys starting with this prefix
--statement STATEMENT JSON statement to add to the policy
-d, --duration DURATION How long should credentials be valid for, e.g. 15m,
1h, 12h [required]
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials policy --help
```
Usage: s3-credentials policy [OPTIONS] BUCKETS...
Output generated JSON policy for one or more buckets
Takes the same options as s3-credentials create
To output a read-only JSON policy for a bucket:
s3-credentials policy my-bucket --read-only
Options:
--read-only Only allow reading from the bucket
--write-only Only allow writing to the bucket
--prefix TEXT Restrict to keys starting with this prefix e.g. foo/
--statement STATEMENT JSON statement to add to the policy
--public-bucket Bucket policy for allowing public access
--help Show this message and exit.
```
## s3-credentials put-object --help
```
Usage: s3-credentials put-object [OPTIONS] BUCKET KEY PATH
Upload an object to an S3 bucket
To upload a file to /my-key.txt in the my-bucket bucket:
s3-credentials put-object my-bucket my-key.txt /path/to/file.txt
Use - to upload content from standard input:
echo "Hello" | s3-credentials put-object my-bucket hello.txt -
Options:
--content-type TEXT Content-Type to use (default is auto-detected based on
file extension)
-s, --silent Don't show progress bar
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials put-objects --help
```
Usage: s3-credentials put-objects [OPTIONS] BUCKET OBJECTS...
Upload multiple objects to an S3 bucket
Pass one or more files to upload them:
s3-credentials put-objects my-bucket one.txt two.txt
These will be saved to the root of the bucket. To save to a different location
use the --prefix option:
s3-credentials put-objects my-bucket one.txt two.txt --prefix my-folder
This will upload them to my-folder/one.txt and my-folder/two.txt.
If you pass a directory it will be uploaded recursively:
s3-credentials put-objects my-bucket my-folder
This will create keys in my-folder/... in the S3 bucket.
To upload all files in a folder to the root of the bucket instead use this:
s3-credentials put-objects my-bucket my-folder/*
Options:
--prefix TEXT Prefix to add to the files within the bucket
-s, --silent Don't show progress bar
--dry-run Show steps without executing them
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials set-bucket-policy --help
```
Usage: s3-credentials set-bucket-policy [OPTIONS] BUCKET
Set bucket policy for a bucket
s3-credentials set-bucket-policy my-bucket --policy-file policy.json
Or to set a policy that allows GET requests from all:
s3-credentials set-bucket-policy my-bucket --allow-all-get
Options:
--policy-file FILENAME
--allow-all-get Allow GET requests from all
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials set-cors-policy --help
```
Usage: s3-credentials set-cors-policy [OPTIONS] BUCKET
Set CORS policy for a bucket
To allow GET requests from any origin:
s3-credentials set-cors-policy my-bucket
To allow GET and PUT from a specific origin and expose ETag headers:
s3-credentials set-cors-policy my-bucket \
--allowed-method GET \
--allowed-method PUT \
--allowed-origin https://www.example.com/ \
--expose-header ETag
Options:
-m, --allowed-method TEXT Allowed method e.g. GET
-h, --allowed-header TEXT Allowed header e.g. Authorization
-o, --allowed-origin TEXT Allowed origin e.g. https://www.example.com/
-e, --expose-header TEXT Header to expose e.g. ETag
--max-age-seconds INTEGER How long to cache preflight requests
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials set-public-access-block --help
```
Usage: s3-credentials set-public-access-block [OPTIONS] BUCKET
Configure public access settings for an S3 bucket.
Example:
s3-credentials set-public-access-block my-bucket --block-public-acls false
To allow full public access to the bucket, use the --allow-public-access flag:
s3-credentials set-public-access-block my-bucket --allow-public-access
Options:
--block-public-acls BOOLEAN Block public ACLs for the bucket (true/false).
--ignore-public-acls BOOLEAN Ignore public ACLs for the bucket
(true/false).
--block-public-policy BOOLEAN Block public bucket policies (true/false).
--restrict-public-buckets BOOLEAN
Restrict public buckets (true/false).
--allow-public-access Set all public access settings to false
(allows full public access).
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
## s3-credentials whoami --help
```
Usage: s3-credentials whoami [OPTIONS]
Identify currently authenticated user
Options:
--access-key TEXT AWS access key ID
--secret-key TEXT AWS secret access key
--session-token TEXT AWS session token
--endpoint-url TEXT Custom endpoint URL
-a, --auth FILENAME Path to JSON/INI file containing credentials
--help Show this message and exit.
```
<!-- [[[end]]] -->
================================================
FILE: docs/index.md
================================================
# s3-credentials
[![PyPI](https://img.shields.io/pypi/v/s3-credentials.svg)](https://pypi.org/project/s3-credentials/)
[![Changelog](https://img.shields.io/github/v/release/simonw/s3-credentials?include_prereleases&label=changelog)](https://github.com/simonw/s3-credentials/releases)
[![Tests](https://github.com/simonw/s3-credentials/workflows/Test/badge.svg)](https://github.com/simonw/s3-credentials/actions?query=workflow%3ATest)
[![Documentation Status](https://readthedocs.org/projects/s3-credentials/badge/?version=latest)](https://s3-credentials.readthedocs.org/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/s3-credentials/blob/master/LICENSE)
A tool for creating credentials for accessing S3 buckets
For project background, see [s3-credentials: a tool for creating credentials for S3 buckets](https://simonwillison.net/2021/Nov/3/s3-credentials/) on my blog.
Why would you need this? If you want to read and write to an S3 bucket from an automated script somewhere, you'll need an access key and secret key to authenticate your calls. This tool helps you create those with the most restrictive permissions possible.
If your code is running in EC2 or Lambda you can likely solve this [using roles instead](https://aws.amazon.com/premiumsupport/knowledge-center/lambda-execution-role-s3-bucket/). This tool is mainly useful for when you are interacting with S3 from outside the boundaries of AWS itself.
## Installation
Install this tool using `pip`:

    $ pip install s3-credentials
## Documentation
```{toctree}
---
maxdepth: 3
---
configuration
create
localserver
other-commands
policy-documents
help
contributing
```
## Tips
You can see a log of changes made by this tool using AWS CloudTrail - the following link should provide an Event History interface showing relevant changes made to your AWS account such as `CreateAccessKey`, `CreateUser`, `PutUserPolicy` and more:
<https://console.aws.amazon.com/cloudtrail/home>
You can view a list of your S3 buckets and confirm that they have the desired permissions and properties here:
<https://console.aws.amazon.com/s3/home>
The management interface for an individual bucket is at `https://console.aws.amazon.com/s3/buckets/NAME-OF-BUCKET`
================================================
FILE: docs/localserver.md
================================================
# Local credential server
The `s3-credentials localserver` command starts a local HTTP server that serves temporary S3 credentials. This is useful when you need to provide credentials to applications that can fetch them from an HTTP endpoint.
## Basic usage
To start a server that serves credentials for a bucket:
```bash
s3-credentials localserver my-bucket --duration 1h
```
This starts a server on `localhost:8094` that responds to `GET /` requests with JSON containing temporary AWS credentials.
The server will output:
```
Generating initial credentials...
Serving read-write credentials for bucket 'my-bucket' at http://localhost:8094/
Duration: 3600 seconds
Press Ctrl+C to stop
```
## Fetching credentials
Once the server is running, fetch credentials with:
```bash
curl http://localhost:8094/
```
This returns JSON in the [AWS credential_process format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html):
```json
{
  "Version": 1,
  "AccessKeyId": "ASIAWXFXAIOZPAHAYHUG",
  "SecretAccessKey": "Nrnoc...",
  "SessionToken": "FwoGZXIvYXd...mr9Fjs=",
  "Expiration": "2025-12-16T12:00:00+00:00"
}
```
## Options
### Duration (required)
The `--duration` or `-d` option specifies how long credentials should be valid for. This must be between 15 minutes and 12 hours:
```bash
# 15 minutes
s3-credentials localserver my-bucket --duration 15m
# 1 hour
s3-credentials localserver my-bucket --duration 1h
# 12 hours
s3-credentials localserver my-bucket --duration 12h
```
### Port
Change the port with `-p` or `--port`:
```bash
s3-credentials localserver my-bucket --duration 1h --port 9000
```
### Host
Change the host to bind to with `--host`:
```bash
s3-credentials localserver my-bucket --duration 1h --host 0.0.0.0
```
### Read-only or write-only access
By default, credentials have read-write access. Use `--read-only` or `--write-only` for more restricted access:
```bash
# Read-only access
s3-credentials localserver my-bucket --duration 1h --read-only
# Write-only access
s3-credentials localserver my-bucket --duration 1h --write-only
```
### Prefix restriction
Restrict access to keys with a specific prefix:
```bash
s3-credentials localserver my-bucket --duration 1h --prefix "uploads/"
```
### Custom policy statements
Add custom IAM policy statements with `--statement`:
```bash
s3-credentials localserver my-bucket --duration 1h \
--statement '{"Effect": "Allow", "Action": "textract:*", "Resource": "*"}'
```
## Credential caching
The server caches credentials internally and serves the same credentials until they expire. When the duration elapses, the server automatically generates new credentials.
This avoids issues with multiple simultaneous requests all triggering credential generation (dogpile effect), and ensures that applications fetching credentials within a short time window all receive the same credentials.
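A minimal sketch of that caching pattern, with hypothetical names - this is an illustration of the idea, not the server's actual code:
```python
import datetime

_cached = None  # (credentials, expiry)

def cached_credentials(generate):
    # Serve the same credentials until they expire, then regenerate
    global _cached
    now = datetime.datetime.now(datetime.timezone.utc)
    if _cached is None or now >= _cached[1]:
        creds = generate()  # e.g. a call to sts.assume_role()
        expiry = datetime.datetime.fromisoformat(creds["Expiration"])
        _cached = (creds, expiry)
    return _cached[0]
```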
## Example: Using with AWS CLI profiles
You can configure an AWS CLI profile to fetch credentials from the local server. Add to your `~/.aws/config`:
```ini
[profile localserver]
credential_process = curl -s http://localhost:8094/
```
Then use:
```bash
aws s3 ls s3://my-bucket/ --profile localserver
```
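The same profile works from Python code too, since boto3 honors the `credential_process` setting in `~/.aws/config`:
```python
import boto3

# Uses the [profile localserver] entry from ~/.aws/config
session = boto3.Session(profile_name="localserver")
s3 = session.client("s3")
print([obj["Key"] for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", [])])
```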
================================================
FILE: docs/other-commands.md
================================================
# Other commands
```{contents}
---
local:
class: this-will-duplicate-information-and-it-is-still-useful-here
---
```
## policy
You can use the `s3-credentials policy` command to generate the JSON policy document that would be used without applying it. The command takes one or more required bucket names and a subset of the options available on the `create` command:
- `--read-only` - generate a read-only policy
- `--write-only` - generate a write-only policy
- `--prefix` - policy should be restricted to keys in the bucket that start with this prefix
- `--statement` - custom JSON statement block to add to the policy
- `--public-bucket` - generate a bucket policy for a public bucket
With none of these options it defaults to a read-write policy.
```bash
s3-credentials policy my-bucket --read-only
```
```
{
"Version": "2012-10-17",
...
```
## whoami
To see which user you are authenticated as:
```bash
s3-credentials whoami
```
This will output JSON representing the currently authenticated user.
Using this with the `--auth` option is useful for verifying created credentials:
```bash
s3-credentials create static.niche-museums.com --read-only > auth.json
```
```bash
s3-credentials whoami --auth auth.json
```
```json
{
"UserId": "AIDAWXFXAIOZPIZC6MHAG",
"Account": "462092780466",
"Arn": "arn:aws:iam::462092780466:user/s3.read-only.static.niche-museums.com"
}
```
## list-users
To see a list of all users that exist for your AWS account:
```bash
s3-credentials list-users
```
This will return a pretty-printed array of JSON objects by default.
Add `--nl` to collapse these to single lines as valid newline-delimited JSON.
Add `--csv` or `--tsv` to get back CSV or TSV data.
## list-buckets
Shows a list of all buckets in your AWS account.
```bash
s3-credentials list-buckets
```
```json
[
{
"Name": "aws-cloudtrail-logs-462092780466-f2c900d3",
"CreationDate": "2021-03-25 22:19:54+00:00"
},
{
"Name": "simonw-test-bucket-for-s3-credentials",
"CreationDate": "2021-11-03 21:46:12+00:00"
}
]
```
With no extra arguments this will show all available buckets - you can also add one or more explicit bucket names to see just those buckets:
```bash
s3-credentials list-buckets simonw-test-bucket-for-s3-credentials
```
```json
[
{
"Name": "simonw-test-bucket-for-s3-credentials",
"CreationDate": "2021-11-03 21:46:12+00:00"
}
]
```
This accepts the same `--nl`, `--csv` and `--tsv` options as `list-users`.
Add `--details` to include details of the bucket ACL, website configuration and public access block settings. This is useful for running a security audit of your buckets.
Using `--details` adds several additional API calls for each bucket, so it is advisable to use it with one or more explicit bucket names.
```bash
s3-credentials list-buckets simonw-test-public-website-bucket --details
```
```json
[
{
"Name": "simonw-test-public-website-bucket",
"CreationDate": "2021-11-08 22:53:30+00:00",
"region": "us-east-1",
"bucket_acl": {
"Owner": {
"DisplayName": "simon",
"ID": "abcdeabcdeabcdeabcdeabcdeabcde0001"
},
"Grants": [
{
"Grantee": {
"DisplayName": "simon",
"ID": "abcdeabcdeabcdeabcdeabcdeabcde0001",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
]
},
"public_access_block": null,
"bucket_website": {
"IndexDocument": {
"Suffix": "index.html"
},
"ErrorDocument": {
"Key": "error.html"
},
"url": "http://simonw-test-public-website-bucket.s3-website.us-east-1.amazonaws.com/"
}
}
]
```
A bucket with `public_access_block` might look like this:
```json
{
"Name": "aws-cloudtrail-logs-462092780466-f2c900d3",
"CreationDate": "2021-03-25 22:19:54+00:00",
"bucket_acl": {
"Owner": {
"DisplayName": "simon",
"ID": "abcdeabcdeabcdeabcdeabcdeabcde0001"
},
"Grants": [
{
"Grantee": {
"DisplayName": "simon",
"ID": "abcdeabcdeabcdeabcdeabcdeabcde0001",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
]
},
"public_access_block": {
"BlockPublicAcls": true,
"IgnorePublicAcls": true,
"BlockPublicPolicy": true,
"RestrictPublicBuckets": true
},
"bucket_website": null
}
```
## list-bucket
To list the contents of a bucket, use `list-bucket`:
```bash
s3-credentials list-bucket static.niche-museums.com
```
```json
[
{
"Key": "Griffith-Observatory.jpg",
"LastModified": "2020-01-05 16:51:01+00:00",
"ETag": "\"a4cff17d189e7eb0c4d3bf0257e56885\"",
"Size": 3360040,
"StorageClass": "STANDARD"
},
{
"Key": "IMG_0353.jpeg",
"LastModified": "2019-10-25 02:50:49+00:00",
"ETag": "\"d45bab0b65c0e4b03b2ac0359c7267e3\"",
"Size": 2581023,
"StorageClass": "STANDARD"
}
]
```
You can use the `--prefix myprefix/` option to list only keys that start with a specific prefix.
The command accepts the same `--nl`, `--csv` and `--tsv` options as `list-users`.
Add `--urls` to include a `URL` field in the output providing the full URL to each object.
## list-user-policies
To see a list of inline policies belonging to users:
```bash
s3-credentials list-user-policies s3.read-write.static.niche-museums.com
```
```
User: s3.read-write.static.niche-museums.com
PolicyName: s3.read-write.static.niche-museums.com
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::static.niche-museums.com"
]
},
{
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": [
"arn:aws:s3:::static.niche-museums.com/*"
]
}
]
}
```
You can pass any number of usernames here. If you don't specify a username the tool will loop through every user belonging to your account:
```bash
s3-credentials list-user-policies
```
## list-roles
The `list-roles` command lists all of the roles available for the authenticated account.
Add `--details` to fetch the inline and attached managed policies for each row as well - this is slower as it needs to make several additional API calls for each role.
You can optionally add one or more role names to the command to display and fetch details about just those specific roles.
Example usage:
```bash
s3-credentials list-roles AWSServiceRoleForLightsail --details
```
```json
[
{
"Path": "/aws-service-role/lightsail.amazonaws.com/",
"RoleName": "AWSServiceRoleForLightsail",
"RoleId": "AROAWXFXAIOZG5ACQ5NZ5",
"Arn": "arn:aws:iam::462092780466:role/aws-service-role/lightsail.amazonaws.com/AWSServiceRoleForLightsail",
"CreateDate": "2021-01-15 21:41:48+00:00",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lightsail.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"MaxSessionDuration": 3600,
"inline_policies": [
{
"RoleName": "AWSServiceRoleForLightsail",
"PolicyName": "LightsailExportAccess",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:DescribeKey",
"kms:CreateGrant"
],
"Resource": "arn:aws:kms:*:451833091580:key/*"
},
{
"Effect": "Allow",
"Action": [
"cloudformation:DescribeStacks"
],
"Resource": "arn:aws:cloudformation:*:*:stack/*/*"
}
]
}
}
],
"attached_policies": [
{
"PolicyName": "LightsailExportAccess",
"PolicyId": "ANPAJ4LZGPQLZWMVR4WMQ",
"Arn": "arn:aws:iam::aws:policy/aws-service-role/LightsailExportAccess",
"Path": "/aws-service-role/",
"DefaultVersionId": "v2",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"Description": "AWS Lightsail service linked role policy which grants permissions to export resources",
"CreateDate": "2018-09-28 16:35:54+00:00",
"UpdateDate": "2022-01-15 01:45:33+00:00",
"Tags": [],
"PolicyVersion": {
"Document": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:DeleteServiceLinkedRole",
"iam:GetServiceLinkedRoleDeletionStatus"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/lightsail.amazonaws.com/AWSServiceRoleForLightsail*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CopySnapshot",
"ec2:DescribeSnapshots",
"ec2:CopyImage",
"ec2:DescribeImages"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetAccountPublicAccessBlock"
],
"Resource": "*"
}
]
},
"VersionId": "v2",
"IsDefaultVersion": true,
"CreateDate": "2022-01-15 01:45:33+00:00"
}
}
]
}
]
```
Add `--nl` to collapse these to single lines as valid newline-delimited JSON.
Add `--csv` or `--tsv` to get back CSV or TSV data.
## delete-user
In trying out this tool it's possible you will create several different user accounts that you later decide to clean up.
Deleting AWS users is a little fiddly: you first need to delete their access keys and their inline policies, and only then can you delete the user itself.
The `s3-credentials delete-user` command handles this for you:
```bash
s3-credentials delete-user s3.read-write.simonw-test-bucket-10
```
```
User: s3.read-write.simonw-test-bucket-10
Deleted policy: s3.read-write.simonw-test-bucket-10
Deleted access key: AKIAWXFXAIOZK3GPEIWR
Deleted user
```
You can pass it multiple usernames to delete multiple users at a time.
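For reference, here is a sketch of the equivalent boto3 calls the command makes for each user, ignoring pagination (which the tool handles for you):
```python
import boto3

iam = boto3.client("iam")
username = "s3.read-write.simonw-test-bucket-10"  # example user

# Delete inline policies
for name in iam.list_user_policies(UserName=username)["PolicyNames"]:
    iam.delete_user_policy(UserName=username, PolicyName=name)
# Delete access keys
for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
    iam.delete_access_key(UserName=username, AccessKeyId=key["AccessKeyId"])
# Now the user itself can be deleted
iam.delete_user(UserName=username)
```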
## put-object
You can upload a file to a key in an S3 bucket using `s3-credentials put-object`:
```bash
s3-credentials put-object my-bucket my-key.txt /path/to/file.txt
```
Use `-` as the file name to upload from standard input:
```bash
echo "Hello" | s3-credentials put-object my-bucket hello.txt -
```
This command shows a progress bar by default. Use `-s` or `--silent` to hide the progress bar.
The `Content-Type` on the uploaded object will be automatically set based on the file extension. If you are using standard input, or you want to override the detected type, you can do so using the `--content-type` option:
```bash
echo "<h1>Hello World</h1>" | \
s3-credentials put-object my-bucket hello.html - --content-type "text/html"
```
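The automatic detection uses Python's standard `mimetypes` module, so you can preview what will be picked for a given filename:
```python
import mimetypes

print(mimetypes.guess_type("photo.jpg")[0])    # image/jpeg
print(mimetypes.guess_type("page.html")[0])    # text/html
print(mimetypes.guess_type("unknown.blob")[0]) # None - no Content-Type is set
```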
## put-objects
`s3-credentials put-objects` can be used to upload more than one file at once.
Pass one or more filenames to upload them to the root of your bucket:
```bash
s3-credentials put-objects my-bucket one.txt two.txt three.txt
```
Use `--prefix my-prefix` to upload them to the specified prefix:
```bash
s3-credentials put-objects my-bucket one.txt --prefix my-prefix
```
This will upload the file to `my-prefix/one.txt`.
Pass one or more directories to upload the contents of those directories.
`.` uploads everything in your current directory:
```bash
s3-credentials put-objects my-bucket .
```
Passing directory names will upload the directory and all of its contents:
```bash
s3-credentials put-objects my-bucket my-directory
```
If `my-directory` had files `one.txt` and `two.txt` in it, the result would be:
```
my-directory/one.txt
my-directory/two.txt
```
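A sketch of the mapping rule: keys for files found inside a directory argument are taken relative to that directory's parent, which is why the directory name itself appears in the key:
```python
import pathlib

directory = pathlib.Path("my-directory")
for p in sorted(directory.glob("**/*")):
    if p.is_file():
        print(p.relative_to(directory.parent))
# my-directory/one.txt
# my-directory/two.txt
```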
A progress bar will be shown by default. Use `-s` or `--silent` to hide it.
Add `--dry-run` to get a preview of what would be uploaded without uploading anything:
```bash
s3-credentials put-objects my-bucket . --dry-run
```
```
out/IMG_1254.jpeg => s3://my-bucket/out/IMG_1254.jpeg
out/alverstone-mead-2.jpg => s3://my-bucket/out/alverstone-mead-2.jpg
out/alverstone-mead-1.jpg => s3://my-bucket/out/alverstone-mead-1.jpg
```
## delete-objects
`s3-credentials delete-objects` can be used to delete one or more keys from the bucket.
Pass one or more keys to delete them:
```bash
s3-credentials delete-objects my-bucket one.txt two.txt three.txt
```
Use `--prefix my-prefix` to delete all keys with the specified prefix:
```bash
s3-credentials delete-objects my-bucket --prefix my-prefix
```
Pass `-d` or `--dry-run` to perform a dry-run of the deletion, which will list the keys that would be deleted without actually deleting them.
```bash
s3-credentials delete-objects my-bucket --prefix my-prefix --dry-run
```
## get-object
To download a file from a bucket use `s3-credentials get-object`:
```bash
s3-credentials get-object my-bucket hello.txt
```
This defaults to writing the downloaded file to standard output. You can instead direct it to save to a file on disk using the `-o` or `--output` option:
```bash
s3-credentials get-object my-bucket hello.txt -o /path/to/hello.txt
```
## get-objects
`s3-credentials get-objects` can be used to download multiple files from a bucket at once.
Without extra arguments, this downloads everything:
```bash
s3-credentials get-objects my-bucket
```
Files will be written to the current directory by default, preserving their directory structure from the bucket.
To write to a different directory use `--output` or `-o`:
```bash
s3-credentials get-objects my-bucket -o /path/to/output
```
To download multiple specific files, add them as arguments to the command:
```bash
s3-credentials get-objects my-bucket one.txt two.txt path/to/three.txt
```
You can pass one or more `--pattern` or `-p` options to download files matching a specific pattern:
```bash
s3-credentials get-objects my-bucket -p "*.txt" -p "static/*.css"
```
Here the `*` wildcard will match any sequence of characters, including `/`. `?` will match a single character.
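Pattern matching uses Python's `fnmatch` module against the full key, which is why `*` crosses `/` boundaries:
```python
import fnmatch

keys = ["one.txt", "static/site.css", "static/js/app.js"]
print(fnmatch.filter(keys, "*.txt"))         # ['one.txt']
print(fnmatch.filter(keys, "static/*.css"))  # ['static/site.css']
print(fnmatch.filter(keys, "*.js"))          # ['static/js/app.js'] - * matched "static/js/"
```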
A progress bar will be shown by default. Use `-s` or `--silent` to hide it.
## set-cors-policy and get-cors-policy
You can set the [CORS policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html) for a bucket using the `set-cors-policy` command. S3 CORS policies are set at the bucket level - they cannot be set for individual items.
First, create the bucket. Make sure to create it with the `--public` option:
```bash
s3-credentials create my-cors-bucket --public -c
```
You can set a default CORS policy - allowing `GET` requests from any origin - like this:
```bash
s3-credentials set-cors-policy my-cors-bucket
```
You can use the `get-cors-policy` command to confirm the policy you have set:
```bash
s3-credentials get-cors-policy my-cors-bucket
```
```json
[
{
"ID": "set-by-s3-credentials",
"AllowedMethods": [
"GET"
],
"AllowedOrigins": [
"*"
]
}
]
```
To customize the CORS policy, use the following options:
- `-m/--allowed-method` - Allowed method e.g. `GET`
- `-h/--allowed-header` - Allowed header e.g. `Authorization`
- `-o/--allowed-origin` - Allowed origin e.g. `https://www.example.com/`
- `-e/--expose-header` - Header to expose e.g. `ETag`
- `--max-age-seconds` - How long to cache preflight requests
Each of these can be passed multiple times with the exception of `--max-age-seconds`.
The following example allows GET and PUT methods from code running on `https://www.example.com/`, allows the incoming `Authorization` header and exposes the `ETag` header. It also sets the client to cache preflight requests for 60 seconds:
```bash
s3-credentials set-cors-policy my-cors-bucket2 \
--allowed-method GET \
--allowed-method PUT \
--allowed-origin https://www.example.com/ \
--expose-header ETag \
--max-age-seconds 60
```
## debug-bucket
The `debug-bucket` command is useful for diagnosing issues with a bucket:
```bash
s3-credentials debug-bucket my-bucket
```
Example output:
```
Bucket ACL:
{
"Owner": {
"DisplayName": "username",
"ID": "cc8ca3a037c6a7c1fa7580076bf7cd1949b3f2f58f01c9df9e53c51f6a249910"
},
"Grants": [
{
"Grantee": {
"DisplayName": "username",
"ID": "cc8ca3a037c6a7c1fa7580076bf7cd1949b3f2f58f01c9df9e53c51f6a249910",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
]
}
Bucket policy status:
{
"PolicyStatus": {
"IsPublic": true
}
}
Bucket public access block:
{
"PublicAccessBlockConfiguration": {
"BlockPublicAcls": false,
"IgnorePublicAcls": false,
"BlockPublicPolicy": false,
"RestrictPublicBuckets": false
}
}
```
## get-bucket-policy
The `get-bucket-policy` command displays the current bucket policy for a bucket:
```bash
s3-credentials get-bucket-policy my-bucket
```
Example output:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
```
## set-bucket-policy
The `set-bucket-policy` command can be used to set a bucket policy for a bucket:
```bash
s3-credentials set-bucket-policy my-bucket --policy-file policy.json
```
Or, for the common case of setting a policy that allows GET requests for every object in the bucket:
```bash
s3-credentials set-bucket-policy my-bucket --allow-all-get
```
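For reference, `--allow-all-get` attaches the same policy shown under `get-bucket-policy` above. A sketch of the equivalent boto3 call:
```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}
boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```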
## get-public-access-block
The `get-public-access-block` command displays the current public access block configuration for a bucket:
```bash
s3-credentials get-public-access-block my-bucket
```
Example output:
```json
{
"BlockPublicAcls": false,
"IgnorePublicAcls": false,
"BlockPublicPolicy": false,
"RestrictPublicBuckets": false
}
```
## set-public-access-block
The `set-public-access-block` command can be used to set the public access block configuration for a bucket:
```bash
s3-credentials set-public-access-block my-bucket \
--block-public-acls true \
--ignore-public-acls true \
--block-public-policy true \
--restrict-public-buckets true
```
Each of the above options accepts `true` or `false`.
You can use the `--allow-public-access` shortcut to set everything to `false` in one go:
```bash
s3-credentials set-public-access-block my-bucket \
--allow-public-access
```
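These options map directly onto the keys of the S3 `PutPublicAccessBlock` API. A sketch of the boto3 call equivalent to `--allow-public-access`:
```python
import boto3

boto3.client("s3").put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
```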
================================================
FILE: docs/policy-documents.md
================================================
# Policy documents
The IAM policies generated by this tool for a bucket called `my-s3-bucket` would look like this:
## read-write (default)
<!-- [[[cog
import cog, json
from s3_credentials import cli
from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/*"
]
}
]
}
```
<!-- [[[end]]] -->
## `--read-only`
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--read-only"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/*"
]
}
]
}
```
<!-- [[[end]]] -->
## `--write-only`
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--write-only"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/*"
]
}
]
}
```
<!-- [[[end]]] -->
## `--prefix my-prefix/`
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--prefix", "my-prefix/"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"my-prefix/*"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/my-prefix/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/my-prefix/*"
]
}
]
}
```
<!-- [[[end]]] -->
## `--prefix my-prefix/ --read-only`
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--prefix", "my-prefix/", "--read-only"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"my-prefix/*"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/my-prefix/*"
]
}
]
}
```
<!-- [[[end]]] -->
## `--prefix my-prefix/ --write-only`
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--prefix", "my-prefix/", "--write-only"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/my-prefix/*"
]
}
]
}
```
<!-- [[[end]]] -->
(public_bucket_policy)=
## public bucket policy
Buckets created using the `--public` option will have the following bucket policy attached to them:
<!-- [[[cog
result = runner.invoke(cli.cli, ["policy", "my-s3-bucket", "--public-bucket"])
cog.out(
"```\n{}\n```".format(json.dumps(json.loads(result.output), indent=2))
)
]]] -->
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-s3-bucket/*"
]
}
]
}
```
<!-- [[[end]]] -->
================================================
FILE: docs/requirements.txt
================================================
furo
sphinx-autobuild
myst-parser
cogapp
================================================
FILE: pyproject.toml
================================================
[project]
name = "s3-credentials"
version = "0.17"
description = "A tool for creating credentials for accessing S3 buckets"
readme = "README.md"
authors = [{name = "Simon Willison"}]
license = {text = "Apache-2.0"}
requires-python = ">=3.10"
dependencies = [
"click",
"boto3",
]
[project.urls]
Homepage = "https://github.com/simonw/s3-credentials"
Issues = "https://github.com/simonw/s3-credentials/issues"
CI = "https://github.com/simonw/s3-credentials/actions"
Changelog = "https://github.com/simonw/s3-credentials/releases"
[project.scripts]
s3-credentials = "s3_credentials.cli:cli"
[tool.poe.tasks]
docs.cmd = "sphinx-build -M html docs docs/_build"
docs.help = "Build the docs"
livehtml.cmd = "sphinx-autobuild -b html docs docs/_build"
livehtml.help = "Live-reloading docs server"
cog.cmd = "cog -r docs/*.md"
cog.help = "Regenerate cog snippets in the docs"
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[dependency-groups]
test = [
"pytest",
"pytest-mock",
"cogapp",
"moto>=5.0.4",
]
docs = [
"furo",
"sphinx-autobuild",
"myst-parser",
"cogapp",
]
dev = [
{include-group = "test"},
{include-group = "docs"},
"poethepoet>=0.38.0",
]
================================================
FILE: s3_credentials/__init__.py
================================================
================================================
FILE: s3_credentials/cli.py
================================================
import boto3
import botocore
import click
import configparser
from csv import DictWriter
import fnmatch
import io
import itertools
import json
import mimetypes
import os
import pathlib
import re
import sys
import textwrap
from . import policies
PUBLIC_ACCESS_BLOCK_CONFIGURATION = {
"BlockPublicAcls": False,
"IgnorePublicAcls": False,
"BlockPublicPolicy": False,
"RestrictPublicBuckets": False,
}
def bucket_exists(s3, bucket):
try:
s3.head_bucket(Bucket=bucket)
return True
except botocore.exceptions.ClientError:
return False
def user_exists(iam, username):
try:
iam.get_user(UserName=username)
return True
except iam.exceptions.NoSuchEntityException:
return False
def common_boto3_options(fn):
for decorator in reversed(
(
click.option(
"--access-key",
help="AWS access key ID",
),
click.option(
"--secret-key",
help="AWS secret access key",
),
click.option(
"--session-token",
help="AWS session token",
),
click.option(
"--endpoint-url",
help="Custom endpoint URL",
),
click.option(
"-a",
"--auth",
type=click.File("r"),
help="Path to JSON/INI file containing credentials",
),
)
):
fn = decorator(fn)
return fn
def common_output_options(fn):
for decorator in reversed(
(
click.option("--nl", help="Output newline-delimited JSON", is_flag=True),
click.option("--csv", help="Output CSV", is_flag=True),
click.option("--tsv", help="Output TSV", is_flag=True),
)
):
fn = decorator(fn)
return fn
@click.group()
@click.version_option()
def cli():
"""
A tool for creating credentials for accessing S3 buckets
Documentation: https://s3-credentials.readthedocs.io/
"""
class PolicyParam(click.ParamType):
"Returns string of guaranteed well-formed JSON"
name = "policy"
def convert(self, policy, param, ctx):
if policy.strip().startswith("{"):
# Verify policy string is valid JSON
try:
json.loads(policy)
except ValueError:
self.fail("Invalid JSON string")
return policy
else:
# Assume policy is a file path or '-'
try:
with click.open_file(policy) as f:
contents = f.read()
try:
json.loads(contents)
return contents
except ValueError:
self.fail(
"{} contained invalid JSON".format(
"Input" if policy == "-" else "File"
)
)
except FileNotFoundError:
self.fail("File not found")
class DurationParam(click.ParamType):
name = "duration"
pattern = re.compile(r"^(\d+)(m|h|s)?$")
def convert(self, value, param, ctx):
match = self.pattern.match(value)
if match is None:
self.fail("Duration must be of form 3600s or 15m or 2h")
integer_string, suffix = match.groups()
integer = int(integer_string)
if suffix == "m":
integer *= 60
elif suffix == "h":
integer *= 3600
# Must be between 15 minutes and 12 hours
if not (15 * 60 <= integer <= 12 * 60 * 60):
self.fail("Duration must be between 15 minutes and 12 hours")
return integer
class StatementParam(click.ParamType):
"Ensures statement is valid JSON with required fields"
name = "statement"
def convert(self, statement, param, ctx):
try:
data = json.loads(statement)
except ValueError:
self.fail("Invalid JSON string")
if not isinstance(data, dict):
self.fail("JSON must be an object")
missing_keys = {"Effect", "Action", "Resource"} - data.keys()
if missing_keys:
self.fail(
"Statement JSON missing required keys: {}".format(
", ".join(sorted(missing_keys))
)
)
return data
@cli.command()
@click.argument(
"buckets",
nargs=-1,
required=True,
)
@click.option("--read-only", help="Only allow reading from the bucket", is_flag=True)
@click.option("--write-only", help="Only allow writing to the bucket", is_flag=True)
@click.option(
"--prefix", help="Restrict to keys starting with this prefix e.g. foo/", default="*"
)
@click.option(
"extra_statements",
"--statement",
multiple=True,
type=StatementParam(),
help="JSON statement to add to the policy",
)
@click.option(
"--public-bucket",
help="Bucket policy for allowing public access",
is_flag=True,
)
def policy(buckets, read_only, write_only, prefix, extra_statements, public_bucket):
"""
Output generated JSON policy for one or more buckets
Takes the same options as s3-credentials create
To output a read-only JSON policy for a bucket:
s3-credentials policy my-bucket --read-only
"""
"Generate JSON policy for one or more buckets"
if public_bucket:
if len(buckets) != 1:
raise click.ClickException(
"--public-bucket policy can only be generated for a single bucket"
)
click.echo(
json.dumps(policies.bucket_policy_allow_all_get(buckets[0]), indent=4)
)
return
permission = "read-write"
if read_only:
permission = "read-only"
if write_only:
permission = "write-only"
statements = []
if permission == "read-write":
for bucket in buckets:
statements.extend(policies.read_write_statements(bucket, prefix))
elif permission == "read-only":
for bucket in buckets:
statements.extend(policies.read_only_statements(bucket, prefix))
elif permission == "write-only":
for bucket in buckets:
statements.extend(policies.write_only_statements(bucket, prefix))
else:
assert False, "Unknown permission: {}".format(permission)
if extra_statements:
statements.extend(extra_statements)
bucket_access_policy = policies.wrap_policy(statements)
click.echo(json.dumps(bucket_access_policy, indent=4))
@cli.command()
@click.argument(
"buckets",
nargs=-1,
required=True,
)
@click.option(
"format_",
"-f",
"--format",
type=click.Choice(["ini", "json"]),
default="json",
help="Output format for credentials",
)
@click.option(
"-d",
"--duration",
type=DurationParam(),
help="How long should these credentials work for? Default is forever, use 3600 for 3600 seconds, 15m for 15 minutes, 1h for 1 hour",
)
@click.option("--username", help="Username to create or existing user to use")
@click.option(
"-c",
"--create-bucket",
help="Create buckets if they do not already exist",
is_flag=True,
)
@click.option(
"--prefix", help="Restrict to keys starting with this prefix", default="*"
)
@click.option(
"--public",
help="Make the created bucket public: anyone will be able to download files if they know their name",
is_flag=True,
)
@click.option(
"--website",
help="Configure bucket to act as a website, using index.html and error.html",
is_flag=True,
)
@click.option("--read-only", help="Only allow reading from the bucket", is_flag=True)
@click.option("--write-only", help="Only allow writing to the bucket", is_flag=True)
@click.option(
"--policy",
type=PolicyParam(),
help="Path to a policy.json file, or literal JSON string - $!BUCKET_NAME!$ will be replaced with the name of the bucket",
)
@click.option(
"extra_statements",
"--statement",
multiple=True,
type=StatementParam(),
help="JSON statement to add to the policy",
)
@click.option("--bucket-region", help="Region in which to create buckets")
@click.option("--silent", help="Don't show performed steps", is_flag=True)
@click.option("--dry-run", help="Show steps without executing them", is_flag=True)
@click.option(
"--user-permissions-boundary",
help=(
"Custom permissions boundary to use for created users, or 'none' to "
"create without. Defaults to limiting to S3 based on "
"--read-only and --write-only options."
),
)
@common_boto3_options
def create(
buckets,
format_,
duration,
username,
create_bucket,
prefix,
public,
website,
read_only,
write_only,
policy,
extra_statements,
bucket_region,
user_permissions_boundary,
silent,
dry_run,
**boto_options,
):
"""
Create and return new AWS credentials for specified S3 buckets - optionally
also creating the bucket if it does not yet exist.
To create a new bucket and output read-write credentials:
s3-credentials create my-new-bucket -c
To create read-only credentials for an existing bucket:
s3-credentials create my-existing-bucket --read-only
To create write-only credentials that are only valid for 15 minutes:
s3-credentials create my-existing-bucket --write-only -d 15m
"""
if read_only and write_only:
raise click.ClickException(
"Cannot use --read-only and --write-only at the same time"
)
extra_statements = list(extra_statements)
def log(message):
if not silent:
click.echo(message, err=True)
permission = "read-write"
if read_only:
permission = "read-only"
if write_only:
permission = "write-only"
if not user_permissions_boundary and (policy or extra_statements):
user_permissions_boundary = "none"
if website:
public = True
s3 = None
iam = None
sts = None
if not dry_run:
s3 = make_client("s3", **boto_options)
iam = make_client("iam", **boto_options)
sts = make_client("sts", **boto_options)
# Verify buckets
for bucket in buckets:
# Create bucket if it doesn't exist
if dry_run or (not bucket_exists(s3, bucket)):
if (not dry_run) and (not create_bucket):
raise click.ClickException(
"Bucket does not exist: {} - try --create-bucket to create it".format(
bucket
)
)
if dry_run or create_bucket:
kwargs = {}
if bucket_region:
kwargs = {
"CreateBucketConfiguration": {
"LocationConstraint": bucket_region
}
}
bucket_policy = {}
if public:
bucket_policy = policies.bucket_policy_allow_all_get(bucket)
if dry_run:
click.echo(
"Would create bucket: '{}'{}".format(
bucket,
(
" with args {}".format(json.dumps(kwargs, indent=4))
if kwargs
else ""
),
)
)
if public:
click.echo(
"... then add this public access block configuration:"
)
click.echo(json.dumps(PUBLIC_ACCESS_BLOCK_CONFIGURATION))
if bucket_policy:
click.echo("... then attach the following bucket policy to it:")
click.echo(json.dumps(bucket_policy, indent=4))
if website:
click.echo(
"... then configure index.html and error.html website settings"
)
else:
s3.create_bucket(Bucket=bucket, **kwargs)
info = "Created bucket: {}".format(bucket)
if bucket_region:
info += " in region: {}".format(bucket_region)
log(info)
if public:
s3.put_public_access_block(
Bucket=bucket,
PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK_CONFIGURATION,
)
log("Set public access block configuration")
if bucket_policy:
s3.put_bucket_policy(
Bucket=bucket, Policy=json.dumps(bucket_policy)
)
log("Attached bucket policy allowing public access")
if website:
s3.put_bucket_website(
Bucket=bucket,
WebsiteConfiguration={
"ErrorDocument": {"Key": "error.html"},
"IndexDocument": {"Suffix": "index.html"},
},
)
log(
"Configured website: IndexDocument=index.html, ErrorDocument=error.html"
)
# At this point the buckets definitely exist - create the inline policy for assume_role()
assume_role_policy = {}
if policy:
assume_role_policy = json.loads(policy.replace("$!BUCKET_NAME!$", bucket))
else:
statements = []
if permission == "read-write":
for bucket in buckets:
statements.extend(policies.read_write_statements(bucket, prefix))
elif permission == "read-only":
for bucket in buckets:
statements.extend(policies.read_only_statements(bucket, prefix))
elif permission == "write-only":
for bucket in buckets:
statements.extend(policies.write_only_statements(bucket, prefix))
else:
assert False, "Unknown permission: {}".format(permission)
statements.extend(extra_statements)
assume_role_policy = policies.wrap_policy(statements)
if duration:
# We're going to use sts.assume_role() rather than creating a user
if dry_run:
click.echo("Would ensure role: 's3-credentials.AmazonS3FullAccess'")
click.echo(
"Would assume role using following policy for {} seconds:".format(
duration
)
)
click.echo(json.dumps(assume_role_policy, indent=4))
else:
s3_role_arn = ensure_s3_role_exists(iam, sts)
log("Assume role against {} for {}s".format(s3_role_arn, duration))
credentials_response = sts.assume_role(
RoleArn=s3_role_arn,
RoleSessionName="s3.{permission}.{buckets}".format(
permission="custom" if (policy or extra_statements) else permission,
buckets=",".join(buckets),
),
Policy=json.dumps(assume_role_policy),
DurationSeconds=duration,
)
if format_ == "ini":
click.echo(
(
"[default]\naws_access_key_id={}\n"
"aws_secret_access_key={}\naws_session_token={}"
).format(
credentials_response["Credentials"]["AccessKeyId"],
credentials_response["Credentials"]["SecretAccessKey"],
credentials_response["Credentials"]["SessionToken"],
)
)
else:
click.echo(
json.dumps(
credentials_response["Credentials"], indent=4, default=str
)
)
return
# No duration, so we create a new user so we can issue non-expiring credentials
if not username:
# Default username is "s3.read-write.bucket1,bucket2"
username = "s3.{permission}.{buckets}".format(
permission="custom" if (policy or extra_statements) else permission,
buckets=",".join(buckets),
)
if dry_run or (not user_exists(iam, username)):
kwargs = {"UserName": username}
if user_permissions_boundary != "none":
# This is a user-account level limitation, it does not grant
# permissions on its own but is a useful extra level of defense
# https://github.com/simonw/s3-credentials/issues/1#issuecomment-958201717
if not user_permissions_boundary:
# Pick one based on --read-only/--write-only
if read_only:
user_permissions_boundary = (
"arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
)
else:
# Need full access in order to be able to write
user_permissions_boundary = (
"arn:aws:iam::aws:policy/AmazonS3FullAccess"
)
kwargs["PermissionsBoundary"] = user_permissions_boundary
info = " user: '{}'".format(username)
if user_permissions_boundary != "none":
info += " with permissions boundary: '{}'".format(user_permissions_boundary)
if dry_run:
click.echo("Would create{}".format(info))
else:
iam.create_user(**kwargs)
log("Created {}".format(info))
# Add inline policies to the user so they can access the buckets
user_policy = {}
for bucket in buckets:
policy_name = "s3.{permission}.{bucket}".format(
permission="custom" if (policy or extra_statements) else permission,
bucket=bucket,
)
if policy:
user_policy = json.loads(policy.replace("$!BUCKET_NAME!$", bucket))
else:
if permission == "read-write":
user_policy = policies.read_write(bucket, prefix, extra_statements)
elif permission == "read-only":
user_policy = policies.read_only(bucket, prefix, extra_statements)
elif permission == "write-only":
user_policy = policies.write_only(bucket, prefix, extra_statements)
else:
assert False, "Unknown permission: {}".format(permission)
if dry_run:
click.echo(
"Would attach policy called '{}' to user '{}', details:\n{}".format(
policy_name,
username,
json.dumps(user_policy, indent=4),
)
)
else:
iam.put_user_policy(
PolicyDocument=json.dumps(user_policy),
PolicyName=policy_name,
UserName=username,
)
log("Attached policy {} to user {}".format(policy_name, username))
# Retrieve and print out the credentials
if dry_run:
click.echo("Would call create access key for user '{}'".format(username))
else:
response = iam.create_access_key(
UserName=username,
)
log("Created access key for user: {}".format(username))
if format_ == "ini":
click.echo(
("[default]\naws_access_key_id={}\n" "aws_secret_access_key={}").format(
response["AccessKey"]["AccessKeyId"],
response["AccessKey"]["SecretAccessKey"],
)
)
elif format_ == "json":
click.echo(json.dumps(response["AccessKey"], indent=4, default=str))
@cli.command()
@common_boto3_options
def whoami(**boto_options):
"Identify currently authenticated user"
sts = make_client("sts", **boto_options)
identity = sts.get_caller_identity()
identity.pop("ResponseMetadata")
click.echo(json.dumps(identity, indent=4, default=str))
@cli.command()
@common_output_options
@common_boto3_options
def list_users(nl, csv, tsv, **boto_options):
"""
List all users for this account
s3-credentials list-users
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-users --csv
"""
iam = make_client("iam", **boto_options)
output(
paginate(iam, "list_users", "Users"),
(
"UserName",
"UserId",
"Arn",
"Path",
"CreateDate",
"PasswordLastUsed",
"PermissionsBoundary",
"Tags",
),
nl,
csv,
tsv,
)
@cli.command()
@click.argument("role_names", nargs=-1)
@click.option("--details", help="Include attached policies (slower)", is_flag=True)
@common_output_options
@common_boto3_options
def list_roles(role_names, details, nl, csv, tsv, **boto_options):
"""
List roles
To list all roles for this AWS account:
s3-credentials list-roles
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-roles --csv
For extra details per role (much slower) add --details
s3-credentials list-roles --details
"""
iam = make_client("iam", **boto_options)
headers = (
"Path",
"RoleName",
"RoleId",
"Arn",
"CreateDate",
"AssumeRolePolicyDocument",
"Description",
"MaxSessionDuration",
"PermissionsBoundary",
"Tags",
"RoleLastUsed",
)
if details:
headers += ("inline_policies", "attached_policies")
def iterate():
for role in paginate(iam, "list_roles", "Roles"):
if role_names and role["RoleName"] not in role_names:
continue
if details:
role_name = role["RoleName"]
role["inline_policies"] = []
# Get inline policy names, then policy for each one
for policy_name in paginate(
iam, "list_role_policies", "PolicyNames", RoleName=role_name
):
role_policy_response = iam.get_role_policy(
RoleName=role_name,
PolicyName=policy_name,
)
role_policy_response.pop("ResponseMetadata", None)
role["inline_policies"].append(role_policy_response)
# Get attached managed policies
role["attached_policies"] = []
for attached in paginate(
iam,
"list_attached_role_policies",
"AttachedPolicies",
RoleName=role_name,
):
policy_arn = attached["PolicyArn"]
attached_policy_response = iam.get_policy(
PolicyArn=policy_arn,
)
policy_details = attached_policy_response["Policy"]
# Also need to fetch the policy JSON
version_id = policy_details["DefaultVersionId"]
policy_version_response = iam.get_policy_version(
PolicyArn=policy_arn,
VersionId=version_id,
)
policy_details["PolicyVersion"] = policy_version_response[
"PolicyVersion"
]
role["attached_policies"].append(policy_details)
yield role
output(iterate(), headers, nl, csv, tsv)
@cli.command()
@click.argument("usernames", nargs=-1)
@common_boto3_options
def list_user_policies(usernames, **boto_options):
"""
List inline policies for specified users
s3-credentials list-user-policies username
Returns policies for all users if no usernames are provided.
"""
iam = make_client("iam", **boto_options)
if not usernames:
usernames = [user["UserName"] for user in paginate(iam, "list_users", "Users")]
for username in usernames:
click.echo("User: {}".format(username))
for policy_name in paginate(
iam, "list_user_policies", "PolicyNames", UserName=username
):
click.echo("PolicyName: {}".format(policy_name))
policy_response = iam.get_user_policy(
UserName=username, PolicyName=policy_name
)
click.echo(
json.dumps(policy_response["PolicyDocument"], indent=4, default=str)
)
@cli.command()
@click.argument("buckets", nargs=-1)
@click.option("--details", help="Include extra bucket details (slower)", is_flag=True)
@common_output_options
@common_boto3_options
def list_buckets(buckets, details, nl, csv, tsv, **boto_options):
"""
List buckets
To list all buckets and their creation time as JSON:
s3-credentials list-buckets
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-buckets --csv
For extra details per bucket (much slower) add --details
s3-credentials list-buckets --details
"""
s3 = make_client("s3", **boto_options)
headers = ["Name", "CreationDate"]
if details:
headers += ["bucket_acl", "public_access_block", "bucket_website"]
def iterator():
for bucket in s3.list_buckets()["Buckets"]:
if buckets and (bucket["Name"] not in buckets):
continue
if details:
bucket_acl = dict(
(key, value)
for key, value in s3.get_bucket_acl(
Bucket=bucket["Name"],
).items()
if key != "ResponseMetadata"
)
region = s3.get_bucket_location(Bucket=bucket["Name"])[
"LocationConstraint"
]
if region is None:
# "Buckets in Region us-east-1 have a LocationConstraint of null"
region = "us-east-1"
try:
pab = s3.get_public_access_block(
Bucket=bucket["Name"],
)["PublicAccessBlockConfiguration"]
except s3.exceptions.ClientError:
pab = None
try:
bucket_website = dict(
(key, value)
for key, value in s3.get_bucket_website(
Bucket=bucket["Name"],
).items()
if key != "ResponseMetadata"
)
bucket_website["url"] = (
"http://{}.s3-website.{}.amazonaws.com/".format(
bucket["Name"], region
)
)
except s3.exceptions.ClientError:
bucket_website = None
bucket["region"] = region
bucket["bucket_acl"] = bucket_acl
bucket["public_access_block"] = pab
bucket["bucket_website"] = bucket_website
yield bucket
output(iterator(), headers, nl, csv, tsv)
@cli.command()
@click.argument("usernames", nargs=-1, required=True)
@common_boto3_options
def delete_user(usernames, **boto_options):
"""
Delete specified users, their access keys and their inline policies
s3-credentials delete-user username1 username2
"""
iam = make_client("iam", **boto_options)
for username in usernames:
click.echo("User: {}".format(username))
# Fetch and delete their policies
policy_names_to_delete = list(
paginate(iam, "list_user_policies", "PolicyNames", UserName=username)
)
for policy_name in policy_names_to_delete:
iam.delete_user_policy(
UserName=username,
PolicyName=policy_name,
)
click.echo(" Deleted policy: {}".format(policy_name))
# Fetch and delete their access keys
access_key_ids_to_delete = [
access_key["AccessKeyId"]
for access_key in paginate(
iam, "list_access_keys", "AccessKeyMetadata", UserName=username
)
]
for access_key_id in access_key_ids_to_delete:
iam.delete_access_key(
UserName=username,
AccessKeyId=access_key_id,
)
click.echo(" Deleted access key: {}".format(access_key_id))
iam.delete_user(UserName=username)
click.echo(" Deleted user")
def make_client(service, access_key, secret_key, session_token, endpoint_url, auth):
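"Create a boto3 client for the given service, using credentials from an --auth file (JSON or INI) or from the explicit credential options"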
if auth:
if access_key or secret_key or session_token:
raise click.ClickException(
"--auth cannot be used with --access-key, --secret-key or --session-token"
)
auth_content = auth.read().strip()
if auth_content.startswith("{"):
# Treat as JSON
decoded = json.loads(auth_content)
access_key = decoded.get("AccessKeyId")
secret_key = decoded.get("SecretAccessKey")
session_token = decoded.get("SessionToken")
else:
# Treat as INI
config = configparser.ConfigParser()
config.read_string(auth_content)
# Use the first section that has an aws_access_key_id
for section in config.sections():
if "aws_access_key_id" in config[section]:
access_key = config[section].get("aws_access_key_id")
secret_key = config[section].get("aws_secret_access_key")
session_token = config[section].get("aws_session_token")
break
kwargs = {}
if access_key:
kwargs["aws_access_key_id"] = access_key
if secret_key:
kwargs["aws_secret_access_key"] = secret_key
if session_token:
kwargs["aws_session_token"] = session_token
if endpoint_url:
kwargs["endpoint_url"] = endpoint_url
return boto3.client(service, **kwargs)
def ensure_s3_role_exists(iam, sts):
"Create s3-credentials.AmazonS3FullAccess role if not exists, return ARN"
role_name = "s3-credentials.AmazonS3FullAccess"
account_id = sts.get_caller_identity()["Account"]
try:
role = iam.get_role(RoleName=role_name)
return role["Role"]["Arn"]
except iam.exceptions.NoSuchEntityException:
create_role_response = iam.create_role(
Description=(
"Role used by the s3-credentials tool to create time-limited "
"credentials that are restricted to specific buckets"
),
RoleName=role_name,
AssumeRolePolicyDocument=json.dumps(
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::{}:root".format(account_id)
},
"Action": "sts:AssumeRole",
}
],
}
),
MaxSessionDuration=12 * 60 * 60,
)
# Attach AmazonS3FullAccess to it - note that even though we use full access
# on the role itself any time we call sts.assume_role() we attach an additional
# policy to ensure reduced access for the temporary credentials
iam.attach_role_policy(
RoleName="s3-credentials.AmazonS3FullAccess",
PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
return create_role_response["Role"]["Arn"]
@cli.command()
@click.argument("bucket")
@click.option("--prefix", help="List keys starting with this prefix")
@click.option("--urls", is_flag=True, help="Show URLs for each key")
@common_output_options
@common_boto3_options
def list_bucket(bucket, prefix, urls, nl, csv, tsv, **boto_options):
"""
List contents of bucket
To list the contents of a bucket as JSON:
s3-credentials list-bucket my-bucket
Add --csv or --tsv for CSV or TSV format:
s3-credentials list-bucket my-bucket --csv
Add --urls to get an extra URL field for each key:
s3-credentials list-bucket my-bucket --urls
"""
s3 = make_client("s3", **boto_options)
kwargs = {"Bucket": bucket}
if prefix:
kwargs["Prefix"] = prefix
fields = ["Key", "LastModified", "ETag", "Size", "StorageClass", "Owner"]
if urls:
fields.append("URL")
items = paginate(s3, "list_objects_v2", "Contents", **kwargs)
if urls:
items = (
dict(item, URL="https://s3.amazonaws.com/{}/{}".format(bucket, item["Key"]))
for item in items
)
try:
output(
items,
fields,
nl,
csv,
tsv,
)
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
@cli.command()
@click.argument("bucket")
@click.argument("key")
@click.argument(
"path",
type=click.Path(
exists=True, file_okay=True, dir_okay=False, readable=True, allow_dash=True
),
)
@click.option(
"--content-type",
help="Content-Type to use (default is auto-detected based on file extension)",
)
@click.option("silent", "-s", "--silent", is_flag=True, help="Don't show progress bar")
@common_boto3_options
def put_object(bucket, key, path, content_type, silent, **boto_options):
"""
Upload an object to an S3 bucket
To upload a file to /my-key.txt in the my-bucket bucket:
s3-credentials put-object my-bucket my-key.txt /path/to/file.txt
Use - to upload content from standard input:
echo "Hello" | s3-credentials put-object my-bucket hello.txt -
"""
s3 = make_client("s3", **boto_options)
size = None
extra_args = {}
if path == "-":
# boto needs to be able to seek
fp = io.BytesIO(sys.stdin.buffer.read())
if not silent:
size = fp.getbuffer().nbytes
else:
if not content_type:
content_type = mimetypes.guess_type(path)[0]
fp = click.open_file(path, "rb")
if not silent:
size = os.path.getsize(path)
if content_type is not None:
extra_args["ContentType"] = content_type
if not silent:
# Show progress bar
with click.progressbar(length=size, label="Uploading", file=sys.stderr) as bar:
s3.upload_fileobj(
fp, bucket, key, Callback=bar.update, ExtraArgs=extra_args
)
else:
s3.upload_fileobj(fp, bucket, key, ExtraArgs=extra_args)
@cli.command()
@click.argument("bucket")
@click.argument(
"objects",
nargs=-1,
required=True,
)
@click.option(
"--prefix",
help="Prefix to add to the files within the bucket",
)
@click.option("silent", "-s", "--silent", is_flag=True, help="Don't show progress bar")
@click.option("--dry-run", help="Show steps without executing them", is_flag=True)
@common_boto3_options
def put_objects(bucket, objects, prefix, silent, dry_run, **boto_options):
"""
Upload multiple objects to an S3 bucket
Pass one or more files to upload them:
s3-credentials put-objects my-bucket one.txt two.txt
These will be saved to the root of the bucket. To save to a different location
use the --prefix option:
s3-credentials put-objects my-bucket one.txt two.txt --prefix my-folder
This will upload them to my-folder/one.txt and my-folder/two.txt.
If you pass a directory it will be uploaded recursively:
s3-credentials put-objects my-bucket my-folder
This will create keys in my-folder/... in the S3 bucket.
To upload all files in a folder to the root of the bucket instead use this:
s3-credentials put-objects my-bucket my-folder/*
"""
s3 = make_client("s3", **boto_options)
if prefix and not prefix.endswith("/"):
prefix = prefix + "/"
total_size = 0
# Figure out files to upload and their keys
paths = [] # (path, key)
for obj in objects:
path = pathlib.Path(obj)
if path.is_file():
# Just use the filename as the key
paths.append((path, path.name))
elif path.is_dir():
# Key is the relative path within the directory
for p in path.glob("**/*"):
if p.is_file():
paths.append((p, str(p.relative_to(path.parent))))
def upload(path, key, callback=None):
final_key = key
if prefix:
final_key = prefix + key
if dry_run:
click.echo("{} => s3://{}/{}".format(path, bucket, final_key))
else:
s3.upload_file(
Filename=str(path), Bucket=bucket, Key=final_key, Callback=callback
)
if not silent and not dry_run:
total_size = sum(p[0].stat().st_size for p in paths)
with click.progressbar(
length=total_size,
label="Uploading {} ({} file{})".format(
format_bytes(total_size),
len(paths),
"s" if len(paths) != 1 else "",
),
file=sys.stderr,
) as bar:
for path, key in paths:
upload(path, key, bar.update)
else:
for path, key in paths:
upload(path, key)
@cli.command()
@click.argument("bucket")
@click.argument("key")
@click.option(
"output",
"-o",
"--output",
type=click.Path(file_okay=True, dir_okay=False, writable=True, allow_dash=False),
help="Write to this file instead of stdout",
)
@common_boto3_options
def get_object(bucket, key, output, **boto_options):
"""
Download an object from an S3 bucket
To see the contents of the object on standard output:
s3-credentials get-object my-bucket hello.txt
To save to a file:
s3-credentials get-object my-bucket hello.txt -o hello.txt
"""
s3 = make_client("s3", **boto_options)
if not output:
fp = sys.stdout.buffer
else:
fp = click.open_file(output, "wb")
s3.download_fileobj(bucket, key, fp)
@cli.command()
@click.argument("bucket")
@click.argument(
"keys",
nargs=-1,
required=False,
)
@click.option(
"output",
"-o",
"--output",
type=click.Path(file_okay=False, dir_okay=True, writable=True, allow_dash=False),
help="Write to this directory instead of one matching the bucket name",
)
@click.option(
"patterns",
"-p",
"--pattern",
multiple=True,
help="Glob patterns for files to download, e.g. '*/*.js'",
)
@click.option("silent", "-s", "--silent", is_flag=True, help="Don't show progress bar")
@common_boto3_options
def get_objects(bucket, keys, output, patterns, silent, **boto_options):
"""
Download multiple objects from an S3 bucket
To download everything, run:
s3-credentials get-objects my-bucket
Files will be saved to the current directory, preserving their paths from the bucket. Use -o dirname to save to a
different directory.
To download specific keys, list them:
s3-credentials get-objects my-bucket one.txt path/two.txt
To download files matching a glob-style pattern, use:
s3-credentials get-objects my-bucket --pattern '*/*.js'
"""
s3 = make_client("s3", **boto_options)
# If user specified keys and no patterns, use the keys they specified
keys_to_download = list(keys)
key_sizes = {}
if keys and not silent:
# Get sizes of those keys for progress bar
for key in keys:
try:
key_sizes[key] = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
except botocore.exceptions.ClientError:
# Ignore errors - they will be reported later
key_sizes[key] = 0
if (not keys) or patterns:
# Fetch all keys, then filter them if --pattern
all_key_infos = list(paginate(s3, "list_objects_v2", "Contents", Bucket=bucket))
if patterns:
filtered = []
for pattern in patterns:
filtered.extend(
fnmatch.filter((k["Key"] for k in all_key_infos), pattern)
)
keys_to_download.extend(filtered)
else:
keys_to_download.extend(k["Key"] for k in all_key_infos)
if not silent:
key_set = set(keys_to_download)
for key in all_key_infos:
if key["Key"] in key_set:
key_sizes[key["Key"]] = key["Size"]
output_dir = pathlib.Path(output or ".")
if not output_dir.exists():
output_dir.mkdir(parents=True)
errors = []
def download(key, callback=None):
# Ensure directory for key exists
key_dir = (output_dir / key).parent
if not key_dir.exists():
key_dir.mkdir(parents=True)
try:
s3.download_file(bucket, key, str(output_dir / key), Callback=callback)
except botocore.exceptions.ClientError as e:
errors.append("Not found: {}".format(key))
if not silent:
total_size = sum(key_sizes.values())
with click.progressbar(
length=total_size,
label="Downloading {} ({} file{})".format(
format_bytes(total_size),
len(key_sizes),
"s" if len(key_sizes) != 1 else "",
),
file=sys.stderr,
) as bar:
for key in keys_to_download:
download(key, bar.update)
else:
for key in keys_to_download:
download(key)
if errors:
raise click.ClickException("\n".join(errors))
@cli.command()
@click.argument("bucket")
@click.option(
"allowed_methods",
"-m",
"--allowed-method",
multiple=True,
help="Allowed method e.g. GET",
)
@click.option(
"allowed_headers",
"-h",
"--allowed-header",
multiple=True,
help="Allowed header e.g. Authorization",
)
@click.option(
"allowed_origins",
"-o",
"--allowed-origin",
multiple=True,
help="Allowed origin e.g. https://www.example.com/",
)
@click.option(
"expose_headers",
"-e",
"--expose-header",
multiple=True,
help="Header to expose e.g. ETag",
)
@click.option(
"max_age_seconds",
"--max-age-seconds",
type=int,
help="How long to cache preflight requests",
)
@common_boto3_options
def set_cors_policy(
bucket,
allowed_methods,
allowed_headers,
allowed_origins,
expose_headers,
max_age_seconds,
**boto_options,
):
"""
Set CORS policy for a bucket
To allow GET requests from any origin:
s3-credentials set-cors-policy my-bucket
To allow GET and PUT from a specific origin and expose ETag headers:
\b
s3-credentials set-cors-policy my-bucket \\
--allowed-method GET \\
--allowed-method PUT \\
--allowed-origin https://www.example.com/ \\
--expose-header ETag
"""
s3 = make_client("s3", **boto_options)
if not bucket_exists(s3, bucket):
raise click.ClickException("Bucket {} does not exists".format(bucket))
cors_rule = {
"ID": "set-by-s3-credentials",
"AllowedOrigins": allowed_origins or ["*"],
"AllowedHeaders": allowed_headers,
"AllowedMethods": allowed_methods or ["GET"],
"ExposeHeaders": expose_headers,
}
if max_age_seconds:
cors_rule["MaxAgeSeconds"] = max_age_seconds
try:
s3.put_bucket_cors(Bucket=bucket, CORSConfiguration={"CORSRules": [cors_rule]})
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
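# Editor's note: for the docstring example above, the cors_rule passed to
# put_bucket_cors would look like this sketch (Click collects repeated
# options as tuples):
#
#     {
#         "ID": "set-by-s3-credentials",
#         "AllowedOrigins": ("https://www.example.com/",),
#         "AllowedHeaders": (),
#         "AllowedMethods": ("GET", "PUT"),
#         "ExposeHeaders": ("ETag",),
#     }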
@cli.command()
@click.argument("bucket")
@common_boto3_options
def get_cors_policy(bucket, **boto_options):
"""
Get CORS policy for a bucket
s3-credentials get-cors-policy my-bucket
Returns the CORS policy for this bucket, if set, as JSON
"""
s3 = make_client("s3", **boto_options)
try:
response = s3.get_bucket_cors(Bucket=bucket)
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
click.echo(json.dumps(response["CORSRules"], indent=4, default=str))
@cli.command()
@click.argument("bucket")
@common_boto3_options
def get_bucket_policy(bucket, **boto_options):
"""
Get bucket policy for a bucket
s3-credentials get-bucket-policy my-bucket
Returns the bucket policy for this bucket, if set, as JSON
"""
s3 = make_client("s3", **boto_options)
try:
response = s3.get_bucket_policy(Bucket=bucket)
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
click.echo(json.dumps(json.loads(response["Policy"]), indent=4, default=str))
@cli.command()
@click.argument("bucket")
@click.option("--policy-file", type=click.File("r"))
@click.option("--allow-all-get", is_flag=True, help="Allow GET requests from all")
@common_boto3_options
def set_bucket_policy(bucket, policy_file, allow_all_get, **boto_options):
"""
Set bucket policy for a bucket
s3-credentials set-bucket-policy my-bucket --policy-file policy.json
Or to set a policy that allows GET requests from all:
s3-credentials set-bucket-policy my-bucket --allow-all-get
"""
s3 = make_client("s3", **boto_options)
if allow_all_get and policy_file:
raise click.ClickException("Cannot pass both --allow-all-get and --policy-file")
if allow_all_get:
policy = policies.bucket_policy_allow_all_get(bucket)
else:
policy = json.load(policy_file)
try:
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
click.echo("Policy set:\n" + json.dumps(policy, indent=4), err=True)
def without_response_metadata(data):
return dict(
(key, value) for key, value in data.items() if key != "ResponseMetadata"
)
@cli.command()
@click.argument("bucket")
@common_boto3_options
def debug_bucket(bucket, **boto_options):
"""
Run a bunch of diagnostics to help debug a bucket
s3-credentials debug-bucket my-bucket
"""
s3 = make_client("s3", **boto_options)
try:
bucket_acl = s3.get_bucket_acl(Bucket=bucket)
click.echo("Bucket ACL:")
click.echo(json.dumps(without_response_metadata(bucket_acl), indent=4))
except Exception as ex:
print(f"Error checking bucket ACL: {ex}")
try:
bucket_policy_status = s3.get_bucket_policy_status(Bucket=bucket)
click.echo("Bucket policy status:")
click.echo(
json.dumps(without_response_metadata(bucket_policy_status), indent=4)
)
except Exception as ex:
print(f"Error checking bucket policy status: {ex}")
try:
bucket_public_access_block = s3.get_public_access_block(Bucket=bucket)
click.echo("Bucket public access block:")
click.echo(
json.dumps(without_response_metadata(bucket_public_access_block), indent=4)
)
except Exception as ex:
print(f"Error checking bucket public access block: {ex}")
@cli.command()
@click.argument("bucket")
@click.argument(
"keys",
nargs=-1,
)
@click.option(
"--prefix",
help="Delete everything with this prefix",
)
@click.option(
"silent", "-s", "--silent", is_flag=True, help="Don't show informational output"
)
@click.option(
"dry_run",
"-d",
"--dry-run",
is_flag=True,
help="Show keys that would be deleted without deleting them",
)
@common_boto3_options
def delete_objects(bucket, keys, prefix, silent, dry_run, **boto_options):
"""
Delete one or more objects from an S3 bucket
Pass one or more keys to delete them:
s3-credentials delete-objects my-bucket one.txt two.txt
To delete all files matching a prefix, pass --prefix:
s3-credentials delete-objects my-bucket --prefix my-folder/
"""
s3 = make_client("s3", **boto_options)
if keys and prefix:
raise click.ClickException("Cannot pass both keys and --prefix")
if not keys and not prefix:
raise click.ClickException("Specify one or more keys or use --prefix")
if prefix:
# List all keys with this prefix
paginator = s3.get_paginator("list_objects_v2")
response_iterator = paginator.paginate(Bucket=bucket, Prefix=prefix)
keys = []
for response in response_iterator:
keys.extend([obj["Key"] for obj in response.get("Contents", [])])
if not silent:
click.echo(
"Deleting {} object{} from {}".format(
len(keys), "s" if len(keys) != 1 else "", bucket
),
err=True,
)
if dry_run:
click.echo("The following keys would be deleted:")
for key in keys:
click.echo(key)
return
for batch in batches(keys, 1000):
# Remove any rogue \r characters:
batch = [k.strip() for k in batch]
response = s3.delete_objects(
Bucket=bucket, Delete={"Objects": [{"Key": key} for key in batch]}
)
if response.get("Errors"):
click.echo(
"Errors deleting objects: {}".format(response["Errors"]), err=True
)
@cli.command()
@click.argument("bucket", required=True)
@common_boto3_options
def get_public_access_block(bucket, **boto_options):
"""
Get the public access settings for an S3 bucket
Example usage:
s3-credentials get-public-access-block my-bucket
"""
s3 = make_client("s3", **boto_options)
try:
response = s3.get_public_access_block(Bucket=bucket)
except botocore.exceptions.ClientError as e:
raise click.ClickException(e)
click.echo(json.dumps(response["PublicAccessBlockConfiguration"], indent=4))
@cli.command()
@click.argument("bucket", required=True)
@click.option(
"--block-public-acls",
type=bool,
default=None,
help="Block public ACLs for the bucket (true/false).",
)
@click.option(
"--ignore-public-acls",
type=bool,
default=None,
help="Ignore public ACLs for the bucket (true/false).",
)
@click.option(
"--block-public-policy",
type=bool,
default=None,
help="Block public bucket policies (true/false).",
)
@click.option(
"--restrict-public-buckets",
type=bool,
default=None,
help="Restrict public buckets (true/false).",
)
@click.option(
"--allow-public-access",
is_flag=True,
help="Set all public access settings to false (allows full public access).",
)
@common_boto3_options
def set_public_access_block(
bucket,
block_public_acls,
ignore_public_acls,
block_public_policy,
restrict_public_buckets,
allow_public_access,
**boto_options,
):
"""
Configure public access settings for an S3 bucket.
Example:
s3-credentials set-public-access-block my-bucket --block-public-acls false
To allow full public access to the bucket, use the --allow-public-access flag:
s3-credentials set-public-access-block my-bucket --allow-public-access
"""
s3 = make_client("s3", **boto_options)
# Default public access block configuration
public_access_block_config = {}
if allow_public_access:
# Set all settings to False if --allow-public-access is provided
public_access_block_config = {
"BlockPublicAcls": False,
"IgnorePublicAcls": False,
"BlockPublicPolicy": False,
"RestrictPublicBuckets": False,
}
else:
# Add values only if they are explicitly provided
if block_public_acls is not None:
public_access_block_config["BlockPublicAcls"] = block_public_acls
if ignore_public_acls is not None:
public_access_block_config["IgnorePublicAcls"] = ignore_public_acls
if block_public_policy is not None:
public_access_block_config["BlockPublicPolicy"] = block_public_policy
if restrict_public_buckets is not None:
public_access_block_config["RestrictPublicBuckets"] = (
restrict_public_buckets
)
if not public_access_block_config:
raise click.ClickException(
"No valid options provided. Use --help to see available options."
)
# Apply the public access block configuration to the bucket
s3.put_public_access_block(
Bucket=bucket, PublicAccessBlockConfiguration=public_access_block_config
)
click.echo(
f"Updated public access block settings for bucket '{bucket}': {public_access_block_config}",
err=True,
)
@cli.command()
@click.argument("bucket")
@click.option(
"-p",
"--port",
type=int,
default=8094,
help="Port to run the server on (default: 8094)",
)
@click.option(
"--host",
default="localhost",
help="Host to bind the server to (default: localhost)",
)
@click.option("--read-only", help="Only allow reading from the bucket", is_flag=True)
@click.option("--write-only", help="Only allow writing to the bucket", is_flag=True)
@click.option(
"--prefix", help="Restrict to keys starting with this prefix", default="*"
)
@click.option(
"extra_statements",
"--statement",
multiple=True,
type=StatementParam(),
help="JSON statement to add to the policy",
)
@click.option(
"-d",
"--duration",
type=DurationParam(),
required=True,
help="How long should credentials be valid for, e.g. 15m, 1h, 12h",
)
@common_boto3_options
def localserver(
bucket,
port,
host,
read_only,
write_only,
prefix,
extra_statements,
duration,
**boto_options,
):
"""
Start a localhost server that serves S3 credentials.
The server responds to GET requests on / with JSON containing temporary
AWS credentials that allow access to the specified bucket.
Credentials are cached and refreshed automatically based on the
--duration setting.
To start a server that serves read-only credentials for a bucket,
with credentials valid for 1 hour:
s3-credentials localserver my-bucket --read-only --duration 1h
To run on a different port:
s3-credentials localserver my-bucket --duration 1h --port 9000
"""
from . import localserver as localserver_module
if read_only and write_only:
raise click.ClickException(
"Cannot use --read-only and --write-only at the same time"
)
extra_statements = list(extra_statements)
permission = "read-write"
if read_only:
permission = "read-only"
if write_only:
permission = "write-only"
# Create AWS clients
iam = make_client("iam", **boto_options)
sts = make_client("sts", **boto_options)
s3 = make_client("s3", **boto_options)
# Verify bucket exists
if not bucket_exists(s3, bucket):
raise click.ClickException("Bucket does not exist: {}".format(bucket))
try:
localserver_module.run_server(
bucket=bucket,
port=port,
host=host,
permission=permission,
prefix=prefix,
duration=duration,
extra_statements=extra_statements,
iam=iam,
sts=sts,
)
except Exception as e:
raise click.ClickException("Failed to start server: {}".format(e))
def output(iterator, headers, nl, csv, tsv):
if nl:
for item in iterator:
click.echo(json.dumps(item, default=str))
elif csv or tsv:
writer = DictWriter(
sys.stdout, headers, dialect="excel-tab" if tsv else "excel"
)
writer.writeheader()
writer.writerows(fix_json(row) for row in iterator)
else:
for line in stream_indented_json(iterator):
click.echo(line)
def stream_indented_json(iterator, indent=2):
# We have to iterate two-at-a-time so we can know if we
# should output a trailing comma or if we have reached
# the last item.
current_iter, next_iter = itertools.tee(iterator, 2)
next(next_iter, None)
first = True
for item, next_item in itertools.zip_longest(current_iter, next_iter):
is_last = next_item is None
data = item
line = "{first}{serialized}{separator}{last}".format(
first="[\n" if first else "",
serialized=textwrap.indent(
json.dumps(data, indent=indent, default=str), " " * indent
),
separator="," if not is_last else "",
last="\n]" if is_last else "",
)
yield line
first = False
if first:
# We didn't output anything, so yield the empty list
yield "[]"
def paginate(service, method, list_key, **kwargs):
paginator = service.get_paginator(method)
for response in paginator.paginate(**kwargs):
yield from response.get(list_key) or []
def fix_json(row):
# If a value is a list, tuple or dict, JSON-encode it
return dict(
[
(
key,
(
json.dumps(value, indent=2, default=str)
if isinstance(value, (dict, list, tuple))
else value
),
)
for key, value in row.items()
]
)
def format_bytes(size):
for x in ("bytes", "KB", "MB", "GB", "TB"):
if size < 1024:
return "{:3.1f} {}".format(size, x)
size /= 1024
return "{:3.1f} PB".format(size)
def batches(all, batch_size):
return [all[i : i + batch_size] for i in range(0, len(all), batch_size)]
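# Editor's note: a minimal sketch of format_bytes() behaviour (values are
# illustrative):
#
#     assert format_bytes(500) == "500.0 bytes"
#     assert format_bytes(1536) == "1.5 KB"
#     assert format_bytes(3 * 1024 * 1024) == "3.0 MB"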
================================================
FILE: s3_credentials/localserver.py
================================================
"""
Local server for serving S3 credentials via HTTP.
"""
import datetime
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
import threading
import time
import click
from . import policies
from .cli import ensure_s3_role_exists
class CredentialCache:
"""Thread-safe credential cache that regenerates credentials on expiry."""
def __init__(
self, iam, sts, bucket, permission, prefix, duration, extra_statements
):
self.iam = iam
self.sts = sts
self.bucket = bucket
self.permission = permission
self.prefix = prefix
self.duration = duration
self.extra_statements = extra_statements
self._credentials = None
self._expiry_time = None
self._lock = threading.Lock()
self._generating = False
def _generate_policy(self):
"""Generate the IAM policy for bucket access."""
statements = []
if self.permission == "read-write":
statements.extend(policies.read_write_statements(self.bucket, self.prefix))
elif self.permission == "read-only":
statements.extend(policies.read_only_statements(self.bucket, self.prefix))
elif self.permission == "write-only":
statements.extend(policies.write_only_statements(self.bucket, self.prefix))
if self.extra_statements:
statements.extend(self.extra_statements)
return policies.wrap_policy(statements)
def _generate_credentials(self):
"""Generate new temporary credentials using STS assume_role."""
s3_role_arn = ensure_s3_role_exists(self.iam, self.sts)
policy_document = self._generate_policy()
credentials_response = self.sts.assume_role(
RoleArn=s3_role_arn,
RoleSessionName="s3.{permission}.{bucket}".format(
permission=self.permission,
bucket=self.bucket,
),
Policy=json.dumps(policy_document),
DurationSeconds=self.duration,
)
return credentials_response["Credentials"]
def get_credentials(self):
"""Get cached credentials, regenerating if expired or about to expire."""
current_time = time.time()
# Check if we need new credentials
with self._lock:
if self._credentials is not None and self._expiry_time is not None:
# Return cached credentials if still valid
if current_time < self._expiry_time:
return self._credentials
# Need to generate new credentials
# Check if another thread is already generating
if self._generating:
# Wait for the other thread to finish
while self._generating:
self._lock.release()
time.sleep(0.1)
self._lock.acquire()
return self._credentials
# Mark that we're generating
self._generating = True
try:
# Generate new credentials outside the lock
credentials = self._generate_credentials()
with self._lock:
self._credentials = credentials
# Set expiry time to duration from now
self._expiry_time = current_time + self.duration
self._generating = False
return credentials
except Exception:
with self._lock:
self._generating = False
raise
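# Editor's note: a minimal usage sketch for the cache above; iam and sts
# would be real boto3 clients, everything else here is hypothetical:
#
#     cache = CredentialCache(
#         iam=iam, sts=sts, bucket="my-bucket", permission="read-only",
#         prefix="*", duration=900, extra_statements=[],
#     )
#     creds = cache.get_credentials()  # cached until ~900 seconds have passed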
def make_credential_handler(credential_cache):
"""Create an HTTP request handler class with access to the credential cache."""
class CredentialHandler(BaseHTTPRequestHandler):
def log_message(self, format, *args):
# Log to stderr with timestamp
click.echo(
"{} - {}".format(
datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
format % args,
),
err=True,
)
def do_GET(self):
if self.path != "/":
self.send_response(404)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(json.dumps({"error": "Not found"}).encode())
return
try:
credentials = credential_cache.get_credentials()
response_data = {
"Version": 1,
"AccessKeyId": credentials["AccessKeyId"],
"SecretAccessKey": credentials["SecretAccessKey"],
"SessionToken": credentials["SessionToken"],
"Expiration": (
credentials["Expiration"].isoformat()
if hasattr(credentials["Expiration"], "isoformat")
else str(credentials["Expiration"])
),
}
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(json.dumps(response_data, indent=2).encode())
except Exception as e:
self.send_response(500)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(json.dumps({"error": str(e)}).encode())
return CredentialHandler
def run_server(
bucket,
port,
host,
permission,
prefix,
duration,
extra_statements,
iam,
sts,
):
"""Run the credential server."""
# Create credential cache
credential_cache = CredentialCache(
iam=iam,
sts=sts,
bucket=bucket,
permission=permission,
prefix=prefix,
duration=duration,
extra_statements=extra_statements,
)
# Pre-generate credentials to catch any errors early
click.echo("Generating initial credentials...", err=True)
credential_cache.get_credentials()
# Create and start server
handler = make_credential_handler(credential_cache)
server = HTTPServer((host, port), handler)
click.echo(
"Serving {} credentials for bucket '{}' at http://{}:{}/".format(
permission, bucket, host, port
),
err=True,
)
click.echo("Duration: {} seconds".format(duration), err=True)
click.echo("Press Ctrl+C to stop", err=True)
try:
server.serve_forever()
except KeyboardInterrupt:
click.echo("\nShutting down server...", err=True)
server.shutdown()
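# Editor's note: the {"Version": 1, ...} JSON served above matches the shape
# AWS tooling expects from an external credential source. A hypothetical
# client fetch against the default port, using only the standard library:
#
#     import json
#     import urllib.request
#
#     with urllib.request.urlopen("http://localhost:8094/") as response:
#         creds = json.load(response)
#     print(creds["AccessKeyId"], creds["Expiration"])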
================================================
FILE: s3_credentials/policies.py
================================================
def read_write(bucket, prefix="*", extra_statements=None):
statements = read_write_statements(bucket, prefix=prefix)
if extra_statements:
statements.extend(extra_statements)
return wrap_policy(statements)
def read_write_statements(bucket, prefix="*"):
# https://github.com/simonw/s3-credentials/issues/24
if not prefix.endswith("*"):
prefix += "*"
return read_only_statements(bucket, prefix) + [
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:DeleteObject"],
"Resource": ["arn:aws:s3:::{}/{}".format(bucket, prefix)],
}
]
def read_only(bucket, prefix="*", extra_statements=None):
statements = read_only_statements(bucket, prefix=prefix)
if extra_statements:
statements.extend(extra_statements)
return wrap_policy(statements)
def read_only_statements(bucket, prefix="*"):
# https://github.com/simonw/s3-credentials/issues/23
statements = []
if not prefix.endswith("*"):
prefix += "*"
if prefix != "*":
statements.append(
{
"Effect": "Allow",
"Action": ["s3:GetBucketLocation"],
"Resource": ["arn:aws:s3:::{}".format(bucket)],
}
)
statements.append(
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::{}".format(bucket)],
"Condition": {
"StringLike": {
# Note that prefix must end in / if user wants to limit to a folder
"s3:prefix": [prefix]
}
},
}
)
else:
# We can combine s3:GetBucketLocation and s3:ListBucket into one
statements.append(
{
"Effect": "Allow",
"Action": ["s3:ListBucket", "s3:GetBucketLocation"],
"Resource": ["arn:aws:s3:::{}".format(bucket)],
}
)
return statements + [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging",
],
"Resource": ["arn:aws:s3:::{}/{}".format(bucket, prefix)],
},
]
def write_only(bucket, prefix="*", extra_statements=None):
statements = write_only_statements(bucket, prefix=prefix)
if extra_statements:
statements.extend(extra_statements)
return wrap_policy(statements)
def write_only_statements(bucket, prefix="*"):
# https://github.com/simonw/s3-credentials/issues/25
if not prefix.endswith("*"):
prefix += "*"
return [
{
"Effect": "Allow",
"Action": ["s3:PutObject"],
"Resource": ["arn:aws:s3:::{}/{}".format(bucket, prefix)],
}
]
def wrap_policy(statements):
return {"Version": "2012-10-17", "Statement": statements}
def bucket_policy_allow_all_get(bucket):
return {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::{}/*".format(bucket)],
}
],
}
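# Editor's note: a quick sketch of how these helpers compose (the bucket name
# is hypothetical):
#
#     policy = read_only("my-bucket", prefix="docs/")
#     # -> {"Version": "2012-10-17", "Statement": [...]} where the statements
#     #    grant s3:GetBucketLocation, s3:ListBucket (with a Condition on
#     #    s3:prefix "docs/*") and the s3:GetObject* actions on
#     #    arn:aws:s3:::my-bucket/docs/*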
================================================
FILE: tests/conftest.py
================================================
import boto3
import logging
import os
import pytest
from moto import mock_aws
def pytest_addoption(parser):
parser.addoption(
"--integration",
action="store_true",
default=False,
help="run integration tests",
)
parser.addoption(
"--boto-logging",
action="store_true",
default=False,
help="turn on boto3 logging",
)
def pytest_configure(config):
config.addinivalue_line(
"markers",
"integration: mark test as integration test, only run with --integration",
)
def pytest_collection_modifyitems(config, items):
if config.getoption("--boto-logging"):
boto3.set_stream_logger("botocore.endpoint", logging.DEBUG)
if config.getoption("--integration"):
# Also run integration tests
return
skip_slow = pytest.mark.skip(reason="use --integration option to run")
for item in items:
if "integration" in item.keywords:
item.add_marker(skip_slow)
@pytest.fixture(scope="function")
def aws_credentials():
"""Mocked AWS Credentials for moto."""
os.environ["AWS_ACCESS_KEY_ID"] = "testing"
os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
os.environ["AWS_SECURITY_TOKEN"] = "testing"
os.environ["AWS_SESSION_TOKEN"] = "testing"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
@pytest.fixture(scope="function")
def moto_s3(aws_credentials):
with mock_aws():
client = boto3.client("s3", region_name="us-east-1")
client.create_bucket(Bucket="my-bucket")
yield client
@pytest.fixture(scope="function")
def moto_s3_populated(moto_s3):
for key in ("one.txt", "directory/two.txt", "directory/three.json"):
moto_s3.put_object(Bucket="my-bucket", Key=key, Body=key.encode("utf-8"))
yield moto_s3
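# Editor's note: a hypothetical test built on the fixtures above:
#
#     def test_bucket_has_three_objects(moto_s3_populated):
#         listed = moto_s3_populated.list_objects_v2(Bucket="my-bucket")
#         assert len(listed["Contents"]) == 3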
================================================
FILE: tests/test_dry_run.py
================================================
from click.testing import CliRunner
from s3_credentials.cli import cli
import pytest
import re
import textwrap
def assert_match_with_wildcards(pattern, input):
# Pattern language is simple: '*' becomes '.*?'
bits = pattern.split("*")
regex = "^{}$".format(".*?".join(re.escape(bit) for bit in bits))
print(regex)
match = re.compile(regex.strip(), re.DOTALL).match(input.strip())
if match is None:
# Build a useful message
message = "Pattern:\n{}\n\nDoes not match input:\n\n{}".format(pattern, input)
bad_bits = [bit for bit in bits if bit not in input]
if bad_bits:
message += "\nThese parts were not found in the input:\n\n"
for bit in bad_bits:
message += textwrap.indent("{}\n\n".format(bit), " ")
assert False, message
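# Editor's note: a tiny illustration of the matching above (the strings are
# hypothetical):
#
#     assert_match_with_wildcards("Would create bucket: *", "Would create bucket: 'demo'")
#     # passes, because each '*' is compiled to the non-greedy regex '.*?'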
@pytest.mark.parametrize(
"options,expected",
(
(
[],
(
"""Would create bucket: 'my-bucket'
Would create user: 's3.read-write.my-bucket' with permissions boundary: 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Would attach policy called 's3.read-write.my-bucket' to user 's3.read-write.my-bucket', details:*
Would call create access key for user 's3.read-write.my-bucket'"""
),
),
(
["--username", "frank"],
(
"""Would create bucket: 'my-bucket'
Would create user: 'frank' with permissions boundary: 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Would attach policy called 's3.read-write.my-bucket' to user 'frank', details:*
Would call create access key for user 'frank'"""
),
),
(
["--duration", "20m"],
(
"""Would create bucket: 'my-bucket'
Would ensure role: 's3-credentials.AmazonS3FullAccess'
Would assume role using following policy for 1200 seconds:*"""
),
),
(
["--public"],
(
"""Would create bucket: 'my-bucket'
... then add this public access block configuration:
{"BlockPublicAcls": false, "IgnorePublicAcls": false, "BlockPublicPolicy": false, "RestrictPublicBuckets": false}
... then attach the following bucket policy to it:*
Would create user: 's3.read-write.my-bucket' with permissions boundary: 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Would attach policy called 's3.read-write.my-bucket' to user 's3.read-write.my-bucket', details:*
Would call create access key for user 's3.read-write.my-bucket'"""
),
),
(
[
"--statement",
'{"Effect": "Allow", "Action": "textract:*", "Resource": "*"}',
],
(
"""Would create bucket: 'my-bucket'
Would create user: 's3.custom.my-bucket'
*"Action": "textract:*"""
),
),
),
)
def test_dry_run(options, expected):
runner = CliRunner()
result = runner.invoke(cli, ["create", "my-bucket", "--dry-run"] + options)
assert result.exit_code == 0, result.output
assert_match_with_wildcards(expected, result.output)
================================================
FILE: tests/test_integration.py
================================================
# These integration tests only run with "pytest --integration" -
# they execute live calls against AWS using environment variables
# and clean up after themselves
from click.testing import CliRunner
from s3_credentials.cli import bucket_exists, cli
import botocore
import boto3
import datetime
import json
import pytest
import secrets
import time
import urllib
# Mark all tests in this module with "integration":
pytestmark = pytest.mark.integration
@pytest.fixture(autouse=True)
def cleanup():
cleanup_any_resources()
yield
cleanup_any_resources()
def test_create_bucket_with_read_write(tmpdir):
bucket_name = "s3-credentials-tests.read-write.{}".format(secrets.token_hex(4))
# Bucket should not exist
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials = get_output("create", bucket_name, "-c")
credentials_decoded = json.loads(credentials)
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
).client("s3")
# Bucket should exist - found I needed to sleep(10) before put-object would work
time.sleep(10)
assert bucket_exists(s3, bucket_name)
# Use the credentials to write a file to that bucket
test_write = tmpdir / "test-write.txt"
test_write.write_text("hello", "utf-8")
get_output("put-object", bucket_name, "test-write.txt", str(test_write))
credentials_s3.put_object(
Body="hello".encode("utf-8"), Bucket=bucket_name, Key="test-write.txt"
)
# Use default s3 client to check that the write succeeded
get_object_response = s3.get_object(Bucket=bucket_name, Key="test-write.txt")
assert get_object_response["Body"].read() == b"hello"
# Check we can read the file using the credentials too
output = get_output("get-object", bucket_name, "test-write.txt")
assert output == "hello"
def test_create_bucket_read_only_duration_15():
bucket_name = "s3-credentials-tests.read-only.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output("create", bucket_name, "-c", "--duration", "15m", "--read-only")
)
assert set(credentials_decoded.keys()) == {
"AccessKeyId",
"SecretAccessKey",
"SessionToken",
"Expiration",
}
# Expiration should be ~15 minutes in the future
delta = (
datetime.datetime.fromisoformat(credentials_decoded["Expiration"])
- datetime.datetime.now(datetime.timezone.utc)
).total_seconds()
# Should be around about 900 seconds
assert 800 < delta < 1000
# Wait for everything to exist
time.sleep(10)
# Create client with these credentials
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
aws_session_token=credentials_decoded["SessionToken"],
).client("s3")
# Client should NOT be allowed to write objects
with pytest.raises(botocore.exceptions.ClientError):
credentials_s3.put_object(
Body="hello".encode("utf-8"), Bucket=bucket_name, Key="hello.txt"
)
# Write an object using root credentials
s3.put_object(
Body="hello read-only".encode("utf-8"),
Bucket=bucket_name,
Key="hello-read-only.txt",
)
# Client should be able to read this
assert (
read_file(credentials_s3, bucket_name, "hello-read-only.txt")
== "hello read-only"
)
def test_read_write_bucket_prefix_temporary_credentials():
bucket_name = "s3-credentials-tests.read-write-prefix.{}".format(
secrets.token_hex(4)
)
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output(
"create", bucket_name, "-c", "--duration", "15m", "--prefix", "my/prefix/"
)
)
# Wait for everything to exist
time.sleep(10)
# Create client with these credentials
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
aws_session_token=credentials_decoded["SessionToken"],
).client("s3")
# Write file with root credentials that I should not be able to see
s3.put_object(
Body="hello".encode("utf-8"),
Bucket=bucket_name,
Key="should-not-be-visible.txt",
)
# I should be able to write to and read from /my/prefix/file.txt
credentials_s3.put_object(
Body="hello".encode("utf-8"),
Bucket=bucket_name,
Key="my/prefix/file.txt",
)
assert read_file(credentials_s3, bucket_name, "my/prefix/file.txt") == "hello"
# Should NOT be able to read should-not-be-visible.txt
with pytest.raises(botocore.exceptions.ClientError):
read_file(credentials_s3, bucket_name, "should-not-be-visible.txt")
def test_read_write_bucket_prefix_permanent_credentials():
bucket_name = "s3-credentials-tests.rw-prefix-perm.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output("create", bucket_name, "-c", "--prefix", "my/prefix-2/")
)
# Wait for everything to exist
time.sleep(10)
# Create client with these credentials
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
).client("s3")
# Write file with root credentials that I should not be able to see
s3.put_object(
Body="hello".encode("utf-8"),
Bucket=bucket_name,
Key="should-not-be-visible.txt",
)
# I should be able to write to and read from my/prefix-2/file.txt
credentials_s3.put_object(
Body="hello".encode("utf-8"),
Bucket=bucket_name,
Key="my/prefix-2/file.txt",
)
assert read_file(credentials_s3, bucket_name, "my/prefix-2/file.txt") == "hello"
# Should NOT be able to read should-not-be-visible.txt
with pytest.raises(botocore.exceptions.ClientError):
read_file(credentials_s3, bucket_name, "should-not-be-visible.txt")
def test_list_bucket_including_with_prefix():
bucket_name = "s3-credentials-tests.lbucket.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(get_output("create", bucket_name, "-c"))
time.sleep(10)
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
).client("s3")
credentials_s3.put_object(
Body="one".encode("utf-8"),
Bucket=bucket_name,
Key="one/file.txt",
)
credentials_s3.put_object(
Body="two".encode("utf-8"),
Bucket=bucket_name,
Key="two/file.txt",
)
# Try list-bucket against everything
everything = json.loads(
get_output(
"list-bucket",
bucket_name,
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
)
)
assert [e["Key"] for e in everything] == ["one/file.txt", "two/file.txt"]
# Now use --prefix
prefix_output = json.loads(
get_output(
"list-bucket",
bucket_name,
"--prefix",
"one/",
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
)
)
assert len(prefix_output) == 1
assert prefix_output[0]["Key"] == "one/file.txt"
def test_prefix_read_only():
bucket_name = "s3-credentials-tests.pre-ro.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output("create", bucket_name, "-c", "--read-only", "--prefix", "prefix/")
)
time.sleep(10)
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
).client("s3")
# Should not be able to write objects
with pytest.raises(botocore.exceptions.ClientError):
credentials_s3.put_object(
Body="allowed".encode("utf-8"),
Bucket=bucket_name,
Key="prefix/allowed.txt",
)
# So we use root permissions to write these:
s3 = boto3.client("s3")
s3.put_object(
Body="denied".encode("utf-8"),
Bucket=bucket_name,
Key="denied.txt",
)
s3.put_object(
Body="allowed".encode("utf-8"),
Bucket=bucket_name,
Key="prefix/allowed.txt",
)
# list-bucket against everything should error
with pytest.raises(GetOutputError):
get_output(
"list-bucket",
bucket_name,
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
)
# list-bucket against --prefix prefix/ should work
items = json.loads(
get_output(
"list-bucket",
bucket_name,
"--prefix",
"prefix/",
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
)
)
assert [e["Key"] for e in items] == ["prefix/allowed.txt"]
# Should NOT be able to read "denied.txt"
with pytest.raises(botocore.exceptions.ClientError):
read_file(credentials_s3, bucket_name, "denied.txt")
# Should be able to read prefix/allowed.txt
assert read_file(credentials_s3, bucket_name, "prefix/allowed.txt") == "allowed"
def test_prefix_write_only():
bucket_name = "s3-credentials-tests.pre-wo.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output("create", bucket_name, "-c", "--write-only", "--prefix", "prefix/")
)
time.sleep(10)
credentials_s3 = boto3.session.Session(
aws_access_key_id=credentials_decoded["AccessKeyId"],
aws_secret_access_key=credentials_decoded["SecretAccessKey"],
).client("s3")
# Should not be able to write objects to root
with pytest.raises(botocore.exceptions.ClientError):
credentials_s3.put_object(
Body="denied".encode("utf-8"),
Bucket=bucket_name,
Key="denied.txt",
)
# Should be able to write them to prefix/
credentials_s3.put_object(
Body="allowed".encode("utf-8"),
Bucket=bucket_name,
Key="prefix/allowed2.txt",
)
# Use root permissions to verify the write
s3 = boto3.client("s3")
assert read_file(s3, bucket_name, "prefix/allowed2.txt") == "allowed"
# Should not be able to run list-bucket, even against the prefix
for options in ([], ["--prefix", "prefix/"]):
with pytest.raises(GetOutputError):
args = [
"list-bucket",
bucket_name,
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
] + options
get_output(*args)
# Should not be able to get-object
for key in ("denied.txt", "prefix/allowed2.txt"):
with pytest.raises(botocore.exceptions.ClientError):
read_file(credentials_s3, bucket_name, key)
class GetOutputError(Exception):
pass
def get_output(*args, input=None):
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli, args, catch_exceptions=False, input=input)
if result.exit_code != 0:
raise GetOutputError(result.output)
return result.stdout
def read_file(s3, bucket, path):
response = s3.get_object(Bucket=bucket, Key=path)
return response["Body"].read().decode("utf-8")
def cleanup_any_resources():
# Delete any users with .s3-credentials-tests. in their name
users = json.loads(get_output("list-users"))
users_to_delete = [
user["UserName"]
for user in users
if ".s3-credentials-tests." in user["UserName"]
]
if users_to_delete:
print("Deleting users: ", users_to_delete)
get_output("delete-user", *users_to_delete)
s3 = boto3.client("s3")
# Delete any buckets beginning s3-credentials-tests.
buckets = json.loads(get_output("list-buckets"))
buckets_to_delete = [
bucket["Name"]
for bucket in buckets
if bucket["Name"].startswith("s3-credentials-tests.")
]
for bucket in buckets_to_delete:
print("Deleting bucket: {}".format(bucket))
# Delete all objects in the bucket
boto3.resource("s3").Bucket(bucket).objects.all().delete()
# Delete the bucket
s3.delete_bucket(Bucket=bucket)
def test_public_bucket():
bucket_name = "s3-credentials-tests.public-bucket.{}".format(secrets.token_hex(4))
s3 = boto3.client("s3")
assert not bucket_exists(s3, bucket_name)
credentials_decoded = json.loads(
get_output("create", bucket_name, "-c", "--duration", "15m", "--public")
)
assert set(credentials_decoded.keys()) == {
"AccessKeyId",
"SecretAccessKey",
"SessionToken",
"Expiration",
}
# Wait for everything to exist
time.sleep(5)
# Use those credentials to upload a file
content = "<h1>Hello world</h1>"
get_output(
"put-object",
bucket_name,
"hello.html",
"-",
"--content-type",
"text/html",
"--access-key",
credentials_decoded["AccessKeyId"],
"--secret-key",
credentials_decoded["SecretAccessKey"],
"--session-token",
credentials_decoded["SessionToken"],
input=content,
)
# It should be publicly accessible
url = "https://s3.amazonaws.com/{}/hello.html".format(bucket_name)
print(url)
response = urllib.request.urlopen(url)
actual_content = response.read().decode("utf-8")
assert response.status == 200
assert response.headers["content-type"] == "text/html"
assert actual_content == content
================================================
FILE: tests/test_localserver.py
================================================
"""Tests for the localserver command and related functionality."""
import botocore
from click.testing import CliRunner
from s3_credentials.cli import cli
import datetime
import json
import pytest
from unittest.mock import Mock
def test_localserver_missing_duration():
runner = CliRunner()
result = runner.invoke(cli, ["localserver", "my-bucket"])
assert result.exit_code == 2
assert "Missing option" in result.output
assert "duration" in result.output.lower()
def test_localserver_invalid_duration():
runner = CliRunner()
result = runner.invoke(cli, ["localserver", "my-bucket", "--duration", "5s"])
assert result.exit_code == 2
assert "Duration must be between 15 minutes and 12 hours" in result.output
def test_localserver_read_only_write_only_conflict():
runner = CliRunner()
result = runner.invoke(
cli,
[
"localserver",
"my-bucket",
"--duration",
"15m",
"--read-only",
"--write-only",
],
)
assert result.exit_code == 1
assert "Cannot use --read-only and --write-only at the same time" in result.output
def test_localserver_bucket_not_exists(mocker):
boto3 = mocker.patch("boto3.client")
boto3.return_value = Mock()
boto3.return_value.head_bucket.side_effect = botocore.exceptions.ClientError(
error_response={}, operation_name=""
)
runner = CliRunner()
result = runner.invoke(
cli, ["localserver", "nonexistent-bucket", "--duration", "15m"]
)
assert result.exit_code == 1
assert "Bucket does not exist: nonexistent-bucket" in result.output
def test_credential_cache_generates_credentials(mocker):
from s3_credentials.localserver import CredentialCache
mock_iam = Mock()
mock_sts = Mock()
mock_sts.get_caller_identity.return_value = {"Account": "123456"}
mock_iam.get_role.return_value = {"Role": {"Arn": "arn:aws:iam::123456:role/test"}}
mock_sts.assume_role.return_value = {
"Credentials": {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"SessionToken": "session-token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
}
cache = CredentialCache(
iam=mock_iam,
sts=mock_sts,
bucket="test-bucket",
permission="read-only",
prefix="*",
duration=900, # 15 minutes
extra_statements=[],
)
credentials = cache.get_credentials()
assert credentials["AccessKeyId"] == "AKIAIOSFODNN7EXAMPLE"
assert credentials["SecretAccessKey"] == "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
assert credentials["SessionToken"] == "session-token"
mock_sts.assume_role.assert_called_once()
call_kwargs = mock_sts.assume_role.call_args[1]
assert call_kwargs["RoleArn"] == "arn:aws:iam::123456:role/test"
assert call_kwargs["RoleSessionName"] == "s3.read-only.test-bucket"
assert call_kwargs["DurationSeconds"] == 900
def test_credential_cache_caches_credentials(mocker):
from s3_credentials.localserver import CredentialCache
mock_iam = Mock()
mock_sts = Mock()
mock_sts.get_caller_identity.return_value = {"Account": "123456"}
mock_iam.get_role.return_value = {"Role": {"Arn": "arn:aws:iam::123456:role/test"}}
mock_sts.assume_role.return_value = {
"Credentials": {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "secret",
"SessionToken": "token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
}
cache = CredentialCache(
iam=mock_iam,
sts=mock_sts,
bucket="test-bucket",
permission="read-write",
prefix="*",
duration=900,
extra_statements=[],
)
# Get credentials twice
creds1 = cache.get_credentials()
creds2 = cache.get_credentials()
# Should be the same object (cached)
assert creds1 is creds2
# Should only have called assume_role once
assert mock_sts.assume_role.call_count == 1
def test_credential_cache_refreshes_after_duration(mocker):
from s3_credentials.localserver import CredentialCache
import time
mock_iam = Mock()
mock_sts = Mock()
mock_sts.get_caller_identity.return_value = {"Account": "123456"}
mock_iam.get_role.return_value = {"Role": {"Arn": "arn:aws:iam::123456:role/test"}}
mock_sts.assume_role.return_value = {
"Credentials": {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "secret",
"SessionToken": "token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
}
cache = CredentialCache(
iam=mock_iam,
sts=mock_sts,
bucket="test-bucket",
permission="read-write",
prefix="*",
duration=1, # 1 second for testing
extra_statements=[],
)
# Get credentials first time
cache.get_credentials()
assert mock_sts.assume_role.call_count == 1
# Wait for duration to expire
time.sleep(1.1)
# Get credentials again - should regenerate
cache.get_credentials()
assert mock_sts.assume_role.call_count == 2
@pytest.mark.parametrize(
"permission,expected_permission",
(
("read-write", "read-write"),
("read-only", "read-only"),
("write-only", "write-only"),
),
)
def test_credential_cache_permission_in_session_name(
mocker, permission, expected_permission
):
from s3_credentials.localserver import CredentialCache
mock_iam = Mock()
mock_sts = Mock()
mock_sts.get_caller_identity.return_value = {"Account": "123456"}
mock_iam.get_role.return_value = {"Role": {"Arn": "arn:aws:iam::123456:role/test"}}
mock_sts.assume_role.return_value = {
"Credentials": {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "secret",
"SessionToken": "token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
}
cache = CredentialCache(
iam=mock_iam,
sts=mock_sts,
bucket="my-bucket",
permission=permission,
prefix="*",
duration=900,
extra_statements=[],
)
cache.get_credentials()
call_kwargs = mock_sts.assume_role.call_args[1]
assert call_kwargs["RoleSessionName"] == f"s3.{expected_permission}.my-bucket"
def test_credential_cache_policy_generation(mocker):
from s3_credentials.localserver import CredentialCache
mock_iam = Mock()
mock_sts = Mock()
mock_sts.get_caller_identity.return_value = {"Account": "123456"}
mock_iam.get_role.return_value = {"Role": {"Arn": "arn:aws:iam::123456:role/test"}}
mock_sts.assume_role.return_value = {
"Credentials": {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "secret",
"SessionToken": "token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
}
cache = CredentialCache(
iam=mock_iam,
sts=mock_sts,
bucket="test-bucket",
permission="read-only",
prefix="*",
duration=900,
extra_statements=[],
)
cache.get_credentials()
call_kwargs = mock_sts.assume_role.call_args[1]
policy = json.loads(call_kwargs["Policy"])
assert policy["Version"] == "2012-10-17"
assert len(policy["Statement"]) == 2
# Should have ListBucket and GetObject statements
actions = []
for stmt in policy["Statement"]:
actions.extend(stmt["Action"])
assert "s3:ListBucket" in actions
assert "s3:GetObject" in actions
VALID_CREDENTIALS = {
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"SessionToken": "session-token",
"Expiration": datetime.datetime(2025, 12, 16, 12, 0, 0),
}
@pytest.mark.parametrize(
"path,credentials_return,credentials_error,expected_status,expected_body_contains",
[
# Success case: valid path, credentials returned
(
"/",
VALID_CREDENTIALS,
None,
200,
['"Version": 1', '"AccessKeyId"', '"SessionToken"'],
),
# 404 case: wrong path
(
"/wrong-path",
None,
None,
404,
["Not found"],
),
# 500 case: credential generation fails
(
"/",
None,
Exception("AWS Error"),
500,
["AWS Error"],
),
],
ids=["success", "wrong-path-404", "error-500"],
)
def test_credential_handler_responses(
path,
credentials_return,
credentials_error,
expected_status,
expected_body_contains,
):
from s3_credentials.localserver import make_credential_handler
import io
mock_cache = Mock()
if credentials_error:
mock_cache.get_credentials.side_effect = credentials_error
elif credentials_return:
mock_cache.get_credentials.return_value = credentials_return
handler_class = make_credential_handler(mock_cache)
handler = handler_class.__new__(handler_class)
handler.path = path
handler.wfile = io.BytesIO()
handler.request_version = "HTTP/1.1"
response_code = None
def mock_send_response(code):
nonlocal response_code
response_code = code
handler.send_response = mock_send_response
handler.send_header = lambda name, value: None
handler.end_headers = lambda: None
handler.do_GET()
assert response_code == expected_status
response_body = handler.wfile.getvalue().decode()
for expected in expected_body_contains:
assert expected in response_body
================================================
FILE: tests/test_s3_credentials.py
================================================
import botocore
from click.testing import CliRunner
import s3_credentials
from s3_credentials.cli import cli
import json
import os
import pathlib
import pytest
from unittest.mock import call, Mock
from botocore.stub import Stubber
@pytest.fixture
def stub_iam(mocker):
client = botocore.session.get_session().create_client("iam")
stubber = Stubber(client)
stubber.activate()
mocker.patch("s3_credentials.cli.make_client", return_value=client)
return stubber
@pytest.fixture
def stub_s3(mocker):
client = botocore.session.get_session().create_client("s3")
stubber = Stubber(client)
stubber.activate()
mocker.patch("s3_credentials.cli.make_client", return_value=client)
return stubber
@pytest.fixture
def stub_sts(mocker):
client = botocore.session.get_session().create_client("sts")
stubber = Stubber(client)
stubber.activate()
mocker.patch("s3_credentials.cli.make_client", return_value=client)
return stubber
def test_whoami(mocker, stub_sts):
stub_sts.add_response(
"get_caller_identity",
{
"UserId": "AEONAUTHOUNTOHU",
"Account": "123456",
"Arn": "arn:aws:iam::123456:user/user-name",
"ResponseMetadata": {},
},
)
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli, ["whoami"])
assert result.exit_code == 0
assert json.loads(result.output) == {
"UserId": "AEONAUTHOUNTOHU",
"Account": "123456",
"Arn": "arn:aws:iam::123456:user/user-name",
}
@pytest.mark.parametrize(
"option,expected",
(
(
"",
"[\n"
" {\n"
' "Path": "/",\n'
' "UserName": "NameA",\n'
' "UserId": "AID000000000000000001",\n'
' "Arn": "arn:aws:iam::000000000000:user/NameB",\n'
' "CreateDate": "2020-01-01 00:00:00+00:00"\n'
" },\n"
" {\n"
' "Path": "/",\n'
' "UserName": "NameA",\n'
' "UserId": "AID000000000000000000",\n'
' "Arn": "arn:aws:iam::000000000000:user/NameB",\n'
' "CreateDate": "2020-01-01 00:00:00+00:00"\n'
" }\n"
"]\n",
),
(
"--nl",
'{"Path": "/", "UserName": "NameA", "UserId": "AID000000000000000001", "Arn": "arn:aws:iam::000000000000:user/NameB", "CreateDate": "2020-01-01 00:00:00+00:00"}\n'
'{"Path": "/", "UserName": "NameA", "UserId": "AID000000000000000000", "Arn": "arn:aws:iam::000000000000:user/NameB", "CreateDate": "2020-01-01 00:00:00+00:00"}\n',
),
(
"--csv",
(
"UserName,UserId,Arn,Path,CreateDate,PasswordLastUsed,PermissionsBoundary,Tags\n"
"NameA,AID000000000000000001,arn:aws:iam::000000000000:user/NameB,/,2020-01-01 00:00:00+00:00,,,\n"
"NameA,AID000000000000000000,arn:aws:iam::000000000000:user/NameB,/,2020-01-01 00:00:00+00:00,,,\n"
),
),
(
"--tsv",
(
"UserName\tUserId\tArn\tPath\tCreateDate\tPasswordLastUsed\tPermissionsBoundary\tTags\n"
"NameA\tAID000000000000000001\tarn:aws:iam::000000000000:user/NameB\t/\t2020-01-01 00:00:00+00:00\t\t\t\n"
"NameA\tAID000000000000000000\tarn:aws:iam::000000000000:user/NameB\t/\t2020-01-01 00:00:00+00:00\t\t\t\n"
),
),
),
)
def test_list_users(option, expected, stub_iam):
stub_iam.add_response(
"list_users",
{
"Users": [
{
"Path": "/",
"UserName": "NameA",
"UserId": "AID000000000000000001",
"Arn": "arn:aws:iam::000000000000:user/NameB",
"CreateDate": "2020-01-01 00:00:00+00:00",
},
{
"Path": "/",
"UserName": "NameA",
"UserId": "AID000000000000000000",
"Arn": "arn:aws:iam::000000000000:user/NameB",
"CreateDate": "2020-01-01 00:00:00+00:00",
},
]
},
)
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli, ["list-users"] + ([option] if option else []))
assert result.exit_code == 0
assert result.output == expected
@pytest.mark.parametrize(
"options,expected",
(
(
[],
(
"[\n"
" {\n"
' "Name": "bucket-one",\n'
' "CreationDate": "2020-01-01 00:00:00+00:00"\n'
" },\n"
" {\n"
' "Name": "bucket-two",\n'
' "CreationDate": "2020-02-01 00:00:00+00:00"\n'
" }\n"
"]\n"
),
),
(
["--nl"],
'{"Name": "bucket-one", "CreationDate": "2020-01-01 00:00:00+00:00"}\n'
'{"Name": "bucket-two", "CreationDate": "2020-02-01 00:00:00+00:00"}\n',
),
(
["--nl", "bucket-one"],
'{"Name": "bucket-one", "CreationDate": "2020-01-01 00:00:00+00:00"}\n',
),
),
)
def test_list_buckets(stub_s3, options, expected):
stub_s3.add_response(
"list_buckets",
{
"Buckets": [
{
"Name": "bucket-one",
"CreationDate": "2020-01-01 00:00:00+00:00",
},
{
"Name": "bucket-two",
"CreationDate": "2020-02-01 00:00:00+00:00",
},
]
},
)
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli, ["list-buckets"] + options)
assert result.exit_code == 0
assert result.output == expected
def test_list_buckets_details(stub_s3):
stub_s3.add_response(
"list_buckets",
{
"Buckets": [
{
"Name": "bucket-one",
"CreationDate": "2020-01-01 00:00:00+00:00",
}
]
},
)
stub_s3.add_response(
"get_bucket_acl",
{
"Owner": {
"DisplayName": "swillison",
"ID": "36b2eeee501c5952a8ac119f9e5212277a4c01eccfa8d6a9d670bba1e2d5f441",
},
"Grants": [
{
"Grantee": {
"DisplayName": "swillison",
"ID": "36b2eeee501c5952a8ac119f9e5212277a4c01eccfa8d6a9d670bba1e2d5f441",
"Type": "CanonicalUser",
},
"Permission": "FULL_CONTROL",
}
],
"ResponseMetadata": {},
},
)
stub_s3.add_response(
"get_bucket_location",
{
"LocationConstraint": "us-west-2",
},
)
stub_s3.add_response(
"get_public_access_block",
{
"PublicAccessBlockConfiguration": {
"BlockPublicAcls": True,
"IgnorePublicAcls": True,
"BlockPublicPolicy": True,
"RestrictPublicBuckets": True,
},
},
)
stub_s3.add_response(
"get_bucket_website",
{
"IndexDocument": {"Suffix": "index.html"},
"ErrorDocument": {"Key": "error.html"},
},
)
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli, ["list-buckets", "--details"])
assert result.exit_code == 0
assert result.output == (
"[\n"
" {\n"
' "Name": "bucket-one",\n'
' "CreationDate": "2020-01-01 00:00:00+00:00",\n'
' "region": "us-west-2",\n'
' "bucket_acl": {\n'
' "Owner": {\n'
' "DisplayName": "swillison",\n'
' "ID": "36b2eeee501c5952a8ac119f9e5212277a4c01eccfa8d6a9d670bba1e2d5f441"\n'
" },\n"
' "Grants": [\n'
" {\n"
' "Grantee": {\n'
' "DisplayName": "swillison",\n'
' "ID": "36b2eeee501c5952a8ac119f9e5212277a4c01eccfa8d6a9d670bba1e2d5f441",\n'
' "Type": "CanonicalUser"\n'
" },\n"
' "Permission": "FULL_CONTROL"\n'
" }\n"
" ]\n"
" },\n"
' "public_access_block": {\n'
' "BlockPublicAcls": true,\n'
' "IgnorePublicAcls": true,\n'
' "BlockPublicPolicy": true,\n'
' "RestrictPublicBuckets": true\n'
" },\n"
' "bucket_website": {\n'
' "IndexDocument": {\n'
' "Suffix": "index.html"\n'
" },\n"
' "ErrorDocument": {\n'
' "Key": "error.html"\n'
" },\n"
' "url": "http://bucket-one.s3-website.us-west-2.amazonaws.com/"\n'
" }\n"
" }\n"
"]\n"
)
CUSTOM_POLICY = '{"custom": "policy", "bucket": "$!BUCKET_NAME!$"}'
READ_WRITE_POLICY = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetBucketLocation"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1"]}, {"Effect": "Allow", "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectLegalHold", "s3:GetObjectRetention", "s3:GetObjectTagging"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/*"]}, {"Effect": "Allow", "Action": ["s3:PutObject", "s3:DeleteObject"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/*"]}]}'
READ_ONLY_POLICY = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetBucketLocation"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1"]}, {"Effect": "Allow", "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectLegalHold", "s3:GetObjectRetention", "s3:GetObjectTagging"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/*"]}]}'
WRITE_ONLY_POLICY = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/*"]}]}'
PREFIX_POLICY = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:GetBucketLocation"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1"]}, {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1"], "Condition": {"StringLike": {"s3:prefix": ["my-prefix/*"]}}}, {"Effect": "Allow", "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectLegalHold", "s3:GetObjectRetention", "s3:GetObjectTagging"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/my-prefix/*"]}, {"Effect": "Allow", "Action": ["s3:PutObject", "s3:DeleteObject"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/my-prefix/*"]}]}'
EXTRA_STATEMENTS_POLICY = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetBucketLocation"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1"]}, {"Effect": "Allow", "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectLegalHold", "s3:GetObjectRetention", "s3:GetObjectTagging"], "Resource": ["arn:aws:s3:::pytest-bucket-simonw-1/*"]}, {"Effect": "Allow", "Action": ["s3:PutObject", "s3:DeleteObject"], "Res
SYMBOL INDEX (120 symbols across 8 files)
FILE: s3_credentials/cli.py
function bucket_exists (line 27) | def bucket_exists(s3, bucket):
function user_exists (line 35) | def user_exists(iam, username):
function common_boto3_options (line 43) | def common_boto3_options(fn):
function common_output_options (line 74) | def common_output_options(fn):
function cli (line 88) | def cli():
class PolicyParam (line 96) | class PolicyParam(click.ParamType):
method convert (line 100) | def convert(self, policy, param, ctx):
class DurationParam (line 126) | class DurationParam(click.ParamType):
method convert (line 130) | def convert(self, value, param, ctx):
class StatementParam (line 146) | class StatementParam(click.ParamType):
method convert (line 150) | def convert(self, statement, param, ctx):
function policy (line 190) | def policy(buckets, read_only, write_only, prefix, extra_statements, pub...
function create (line 299) | def create(
function whoami (line 592) | def whoami(**boto_options):
function list_users (line 603) | def list_users(nl, csv, tsv, **boto_options):
function list_roles (line 637) | def list_roles(role_names, details, nl, csv, tsv, **boto_options):
function list_user_policies (line 720) | def list_user_policies(usernames, **boto_options):
function list_buckets (line 750) | def list_buckets(buckets, details, nl, csv, tsv, **boto_options):
function delete_user (line 823) | def delete_user(usernames, **boto_options):
function make_client (line 860) | def make_client(service, access_key, secret_key, session_token, endpoint...
function ensure_s3_role_exists (line 896) | def ensure_s3_role_exists(iam, sts):
function list_bucket (line 942) | def list_bucket(bucket, prefix, urls, nl, csv, tsv, **boto_options):
function put_object (line 1001) | def put_object(bucket, key, path, content_type, silent, **boto_options):
function put_objects (line 1053) | def put_objects(bucket, objects, prefix, silent, dry_run, **boto_options):
function get_object (line 1135) | def get_object(bucket, key, output, **boto_options):
function get_objects (line 1178) | def get_objects(bucket, keys, output, patterns, silent, **boto_options):
function set_cors_policy (line 1304) | def set_cors_policy(
function get_cors_policy (line 1352) | def get_cors_policy(bucket, **boto_options):
function get_bucket_policy (line 1371) | def get_bucket_policy(bucket, **boto_options):
function set_bucket_policy (line 1392) | def set_bucket_policy(bucket, policy_file, allow_all_get, **boto_options):
function without_response_metadata (line 1416) | def without_response_metadata(data):
function debug_bucket (line 1425) | def debug_bucket(bucket, **boto_options):
function delete_objects (line 1480) | def delete_objects(bucket, keys, prefix, silent, dry_run, **boto_options):
function get_public_access_block (line 1531) | def get_public_access_block(bucket, **boto_options):
function set_public_access_block (line 1579) | def set_public_access_block(
function localserver (line 1675) | def localserver(
function output (line 1743) | def output(iterator, headers, nl, csv, tsv):
function stream_indented_json (line 1758) | def stream_indented_json(iterator, indent=2):
function paginate (line 1783) | def paginate(service, method, list_key, **kwargs):
function fix_json (line 1789) | def fix_json(row):
function format_bytes (line 1806) | def format_bytes(size):
function batches (line 1815) | def batches(all, batch_size):
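The command functions above are click commands attached to the cli group. A hedged sketch of driving them in-process the way the test suite does with click's CliRunner; the "policy" subcommand name and the --read-only flag are assumptions inferred from the policy(buckets, read_only, write_only, ...) signature:

from click.testing import CliRunner

from s3_credentials.cli import cli

runner = CliRunner()
# The subcommand and flag names are inferred, not confirmed here.
result = runner.invoke(cli, ["policy", "my-bucket", "--read-only"])
print(result.output)  # the generated JSON policy document on success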
FILE: s3_credentials/localserver.py
class CredentialCache (line 17) | class CredentialCache:
method __init__ (line 20) | def __init__(
method _generate_policy (line 35) | def _generate_policy(self):
method _generate_credentials (line 48) | def _generate_credentials(self):
method get_credentials (line 64) | def get_credentials(self):
function make_credential_handler (line 103) | def make_credential_handler(credential_cache):
function run_server (line 151) | def run_server(
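CredentialCache generates temporary credentials on demand and reuses them until their duration runs out (see the cache/refresh tests listed under tests/test_localserver.py below). A hypothetical re-implementation of that caching idea; the class name, constructor parameters, and refresh rule here are illustrative, not the module's actual API:

import datetime

class TinyCredentialCache:
    # Illustrative stand-in for CredentialCache: regenerate credentials
    # only once the previous set has outlived its requested duration.
    def __init__(self, generate, duration_seconds):
        self.generate = generate  # callable returning fresh credentials
        self.duration = duration_seconds
        self._creds = None
        self._expires_at = None

    def get_credentials(self):
        now = datetime.datetime.now(datetime.timezone.utc)
        if self._creds is None or now >= self._expires_at:
            self._creds = self.generate()
            self._expires_at = now + datetime.timedelta(seconds=self.duration)
        return self._creds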
FILE: s3_credentials/policies.py
function read_write (line 1) | def read_write(bucket, prefix="*", extra_statements=None):
function read_write_statements (line 8) | def read_write_statements(bucket, prefix="*"):
function read_only (line 21) | def read_only(bucket, prefix="*", extra_statements=None):
function read_only_statements (line 28) | def read_only_statements(bucket, prefix="*"):
function write_only (line 79) | def write_only(bucket, prefix="*", extra_statements=None):
function write_only_statements (line 86) | def write_only_statements(bucket, prefix="*"):
function wrap_policy (line 99) | def wrap_policy(statements):
function bucket_policy_allow_all_get (line 103) | def bucket_policy_allow_all_get(bucket):
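These helpers pair up: each *_statements() function yields a list of IAM statements, and wrap_policy() presumably adds the {"Version": ..., "Statement": [...]} envelope. A hedged sketch of composing them into a prefix-scoped read-only policy, assuming wrap_policy() behaves as described:

from s3_credentials import policies

# Long form: build the statements, then wrap them in the policy envelope.
policy_doc = policies.wrap_policy(
    policies.read_only_statements("my-s3-bucket", prefix="docs/*")
)

# Shortcut suggested by the read_only(bucket, prefix, extra_statements)
# signature above; the equivalence is an assumption.
same_doc = policies.read_only("my-s3-bucket", prefix="docs/*")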
FILE: tests/conftest.py
function pytest_addoption (line 8) | def pytest_addoption(parser):
function pytest_configure (line 23) | def pytest_configure(config):
function pytest_collection_modifyitems (line 30) | def pytest_collection_modifyitems(config, items):
function aws_credentials (line 43) | def aws_credentials():
function moto_s3 (line 53) | def moto_s3(aws_credentials):
function moto_s3_populated (line 61) | def moto_s3_populated(moto_s3):
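The fixtures above use moto to fake AWS (the conftest.py preview later in this document shows "from moto import mock_aws"). A hedged sketch of that fixture pattern; only the fixture names and the mock_aws import come from the file, the bodies below are assumptions:

import os

import boto3
import pytest
from moto import mock_aws

@pytest.fixture
def aws_credentials():
    # Dummy credentials so boto3 never picks up a real profile.
    os.environ["AWS_ACCESS_KEY_ID"] = "testing"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
    os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

@pytest.fixture
def moto_s3(aws_credentials):
    # All S3 calls inside a test hit moto's in-memory backend.
    with mock_aws():
        yield boto3.client("s3", region_name="us-east-1")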
FILE: tests/test_dry_run.py
function assert_match_with_wildcards (line 8) | def assert_match_with_wildcards(pattern, input):
function test_dry_run (line 79) | def test_dry_run(options, expected):
FILE: tests/test_integration.py
function cleanup (line 20) | def cleanup():
function test_create_bucket_with_read_write (line 26) | def test_create_bucket_with_read_write(tmpdir):
function test_create_bucket_read_only_duration_15 (line 55) | def test_create_bucket_read_only_duration_15():
function test_read_write_bucket_prefix_temporary_credentials (line 101) | def test_read_write_bucket_prefix_temporary_credentials():
function test_read_write_bucket_prefix_permanent_credentials (line 138) | def test_read_write_bucket_prefix_permanent_credentials():
function test_list_bucket_including_with_prefix (line 170) | def test_list_bucket_including_with_prefix():
function test_prefix_read_only (line 219) | def test_prefix_read_only():
function test_prefix_write_only (line 282) | def test_prefix_write_only():
class GetOutputError (line 328) | class GetOutputError(Exception):
function get_output (line 332) | def get_output(*args, input=None):
function read_file (line 341) | def read_file(s3, bucket, path):
function cleanup_any_resources (line 346) | def cleanup_any_resources():
function test_public_bucket (line 373) | def test_public_bucket():
FILE: tests/test_localserver.py
function test_localserver_missing_duration (line 12) | def test_localserver_missing_duration():
function test_localserver_invalid_duration (line 20) | def test_localserver_invalid_duration():
function test_localserver_read_only_write_only_conflict (line 27) | def test_localserver_read_only_write_only_conflict():
function test_localserver_bucket_not_exists (line 44) | def test_localserver_bucket_not_exists(mocker):
function test_credential_cache_generates_credentials (line 59) | def test_credential_cache_generates_credentials(mocker):
function test_credential_cache_caches_credentials (line 99) | def test_credential_cache_caches_credentials(mocker):
function test_credential_cache_refreshes_after_duration (line 136) | def test_credential_cache_refreshes_after_duration(mocker):
function test_credential_cache_permission_in_session_name (line 184) | def test_credential_cache_permission_in_session_name(
function test_credential_cache_policy_generation (line 219) | def test_credential_cache_policy_generation(mocker):
function test_credential_handler_responses (line 298) | def test_credential_handler_responses(
FILE: tests/test_s3_credentials.py
function stub_iam (line 14) | def stub_iam(mocker):
function stub_s3 (line 23) | def stub_s3(mocker):
function stub_sts (line 32) | def stub_sts(mocker):
function test_whoami (line 40) | def test_whoami(mocker, stub_sts):
function test_list_users (line 107) | def test_list_users(option, expected, stub_iam):
function test_list_buckets (line 166) | def test_list_buckets(stub_s3, options, expected):
function test_list_buckets_details (line 189) | def test_list_buckets_details(stub_s3):
function test_create (line 321) | def test_create(
function test_create_statement_error (line 375) | def test_create_statement_error(statement, expected_error):
function mocked_for_duration (line 383) | def mocked_for_duration(mocker):
function test_create_duration (line 405) | def test_create_duration(
function test_create_public (line 462) | def test_create_public(mocker):
function test_create_website (line 504) | def test_create_website(mocker):
function test_create_format_ini (line 548) | def test_create_format_ini(mocker):
function test_create_format_duration_ini (line 570) | def test_create_format_duration_ini(mocked_for_duration):
function test_list_user_policies (line 586) | def test_list_user_policies(mocker):
function test_delete_user (line 641) | def test_delete_user(mocker):
function test_get_cors_policy (line 682) | def test_get_cors_policy(mocker):
function test_set_cors_policy (line 765) | def test_set_cors_policy(mocker, options, expected_json):
function test_verify_create_policy_option (line 794) | def test_verify_create_policy_option(
function test_auth_option (line 844) | def test_auth_option(tmpdir, mocker, content, use_stdin):
function test_auth_option_errors (line 878) | def test_auth_option_errors(extra_option):
function test_policy (line 909) | def test_policy(options, expected):
function test_list_bucket (line 968) | def test_list_bucket(stub_s3, options, expected):
function test_list_bucket_empty (line 997) | def test_list_bucket_empty(stub_s3):
function stub_iam_for_list_roles (line 1007) | def stub_iam_for_list_roles(stub_iam):
function test_list_roles_details (line 1049) | def test_list_roles_details(stub_iam_for_list_roles, details):
function test_list_roles_csv (line 1080) | def test_list_roles_csv(stub_iam_for_list_roles):
function test_get_objects (line 1135) | def test_get_objects(moto_s3_populated, output, files, patterns, expecte...
function test_put_objects (line 1188) | def test_put_objects(moto_s3, args, expected, expected_output):
function test_delete_objects (line 1226) | def test_delete_objects(moto_s3_populated, args, expected, expected_error):
function test_delete_objects_dry_run (line 1249) | def test_delete_objects_dry_run(moto_s3_populated, arg):
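The stub_iam/stub_s3/stub_sts fixtures suggest the usual botocore stubbing pattern: hand tests a client whose responses are queued up rather than fetched from AWS. A hypothetical sketch of that pattern; the helper below is illustrative, not the fixtures' actual code:

import boto3
import botocore.stub

def make_stubbed_client(service):
    # Build a real client, then intercept its requests with a Stubber
    # so no network call ever leaves the test process.
    client = boto3.client(service, region_name="us-east-1")
    stubber = botocore.stub.Stubber(client)
    stubber.activate()
    return client, stubber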