Full Code of jpetazzo/shpod for AI

Repository: jpetazzo/shpod
Branch: main
Commit: b6888c044194
Files: 34
Total size: 62.5 KB

Directory structure:
shpod/

├── .github/
│   └── workflows/
│       └── automated-build.yaml
├── .gitignore
├── Brewfile.netlify
├── Dockerfile
├── README.md
├── addmount.c
├── bash_profile
├── bashrc
├── bore.sh
├── build.sh
├── dind.sh
├── docker-socket.sh
├── helm/
│   └── shpod/
│       ├── .helmignore
│       ├── Chart.yaml
│       ├── templates/
│       │   ├── NOTES.txt
│       │   ├── _helpers.tpl
│       │   ├── deployment.yaml
│       │   ├── persistentvolumeclaim.yaml
│       │   ├── rbac.yaml
│       │   ├── rolebinding.yaml
│       │   ├── service.yaml
│       │   └── serviceaccount.yaml
│       └── values.yaml
├── helper-curl
├── helper-unsupported
├── init.sh
├── kind.sh
├── motd
├── netlify.toml
├── setup-tailhist.sh
├── shpod.sh
├── shpod.yaml
├── tmux.conf
└── vimrc

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/automated-build.yaml
================================================
name: Automated Build

on:
  push:
    branches:
      - main

env:
  DOCKER_BUILDKIT: 1

# Note: this is copy-pasted and adapted from
# https://github.com/jpetazzo/workflows/blob/main/.github/workflows/automated-build.yaml
# I need to find an elegant way to manage the multi-target build 🤔

jobs:
  push:

    runs-on: ubuntu-latest
    if: github.event_name == 'push'

    permissions:
      contents: read
      packages: write

    steps:
      -
        name: Set environment variables
        run: |
          IMAGES=""
          if [ "${{ secrets.DOCKER_HUB_TOKEN }}" ]; then
            echo PUSH_TO_DOCKER_HUB=yes >> $GITHUB_ENV
            IMAGES="$IMAGES docker.io/${{ github.repository }}"
            if [ "${{ inputs.DOCKER_HUB_USERNAME }}" ]; then
              echo DOCKER_HUB_USERNAME="${{ inputs.DOCKER_HUB_USERNAME }}" >> $GITHUB_ENV
            else
              echo DOCKER_HUB_USERNAME="${{ github.repository_owner }}" >> $GITHUB_ENV
            fi
          fi
          if true; then
            echo PUSH_TO_GHCR=yes >> $GITHUB_ENV
            IMAGES="$IMAGES ghcr.io/${{ github.repository }}"
          fi
          echo 'IMAGES<<EOF' >> $GITHUB_ENV
          for IMAGE in $IMAGES; do
            echo $IMAGE >> $GITHUB_ENV
            if [ "$GITHUB_REF_TYPE" == "tag" ]; then
              echo $IMAGE:$GITHUB_REF_NAME >> $GITHUB_ENV
            fi
          done
          echo 'EOF' >> $GITHUB_ENV

      -
        uses: actions/checkout@v3

      -
        name: Log into Docker Hub
        if: env.PUSH_TO_DOCKER_HUB
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_TOKEN }}

      -
        name: Log into GitHub Container Registry
        if: env.PUSH_TO_GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}

      -
        uses: docker/setup-qemu-action@v2

      -
        uses: docker/setup-buildx-action@v2

      -
        uses: docker/build-push-action@v3
        with:
          platforms: ${{ inputs.PLATFORMS }}
          push: true
          tags: ${{ env.IMAGES }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      -
        uses: docker/build-push-action@v3
        with:
          platforms: ${{ inputs.PLATFORMS }}
          push: true
          target: vspod
          tags: jpetazzo/shpod:vspod,ghcr.io/jpetazzo/shpod:vspod
          cache-from: type=gha
          cache-to: type=gha,mode=max


================================================
FILE: .gitignore
================================================
/build


================================================
FILE: Brewfile.netlify
================================================
brew "helm"


================================================
FILE: Dockerfile
================================================
FROM --platform=$BUILDPLATFORM golang:alpine AS builder
RUN apk add curl git make
ARG BUILDARCH TARGETARCH
ENV BUILDARCH=$BUILDARCH \
    CGO_ENABLED=0 \
    GOARCH=$TARGETARCH \
    TARGETARCH=$TARGETARCH
COPY helper-* /bin/

FROM alpine AS addmount
RUN apk add build-base
COPY addmount.c .
RUN make addmount

# https://github.com/argoproj/argo-cd/releases/latest
FROM builder AS argocd
RUN helper-curl bin argocd \
    https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-@GOARCH

# https://github.com/warpstreamlabs/bento/releases
FROM builder AS bento
ARG BENTO_VERSION=1.3.0
RUN helper-curl tar bento \
    https://github.com/warpstreamlabs/bento/releases/download/v${BENTO_VERSION}/bento_${BENTO_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/coder/code-server/releases
FROM builder AS code-server
ARG CODE_SERVER_VERSION=4.105.1
RUN mkdir -p /code-server
RUN helper-curl tar "--directory=/code-server --strip-components=1" \
    https://github.com/coder/code-server/releases/download/v${CODE_SERVER_VERSION}/code-server-${CODE_SERVER_VERSION}-linux-@CODERARCH.tar.gz

# https://github.com/docker/compose/releases
FROM builder AS compose
ARG COMPOSE_VERSION=2.40.1
RUN helper-curl bin docker-compose \
    https://github.com/docker/compose/releases/download/v${COMPOSE_VERSION}/docker-compose-linux-@UARCH

# https://github.com/google/go-containerregistry/tree/main/cmd/crane
FROM builder AS crane
RUN go install github.com/google/go-containerregistry/cmd/crane@latest
RUN cp $(find bin -name crane) /usr/local/bin

# https://github.com/fluxcd/flux2/releases
FROM builder AS flux
ARG FLUX_VERSION=2.7.2
RUN helper-curl tar flux \
    https://github.com/fluxcd/flux2/releases/download/v$FLUX_VERSION/flux_${FLUX_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/tomnomnom/gron/releases
FROM builder AS gron
ARG GRON_VERSION=v0.7.1
RUN go install "-ldflags=-X main.gronVersion=$GRON_VERSION" github.com/tomnomnom/gron@$GRON_VERSION
RUN cp $(find bin -name gron) /usr/local/bin

# https://github.com/helmfile/helmfile/releases
FROM builder AS helmfile
ARG HELMFILE_VERSION=1.1.7
RUN helper-curl tar helmfile \
    https://github.com/helmfile/helmfile/releases/download/v${HELMFILE_VERSION}/helmfile_${HELMFILE_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/helm/helm/releases
FROM builder AS helm
ARG HELM_VERSION=3.19.0
RUN helper-curl tar "--strip-components=1 linux-@GOARCH/helm" \
    https://get.helm.sh/helm-v${HELM_VERSION}-linux-@GOARCH.tar.gz

# Use emulation instead of cross-compilation for that one.
# (The source is small enough, so I don't know if cross-compilation
# would be worth the effort.)
FROM alpine AS httping
RUN apk add build-base cmake gettext git musl-libintl ncurses-dev openssl-dev
RUN git clone https://github.com/folkertvanheusden/httping
WORKDIR httping
RUN sed -i s/60/0/ utils.c
#RUN echo "target_link_options(httping PUBLIC -static)" >> CMakeLists.txt
RUN cmake .
RUN make install BINDIR=/usr/local/bin

# https://github.com/simeji/jid/releases
FROM builder AS jid
ARG JID_VERSION=0.7.6
RUN go install github.com/simeji/jid/cmd/jid@v$JID_VERSION
RUN cp $(find bin -name jid) /usr/local/bin

# https://github.com/derailed/k9s/releases
FROM builder AS k9s
RUN helper-curl tar k9s \
    https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_@GOARCH.tar.gz

# https://github.com/kubernetes-sigs/kind/releases
FROM builder AS kind
ARG KIND_VERSION=v0.30.0
RUN helper-curl bin kind \
    https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-linux-@GOARCH

# https://github.com/kubernetes/kompose/releases
FROM builder AS kompose
RUN helper-curl bin kompose \
    https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-@GOARCH

# https://github.com/kubecolor/kubecolor/releases
FROM builder AS kubecolor
ARG KUBECOLOR_VERSION=0.5.2
RUN helper-curl tar kubecolor \
    https://github.com/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/kubernetes/kubernetes/releases
FROM builder AS kubectl
ARG KUBECTL_VERSION=1.34.1
RUN helper-curl tar "--strip-components=3 kubernetes/client/bin/kubectl" \
    https://dl.k8s.io/v${KUBECTL_VERSION}/kubernetes-client-linux-@GOARCH.tar.gz

# https://github.com/stackrox/kube-linter/releases
FROM builder AS kube-linter
ARG KUBELINTER_VERSION=v0.7.6
RUN go install golang.stackrox.io/kube-linter/cmd/kube-linter@$KUBELINTER_VERSION
RUN cp $(find bin -name kube-linter) /usr/local/bin

# https://github.com/doitintl/kube-no-trouble/releases
FROM builder AS kubent
ARG KUBENT_VERSION=0.7.2
RUN helper-curl tar kubent \
    https://github.com/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-@GOARCH.tar.gz

# https://github.com/bitnami-labs/sealed-secrets/releases
FROM builder AS kubeseal
ARG KUBESEAL_VERSION=0.32.2
RUN helper-curl tar kubeseal \
    https://github.com/bitnami-labs/sealed-secrets/releases/download/v$KUBESEAL_VERSION/kubeseal-$KUBESEAL_VERSION-linux-@GOARCH.tar.gz

# https://github.com/kubernetes-sigs/kustomize/releases
FROM builder AS kustomize
ARG KUSTOMIZE_VERSION=5.8.1
RUN helper-curl tar kustomize \
    https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v$KUSTOMIZE_VERSION/kustomize_v${KUSTOMIZE_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/kubernetes/minikube/releases
FROM builder AS minikube
ARG MINIKUBE_VERSION=v1.37.0
RUN git clone https://github.com/kubernetes/minikube --depth=1 --branch $MINIKUBE_VERSION
WORKDIR minikube
RUN make
RUN cp out/minikube /usr/local/bin/minikube

# https://ngrok.com/download
FROM builder AS ngrok
RUN helper-curl tar ngrok \
    https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-@GOARCH.tgz

# https://github.com/derailed/popeye/releases
FROM builder AS popeye
RUN helper-curl tar popeye \
    https://github.com/derailed/popeye/releases/latest/download/popeye_linux_@GOARCH.tar.gz

# https://github.com/regclient/regclient/releases
FROM builder AS regctl
ARG REGCLIENT_VERSION=0.9.2
RUN helper-curl bin regctl \
    https://github.com/regclient/regclient/releases/download/v$REGCLIENT_VERSION/regctl-linux-@GOARCH

# https://github.com/GoogleContainerTools/skaffold/releases
FROM builder AS skaffold
RUN helper-curl bin skaffold \
    https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-@GOARCH

# https://github.com/stern/stern/releases
FROM builder AS stern
ARG STERN_VERSION=1.33.0
RUN helper-curl tar stern \
    https://github.com/stern/stern/releases/download/v${STERN_VERSION}/stern_${STERN_VERSION}_linux_@GOARCH.tar.gz

# https://github.com/tilt-dev/tilt/releases
FROM builder AS tilt
ARG TILT_VERSION=0.35.2
RUN helper-curl tar tilt \
    https://github.com/tilt-dev/tilt/releases/download/v${TILT_VERSION}/tilt.${TILT_VERSION}.linux-alpine.@WTFARCH.tar.gz

# https://github.com/vmware-tanzu/velero/releases
FROM builder AS velero
ARG VELERO_VERSION=1.17.0
RUN helper-curl tar "--strip-components=1 velero-v${VELERO_VERSION}-linux-@GOARCH/velero" \
    https://github.com/vmware-tanzu/velero/releases/download/v${VELERO_VERSION}/velero-v${VELERO_VERSION}-linux-@GOARCH.tar.gz

# https://github.com/carvel-dev/ytt/releases
FROM builder AS ytt
ARG YTT_VERSION=0.52.1
RUN helper-curl bin ytt \
    https://github.com/carvel-dev/ytt/releases/download/v${YTT_VERSION}/ytt-linux-@GOARCH

# https://github.com/carvel-dev/kapp/releases
FROM builder AS kapp
ARG KAPP_VERSION=0.64.2
RUN helper-curl bin kapp \
    https://github.com/carvel-dev/kapp/releases/download/v${KAPP_VERSION}/kapp-linux-@GOARCH

FROM alpine AS shpod
ENV COMPLETIONS=/usr/share/bash-completion/completions
RUN apk add --no-cache apache2-utils bash bash-completion curl docker-cli docker-cli-compose docker-cli-buildx docker-engine file fzf gettext git iptables-legacy iputils jq libintl ncurses openssh openssl screen socat sudo tmux tree unzip vim yq

COPY --from=addmount    /addmount                     /usr/local/bin
COPY --from=argocd      /usr/local/bin/argocd         /usr/local/bin
COPY --from=bento       /usr/local/bin/bento          /usr/local/bin
COPY --from=compose     /usr/local/bin/docker-compose /usr/local/bin
COPY --from=crane       /usr/local/bin/crane          /usr/local/bin
COPY --from=flux        /usr/local/bin/flux           /usr/local/bin
COPY --from=gron        /usr/local/bin/gron           /usr/local/bin
COPY --from=helm        /usr/local/bin/helm           /usr/local/bin
COPY --from=helmfile    /usr/local/bin/helmfile       /usr/local/bin
COPY --from=httping     /usr/local/bin/httping        /usr/local/bin
COPY --from=jid         /usr/local/bin/jid            /usr/local/bin
COPY --from=k9s         /usr/local/bin/k9s            /usr/local/bin
COPY --from=kind        /usr/local/bin/kind           /usr/local/bin
COPY --from=kapp        /usr/local/bin/kapp           /usr/local/bin
COPY --from=kubectl     /usr/local/bin/kubectl        /usr/local/bin
COPY --from=kubecolor   /usr/local/bin/kubecolor      /usr/local/bin
COPY --from=kube-linter /usr/local/bin/kube-linter    /usr/local/bin
COPY --from=kubent      /usr/local/bin/kubent         /usr/local/bin
COPY --from=kubeseal    /usr/local/bin/kubeseal       /usr/local/bin
COPY --from=kustomize   /usr/local/bin/kustomize      /usr/local/bin
COPY --from=minikube    /usr/local/bin/minikube       /usr/local/bin
COPY --from=ngrok       /usr/local/bin/ngrok          /usr/local/bin
COPY --from=popeye      /usr/local/bin/popeye         /usr/local/bin
COPY --from=regctl      /usr/local/bin/regctl         /usr/local/bin
COPY --from=skaffold    /usr/local/bin/skaffold       /usr/local/bin
COPY --from=stern       /usr/local/bin/stern          /usr/local/bin
COPY --from=tilt        /usr/local/bin/tilt           /usr/local/bin
COPY --from=velero      /usr/local/bin/velero         /usr/local/bin
COPY --from=ytt         /usr/local/bin/ytt            /usr/local/bin

RUN set -e ; for BIN in \
    argocd \
    crane \
    flux \
    helm \
    helmfile \
    kapp \
    kind \
    kubectl \
    kube-linter \
    kustomize \
    minikube \
    regctl \
    skaffold \
    tilt \
    velero \
    ytt \
    ; do echo $BIN ; $BIN completion bash > $COMPLETIONS/$BIN.bash ; done ;\
    stern --completion bash > $COMPLETIONS/stern

RUN cd /tmp \
 && git clone https://github.com/ahmetb/kubectx \
 && cd kubectx \
 && mv kubectx /usr/local/bin/kctx \
 && mv kubens /usr/local/bin/kns \
 && mv completion/kubectx.bash $COMPLETIONS/kctx.bash \
 && mv completion/kubens.bash $COMPLETIONS/kns.bash \
 && cd .. \
 && rm -rf kubectx
RUN cd /tmp \
 && git clone https://github.com/jonmosco/kube-ps1 \
 && cp kube-ps1/kube-ps1.sh /etc/bash/ \
 && rm -rf kube-ps1

# Create user and finalize setup.

RUN echo k8s:x:1000: >> /etc/group \
 && echo k8s:x:1000:1000::/home/k8s:/bin/bash >> /etc/passwd \
 && sed -i 's/^docker:.*:$/\0k8s/' /etc/group \
 && echo "k8s ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/k8s \
 && mkdir /home/k8s \
 && chown -R k8s:k8s /home/k8s/ \
 && sed -i 's/#MaxAuthTries 6/MaxAuthTries 42/' /etc/ssh/sshd_config \
 && sed -i 's/AllowTcpForwarding no/AllowTcpForwarding yes/' /etc/ssh/sshd_config
ARG TARGETARCH
RUN \
 if [ "$TARGETARCH" != "386" ]; then \
 mkdir /tmp/krew \
 && cd /tmp/krew \
 && curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_$TARGETARCH.tar.gz | tar -zxf- \
 && sudo -u k8s -H ./krew-linux_$TARGETARCH install krew \
 && cd \
 && rm -rf /tmp/krew \
 ; fi
COPY --chown=1000:1000 bashrc /home/k8s/.bashrc
COPY --chown=1000:1000 bash_profile /home/k8s/.bash_profile
COPY --chown=1000:1000 vimrc /home/k8s/.vimrc
COPY --chown=1000:1000 tmux.conf /home/k8s/.tmux.conf
COPY motd /etc/motd
COPY setup-tailhist.sh /usr/local/bin
COPY docker-socket.sh /usr/local/bin
COPY dind.sh /usr/local/bin
COPY kind.sh /usr/local/bin
COPY bore.sh /usr/local/bin
VOLUME /var/lib/docker

# Generate a list of all installed versions.
RUN ( \
    ab -V | head -n1 ;\
    argocd version --client | head -n1 ;\
    echo "bento $(bento --version | head -n1)" ;\
    bash --version | head -n1 ;\
    curl --version | head -n1 ;\
    docker version --format="Docker {{.Client.Version}}" ;\
    envsubst --version | head -n1 ;\
    flux --version ;\
    gron --version ;\
    git --version ;\
    jq --version ;\
    ssh -V ;\
    tmux -V ;\
    yq --version ;\
    docker-compose version ;\
    echo "crane $(crane version)" ;\
    echo "Helm $(helm version --short)" ;\
    echo "Helmfile $(helmfile version -o=short | head -n1)" ;\
    httping --version ;\
    jid --version ;\
    echo "k9s $(k9s version | grep Version)" ;\
    kind version ;\
    kapp --version | head -n1 ;\
    echo "kubecolor $(kubecolor --kubecolor-version)" ;\
    echo "kubectl $(kubectl version --client | head -n1)" ;\
    echo "kube-linter $(kube-linter version)" ;\
    echo "kubent $(kubent --version 2>&1)" ;\
    kubeseal --version ;\
    echo "kustomize $(kustomize version | head -n1)" ;\
    minikube version | head -n1 ;\
    ngrok version ;\
    echo "popeye $(popeye version | grep Version)" ;\
    echo "regctl $(regctl version --format={{.VCSTag}})" ;\
    echo "skaffold $(skaffold version)" ;\
    echo "stern $(stern --version | grep ^version)" ;\
    echo "tilt $(tilt version)" ;\
    echo "velero $(velero version --client-only | grep Version)" ;\
    ) > versions.txt

COPY init.sh /
CMD ["/init.sh"]
EXPOSE 22/tcp
ENV GENERATE_PASSWORD_LENGTH=20

FROM node:20-slim AS nodejslibs
WORKDIR /output
RUN for LINKER in /lib64/ld-linux-x86-64.so.2 /lib/ld-linux-aarch64.so.1 /lib/ld-linux-armhf.so.3; do \
      if [ -f "$LINKER" ]; then \
        install -D "$LINKER" "./$LINKER" ;\
      fi ;\
    done
RUN mkdir -p lib
RUN for LIBDIR in x86_64-linux-gnu aarch64-linux-gnu arm-linux-gnueabihf; do \
      if [ -d "/lib/$LIBDIR" ]; then \
        cp -a "/lib/$LIBDIR" lib ;\
      fi ;\
    done

# Define an extra build target with "code-server" (VScode in the browser) installed
FROM shpod AS vspod
COPY --from=nodejslibs /output /
COPY --from=code-server /code-server /opt/code-server
RUN ln -s /opt/code-server/bin/code-server /usr/local/bin
RUN sudo -u k8s -H code-server --install-extension ms-azuretools.vscode-docker
RUN sudo -u k8s -H code-server --install-extension ms-kubernetes-tools.vscode-kubernetes-tools
CMD sudo -u k8s -H -E code-server --bind-addr 0:1789
EXPOSE 1789

# Define the default build target
FROM shpod


================================================
FILE: README.md
================================================
# shpod

**⚠️ Please listen carefully, as our ~~menu options~~
installation instructions have changed.**

~~Old instructions: `curl https://shpod.in | sh`~~

New instructions: use the Helm chart!

To get a shell in your Kubernetes cluster, with `cluster-admin` privileges:

```bash
helm upgrade --install --repo https://shpod.in/ shpod shpod \
  --set rbac.cluster.clusterRoles="{cluster-admin}"
kubectl wait deployment shpod --for=condition=Available
kubectl exec -ti deployment/shpod -- login -f k8s
```

## What's this?

Shpod ("Shell in a pod") is a tool to get a shell session with a ton
of tools useful when working with containers, Docker, and Kubernetes.

It's composed of two parts:

- a container image holding all the tools,
- a Helm chart making it easy to deploy on Kubernetes.

Its goal is to provide a normalized environment, to go
with the training materials at https://container.training/,
so that you can get all the tools you need regardless
of your exact Kubernetes setup.


## The shpod image

It's available as `jpetazzo/shpod` or `ghcr.io/jpetazzo/shpod`.

It's based on Alpine, and includes:

- ab (ApacheBench)
- bash
- bento
- crane
- curl
- Docker CLI
- Docker Compose
- envsubst
- fzf
- git
- gron
- Helm
- jid
- jq
- kubectl
- kubectx + kubens
- kube-linter
- kube-ps1
- kubeseal
- kustomize
- ngrok
- popeye
- regctl
- ship
- skaffold
- skopeo
- SSH
- stern
- tilt
- tmux
- yq
- ytt

It also includes completion for most of these tools.

When this image starts, it will behave differently depending on whether
it has a pseudo-terminal or not.

If it has a pseudo-terminal, it will spawn a shell.
You can access that shell by attaching to the container,
without having to bother with networking or password configuration.
You can see that mode in action by running one of the following commands:

```bash
docker run -ti jpetazzo/shpod
kubectl run --rm -ti shpod --image jpetazzo/shpod
```

If it does not have a pseudo-terminal, it will run an SSH server.
Depending on the values of some environment variables, it will
use a provided password or generate one, or use SSH public key
authentication (see below, "SSH access configuration").

You can see that mode in action by running the following command:

```bash
docker run jpetazzo/shpod
```

However, that mode will likely be more useful on Kubernetes, for instance:
```bash
kubectl create deployment shpod --image jpetazzo/shpod
kubectl expose deployment shpod --port 22 --type=NodePort
kubectl logs deployment/shpod
```

The last command should show you the password that was generated
for the `k8s` user:

```
Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
The key fingerprint is:
SHA256:xEZav2W/XkJ45KaZvxVLNfudttmVwzvAbd8v/b8jkA0 root@shpod-5965cbcfc9-f5p8m
The key's randomart image is:
+---[RSA 3072]----+
|        o        |
|       = .       |
|      . + . o ...|
|       o   E =  +|
|        S . * B+ |
|           o @o+B|
|            = =OO|
|             +o*@|
|              =B%|
+----[SHA256]-----+
Environment variable $PASSWORD not found. Generating a password.
PASSWORD=BlVweGRkEf1PQNdrhpjg
chpasswd: password for 'k8s' changed
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
```

In both cases, you can also access shpod by executing a new shell
in the existing container.

With Docker:
```bash
docker exec -ti <container-id> login -f k8s
```

With Kubernetes:
```bash
kubectl exec -ti deployment/shpod -- login -f k8s
```


## Multi-arch support

Shpod supports both Intel and ARM 64-bit architectures. The Dockerfile
in this repository should be able to support other architectures fairly
easily. If a given tool isn't available on the target architecture,
a dummy placeholder will be installed instead.
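For instance, a multi-architecture build of this image could be invoked with `docker buildx` along these lines (the platform list and tag are illustrative; the repository's `build.sh` may use different options):

```bash
docker buildx build --platform linux/amd64,linux/arm64 -t shpod:local .
```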


## SSH access configuration

The user is always `k8s` - this is currently hard-coded.

It is possible to log in either by using a password, or SSH public key
authentication.

If the `$PASSWORD` variable is set, it will define the password for
the `k8s` user.

If the `$AUTHORIZED_KEYS` variable is set, it should hold one or more
SSH public keys (one per line), and these keys will be added to the
`~/.ssh/authorized_keys` file.

If neither `$PASSWORD` nor `$AUTHORIZED_KEYS` are set, then a random
password will be generated. By default, that password will be 20 characters
long, using digits, lowercase, and uppercase letters.
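The actual generation logic lives in `init.sh`; a minimal sketch producing a password with that shape, using standard tools (not necessarily the exact commands shpod uses), could be:

```bash
# Draw random bytes, keep only digits and letters, and truncate to the
# desired length (20, matching the default of GENERATE_PASSWORD_LENGTH).
LENGTH=20
PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "$LENGTH")
echo "PASSWORD=$PASSWORD"
```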

It is possible to change the length of the generated password by setting
the variable `$GENERATE_PASSWORD_LENGTH`. If that variable is set to `0`,
no password will be generated.

⚠️ When a password is generated, it is displayed on stdout. This means
that if someone has access to the logs of the container, they will be
able to see that password.

⚠️ If the container restarts for any reason, a new password will be
generated. This is considered to be a feature.

When using shpod as part of a larger system, it is advised to set the
password (or the SSH keys) to avoid both warnings above.
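For example, to run the image with both a fixed password and SSH public key authentication (the port mapping, password, and key path are placeholders):

```bash
docker run -d -p 2222:22 \
  -e PASSWORD=my.fixed.password \
  -e AUTHORIZED_KEYS="$(cat ~/.ssh/id_ed25519.pub)" \
  jpetazzo/shpod
```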


## Kubernetes permissions

Shpod is meant to be used inside Kubernetes clusters. Once you are
running inside shpod, Kubernetes commands (like `kubectl` or `helm`)
will use "in-cluster configuration"; in other words, these commands
will use the ServiceAccount of the Pod that runs shpod.

By default, on most clusters, that ServiceAccount won't have many
permissions, meaning that you will get errors like the following:

```console
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
```

If you want to use Kubernetes commands within shpod, you need
to give permissions to that ServiceAccount.

Assuming that you are running shpod in the `default` namespace
and with the `default` ServiceAccount, you can run the following
command to give `cluster-admin` privileges (=all privileges) to
the commands running in shpod:

```bash
kubectl create clusterrolebinding shpod \
        --clusterrole=cluster-admin \
        --serviceaccount=default:default
```


## Special handling of kubeconfig

If you have a ConfigMap named `kubeconfig` in the Namespace
where shpod is running, it will extract the first file from
that ConfigMap and use it to populate `~/.kube/config`.

This lets you inject a custom kubeconfig file into shpod.
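One way to create such a ConfigMap from a local kubeconfig file (the key name shown here is arbitrary, since shpod uses the first file in the ConfigMap):

```bash
kubectl create configmap kubeconfig --from-file=config="$HOME/.kube/config"
```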


## Helm chart

Since November 2024, shpod also has a Helm chart!

This Helm chart offers the following features:

- enable or disable the SSH server (depending on your needs)
- put the `k8s` user home directory on a Persistent Volume
- list Roles and ClusterRoles to bind to the ServiceAccount

Here's an example of how to use it:

```bash
helm upgrade --install --repo https://shpod.in/ shpod shpod \
  --set service.type=NodePort \
  --set resources.requests.cpu=0.1 \
  --set resources.requests.memory=500M \
  --set resources.limits.cpu=1 \
  --set resources.limits.memory=500M \
  --set persistentVolume.enabled=true \
  --set "rbac.cluster.clusterRoles={cluster-admin}" \
  --set ssh.authorized_keys="$(cat ~/.ssh/*.pub)" \
  #
```


## I don't like Helm charts!

You can also use the following YAML manifest:

```bash
kubectl apply -f https://shpod.in/shpod.yaml
```

Then attach to the shpod pod:

```bash
kubectl attach --namespace=shpod -ti shpod
```

But you really should use the Helm chart instead.


## Why should I use the Helm chart?

I'm using shpod when teaching Kubernetes classes. I deploy a Kubernetes
cluster for each student, and they access the cluster by connecting with
SSH. In some cases, I deploy the clusters with `kubeadm` on top of "raw"
VMs, and the students connect directly to the nodes. In some cases, I'm
using managed Kubernetes clusters, and SSH access to the nodes may or
may not be possible; in any case, it will require different steps for
each cloud provider. To simplify things, I built shpod, and use it to
run an SSH server that the students connect to.

This approach works great for most Kubernetes classes, but there are a
few scenarios that are problematic; specifically, when the Node running
shpod is starved for resources, the shpod Pod might get evicted. This
causes all the files in the container to be deleted, which is not great
when it happens during a class.

The solution to that problem has multiple layers:

1. Specify resource requests and limits, in particular for memory, to
   avoid the pod being evicted by memory pressure on the node.
2. Place the `k8s` user home directory on a Persistent Volume, so that
   the content of the home directory isn't lost if the Pod gets evicted
   anyway or the underlying Node crashes or gets removed for any reason.
3. Make that Persistent Volume optional, so that shpod still works on
   clusters that don't have a Storage Class providing dynamic volume
   provisioning. In that case, fall back gracefully to an `emptyDir`
   volume, to prevent pod eviction by `kubectl drain` or by the cluster
   autoscaler, and to persist files across container restarts.

The Helm chart lets you pick easily which configuration works best for
you: with or without the SSH server, with or without a password or SSH
public keys, with or without a Persistent Volume, with or without
resource requests and limits...

## Experimental stuff

You can enable code-server (basically "VScode used from a browser")
and expose it over a `NodePort` like so:

```bash
helm upgrade --install --repo https://shpod.in/ shpod shpod \
  --set codeServer.enabled=true \
  --set persistentVolume.enabled=true \
  --set rbac.cluster.clusterRoles="{cluster-admin}" \
  --set resources.requests.cpu=0.1 \
  --set resources.requests.memory=500M \
  --set resources.limits.cpu=1 \
  --set resources.limits.memory=500M \
  --set service.type=NodePort \
  --set ssh.password=codeserver.support.is.beta.and.will.break
kubectl wait deployment shpod --for=condition=Available
```

This is super experimental; I'd like to refactor the image and the
Helm chart before going further. So if you use this, you should expect
it to break in the near future.



================================================
FILE: addmount.c
================================================
/*
 * This was taken from https://github.com/justincormack/addmount
 */

#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/mount.h>
#include <sys/syscall.h>

#ifndef O_PATH
#define O_PATH 010000000
#endif

int open_tree(int dirfd, const char *pathname, unsigned int flags) {
	return syscall(428, dirfd, pathname, flags);
}

#define OPEN_TREE_CLONE 1
#define AT_RECURSIVE 0x8000

int move_mount(int from_dirfd, const char *from_pathname, int to_dirfd, const char *to_pathname, unsigned int flags) {
	return syscall(429, from_dirfd, from_pathname, to_dirfd, to_pathname, flags);
}

#define MOVE_MOUNT_F_SYMLINKS		0x00000001
#define MOVE_MOUNT_F_AUTOMOUNTS		0x00000002
#define MOVE_MOUNT_F_EMPTY_PATH		0x00000004
#define MOVE_MOUNT_T_SYMLINKS		0x00000010
#define MOVE_MOUNT_T_AUTOMOUNTS		0x00000020
#define MOVE_MOUNT_T_EMPTY_PATH		0x00000040

int main(int argc, char *argv[]) {
	if (argc != 5) {
		printf("Usage: %s src_pid src_path dst_pid dst_path\n", argv[0]);
		exit(1);
	}
	const char *spid = argv[1];
	const char *src = argv[2];
	const char *dpid = argv[3];
	const char *dst = argv[4];

	// source mount namespace path
	char smpath[128];
	snprintf(smpath, 128, "/proc/%s/ns/mnt", spid);

	// source mount namespace fd
	int smfd = open(smpath, O_RDONLY);
	if (smfd == -1) {
		perror("open source mount namespace");
		exit(1);
	}

	// destination mount namespace path
	char dmpath[128];
	snprintf(dmpath, 128, "/proc/%s/ns/mnt", dpid);

	// destination mount namespace fd
	int dmfd = open(dmpath, O_RDONLY);
	if (dmfd == -1) {
		perror("open destination mount namespace");
		exit(1);
	}

	// enter source mount namespace
	if (setns(smfd, CLONE_NEWNS) == -1) {
		perror("setns source");
		exit(1);
	}
	close(smfd);

	// this creates a file descriptor equivalent to the mount --rbind tree at the source path
	int fd = open_tree(AT_FDCWD, src, OPEN_TREE_CLONE|AT_RECURSIVE);
	if (fd == -1) {
		if (errno == ENOSYS) {
			printf("open_tree ENOSYS: you need kernel 5.2 to run this code, please upgrade\n");
		}
		perror("open_tree");
		exit(1);
	}

	// enter destination mount namespace
	if (setns(dmfd, CLONE_NEWNS) == -1) {
		perror("setns destination");
		exit(1);
	}
	close(dmfd);

	// move the mount tree to the new path
	int e = move_mount(fd, "", AT_FDCWD, dst, MOVE_MOUNT_F_EMPTY_PATH);
	if (e == -1) {
		perror("move_mount");
		exit(1);
	}

	close(fd);

	return 0;
}


================================================
FILE: bash_profile
================================================
. ~/.bashrc


================================================
FILE: bashrc
================================================
# In theory, ~/.bash_profile only gets loaded for interactive login shells,
# meaning that it should run only once per session. That makes it the ideal
# place to start e.g. ssh-agent and do other one-time, expensive operations.
# On the other hand, aliases have to be defined in each shell, so they
# would typically be defined in ~/.bashrc. ~/.bashrc is also ideal for
# environment variables like PS1, or variables that we might want to redefine
# easily, since ~/.bashrc gets reloaded in each shell. Since ~/.bashrc isn't
# loaded in login shells, though, it makes sense to load it automatically
# at the end of ~/.bash_profile.
#
# With all that said, though, this will run in containers, and we can't be
# sure that there will be a proper login shell (for instance, if you run
# "kubectl exec -ti <pod> -- bash" or "docker exec -ti <container> bash"
# that will be a non-login interactive shell). Furthermore, when a shell is
# executed from code-server, it uses a kind of special script to reproduce
# the same default behavior (difference between login and non-login shells)
# but I don't know how much we can rely on that.
#
# It looks like the best course of action would be to run everything in
# ~/.bashrc, and invoke ~/.bashrc from ~/.bash_profile (or even make them
# identical with a symlink). We can revise that strategy later if needed.

###############################################################################
# First, if we don't have a kubeconfig file, let's create one.
# (This is necessary for kube_ps1 to operate correctly.)
if ! [ -f ~/.kube/config ]; then
  # If there is a ConfigMap named 'kubeconfig',
  # extract the kubeconfig file from there.
  # We need to access the Kubernetes API, so we'll do it
  # using the well-known endpoint.
  (
    # Make sure that the file will have locked-down permissions.
    # (Some tools like Helm will complain about it otherwise.)
    umask 077
    export KUBERNETES_SERVICE_HOST=kubernetes.default.svc
    export KUBERNETES_SERVICE_PORT=443
    if kubectl get configmap kubeconfig >&/dev/null; then
      echo "✏️ Downloading ConfigMap kubeconfig to .kube/config."
      kubectl get configmap kubeconfig -o json |
        jq -r '.data | to_entries | .[0].value' > ~/.kube/config
    else
      SADIR=/var/run/secrets/kubernetes.io/serviceaccount
      # If we have a ServiceAccount token, use it.
      if [ -r $SADIR/token ]; then
        echo "✏️ Generating .kube/config using ServiceAccount token."
        kubectl config set-cluster shpod \
                --server=https://kubernetes.default.svc \
                --certificate-authority=$SADIR/ca.crt
        kubectl config set-credentials shpod \
                --token=$(cat $SADIR/token)
        kubectl config set-context shpod \
                --cluster=shpod \
                --user=shpod
        kubectl config use-context shpod
      fi
    fi
  )
fi
# Note that we could also just set the following variables:
#export KUBERNETES_SERVICE_HOST=kubernetes.default.svc
#export KUBERNETES_SERVICE_PORT=443
# ...But for some reason, that doesn't work with impersonation.
# (i.e. using "kubectl get pods --as=someone.else")

###############################################################################
# Now, let's try some xterm magic to figure out if we have a light or dark
# background, and automatically set the kubecolor theme accordingly.
# Note that some terminals don't implement the special ANSI sequence that
# we're using. On these terminals, our color detection mechanisms will incur
# an extra 3 seconds delay when logging in, and kubecolor will be disabled.
# Affected terminals include:
# - MacOS Terminal
# - Linux virtual consoles
if [ ! "$KUBECOLOR_PRESET" ] && [ ! -f ~/.kube/color.yaml ]; then
  KUBECOLOR_PRESET=$(
    success=false
    exec < /dev/tty
    oldstty=$(stty -g)
    stty raw -echo min 0
    col=11      # background
    #          OSC   Ps  ;Pt ST
    echo -en "\033]${col};?\033\\" >/dev/tty  # echo opts differ w/ OSes
    result=
    if IFS=';' read -t 2 -r -d '\' color ; then
        result=$(echo $color | sed 's/^.*\;//;s/[^rgb:0-9a-f/]//g')
        success=true
    fi
    stty $oldstty
    if $success; then
      lumaformula=$(echo $result | sed 's/rgb:\(.*\)\/\(.*\)\/\(.*\)/(2*0x\1+1*0x\2+3*0x\3)\/6\/653/')
      luma=$((lumaformula))
      if [ "$luma" -lt 25 ]; then
        echo dark
      elif [ "$luma" -gt 75 ]; then
        echo light
      else
        echo unsure
      fi
    else
      echo timeout
    fi
  )
  case "$KUBECOLOR_PRESET" in
  dark|light)
    echo "🎨 Automatically setting KUBECOLOR_PRESET=$KUBECOLOR_PRESET."
    export KUBECOLOR_PRESET
    unset NO_COLOR
    ;;
  *)
    echo "🎨 Failed to detect terminal background color. KUBECOLOR_PRESET not set."
    unset KUBECOLOR_PRESET
    export NO_COLOR=kubecolor_disabled
    ;;
  esac
fi

###############################################################################
# Finally, set up prompt, PATH, completion, history... The classics :)
if [ -f /etc/HOSTIP ]; then
  HOSTIP=$(cat /etc/HOSTIP)
else
  HOSTIP="0.0.0.0"
fi
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
PS1="\e[1m\e[31m[\$HOSTIP] \e[0m(\$(kube_ps1)) \e[34m\u@\h\e[35m \w\e[0m\n$ "

export EDITOR=vim
export PATH="$HOME/.krew/bin:$PATH"

alias k=kubecolor
complete -F __start_kubectl k
. /usr/share/bash-completion/completions/kubectl.bash

export HISTSIZE=9999
export HISTFILESIZE=9999
shopt -s histappend
trap 'history -a' DEBUG
export HISTFILE=~/.history

trap exit TERM

is_kind_up() {
  kubectl config get-contexts kind-kind >/dev/null 2>&1
}

if [ "$CODESPACES" = "true" ]; then
  if ! is_kind_up; then
    echo "⏳️ KinD cluster isn't ready yet. Please wait."
    echo "💡 (Or press Ctrl-C if you don't want to wait.)"
    while ! is_kind_up; do
      sleep 1
    done
  fi
fi
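
# The background-detection heuristic above can be exercised standalone. This
# is a sketch using the same sed transform; the rgb value is a made-up example
# of what a terminal reports in response to the OSC 11 query:

```shell
# Sample OSC 11 reply payload for a pure white background (illustrative value).
result="rgb:ffff/ffff/ffff"
# Same transform as above: weighted sum of the three channels, scaled so that
# pure white yields 100 and pure black yields 0.
lumaformula=$(echo $result | sed 's/rgb:\(.*\)\/\(.*\)\/\(.*\)/(2*0x\1+1*0x\2+3*0x\3)\/6\/653/')
luma=$((lumaformula))
echo $luma   # 100
```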


================================================
FILE: bore.sh
================================================
#!/bin/sh
set -eu

CONTAINER_NAME=kind-control-plane
CONTAINER_PID=$(docker inspect $CONTAINER_NAME --format '{{.State.Pid}}')

docker exec $CONTAINER_NAME touch /borens
addmount $$ /proc/$$/ns/net $CONTAINER_PID /borens

docker exec $CONTAINER_NAME sh -c '
set -e
CNI_PLUGIN=$(cat /etc/cni/net.d/10-kindnet.conflist | jq -r ".plugins[0].type")
cat /etc/cni/net.d/10-kindnet.conflist | jq ".plugins[0] + {name: .name}" |
CNI_COMMAND=ADD CNI_CONTAINERID=bore CNI_NETNS=/borens CNI_IFNAME=bore CNI_PATH=/opt/cni/bin \
/opt/cni/bin/$CNI_PLUGIN
' > /tmp/bore.json

GATEWAY=$(jq -r .ip4.gateway < /tmp/bore.json)

ip route del default via $GATEWAY
ip route add 10.244.0.0/16 via $GATEWAY
ip route add 10.96.0.0/12 via $GATEWAY


================================================
FILE: build.sh
================================================
#!/bin/sh
mkdir -p build
cp shpod.sh shpod.yaml build

cd build
helm package ../helm/shpod
helm repo index .



================================================
FILE: dind.sh
================================================
#!/bin/sh
if [ $# = 0 ]; then
  if ! sudo mountpoint -q /var/lib/docker; then
    echo "/var/lib/docker doesn't seem to be a mountpoint."
    echo "Docker-in-Docker probably won't work. Aborting."
    exit 1
  fi
  if lsmod | grep -q ^iptable; then
    echo "Detected modules for legacy iptables."
    echo "Updating iptables to point to legacy binary."
    sudo ln -sf xtables-legacy-multi $(which iptables)
  fi
  echo "Starting Docker Engine in the background (logging to $HOME/docker.log)."
  nohup sudo sh -c "$0 dockerd &" >$HOME/docker.log
  exit 0
fi
#
# The rest of this script is taken verbatim from:
# https://raw.githubusercontent.com/moby/moby/refs/heads/master/hack/dind
#
set -e

# DinD: a wrapper script which allows docker to be run inside a docker container.
# Original version by Jerome Petazzoni <jerome@docker.com>
# See the blog post: https://www.docker.com/blog/docker-can-now-run-within-docker/
#
# This script should be executed inside a docker container in privileged mode
# ('docker run --privileged', introduced in docker 0.6).

# Usage: dind CMD [ARG...]

# apparmor sucks and Docker needs to know that it's in a container (c) @tianon
#
# Set the container env-var, so that AppArmor is enabled in the daemon and
# containerd when running docker-in-docker.
#
# see: https://github.com/containerd/containerd/blob/787943dc1027a67f3b52631e084db0d4a6be2ccc/pkg/apparmor/apparmor_linux.go#L29-L45
# see: https://github.com/moby/moby/commit/de191e86321f7d3136ff42ff75826b8107399497
export container=docker

# Allow AppArmor to work inside the container;
#
#     aa-status
#     apparmor filesystem is not mounted.
#     apparmor module is loaded.
#
#     mount -t securityfs none /sys/kernel/security
#
#     aa-status
#     apparmor module is loaded.
#     30 profiles are loaded.
#     30 profiles are in enforce mode.
#       /snap/snapd/18357/usr/lib/snapd/snap-confine
#       ...
#
# Note: https://0xn3va.gitbook.io/cheat-sheets/container/escaping/sensitive-mounts#sys-kernel-security
#
#     ## /sys/kernel/security
#
#     In /sys/kernel/security mounted the securityfs interface, which allows
#     configuration of Linux Security Modules. This allows configuration of
#     AppArmor policies, and so access to this may allow a container to disable
#     its MAC system.
#
# Given that we're running privileged already, this should not be an issue.
if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
	mount -t securityfs none /sys/kernel/security || {
		echo >&2 'Could not mount /sys/kernel/security.'
		echo >&2 'AppArmor detection and --privileged mode might break.'
	}
fi

# Mount /tmp (conditionally)
# /tmp must be 'exec,rw', and 'dev' to allow mknod to work for the
# pkg/archive/archive_linux_test.go tests.
if ! mountpoint -q /tmp; then
	mount -t tmpfs none /tmp
fi

# cgroup v2: enable nesting
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
	# move the processes from the root group to the /init group,
	# otherwise writing subtree_control fails with EBUSY.
	# An error during moving non-existent process (i.e., "cat") is ignored.
	mkdir -p /sys/fs/cgroup/init
	# this happens in a loop because things like "docker exec" on our dind
	# container will create new processes, which creates a race between our
	# moving everything to "init" and enabling subtree_control
	while ! {
		# move the processes from the root group to the /init group,
		# otherwise writing subtree_control fails with EBUSY.
		# An error during moving non-existent process (i.e., "cat") is ignored.
		xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
		# enable controllers
		sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
			> /sys/fs/cgroup/cgroup.subtree_control
	}; do true; done
fi

# Change mount propagation to shared to make the environment more similar to a
# modern Linux system, e.g. with SystemD as PID 1.
mount --make-rshared /

if [ $# -gt 0 ]; then
	exec "$@"
fi

echo >&2 'ERROR: No command specified.'
echo >&2 'You probably want to run hack/make.sh, or maybe a shell?'
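
# The sed one-liner in the cgroup v2 section above turns the space-separated
# list in cgroup.controllers into the "+controller" enable syntax expected by
# cgroup.subtree_control. A standalone illustration; the controller list here
# is a sample value, not read from a real cgroup:

```shell
# Sample contents of /sys/fs/cgroup/cgroup.controllers (illustrative).
controllers="cpuset cpu io memory pids"
# Same transform as in dind.sh: prefix every controller name with "+".
echo "$controllers" | sed -e 's/ / +/g' -e 's/^/+/'
# +cpuset +cpu +io +memory +pids
```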


================================================
FILE: docker-socket.sh
================================================
#!/bin/sh
#
# This script is not used at the moment (as of the April 2025 changes to
# add support for devcontainers) but it might be used in the future in
# an attempt to support "docker-outside-docker" instead of "docker-in-docker".
#
sudo nohup >/dev/null sh -c "
  socat unix-listen:/var/run/docker.sock,fork,user=k8s unix-connect:/var/run/docker-host.sock &
"


================================================
FILE: helm/shpod/.helmignore
================================================
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


================================================
FILE: helm/shpod/Chart.yaml
================================================
apiVersion: v2
name: shpod
version: 0.2.0
description: Shell in a Pod
keywords:
  - ssh
  - sshd
  - shell
type: application
home: https://github.com/jpetazzo/shpod
sources:
  - https://github.com/jpetazzo/shpod
maintainers:
  - name: Jérôme Petazzoni
    email: jerome.petazzoni@gmail.com


================================================
FILE: helm/shpod/templates/NOTES.txt
================================================
{{- if .Values.ssh.enabled }}
The SSH server is enabled. You can connect to it with an SSH client.
Use the following command to see how the SSH server is exposed:

kubectl get service {{ include "shpod.fullname" . }} --namespace {{ .Release.Namespace }}

You can access it with kubectl port-forward, like this:

kubectl port-forward service/{{ include "shpod.fullname" . }} --namespace {{ .Release.Namespace }} 2222:22

...And then connect using "ssh -l k8s -p 2222 localhost".
{{- else }}
The SSH server isn't enabled. You can attach to the shpod shell like this:

kubectl attach -ti deployment/{{ include "shpod.fullname" . }} --namespace {{ .Release.Namespace }}
{{- end }}

You can also execute a new shpod shell like this:

kubectl exec -ti deployment/{{ include "shpod.fullname" . }} --namespace {{ .Release.Namespace }} -- login -f k8s


================================================
FILE: helm/shpod/templates/_helpers.tpl
================================================
{{/*
Expand the name of the chart.
*/}}
{{- define "shpod.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "shpod.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "shpod.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "shpod.labels" -}}
helm.sh/chart: {{ include "shpod.chart" . }}
{{ include "shpod.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "shpod.selectorLabels" -}}
app.kubernetes.io/name: {{ include "shpod.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "shpod.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "shpod.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


================================================
FILE: helm/shpod/templates/deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "shpod.fullname" . }}
  labels:
    {{- include "shpod.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "shpod.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "shpod.labels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "shpod.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      initContainers:
        - name: copyhome
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: home
              mountPath: /copyhome
          command:
            - cp
            - -a
            - /home/k8s/.
            - /copyhome
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if eq .Values.ssh.enabled false }}
          stdin: true
          tty: true
          {{- end }}
          env:
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            {{- if .Values.ssh.password }}
            - name: PASSWORD
              value: "{{ .Values.ssh.password }}"
            {{- end }}
            {{- if .Values.ssh.authorized_keys }}
            - name: AUTHORIZED_KEYS
              value: |
                {{ .Values.ssh.authorized_keys | nindent 16 }}
            {{- end }}
          ports:
            - name: ssh
              containerPort: 22
              protocol: TCP
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: home
              mountPath: /home/k8s
            {{- with .Values.volumeMounts }}
              {{- toYaml . | nindent 12 }}
            {{- end }}
        {{ if .Values.codeServer.enabled }}
        - name: code-server
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:vspod"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            {{- if ( .Values.codeServer.password | default .Values.ssh.password ) }}
            - name: PASSWORD
              value: "{{ .Values.codeServer.password | default .Values.ssh.password }}"
            {{- end }}
          ports:
            - name: code-server
              containerPort: {{ .Values.codeServer.containerPort }}
              protocol: TCP
          resources:
            {{- toYaml .Values.codeServer.resources | nindent 12 }}
          volumeMounts:
            - name: home
              mountPath: /home/k8s
            {{- with .Values.volumeMounts }}
              {{- toYaml . | nindent 12 }}
            {{- end }}
      {{ end }}
      volumes:
        - name: home
          {{- if .Values.persistentVolume.enabled }}
          persistentVolumeClaim:
            claimName: {{ include "shpod.fullname" . }}
          {{- end }}
        {{- with .Values.volumes }}
          {{- toYaml . | nindent 8 }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


================================================
FILE: helm/shpod/templates/persistentvolumeclaim.yaml
================================================
{{- if .Values.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "shpod.fullname" . }}
  labels:
    {{- include "shpod.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  accessModes:
    {{ .Values.persistentVolume.accessModes | toYaml | nindent 4 }}
  resources:
    requests:
      storage: {{ .Values.persistentVolume.size }}
  {{- with .Values.persistentVolume.storageClass }}
  storageClassName: {{ . }}
  {{- end }}
{{- end }}


================================================
FILE: helm/shpod/templates/rbac.yaml
================================================
{{- if .Values.rbac.enabled -}}
{{- range .Values.rbac.cluster.clusterRoles }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{
    printf "%s-%s-%s" 
    $.Release.Namespace (include "shpod.fullname" $) .
    }}
  labels:
    {{- include "shpod.labels" $ | nindent 4 }}
  {{- with $.Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ . }}
subjects:
- kind: ServiceAccount
  name: {{ include "shpod.serviceAccountName" $ }}
  namespace: {{ $.Release.Namespace }}
{{- end }}
{{- range .Values.rbac.namespace.clusterRoles }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{
    printf "%s-clusterrole-%s" 
    (include "shpod.fullname" $) .
    }}
  labels:
    {{- include "shpod.labels" $ | nindent 4 }}
  {{- with $.Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ . }}
subjects:
- kind: ServiceAccount
  name: {{ include "shpod.serviceAccountName" $ }}
  namespace: {{ $.Release.Namespace }}
{{- end }}
{{- range .Values.rbac.namespace.roles }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{
    printf "%s-role-%s" 
    (include "shpod.fullname" $) .
    }}
  labels:
    {{- include "shpod.labels" $ | nindent 4 }}
  {{- with $.Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ . }}
subjects:
- kind: ServiceAccount
  name: {{ include "shpod.serviceAccountName" $ }}
  namespace: {{ $.Release.Namespace }}
{{- end }}
{{- end }}


================================================
FILE: helm/shpod/templates/rolebinding.yaml
================================================


================================================
FILE: helm/shpod/templates/service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: {{ include "shpod.fullname" . }}
  labels:
    {{- include "shpod.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: ssh
      protocol: TCP
      name: ssh
  {{ if .Values.codeServer.enabled }}
    - port: {{ .Values.codeServer.servicePort }}
      targetPort: {{ .Values.codeServer.containerPort }}
      protocol: TCP
      name: code-server
  {{ end }}
  selector:
    {{- include "shpod.selectorLabels" . | nindent 4 }}


================================================
FILE: helm/shpod/templates/serviceaccount.yaml
================================================
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "shpod.serviceAccountName" . }}
  labels:
    {{- include "shpod.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}


================================================
FILE: helm/shpod/values.yaml
================================================
# Default values for shpod.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# This sets the replica count. More information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
replicaCount: 1

# This sets the container image. More information can be found here: https://kubernetes.io/docs/concepts/containers/images/
image:
  repository: ghcr.io/jpetazzo/shpod
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: latest

# This is for the secrets used to pull an image from a private repository. More information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# This is to override the chart name.
nameOverride: ""
fullnameOverride: ""

# This section builds out the service account. More information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Automatically mount a ServiceAccount's API credentials?
  automount: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# This is for setting Kubernetes Annotations to a Pod.
# For more information, check out: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# This is for setting Kubernetes Labels to a Pod.
# For more information, check out: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

# This is for setting up a Service. More information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/
service:
  # This sets the Service type. More information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  type: ClusterIP
  # This sets the Service port. More information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports
  port: 22

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# This sets up the liveness and readiness probes. More information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe:
readinessProbe:

# Additional volumes on the output Deployment definition.
volumes: []
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false

# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true

nodeSelector: {}

tolerations: []

affinity: {}

# These values are inspired by the ones in the Prometheus chart.
# (https://artifacthub.io/packages/helm/prometheus-community/prometheus)
persistentVolume:
  ## If true, we will create and use a PVC for $HOME.
  ## If false, we'll use an emptyDir instead.
  enabled: false
  ## The remaining values are used only when "enabled" is true.
  accessModes:
    - ReadWriteOnce
  size: 1G
  storageClass: null

rbac:
  ## If rbac.enabled=false:
  ## no RoleBinding or ClusterRoleBinding will be created.
  enabled: true
  cluster:
    ## rbac.cluster.clusterRoles:
    ## list of ClusterRoles that should be granted to the ServiceAccount, cluster-wide.
    clusterRoles: []
  namespace:
    ## rbac.namespace.clusterRoles:
    ## list of ClusterRoles that should be granted to the ServiceAccount, only in the application Namespace.
    clusterRoles: [ view ]
    ## rbac.namespace.roles:
    ## list of Roles that should be granted to the ServiceAccount in the application Namespace.
    roles: []

ssh:
  ## If SSH is enabled, you can connect to shpod with an SSH client
  ## or with "kubectl exec".
  ## If SSH is disabled, you cannot connect to shpod with SSH,
  ## but you can use "kubectl exec" or "kubectl attach".
  enabled: true
  ## If authorized_keys is set, it will be added to the k8s account
  ## ~/.ssh/authorized_keys file. (It should be a string; for multiple
  ## keys, use a multi-line string.)
  authorized_keys: ""
  ## If password is set, it will be used to set the password for the k8s user.
  password: ""
  ## If neither authorized_keys nor password is set, a random password will be generated.

codeServer:
  ## If code-server is enabled, an extra container will be added in the Pod.
  ## That container will run code-server (basically VScode in a browser).
  ## An extra port will be added to the shpod Service.
  enabled: false
  servicePort: 80
  containerPort: 1789
  ## If the password is blank, it will default to ssh.password.
  password: ""
  resources: {}


================================================
FILE: helper-curl
================================================
#!/bin/sh

set -e

TYPE=$1
BIN_OR_ARGS=$2
URL=$3

case $TARGETARCH in
amd64)
  GOARCH=amd64
  UARCH=x86_64
  WTFARCH=x86_64
  CODERARCH=amd64
  ;;
arm64)
  GOARCH=arm64
  UARCH=aarch64
  WTFARCH=arm64
  CODERARCH=arm64
  ;;
arm)
  GOARCH=arm
  UARCH=armv7
  WTFARCH=arm
  CODERARCH=armv7l
  ;;
*)
  echo "Unsupported architecture: $TARGETARCH."
  GOARCH=$TARGETARCH
  UARCH=$TARGETARCH
  WTFARCH=$TARGETARCH
  CODERARCH=$TARGETARCH
  ;;
esac

mangle() {
  echo $1 | sed \
  -e s/@GOARCH/$GOARCH/g \
  -e s/@UARCH/$UARCH/g \
  -e s/@WTFARCH/$WTFARCH/g \
  -e s/@CODERARCH/$CODERARCH/g \
  #
}

URL=$(mangle $URL)
BIN_OR_ARGS=$(mangle "$BIN_OR_ARGS")

if ! curl -fsSLI $URL >/dev/null; then
  echo "URL not found: $URL"
  BIN=${BIN_OR_ARGS##*/}
  echo "Installing placeholder: $BIN"
  cp /bin/helper-unsupported /usr/local/bin/$BIN
  exit 0
fi

case "$TYPE" in
bin)
  BIN=$BIN_OR_ARGS
  curl -fsSL $URL > /usr/local/bin/$BIN
  chmod +x /usr/local/bin/$BIN
  ;;
tar)
  ARGS=$BIN_OR_ARGS
  curl -fsSL $URL | tar -zxvf- -C /usr/local/bin $ARGS
  ;;
*)
  echo "Unrecognized download type: $TYPE"
  exit 1
  ;;
esac
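
# The mangle() substitution above can be demonstrated standalone. This sketch
# hardcodes the arm64 mappings from the case statement; the URL is a made-up
# example, not one of the real download URLs used by the Dockerfile:

```shell
# Architecture mappings as computed by helper-curl for TARGETARCH=arm64.
GOARCH=arm64
UARCH=aarch64
WTFARCH=arm64
CODERARCH=arm64

# Same substitution as helper-curl: replace each @ARCH placeholder with the
# corresponding per-tool architecture string.
mangle() {
  echo $1 | sed \
  -e s/@GOARCH/$GOARCH/g \
  -e s/@UARCH/$UARCH/g \
  -e s/@WTFARCH/$WTFARCH/g \
  -e s/@CODERARCH/$CODERARCH/g \
  #
}

URL=$(mangle "https://example.com/tool-@GOARCH-@UARCH.tar.gz")
echo "$URL"
# https://example.com/tool-arm64-aarch64.tar.gz
```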


================================================
FILE: helper-unsupported
================================================
#!/bin/sh
echo "# ⚠️ $0 is not supported on this platform ($(uname -m))."


================================================
FILE: init.sh
================================================
#!/usr/bin/env bash
set -e

# If there is a tty, give us a shell.
# (This happens e.g. when we do "docker run -ti jpetazzo/shpod".)
# Otherwise, start an SSH server.
# (This happens e.g. when we use that image in a Pod in a Deployment.)

if tty >/dev/null; then
  exec login -f k8s
else
  if ! [ -f /etc/ssh/ssh_host_rsa_key ]; then
    ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
  fi
  if [ "$AUTHORIZED_KEYS" ]; then
    echo 'Environment variable $AUTHORIZED_KEYS found. Adding keys.'
    sudo -u k8s mkdir -p ~k8s/.ssh
    sudo -u k8s touch ~k8s/.ssh/authorized_keys
    while read KEY; do
      if [ "$KEY" ] && ! grep -q "$KEY" ~k8s/.ssh/authorized_keys; then
        echo "$KEY" >> ~k8s/.ssh/authorized_keys
      fi
    done <<< "$AUTHORIZED_KEYS"
  fi
  if [ "$PASSWORD" ]; then
    echo 'Environment variable $PASSWORD found. Setting user password.'
  else
    if [ ! "$AUTHORIZED_KEYS" -a "${GENERATE_PASSWORD_LENGTH-0}" -gt 0 ]; then
      echo 'Environment variable $PASSWORD not found. Generating a password.'
      PASSWORD=$(base64 /dev/urandom | tr -d +/ | head -c $GENERATE_PASSWORD_LENGTH)
      echo "PASSWORD=$PASSWORD"
    else
      echo 'Environment variable $PASSWORD not found. User password will not be set.'
    fi
  fi
  if [ "$PASSWORD" ]; then
    echo "k8s:$PASSWORD" | chpasswd
  fi
  exec /usr/sbin/sshd -D -e
fi
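The password generator used by init.sh can be tried in isolation; this sketch reuses the same pipeline (the length of 24 is arbitrary, not a value the script mandates):

```shell
# Generate a random password the way init.sh does: base64-encode random
# bytes, strip '+' and '/' so the result is alphanumeric, keep N chars.
GENERATE_PASSWORD_LENGTH=24
PASSWORD=$(base64 /dev/urandom | tr -d +/ | head -c $GENERATE_PASSWORD_LENGTH)
echo "length: ${#PASSWORD}"
```

Dropping '+' and '/' keeps the password shell- and URL-friendly; head terminates the otherwise endless base64 stream.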



================================================
FILE: kind.sh
================================================
#!/bin/sh
#
# This script tries to create a KinD cluster and then add
# a couple of routes so that the pod CIDR and the service
# CIDR are directly reachable from the local machine.
# This simplifies the Kubernetes learning experience, as
# pods and services become reachable directly from the
# local machine, without having to use port forwarding or
# other mechanisms. Note, however, that it only works on
# Linux machines!
#
kubectl config get-contexts kind-kind || kind create cluster
docker exec kind-control-plane true || docker start kind-control-plane
NODE_ADDR=$(
  docker inspect kind-control-plane |
  jq -r .[].NetworkSettings.Networks.kind.IPAddress
)
sudo ip route add 10.244.0.0/24 via $NODE_ADDR
sudo ip route add 10.96.0.0/12 via $NODE_ADDR
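The jq filter that kind.sh uses to pull the node address out of `docker inspect` can be sketched against a trimmed, hypothetical sample of that output (the address and the surrounding JSON are made up, reduced to the fields the filter reads):

```shell
# Hypothetical, trimmed 'docker inspect' output for the KinD node;
# only the fields the jq filter in kind.sh traverses are kept.
SAMPLE='[{"NetworkSettings":{"Networks":{"kind":{"IPAddress":"172.18.0.2"}}}}]'
NODE_ADDR=$(echo "$SAMPLE" | jq -r '.[].NetworkSettings.Networks.kind.IPAddress')
echo "$NODE_ADDR"
```

`-r` strips the JSON quotes so the value can be passed straight to `ip route add ... via $NODE_ADDR`.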



================================================
FILE: motd
================================================

🐚 Welcome to shpod - SHell in a POD.
🔎 Check "/versions.txt" to see the list of included tools.
🔗 See https://github.com/jpetazzo/shpod for more information.
📦️ You can install extra packages with 'sudo apk add PKGNAME'.



================================================
FILE: netlify.toml
================================================
[build]
  publish = "build/"
  command = "./build.sh"

[[redirects]]
  from = "/"
  to = "/shpod.sh"
  status = 200



================================================
FILE: setup-tailhist.sh
================================================
#!/bin/sh
set -ex
mkdir /tmp/tailhist
cd /tmp/tailhist
WEBSOCKETD_VERSION=0.4.1
wget https://github.com/joewalnes/websocketd/releases/download/v$WEBSOCKETD_VERSION/websocketd-$WEBSOCKETD_VERSION-linux_amd64.zip
unzip websocketd-$WEBSOCKETD_VERSION-linux_amd64.zip
curl https://raw.githubusercontent.com/jpetazzo/container.training/main/prepare-labs/lib/tailhist.html > index.html
kubectl patch service shpod --namespace shpod -p "
spec:
  ports:
  - name: tailhist
    port: 1088
    targetPort: 1088
    nodePort: 30088
    protocol: TCP
"
./websocketd --port=1088 --staticdir=. sh -c "
  tail -n +1 -f $HOME/.history ||
  echo 'Could not read history file. Perhaps you need to \"chmod +r .history\"?'
  "  


================================================
FILE: shpod.sh
================================================
#!/bin/sh
# For more information about shpod, check it out on GitHub:
# https://github.com/jpetazzo/shpod
if [ -f shpod.yaml ]; then
  YAML=shpod.yaml
else
  YAML=https://raw.githubusercontent.com/jpetazzo/shpod/main/shpod.yaml
fi
if [ "$(kubectl get pod --namespace=shpod shpod --ignore-not-found -o jsonpath={.status.phase})" = "Running" ]; then
  echo "Shpod is already running. Starting a new shell with 'kubectl exec'."
  echo "(Note: if the main invocation of shpod exits, all others will be terminated.)"
  kubectl exec -ti --namespace=shpod shpod -- bash -l
  if [ $? = 137 ]; then
    echo "Shpod was terminated by SIGKILL. This will happen when the main invocation"
    echo "of shpod exits (all processes started by 'kubectl exec' are then terminated)."
  fi
  exit 0
fi
echo "Applying YAML: $YAML..."
kubectl apply -f $YAML
echo "Waiting for pod to be ready..."
kubectl wait --namespace=shpod --for condition=Ready pod/shpod
echo "Attaching to the pod..."
kubectl attach --namespace=shpod -ti shpod </dev/tty
echo "Deleting pod..."
echo "
Note: it's OK to press Ctrl-C if this takes too long and you're impatient.
Cleanup will continue in the background. However, if you want to restart
shpod, you might have to wait a bit (about 30 seconds).
"
kubectl delete -f $YAML --now
echo "Done."


================================================
FILE: shpod.yaml
================================================
apiVersion: v1
kind: Namespace
metadata:
  name: shpod
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shpod
  namespace: shpod
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: shpod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: shpod
  namespace: shpod
---
apiVersion: v1
kind: Pod
metadata:
  name: shpod
  namespace: shpod
spec:
  serviceAccountName: shpod
  containers:
  - name: shpod
    image: jpetazzo/shpod
    stdin: true
    tty: true
    env:
    - name: HOSTIP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP


================================================
FILE: tmux.conf
================================================
set -g status-style bg=blue,fg=white,bold
set-option -g history-limit 1000000


================================================
FILE: vimrc
================================================
syntax on
set autoindent
set expandtab
set number
set shiftwidth=2
set softtabstop=2
set nowrap
SYMBOL INDEX (3 symbols across 1 files)

FILE: addmount.c
  function open_tree (line 21) | int open_tree(int dirfd, const char *pathname, unsigned int flags) {
  function move_mount (line 28) | int move_mount(int from_dirfd, const char *from_pathname, int to_dirfd, ...
  function main (line 39) | int main(int argc, char *argv[]) {
