Full Code of telekom/das-schiff for AI

Repository: telekom/das-schiff
Branch: main
Commit: 089eaee5ff27
Files: 10
Total size: 57.2 KB

Directory structure:
gitextract_zgdi21db/

├── CODEOWNERS
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── NOTICE
├── README.md
├── cluster-components-history.md
├── schiff-liquid-metal.md
├── schiff-pure-metal.md
└── templates/
    └── file-header.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: CODEOWNERS
================================================
# This file provides an overview of code owners in this repository.

# Each line is a file pattern followed by one or more owners.
# The last matching pattern has the most precedence.
# For more details, read the following article on GitHub:
# https://help.github.com/articles/about-codeowners/.

# These are the default owners for the whole content of this repository.
# The default owners are automatically added as reviewers when you open
# a pull request, unless different owners are specified in the file.

* @MaxRink @Cellebyte @vukg


================================================
FILE: CODE_OF_CONDUCT.md
================================================

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be 
reported to the community leaders responsible for enforcement at 
[schiff@telekom.de](mailto:schiff@telekom.de).
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.



================================================
FILE: CONTRIBUTING.md
================================================
# Contributing

## Code of conduct

All members of the project community must abide by the [Contributor Covenant, version 2.0](CODE_OF_CONDUCT.md).
Only by respecting each other can we develop a productive, collaborative community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting [das-schiff@telekom.de](mailto:das-schiff@telekom.de) and/or a project maintainer.

We appreciate your courtesy of avoiding political questions here. Issues which are not related to the project itself will be closed by our community managers.

## Engaging in our project

We use GitHub to manage reviews of pull requests.

* If you are a new contributor, see: [Steps to Contribute](#steps-to-contribute)

* If you have a trivial fix or improvement, go ahead and create a pull request, addressing (with `@...`) a suitable maintainer of this repository (see [CODEOWNERS](CODEOWNERS) of the repository you want to contribute to) in the description of the pull request.

* If you plan to do something more involved, please reach out to us and send an [email](mailto:das-schiff@telekom.de). This will avoid unnecessary work and surely give you and us a good deal of inspiration.

* Relevant coding style guidelines are available in the respective sub-repositories as they are programming language-dependent.

## Steps to Contribute

Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on. This is to prevent duplicated efforts from other contributors on the same issue.

If you have questions about one of the issues, please comment on them, and one of the maintainers will clarify.

We kindly ask you to follow the [Pull Request Checklist](#Pull-Request-Checklist) to ensure reviews can happen accordingly.

## Contributing Code

You are welcome to contribute code in order to fix a bug or to implement a new feature.

The following rules govern code contributions:

* Contributions must be licensed under the [Apache 2.0 License](LICENSE)
* Newly created files must start with an instantiated version of the file header template 'templates/file-header.txt'
* If you add a new file to the repository, also add your name to the contributor section of the file NOTICE (please respect the preset entry structure)
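
For illustration, an instantiated file header for a new source file might look like the following. This is a hypothetical rendering of a typical Apache-2.0 header; the authoritative content is 'templates/file-header.txt', and the comment syntax must match the file type:

```text
/*
 * Copyright (c) 2020 Deutsche Telekom AG.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
```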

## Contributing Documentation

You are welcome to contribute documentation to the project.

The following rule governs documentation contributions:

* Contributions must be licensed under the same license as code, the [Apache 2.0 License](LICENSE)

## Pull Request Checklist

* Branch from the master branch and, if needed, rebase to the current master branch before submitting your pull request. If it doesn't merge cleanly with master you may be asked to rebase your changes.

* Commits should be as small as possible while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).

* Test your changes as thoroughly as possible before you commit them. Preferably, automate your test by unit/integration tests. If tested manually, provide information about the test scope in the PR description (e.g. “Test passed: Upgrade version from 0.42 to 0.42.23.”).

* Create _Work In Progress [WIP]_ pull requests only if you need clarification or an explicit review before you can continue your work item.

* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review by contacting us via [email](mailto:das-schiff@telekom.de).

* Post review:
  * If a review requires you to change your commit(s), please test the changes again.
  * Amend the affected commit(s) and force push onto your branch.
  * Set respective comments in your GitHub review to resolved.
  * Create a general PR comment to notify the reviewers that your amendments are ready for another round of review.

## Issues and Planning

* We use GitHub issues to track bugs and enhancement requests.

* Please provide as much context as possible when you open an issue. The information you provide must be comprehensive enough to reproduce that issue for the assignee. Therefore, contributors may use but aren't restricted to the issue template provided by the project maintainers.

* When creating an issue, try using one of our issue templates which already contain some guidelines on which content is expected to process the issue most efficiently. If no template applies, you can of course also create an issue from scratch.



================================================
FILE: LICENSE
================================================

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: NOTICE
================================================
Copyright (c) 2020 Deutsche Telekom AG.

This project is licensed under the Apache License, Version 2.0;
you may not use it except in compliance with the License.

Contributors:
-------------

Maximilian Rink [MaxRink], Deutsche Telekom AG
Marcel Fest [Cellebyte], Deutsche Telekom AG
Vuk Gojnic [vukg], Deutsche Telekom AG
Karsten Reincke [kreincke], Deutsche Telekom AG


================================================
FILE: README.md
================================================

<p align="center"><img height="150px" src="images/das-schiff-logo.png" align="center"></p>

# Das Schiff

Das SCHIFF is a GitOps-based Kubernetes Cluster-as-a-Service platform built almost exclusively from open-source components. It was created by [Deutsche Telekom Technik](https://de.wikipedia.org/wiki/Telekom_Deutschland#Deutsche_Telekom_Technik_GmbH) and is intended for production use there, providing Kubernetes clusters to internal teams. It is hosted on-premises on both vSphere and bare metal at various private data centers in Germany.

This repository provides some insights into how Das Schiff works. It gives an overview of our architecture and describes the Git repository structure we are planning to scale to thousands of clusters.

Note: This repository is not code-focused. We do have some open-source components in other repos, but this one is about Das Schiff's design. It also does not contain a step-by-step guide on how to build your own Kubernetes platform, but we hope that you can learn something from our approach.

<!-- - [Das Schiff Pure Metal](schiff-pure-metal.md) - our approach to dynamic bare metal cluster provisioning
- [Das Schiff Liquid Metal](schiff-liquid-metal.md) - development to optimize Pure Metal for edge and far edge deployments with use of [microVM technology](https://www.weave.works/blog/multi-cluster-kubernetes-on-microvms-for-bare-metal). -->

## Our Open Source Projects

If we develop new components for our platform, we always consider whether we can open-source them, and we have published the following projects so far:

- [telekom/netplanner](https://github.com/telekom/netplanner) - a netplan.io compatible CLI with support for more netdev devices
- [telekom/das-schiff-network-operator](https://github.com/telekom/das-schiff-network-operator) - a controller that configures netlink and FRR for Telekom-specific network connectivity
- [telekom/cluster-api-ipam-provider-in-cluster](https://github.com/telekom/cluster-api-ipam-provider-in-cluster) - a work-in-progress implementation of an in-cluster IPAM provider for the new [IPAM integration](https://github.com/kubernetes-sigs/cluster-api/pull/6000) in Cluster API that we are driving
- [telekom/das-schiff-operator](https://github.com/telekom/das-schiff-operator) - a collection of a few hacky controllers for ipam integration and backing up kubeconfig files to Git

## Our Challenge

Deutsche Telekom Technik (DTT) is a subsidiary of Telekom Deutschland, the leading integrated communications provider in Germany, which in turn is part of [Deutsche Telekom Group](https://www.telekom.com/en). Deutsche Telekom Technik handles the technical delivery of the communication services offered by Telekom Deutschland. It provides the infrastructure for more than 17.5M fixed line and 48.8M mobile customers.

Similar to all telco providers, we do not primarily run typical IT workloads. Instead we operate networks and run network functions, service platforms and the related management systems. Modern examples are 5G core, ORAN DU, remote UPF, BNG, IMS, IPTV and so on. Large portions of these functions and services are geo-distributed due to the nature of communications technology (e.g. mobile towers). Looking at Germany as a whole, we are talking about a few thousand clusters at hundreds of locations (when including far edge locations).

All of these workloads are rapidly transitioning towards a cloud native architecture. This creates a demand for modern platforms to run those workloads, which we aim to provide with Kubernetes. Das SCHIFF therefore set itself the following challenge:

> How can we manage thousands of Kubernetes clusters across hundreds of locations on both bare-metal servers as well as virtual machines, with a small, highly skilled SRE team, while only using open-source software?

## Our Solution

To tackle this problem we started building Das SCHIFF. Since then, we have come up with a few principles that we now try to follow as closely as we can.

### GitOps to the core

Das Schiff follows the GitOps approach for everything we do. All configuration that does not come from external systems is derived from the content of a Git repository. Almost all the data is stored in the form of manifests, though some of them contain nested configuration files (e.g. Helm values files). All changes are performed as pull requests with validation and approvals.

### Full Automation

We try to automate whatever we can. This includes internal systems, some of which feel pretty legacy in a cloud-native world, as well as our network fabric. And of course the deployment and lifecycle of our clusters.

### Homogeneous Clusters

In order to manage a large number of clusters with a small team, the clusters need to be as similar to each other as possible. We therefore avoid any configuration that is specific to a single customer. Most configuration is applied on a global scope, with environment- and site-specific overrides where necessary.

### Aggressive Upgrade Cycles

For clusters to stay homogeneous, all of them should also be running the same software versions. To achieve this, we mandate a very aggressive upgrade strategy:

  - we have a test environment to which changes are applied very quickly and without notification in advance
  - after two weeks, changes are bundled and applied to the reference environment
  - after another two weeks, changes move to production

Our internal customers have the option to stop changes from progressing to their production clusters, but they then have to deal with the fallout of fast-forwarding to the latest version on their own.

### Open-source only and upstream first

We have built Das SCHIFF almost exclusively using open source components, which is also the reason why we are publishing our architecture in this repository. There are some exceptions, but those are either temporary components or integrations to internal systems that we do not want to publish (and there would also be no use in doing so).

If we require a new feature, we avoid creating an internal fork and try to integrate the feature in the upstream open source project we are consuming. If that is not possible, we try to open source our custom implementation.

### Standalone clusters

Clusters have to be able to function, even when the management plane is unavailable. All clusters pull their configuration from Git autonomously. The management plane is only required to handle infrastructure changes and components deployed during bootstrapping, which includes CNI and Flux.
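
The autonomous pull described above can be sketched with a pair of Flux manifests. Everything below (repository URL, names, paths) is hypothetical and only illustrates the mechanism, not Das Schiff's actual configuration:

```yaml
# Hypothetical sketch: a Flux GitRepository plus Kustomization that let a
# cluster reconcile its own configuration from Git, independent of the
# management plane. All names, URLs and paths are invented.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-components
  namespace: flux-system
spec:
  interval: 5m
  url: https://gitlab.example.com/schiff/cluster-components.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-components
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-components
  path: ./clusters/site-a/cluster-01   # hypothetical per-cluster path
  prune: true
```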

## Das Schiff Features

* **Multi-cluster**  
  The entire platform is distributed across many clusters. Tenants do not share clusters.
* **Multi-site**  
  The platform spans multiple sites, including edge and far edge locations in the future.
* **Infrastructure independent**  
  Our primary focus is bare-metal and vSphere-based clusters, but public clouds can be supported as well.
* **Managed through GitOps**
  * Git serves as the primary source of truth (excluding external systems we integrate with)
  * Declarative description of infrastructure
  * Hard-to-mutate audit trail thanks to Git history
  * Merge requests allow changes to be reviewed
* **Batteries Included**  
  Each cluster is ready to use and comes with
  * a monitoring stack for metrics
  * preconfigured networking according to the tenant's needs

## Das Schiff Components

The core of Das Schiff consists of Git (a GitLab instance in our case), the [Flux DevOps Toolkit](https://fluxcd.io) and [Cluster API](https://cluster-api.sigs.k8s.io), a Kubernetes subproject that allows managing Kubernetes clusters as resources in a management cluster. Cluster API itself does not provide support for deploying to different infrastructures directly, but uses so-called infrastructure providers to do so. We are currently using the [metal3](https://github.com/metal3-io/cluster-api-provider-metal3) (for bare metal; includes OpenStack Ironic and metal3's bare metal operator) and [vSphere](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) providers, but any other provider can be used with a few adaptations. Cluster API also abstracts cluster configuration into bootstrap and control plane providers. It comes with providers that configure clusters using kubeadm, which we use as well.
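
Conceptually, a tenant cluster is just a set of resources in the management cluster. A minimal, hypothetical sketch of a vSphere-backed Cluster API cluster (all names are invented, and the referenced objects would be defined alongside it):

```yaml
# Hypothetical Cluster API resource; the KubeadmControlPlane and
# VSphereCluster objects it references are not shown here.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-cluster-01
  namespace: tenants
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: tenant-cluster-01-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: tenant-cluster-01
```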

While we strive for homogeneous clusters, 5G workloads can have pretty special network requirements. To suit all needs, clusters can run [Calico](https://www.tigera.io/project-calico/), [coil](https://github.com/cybozu-go/coil), [Multus](https://github.com/k8snetworkplumbingwg/multus-cni), [whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) and/or a custom [network-operator](https://github.com/telekom/das-schiff-network-operator). Storage is provided using NetApp Trident, Pure Storage Portworx or vSphere CSI, depending on the site and infrastructure the cluster is deployed on.

In addition, we run several components on top of the clusters:
  - All tenant clusters also run the Flux toolkit to pull their configuration from Git
  - The monitoring stack consists of [Prometheus](https://prometheus.io/), [Thanos](https://thanos.io/) and [Grafana](https://grafana.com/grafana/) and is deployed using [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) and [grafana-operator](https://github.com/grafana-operator/grafana-operator)
  - Logs are shipped to various destinations with [Vector](https://vector.dev/).
  - [Velero](https://velero.io/) is used for apiserver backups to internal S3 compatible storage
  - All clusters enforce community-recommended as well as internally required security policies using [Kyverno](https://kyverno.io/)
  - [RBAC Manager](https://github.com/FairwindsOps/rbac-manager) is used to make RBAC easier to handle
  - [metalLB](https://metallb.universe.tf/) acts as the load balancer for services

## Architecture

From a high level view, our architecture is pretty boring. We operate several management clusters that run Cluster API and the various providers we use. The clusters are separated by environment (test/reference and production), and for each environment there are multiple clusters at different sites. The management clusters take care of deploying tenant clusters at the various sites we offer. One management cluster is responsible for multiple sites. Each tenant cluster is only handled by a single management cluster.

Where it becomes interesting is our configuration management. As mentioned before, we are following the GitOps approach. This means that almost all of our configuration resides in Git repositories. We are currently using two repositories:
  - `cluster-definitions` holds all infrastructure (read: Cluster API) related manifests and a small amount of bootstrapping configuration
  - `cluster-components` contains the components that are deployed on top of the clusters and most of their configuration

Both of those repositories are pulled into clusters using Flux. Flux uses `GitRepository` and `Kustomization` resources in the cluster to configure what exactly should get pulled. `GitRepositories` primarily serve as a reference to a Git repository, but also allow specific files to be ignored, similar to a `.gitignore` file. `Kustomizations` can then be used to apply a single path from a `GitRepository` to the cluster. And as the name suggests, `kustomize` is used to do so, providing even more flexibility. The combination of both makes it possible to be very specific about what should and should not get applied to a cluster.
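As a sketch of how the two resource kinds work together (all names, URLs and paths here are illustrative, not taken from our repositories):

```yaml
# Illustrative example only; names, URL and paths are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: cluster-components
  namespace: flux-system
spec:
  interval: 5m
  url: https://gitlab.example.com/schiff/cluster-components.git
  ref:
    branch: main
  ignore: |
    # ignore everything except the paths this cluster needs
    /*
    !/components/
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: monitoring
  namespace: flux-system
spec:
  interval: 5m
  path: ./components/monitoring  # a single path, applied with kustomize
  prune: true
  sourceRef:
    kind: GitRepository
    name: cluster-components
```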

The `cluster-definitions` repository is only applied to the management clusters. It contains a folder per tenant cluster, structured as `<mgmt-cluster>/sites/<site>/<cluster-name>`. Each management cluster will just pull its respective folder. The `cluster-components` repository is a bit more complex and will be explained in more detail below (as will `cluster-definitions`). For now all you need to know is that it's pulled in by both the management clusters (to do initial bootstrapping) and the tenant clusters (to fetch their configuration).

We also store secrets in Git. They are encrypted of course and we are using [sops](https://github.com/mozilla/sops) to do so, since Flux has an [integration](https://fluxcd.io/docs/guides/mozilla-sops/) for that.
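With the sops integration, decryption is configured per `Kustomization`. A minimal sketch of the relevant stanza (the secret name is hypothetical):

```yaml
# Fragment of a Flux Kustomization spec; the secret holds the private key
# that Flux uses to decrypt sops-encrypted manifests before applying them.
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg
```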

### Deploying a Cluster

Let's have a look at the cluster deployment process to make this a little clearer. The following graphic shows a very rough visualization of a single management and a single tenant cluster, our Git repositories, an engineer, and a few arrows that symbolize data flow and actions.

<p align="left"><img src="images/das-schiff-loop.png" width=600 align="center"></p>

To deploy a cluster, an engineer will first create merge requests to both repositories (1). After they are merged (thanks to Kubernetes' eventual consistency, order does not matter), the management cluster will at some point pull (2) the new manifests and apply them to its API server. As these resources include Cluster API manifests, Cluster API will start to deploy the tenant cluster to the infrastructure of choice.

At the same time, Flux will start to remotely apply a few bootstrap components to the tenant cluster (3). Initially this will not work of course, as the cluster and its API do not exist yet. But it will retry until it succeeds. Those bootstrap components consist of the CNI, Flux and configuration for Flux. From this point onward, the tenant cluster will pull the remaining configuration from the `cluster-components` repository on its own (4).
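Remote application works because Flux can target another cluster's API server via a kubeconfig secret, in our case the one Cluster API creates for the tenant cluster. A hedged sketch (resource names and paths are illustrative):

```yaml
# Illustrative bootstrap Kustomization; names and paths are hypothetical.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: bootstrap-customerA-1-cni
  namespace: flux-system
spec:
  interval: 5m
  path: ./locations/location-1/site-1/customerA-1/cni
  prune: false
  sourceRef:
    kind: GitRepository
    name: cluster-components
  timeout: 2m
  kubeConfig:
    secretRef:
      name: customerA-1-kubeconfig  # created by Cluster API for the tenant cluster
```

Until the tenant API server exists, reconciling this resource fails and Flux simply retries, which is exactly the behavior described above.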

This is not a one-off process though. Flux continuously monitors the referenced Git repositories and will apply any changed manifests it detects. This allows changes to be performed by creating more Git commits, and also reduces state drift, as any deviations from the configuration stored in Git will be corrected.

### The Git repositories

As mentioned several times already, we are following the GitOps approach. One of the most important elements of our platform is therefore the set of repositories storing our configuration. Our goal is to keep all information necessary to recreate our infrastructure from scratch in Git. For any external systems we need to interact with, we attempt to use deterministic identifiers so we can link them to the data in Git, even if we lose data in the clusters.

We do not store all manifests that we deploy to our clusters in Git though. In some cases there is abstraction in place, and we only store the information necessary to derive all required manifests. A common example of such abstraction are operators. They hide all the deployment details behind a custom resource that describes the desired configuration of e.g. some application, and the operator takes care of creating the necessary deployments and configmaps. We then only store the custom resource, not the deployments and configmaps.

In some cases we are working towards introducing more abstraction. One area is our `cluster-definitions` repository. Currently it contains all necessary Cluster API manifests for each cluster. We are in the process of creating a custom operator that can derive all required manifests from more abstract custom resources, combining a Cluster resource with site and environment specific configuration.

Since we are not there yet, here is the structure of our current `cluster-definitions` repository:

```bash
cluster-definitions 
├ schiff-management-cluster-x # definitions are grouped per management cluster
│ │
│ │ # The Cluster API deployment in a management cluster also manages the
│ │ # cluster it is deployed in, using the manifests located here.
│ ├ self-definitions  
│ │ ├ Cluster.yml
│ │ ├ Kubeadm-config.yml
│ │ ├ machinedeployment-0.yml
│ │ └ MachineTemplates.yml
│ │
│ │ # Each management cluster manages multiple sites, which are also grouped in
│ │ # folders ...
│ └ sites
│   ├ vsphere-site-1 # ... named after the site. 
│   │ ├ customerA # Clusters are further grouped by customer
│   │ │ └ customerA-1 # and in folders named after the cluster
│   │ │   │
│   │ │   │ # The bootstrap Kustomizations which are remotely applied to the
│   │ │   │ # tenant cluster by flux. It is using the Kubeconfig created by
│   │ │   │ # Cluster API as a secret to access the tenant cluster.
│   │ │   ├ kustomizations
│   │ │   │ ├ flux-kustomization.yaml
│   │ │   │ └ cni-kustomization.yaml
│   │ │   │
│   │ │   │ # regular Cluster API manifests
│   │ │   ├ Cluster.yml
│   │ │   ├ KubeadmControlPlane.yml 
│   │ │   ├ MachineDeployment.yml
│   │ │   ├ MachineHealthcheck.yml
│   │ │   └ MachineTemplate.yml
│   │ │
│   │ └ customerB # multiple customers per site
│   │   └ customerB-1
│   │     ┆ ...
│   │
│   └ baremetal-site-1 # and multiple sites per management cluster
│     └ customerA
│       └ customerA-1
│         ┆ ...
│
├ schiff-management-cluster-y # another management cluster
│  ├ self-definitions
│  │
│  └ sites
┆    ┆ ...
```

As you can see, it is not that complicated. All definitions are grouped by the management cluster that they belong to, making it a lot easier to pull them into the correct cluster. For each management cluster there are Cluster API manifests for the cluster itself, as Cluster API is able to perform rolling upgrades and scale the cluster it is running on. The clusters are in separate folders, grouped by site, the name of which contains the location and the infrastructure in use (bare metal or vSphere).
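To give an impression of what sits in those per-cluster folders, here is a sketch of a Cluster API `Cluster` manifest; all names and CIDRs are illustrative, not taken from our definitions:

```yaml
# Illustrative Cluster API manifest; names, namespace and CIDR are hypothetical.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: customerA-1
  namespace: customerA
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: customerA-1-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster  # or Metal3Cluster on bare-metal sites
    name: customerA-1
```

The `controlPlaneRef` and `infrastructureRef` point at the other manifests in the same folder, which is why each cluster folder contains the full set of resources.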

<!-- 

```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: boostrap-location-1-site-1-customerA-1-cluster
  namespace: location-1-site-1
spec:
  interval: 5m
  path: "./locations/location-1/site-1/customerA-1"
  prune: false
  sourceRef:
    kind: GitRepository
    name: locations-location-1-site-1-main
    namespace: schiff-system
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg-schiff-cp
  timeout: 2m
  kubeConfig:
    secretRef:
      name: customer_A-workload-cluster-1-kubeconfig
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: boostrap-location-1-site-1-customer_A-workload-cluster-1-default-namespaces
  namespace: location-1-site-1
spec:
  interval: 5m
  path: "./default/components"
  prune: false
  suspend: false
  sourceRef:
    kind: GitRepository
    name: locations-location-1-site-1-main
    namespace: schiff-system
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg-schiff-cp
  timeout: 2m
  kubeConfig:
    secretRef:
      name: customer_A-workload-cluster-1-kubeconfig
``` -->

The `cluster-components` repository is a bit more complex, in both its structure and its history. Our initial approach was very hierarchical, with configuration at different locations overriding each other. This made the configuration very DRY (don't repeat yourself). But it was also hard to figure out where a specific value was coming from, and more importantly: it prevented us from performing staged rollouts of changes, or at least made them very difficult. Our current layout therefore bundles everything into *components* which are versioned. If you are interested in what exactly we changed and why, have a look [here](./cluster-components-history.md).

```bash
cluster-components
│ # Everything that is deployed on the cluster is wrapped in a component. The
│ # components are versioned and contain all configuration that is not
│ # cluster specific.
├ components
│ ├ flux
│ │ ├ v0.29.5
│ │ │ │ # Components always contain a base configuration ...
│ │ │ ├ base
│ │ │ │ ├ source-controller-deployment.yaml
│ │ │ │ ├ ...
│ │ │ │ └ kustomization.yaml
│ │ │ │ # ... which can then be overwritten. In this case the configuration
│ │ │ │ # differs by the network zone in which the cluster is deployed.
│ │ │ ├ intranet
│ │ │ │ └ kustomization.yaml
│ │ │ └ internet
│ │ │   └ kustomization.yaml
│ │ ├ v0.28.5
│ │ │ ┆ ...
│ │ ┆ ...
│ │
│ ├ rbac
│ │ └ v0.1.0
│ │   ├ base
│ │   │ ├ roles
│ │   │ │ ┆ ...
│ │   │ ├ ...
│ │   │ ├ rbac-manager-deployment.yaml
│ │   │ └ kustomization.yaml
│ │   │ # Configuration could also differ by environment, or even a
│ │   │ # combination of both. Each variant contains a kustomization file
│ │   │ # that pulls in the base. This way a component can be deployed by
│ │   │ # providing the full path to the desired variant in the Kustomization
│ │   │ # resource.
│ │   ├ tst
│ │   │ └ kustomization.yaml
│ │   ├ ref
│ │   │ └ kustomization.yaml
│ │   └ prd
│ │     └ kustomization.yaml
│ │
│ ├ monitoring
│ │ ┆ ...
│ ┆ ... # there are many more components of course
│
│ # configuration that is applied to clusters is grouped by locations
├ locations
│ ├ location-1
│ │ └ site-1 # and then sites
│ │   ├ customerA-1 # and of course cluster
│ │   │ ├ cni
│ │   │ │ ├ clusterrolebindings
│ │   │ │ ├ clusterroles
│ │   │ │ ├ configmaps
│ │   │ │ ├ crds
│ │   │ │ └ serviceaccounts
│ │   │ ├ configmaps
│ │   │ ├ gitrepositories
│ │   │ ├ kustomizations
│ │   │ └ secrets
│ │   └ customerB-workload-cluster-1
│ │     ┆ ...
│ └ location-2
```
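Deploying a component then comes down to pointing a `Kustomization` at the full path of the desired variant and version. A hedged sketch, reusing the `flux` component from the tree above (resource names are illustrative):

```yaml
# Illustrative: deploying the intranet variant of the flux component at a
# pinned version; resource names are hypothetical.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: flux
  namespace: flux-system
spec:
  interval: 10m
  path: ./components/flux/v0.29.5/intranet  # the variant pulls in ../base
  prune: true
  sourceRef:
    kind: GitRepository
    name: cluster-components
```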

### The Future

We have already run into a few issues with the new approach for components. We will therefore start migrating them to Helm charts. To avoid Helm subcharts, the components in Helm format will create `HelmReleases` and config maps containing the necessary values to deploy the charts using Flux. The most important reasons for the switch are the following:

  * Everything is contained in one Flux resource, a `HelmRelease`, instead of having a `Kustomization` and a `GitRepository` that are co-dependent
  * It is easier to feed the properties of a cluster (e.g. environment, site) into charts than `Kustomizations`
  * Helm's templating is more powerful than kustomize's, which should make configuration easier in a few cases
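The planned shape can be sketched as follows; chart, repository and configmap names are assumptions for illustration, not our actual resources:

```yaml
# Illustrative HelmRelease; chart, repository and configmap names are hypothetical.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: monitoring
  namespace: monitoring-system
spec:
  interval: 10m
  chart:
    spec:
      chart: monitoring
      version: "0.1.0"
      sourceRef:
        kind: HelmRepository
        name: schiff-charts
        namespace: flux-system
  valuesFrom:
    # cluster properties (environment, site, ...) fed into the chart as values
    - kind: ConfigMap
      name: cluster-properties
```

This is how a single resource can carry the component version (the chart version) and the cluster-specific inputs (`valuesFrom`) at the same time.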

## We're Hiring!

If all of this sounds interesting to you, you want to help us build and operate a Kubernetes platform, and you are living somewhere in Europe, we have good news: we're hiring!

We are looking for Site Reliability Engineers and Go Developers that have experience with Kubernetes, GitOps (we're using Flux), Cluster API (and the CAPV and CAPM3 providers), network fabrics, Prometheus, GitLab, Software Engineering in Go and working on Open Source projects. Of course you do not need to know about all of these topics, especially if you are eager to learn.

If we've caught your interest, please get in touch with [@vukg](https://github.com/vukg), our squad lead, on [LinkedIn](https://www.linkedin.com/in/vuk-gojnic/) or [Twitter](https://twitter.com/vukgojnic)!

## Conference Talks and Media Coverage

#### 2023
- [GitOps, Fail, Repeat – Lessons Learned From Running a Heterogeneous Platform](https://www.youtube.com/watch?v=Wa1YD0HUXXs)
- [Open Networking Enables Deutsche Telekom to Sail the Cloud Native Seas](https://www.youtube.com/watch?v=citS0Lr9oQk)
- [DT Implementing SONiC Architecture within Multi VRF Datacenters - Jens Jetzork | SONiC Workshop](https://www.youtube.com/watch?v=nppiJvpFxfg)
#### 2022
- [Making On-Prem Bare-Metal Kubernetes Network Stack Telco Ready](https://kccnceu2022.sched.com/event/ytuA), KubeCon Europe 2022

#### 2021

- [Semaphore Uncut with Darko Fabijan](https://semaphoreci.com/blog/cloud-native-adoption-vuk-gojnic)
- [Art of Modern Ops with Cornelia Davis](https://www.weave.works/blog/kubernetes-at-deutsche-telekom-gitops-at-the-edge)
- KubeCon Europe 2021 Keynote: [How Deutsche Telekom built Das Schiff to sail Cloud Native Seas](https://www.youtube.com/watch?v=s0UKWiNNFTM)

#### 2020

- [Das Schiff at Cluster API Office Hours Apr 15th, 2020](https://youtu.be/yXHDPILQyh4?list=PL69nYSiGNLP29D0nYgAGWt1ZFqS9Z7lw4&t=251)

## Questions & Answers

### Can Das Schiff be used outside of Deutsche Telekom Technik?

No. Or at least - not exactly. Since most of Das Schiff is open source, you can build your own platform using the same components, and if it fits your needs, mimicking our repositories. But Das Schiff as a whole is tightly integrated into internal systems at Deutsche Telekom Technik, and those integrations are probably of little use for you. If you need help doing so, feel free to reach out to our friends from [Weaveworks](https://www.weave.works/), who are helping us build Das Schiff.

### Is Das Schiff part of any industry initiative?

The [CNCF CNF-WG](https://github.com/cncf/cnf-wg) is an attempt to create momentum for transforming the delivery of telco and CNFs, and we are actively contributing to it from our platform operations perspective.

## License

```
Copyright (c) 2023 Deutsche Telekom AG.

Licensed under the **Apache License, Version 2.0** (the "License"); you may not use this file except in compliance with the License.

You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the [LICENSE](./LICENSE) for the specific language governing permissions and limitations under the License.
```


================================================
FILE: cluster-components-history.md
================================================
# History of `cluster-components`

We have recently changed our approach to managing the configuration of the components we deploy on our clusters. Previously we had a complex folder structure that implemented a hierarchy of configuration. More specific configuration could be used to override more generic files. The order was as follows:

* Default - applies to all clusters
* Environments - applied to clusters in one environment (tst, ref and prod)
* Providers - applied to clusters deployed on a specific infrastructure (metal3 or vSphere)
* Network zones - applied to clusters in a specific network segment
* Sites - applied to clusters hosted at a specific site
* Customers - applied to clusters that belong to an internal customer
* Cluster - applied on this specific cluster

We used the ignore spec of `GitRepositories` in conjunction with paths in `Kustomizations` to apply folders from the structure below. The `GitRepositories` were always pointing to our `main` branch, so any changes were immediately applied to all of our clusters. While this makes it easy to perform changes, it is very risky when managing lots of clusters in case something goes wrong, especially when performing changes at the default level. In addition, the folder structure is very complex (see below). It takes a while until you find your way around, and it's quite hard to track where a specific config option is coming from.

We have revised the structure from time to time. At some point we got rid of the `default` folder for example, and duplicated all the config to the environment folders. This allowed us to properly progress all changes through the different environments. We then realised that we need environment folders at every level in order to perform this staging consistently. So we added `tst`, `ref` and `prd` folders everywhere.

While this worked for a while, it also came with its own problems. Most importantly it was very hard to track whether changes went through the previous stage before being applied to the next. Creating diffs between stages was also difficult and would have required custom tooling.

We then thought about using separate repositories for different environments, and merging forward from one to the next. But since configuration can differ between environments, this is also complicated and would require special tooling to create those forward merges while ignoring certain files. And it also leaves another problem unsolved: staged rollouts.

We could just have pinned the `GitRepositories` to specific commits and updated them one after another, but that would also be hard to track, and it also requires upgrading all configuration for one cluster at the same time (or introduce a lot of duplication of `GitRepositories` and make it even harder to track what is applied to which cluster).

This led to the latest approach described in the main README, and to the plan of using Helm for components. The components are individually packaged and versioned, and contain all required configuration, with variation based on location, environment or network zone as needed. The versioned components can be upgraded easily by specifying a tag in the `GitRepository` or the desired version in the `HelmRelease`.
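Pinning a component version via a Git tag can be sketched like this; the repository URL and tag name are illustrative:

```yaml
# Illustrative: pinning a GitRepository to a version tag; URL and tag are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: cluster-components
  namespace: flux-system
spec:
  interval: 5m
  url: https://gitlab.example.com/schiff/cluster-components.git
  ref:
    tag: flux-v0.29.5  # staged rollout: bump this tag cluster by cluster
```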

```bash
cluster-components
├ customers # Defaults per customers and environments
│ ├ customer_A
│ │ ├ default # Defaults for customer_A for all environments
│ │ │ ├ configmaps
│ │ │ ├ gitrepositories
│ │ │ ├ kustomizations
│ │ │ └ namespaces
│ │ ├ prd # Specific config for customer_A per environment
│ │ ├ ref
│ │ └ tst
│ └ customer_B
│     ...
├ default # General defaults valid for all customers and environments
│ └ components
│   ├ core
│   │ ├ clusterroles
│   │ ├ configmaps
│   │ └ namespaces
│   └ monitoring
│     ├ configmaps
│     ├ grafana
│     │ ├ dashboards
│     │ │ ├ flux-system
│     │ │ ├ kube-system
│     │ │ └ monitoring-system
│     │ └ datasources
│     ├ namespaces
│     ├ prometheus
│     │ ├ alerts
│     │ ├ rules
│     │ └ servicemonitors
│     └ services
├ environments # General defaults per environment
│ ├ dev
│ │ ├ components
│ │ │ ├ core
│ │ │ │   ├ configmaps
│ │ │ │   └ helmreleases
│ │ │ └ monitoring
│ │ │   ├ crds
│ │ │   │   ├ grafana-operator
│ │ │   │   │   └ crds
│ │ │   │   └ prometheus-operator
│ │ │   │       └ crds
│ │ │   └ helmreleases
│ │ ├ configmaps
│ │ ├ helmrepositories
│ │ └ podmonitors
│ ├ prd
│ │   ...
│ ├ ref
│ │   ...
│ └ tst
│     ...
├ locations # Cluster specific configs
│ ├ location-1
│ │ └ site-1
│ │   ├ customer_A-workload-cluster-1
│ │   │ ├ cni
│ │   │ │ ├ clusterrolebindings
│ │   │ │ ├ clusterroles
│ │   │ │ ├ configmaps
│ │   │ │ ├ crds
│ │   │ │ └ serviceaccounts
│ │   │ ├ configmaps
│ │   │ ├ gitrepositories
│ │   │ ├ kustomizations
│ │   │ └ secrets
│ │   └ customer_B-workload-cluster-1
│ │       ...
│ └ location-2
│     ...
├ network-zones
│ ├ environment-defaults # Contains the plain manifests of each environment
│ │ ├ dev
│ │ │ ├ clusterrolebindings
│ │ │ ├ clusterroles
│ │ │ ├ crds
│ │ │ ├ networkpolicies
│ │ │ ├ serviceaccounts
│ │ │ └ services
│ │ ├ prd
│ │ │   ...
│ │ ├ ref
│ │ │   ...
│ │ └ tst
│ │     ...
│ ├ network-segment-1 # Specific config for network segment 1
│ │   ... # Contains the kustomize overlays used to modify the base manifests for each environment
│ └ network-segment-2
│     ...
└ providers # CAPI provider defaults and specific configs per environment
  ├ default
  ├ metal3
  │ ├ default
  │ │ ├ configmaps
  │ │ └ namespaces
  │ ├ dev
  │ │ └ helmreleases
  │ ├ prd
  │ │ └ helmreleases
  │ ├ ref
  │ │ ├ crds
  │ │ ├ helmreleases
  │ │ └ helmrepositories
  │ └ tst
  │   └ helmreleases
  └ vsphere
      ...

```

================================================
FILE: schiff-liquid-metal.md
================================================
# Das Schiff Liquid Metal

In order to scale Das Schiff to edge locations, we were looking for a solution that allows to efficiently use the limited resources available at such locations. Using entire bare metal machines as nodes, especially for the control plane, is not really an option if only a few servers are available. At the same time, some workloads require high performance, or even direct access to hardware resources.

Together with our friends from Weaveworks we started looking at a few existing solutions like K3s, K0s, MicroK8s or mixed node clusters, but none of them satisfied all of our goals. We then started thinking about using microVMs on those bare-metal machines, and Weaveworks liked the idea so much that they started building Weaveworks Liquid Metal.

You can find out all about it at the Liquid Metal GitHub org: https://github.com/liquidmetal-dev


================================================
FILE: schiff-pure-metal.md
================================================
# Das Schiff Pure Metal

## Introduction
This document describes Das Schiff Pure Metal - our approach to dynamically manage multiple bare-metal Kubernetes clusters using the [Das Schiff Engine](README.md) and only bare-metal servers.

## Motivation

If you need to run Cloud Native Network Functions (CNFs), especially those that forward large amounts of data traffic such as 5G Core or ORAN vBBU, or if you need to run performance-hungry applications that require a lot of storage throughput, the most effective option is to run them on bare-metal Kubernetes clusters.

Bare-metal clusters have several key benefits over other approaches:
* More performance & less overhead
* Reduced complexity & less moving parts
* Easy yet hard multi-tenancy - one bare-metal host belongs to one cluster only
* Uncomplicated usage of hardware acceleration and direct hardware access
* Highest flexibility to adapt the host level config to application needs

This is why we invested effort into mastering the creation and management of many bare-metal clusters on bare-metal server pools.

## Pure Metal approach

The name comes from the fact that each Kubernetes node is a physical server.

The configuration of these servers differs greatly from the typical configuration for virtualized environments, since it needs to suit Kubernetes' needs. This means that we use smaller single-socket nodes and cluster scaling to provide the desired capacity.

The server hardware we currently use has the following general specs:
* **Control plane node**: 4 vCPUs, 32GB RAM, 800GB SSD, 4x 1GbE NIC
* **Worker node**: 64 vCPUs, 256GB RAM, 800GB SSD, 4x 25GbE NIC with hw acceleration

Since the running node instances are ephemeral and frequently exchanged, we went with external storage for persistence. The persistent volumes are dynamically created via the corresponding CSI drivers.
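Dynamic provisioning via a CSI driver can be sketched as a claim against a storage class; the class name below is hypothetical, standing in for whichever CSI backend a site uses:

```yaml
# Illustrative PVC; the storage class name is hypothetical and would map to
# one of the CSI drivers backed by the external storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: my-app
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: external-csi
  resources:
    requests:
      storage: 100Gi
```

Because the volume lives on external storage, it survives the exchange of the ephemeral node it was attached to.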

Everything is interconnected via a DC fabric based on the [SONiC](https://azure.github.io/SONiC/) network operating system.

The typical setup is illustrated in the picture below:

<p align="center"><img src="images/schiff-pure-metal-illustration.png" align="center"></p>

The pool of servers is managed by the [GitOps loop](README.md#das-schiff-loop) of a Das Schiff management cluster for the sole purpose of creating and maintaining bare-metal customer clusters. The picture shows a situation with one server pool in one location that is hosting two customer clusters, cluster A and cluster B. The rest of the servers in the pool are free for the creation of other clusters and the expansion of existing clusters, and serve as a reserve for failover and lifecycle management purposes.


## Use cases for Das Schiff Pure Metal

This approach fits almost all form factors. It can be used in core data centers with a sizable amount of compute resources as well as in edge and far-edge locations.

<p align="center"><img src="images/schiff-server-pools.png" align="center"></p>

Das Schiff Pure Metal can run any cloud native application. However, typical applications that run there are:
* 5G Core
* 5G UPF (User Plane Functions)
* 5G AUSF
* ORAN vBBU
* Large Kafka deployments (100s of TB)
* Large Elasticsearch deployments (100s of TB)
* etc.

### Limitations of Pure Metal

Pure Metal reaches its limits when it comes to very constrained environments like the far edge with 3-5 available servers.

As the minimal reasonable number of worker nodes in one cluster is 3, the minimal resources of a cluster are **192 vCPUs** and **768GB RAM**. It is therefore not efficient to use it for deployments of smaller applications.

Due to these limitations we are working on [Das Schiff Liquid Metal](schiff-liquid-metal.md), which aims to use lightweight virtualization technology such as [Firecracker microVMs](https://github.com/firecracker-microvm/firecracker) to eliminate these limitations and open the door to endless possibilities when it comes to managing Kubernetes clusters on pools of bare-metal servers.

## How to engage

If Pure Metal raises your interest, we would be happy to engage in a [discussion](../../discussions) about how we could bring it forward.


================================================
FILE: templates/file-header.txt
================================================
/*
 * das-schiff
 *
 * (C) 2020, YOUR_NAME, YOUR_COMPANY
 *
 * Deutsche Telekom AG and all other contributors /
 * copyright owners license this file to you under the Apache 
 * License, Version 2.0 (the "License"); you may not use this 
 * file except in compliance with the License. 
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */