[
  {
    "path": "README.md",
    "content": "---\ntitle: Simple RKE2, Longhorn, NeuVector and Rancher Install\nauthor: Andy Clemenko, @clemenko, clemenko@gmail.com\n---\n\n# Simple RKE2, Longhorn, NeuVector and Rancher Install - Updated 2024\n\n![logp](img/banner-rounded.png)\n\nThroughout my career there has always been a disconnect between the documentation and the practical implementation. The Kubernetes (k8s) ecosystem is no stranger to this problem. This guide is a simple approach to installing Kubernetes and some REALLY useful tools. We will walk through installing all the following.\n\n- [RKE2](https://docs.rke2.io) - Security focused Kubernetes\n- [Rancher](https://www.suse.com/products/suse-rancher/) - Multi-Cluster Kubernetes Management\n- [Longhorn](https://longhorn.io) - Unified storage layer\n- [NeuVector](https://www.suse.com/products/neuvector/) - Full Lifecycle Container Security\n\nWe will need a few tools for this guide. We will walk through how to install `helm` and `kubectl`.\n\nOr [Watch the video](https://youtu.be/ONes6pv_9J4).\nThe original but outdated [video is here.](https://youtu.be/oM-6sd4KSmA).\n\n**For more fun check out my list of other content and videos at https://rfed.io/links.**\n\n---\n\n> **Table of Contents**:\n>\n> * [Whoami](#whoami)\n> * [Prerequisites](#prerequisites)\n> * [Linux Servers](#linux-servers)\n> * [RKE2 Install](#rke2-install)\n>   * [RKE2 Server Install](#rke2-server-install)\n>   * [RKE2 Agent Install](#rke2-agent-install)\n> * [Rancher](#rancher)\n>   * [Rancher Install](#rancher-install)\n>   * [Rancher Gui](#rancher-gui)\n> * [Longhorn](#longhorn)\n>   * [Longhorn Install](#longhorn-install)\n>   * [Longhorn Gui](#longhorn-gui)\n> * [NeuVector](#neuvector)\n>   * [NeuVector Install](#neuvector-install)\n>   * [NeuVector Gui](#neuvector-gui)\n> * [Automation](#automation)\n> * [Conclusion](#conclusion)\n\n---\n\n## Whoami\n\nJust a geek - Andy Clemenko - @clemenko - clemenko@gmail.com\n\n## Prerequisites\n\nThe prerequisites are fairly 
simple. We need three Linux servers with access to the internet. They can be bare metal, or in the cloud provider of your choice. I prefer [Digital Ocean](https://digitalocean.com). For this guide we are going to use [Harvester](https://harvesterhci.io/). We need an `ssh` client to connect to the servers. For DNS, let's use https://sslip.io/, which resolves hostnames like `anything.192.168.1.12.sslip.io` to the embedded IP. This means we will need to know the IP of the first server of the cluster.\n\n## Linux Servers\n\nFor the sake of this guide we are going to use [Ubuntu](https://ubuntu.com). Our goal is a simple deployment. The recommended size of each node is 4 cores and 8GB of memory with at least 60GB of storage. One of the nice things about [Longhorn](https://longhorn.io) is that we do not need to attach additional storage. Here is an example list of servers. Your server names can be anything; just keep track of which one is the \"server\" and which are the \"agents\".\n\n| name | core | memory | ip | disk | os |\n|---| --- | --- | --- | --- | --- |\n|rancher-01 | 4 | 8Gi | 192.168.1.12 | 60 | Ubuntu 22.04 x64 |\n|rancher-02 | 4 | 8Gi | 192.168.1.74 | 60 | Ubuntu 22.04 x64 |\n|rancher-03 | 4 | 8Gi | 192.168.1.247 | 60 | Ubuntu 22.04 x64 |\n\nFor Kubernetes we will need to \"set\" one of the nodes as the control plane. `rancher-01` looks like a winner for this. First we need to `ssh` into all three nodes, apply all the updates, and add a few packages. For the record I am not a fan of software firewalls. Please feel free to reach out to me to discuss. :D\n\nWe need to run the following commands on each of your nodes. 
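As a purely optional sketch, the prep can also be driven from a single machine. The IPs are the ones from the table above, key-based root `ssh` is an assumption, and the `echo` prefix makes this a dry run; remove it to actually execute the Ubuntu commands below on every node:

```shell
# node IPs from the table above -- change these!
NODES="192.168.1.12 192.168.1.74 192.168.1.247"

# the Ubuntu prep steps from this section, as one remote command string
PREP='systemctl disable --now ufw; apt update; apt install -y nfs-common; apt upgrade -y; apt autoremove -y'

for node in $NODES; do
  # dry run: drop the echo to really run the prep over ssh
  echo ssh root@"$node" "$PREP"
done
```

Ansible or pdsh would do the same job; this is just the no-dependencies version.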
Make sure you pick the commands for your OS of choice.\n\n**Ubuntu**:\n\n```bash\n# Ubuntu instructions \n# stop the software firewall\nsystemctl disable --now ufw\n\n# get updates, install nfs, and apply\napt update\napt install nfs-common -y  \napt upgrade -y\n\n# clean up\napt autoremove -y\n```\n\n**Rocky / CentOS / RHEL**:\n\n```bash\n# Rocky instructions \n# stop the software firewall\nsystemctl disable --now firewalld\n\n# install nfs, iscsi, and encryption dependencies for Longhorn\ndnf install -y nfs-utils cryptsetup iscsi-initiator-utils\n\n# enable iscsi for Longhorn\nsystemctl enable --now iscsid.service \n\n# update all the things\ndnf update -y\n\n# clean up\ndnf clean all\n```\n\nCool, let's move on to RKE2.\n\n## RKE2 Install\n\n### RKE2 Server Install (rancher-01)\n\nNow that we have all the nodes up to date, let's focus on `rancher-01`. While this might seem controversial, `curl | bash` does work nicely. The install script will use the tarball install for **Ubuntu** and the RPM install for **Rocky/CentOS**. Please be patient, the start command can take a minute. Here are the [rke2 docs](https://docs.rke2.io/install/methods/) and [install options](https://docs.rke2.io/install/configuration#configuring-the-linux-installation-script) for reference.\n\n```bash\n# On rancher-01\ncurl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh - \n\n# we can set the token - create config dir/file\nmkdir -p /etc/rancher/rke2/ \necho \"token: bootstrapAllTheThings\" > /etc/rancher/rke2/config.yaml\n\n# start and enable for restarts\nsystemctl enable --now rke2-server.service\n```\n\nHere is what the **Ubuntu** version should look like:\n\n![rke_install](img/rke_install.jpg)\n\nLet's validate everything worked as expected. Run a `systemctl status rke2-server` and make sure it is `active`.\n\n![rke_status](img/rke_status.jpg)\n\nPerfect! Now we can start talking Kubernetes. 
We need to symlink the `kubectl` cli that RKE2 installs on `rancher-01`.\n\n```bash\n# symlink all the things - kubectl\nln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl\n\n# add kubectl conf with persistence, as per Duane\necho \"export KUBECONFIG=/etc/rancher/rke2/rke2.yaml PATH=$PATH:/usr/local/bin/:/var/lib/rancher/rke2/bin/\" >> ~/.bashrc\nsource ~/.bashrc\n\n# check node status\nkubectl get node\n```\n\nHopefully everything looks good! Here is an example.\n\n![rke_node](img/rke_nodes.jpg)\n\nFor those that are not TOO familiar with k8s, the config file is what `kubectl` uses to authenticate to the api server. If you want to use a workstation, jump box, or any other machine, copy `/etc/rancher/rke2/rke2.yaml` to it and change the server IP address in the file. \n\n### RKE2 Agent Install (rancher-02, rancher-03)\n\nThe agent install is VERY similar to the server install, except that we need an agent config file before starting. We will start with `rancher-02`. We need to install the agent and set up the configuration file.\n\n```bash\n# export the rancher-01 IP from the first server\nexport RANCHER1_IP=192.168.1.12   # <-- change this!\n\n# we add INSTALL_RKE2_TYPE=agent\ncurl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -  \n\n# create config dir/file\nmkdir -p /etc/rancher/rke2/ \n\n# change the ip to reflect your rancher-01 ip\ncat << EOF >> /etc/rancher/rke2/config.yaml\nserver: https://$RANCHER1_IP:9345\ntoken: bootstrapAllTheThings\nEOF\n\n# enable and start\nsystemctl enable --now rke2-agent.service\n```\n\nHere is what this should look like:\n\n![rke_agent](img/rke_agent.jpg)\n\nRinse and repeat. Run the same install commands on `rancher-03`. Next we can validate all the nodes are playing nice by running `kubectl get node -o wide` on `rancher-01`.\n\n![moar_nodes](img/moar_nodes.jpg)\n\nHuzzah! RKE2 is fully installed. 
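As noted above, the kubeconfig can be copied to any machine that needs `kubectl` access. Here is a hedged sketch of the IP rewrite step; on a real workstation you would first fetch the file, e.g. `scp root@192.168.1.12:/etc/rancher/rke2/rke2.yaml ~/.kube/config`, so the heredoc below only stands in for that copied file:

```shell
export RANCHER1_IP=192.168.1.12   # <-- change this!

# stand-in for the copied /etc/rancher/rke2/rke2.yaml
cat << EOF > rke2.yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF

# the file points kubectl at 127.0.0.1 -- swap in the rancher-01 IP
sed -i "s/127.0.0.1/$RANCHER1_IP/g" rke2.yaml

grep "server:" rke2.yaml
```

With the address rewritten, `kubectl --kubeconfig rke2.yaml get node` from the workstation should behave just like it does on `rancher-01`.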
From here on out we will only need to talk to the kubernetes api. Meaning we will only need to remain ssh'ed into `rancher-01`.\n\nNow let's install Rancher.\n\n## Rancher\n\nFor more information about the Rancher versions, please refer to the [Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/). We are going to use the latest version. For additional reading take a look at the [Rancher docs](https://ranchermanager.docs.rancher.com).\n\n### Rancher Install\n\nFor Rancher we will need [Helm](https://helm.sh/). We are going to live on the edge! Here are the [install docs](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster) for reference.\n\n```bash\n# on the server rancher-01\n# add helm\ncurl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash\n\n# add needed helm charts\nhelm repo add rancher-latest https://releases.rancher.com/server-charts/latest --force-update\nhelm repo add jetstack https://charts.jetstack.io --force-update\n```\n\nQuick note about Rancher. Rancher needs jetstack/cert-manager to create the self-signed TLS certificates. We need to install it with its Custom Resource Definitions (CRDs). Please pay attention to the `helm` install for Rancher. The URL `rancher.192.168.1.12.sslip.io` will need to be changed to match your IP. Also notice I am setting the `bootstrapPassword` and replicas. This allows us to skip a step later. 
:D\n\n```bash\n# still on rancher-01\n\n# helm install jetstack\nhelm upgrade -i cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set crds.enabled=true\n\n# helm install rancher\n# CHANGE the IP to the one for rancher-01\nexport RANCHER1_IP=192.168.1.12\nhelm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rancher.$RANCHER1_IP.sslip.io --set bootstrapPassword=bootStrapAllTheThings --set replicas=1\n```\n\nNow we can validate everything installed with a `helm list -A` or `kubectl get pod -A`. Keep in mind it may take a minute or so for all the pods to come up. GUI time...\n\n### Rancher GUI\n\nWe should now be able to get to the GUI at https://rancher.192.168.1.12.sslip.io. The good news is that by default rke2 installs with the `nginx` ingress controller. Keep in mind that the browser may show an error for the self-signed certificate. \n\n![error](img/tls_error.jpg)\n\nOnce past that you should see the following screen asking about the password. Remember the helm install? `bootStrapAllTheThings` is the password.\n\n![welcome](img/welcome.jpg)\n\nWe need to validate the Server URL and accept the terms and conditions.\n\n![eula](img/eula.jpg)\n\n**AND we are in!** Switching to light mode.\n\n![dashboard](img/dashboard.jpg)\n\n### Rancher Design\n\nLet's take a second and talk about Rancher's multi-cluster design. Bottom line, Rancher can operate in a hub-and-spoke model. Meaning one k8s cluster for Rancher and then \"downstream\" clusters for all the workloads. Personally I prefer the decoupled model where there is only one cluster per Rancher install. This allows for continued manageability during network outages. For the purpose of this guide we will concentrate on the single-cluster deployment. 
There is good [documentation](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters) on \"importing\" downstream clusters.\n\nNow let's install Longhorn.\n\n## Longhorn\n\n### Longhorn Install\n\nThere are two methods for installing Longhorn. Rancher has the chart built in.\n\n![charts](img/charts.jpg)\n\nNow for the good news, the [Longhorn docs](https://longhorn.io/docs/1.6.1/deploy/install/) show two easy install methods: Helm and `kubectl`. Let's stick with Helm for this guide.\n\n```bash\n# get charts\nhelm repo add longhorn https://charts.longhorn.io --force-update\n\n# install\nhelm upgrade -i longhorn longhorn/longhorn --namespace longhorn-system --create-namespace\n```\n\nFairly easy right?\n\n### Longhorn GUI\n\nOne of the benefits of Rancher is its ability to adjust to what's installed. Meaning the Rancher GUI will see that Longhorn is installed and provide a link. We can find it by navigating to the `local` cluster from the dashboard, or by clicking on the bull-with-horns icon in the left nav bar under Home. From there, look for the Longhorn link.\n\n![longhorn installed](img/longhorn_installed.jpg)\n\nThis brings up the Longhorn GUI.\n\n![longhorn](img/longhorn.jpg)\n\nOne of the other benefits of this integration is that rke2 also knows it is installed. Run `kubectl get sc` to show the storage classes.\n\n```text\nroot@rancher-01:~# kubectl  get sc\nNAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE\nlonghorn (default)   driver.longhorn.io   Delete          Immediate           true                   3m58s\n```\n\nNow we have a default storage class for the cluster. This allows for the automatic creation of Persistent Volumes (PVs) based on a Persistent Volume Claim (PVC). The best part is that \"it just works\" using the existing, unused storage on the three nodes. Take a look around in the GUI. Notice the Volumes on the Nodes. 
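To see the default storage class do its thing, here is a minimal PVC sketch; the claim name and the 1Gi size are arbitrary assumptions for illustration:

```yaml
# hypothetical claim -- with no storageClassName set, the default
# longhorn class provisions a matching PV automatically
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with `kubectl apply -f`, then `kubectl get pvc,pv` should show the claim bound to a freshly provisioned Longhorn volume, which also shows up in the Longhorn GUI.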
For fun, here is a demo flask app that uses a PVC for Redis. `kubectl apply -f https://raw.githubusercontent.com/clemenko/k8s_yaml/master/flask_simple_nginx.yml`\n\nNow let's install NeuVector.\n\n## NeuVector\n\n### NeuVector Install\n\nSimilar to Longhorn we are going to use `helm` for the install. We do have a choice to use Single Sign On (SSO) with Rancher's credentials. For this guide we are going to stick with an independent username and password. The helm options for SSO are below if you want to experiment.\n\n```bash\n# helm repo add\nhelm repo add neuvector https://neuvector.github.io/neuvector-helm/ --force-update\n\n# helm install \nexport RANCHER1_IP=192.168.1.12\n\nhelm upgrade -i neuvector --namespace cattle-neuvector-system neuvector/core --create-namespace --set manager.svc.type=ClusterIP --set controller.pvc.enabled=true --set controller.pvc.capacity=500Mi --set manager.ingress.enabled=true --set manager.ingress.host=neuvector.$RANCHER1_IP.sslip.io --set manager.ingress.tls=true \n\n# add for single sign-on\n# --set controller.ranchersso.enabled=true --set global.cattle.url=https://rancher.$RANCHER1_IP.sslip.io\n```\n\nWe should wait a few seconds for the pods to deploy.\n\n```bash\nkubectl get pod -n cattle-neuvector-system\n```\n\nIt will take a minute for everything to reach the `Running` state.\n\n```text\nroot@rancher-01:~# kubectl get pod -n cattle-neuvector-system\nNAME                                        READY   STATUS    RESTARTS   AGE\nneuvector-controller-pod-657599c5fd-2zfhh   1/1     Running   0          109s\nneuvector-controller-pod-657599c5fd-qd2jx   1/1     Running   0          109s\nneuvector-controller-pod-657599c5fd-qv5nv   1/1     Running   0          109s\nneuvector-enforcer-pod-2dczs                1/1     Running   0          109s\nneuvector-enforcer-pod-47r2q                1/1     Running   0          109s\nneuvector-enforcer-pod-bg7rd                1/1     Running   0          109s\nneuvector-manager-pod-66cfdb8779-f8qtj      1/1     
Running   0          109s\nneuvector-scanner-pod-fc48d77fc-b8hb9       1/1     Running   0          109s\nneuvector-scanner-pod-fc48d77fc-h8lwd       1/1     Running   0          109s\nneuvector-scanner-pod-fc48d77fc-mmmpx       1/1     Running   0          109s\n```\n\nLet's take a look at the GUI.\n\n### NeuVector GUI\n\nSimilar to Longhorn, Rancher sees that the application is installed and creates a NavLink for it. We are going to click it or navigate to http://neuvector.192.168.1.12.sslip.io. The login is `admin/admin`. Make sure to check the EULA box.\n\n![neu](img/neu.jpg)\n\nBefore taking a look around we should turn on \"Auto Scan\". This will automatically scan all images on the cluster. Navigate to Assets --> Containers. In the upper right there is a toggle for Auto Scan. \n\n![autoscan](img/autoscan.jpg)\n\nHopefully you made it this far!\n\n## Automation\n\nYes we can automate all the things. Here is the repo I use to automate the complete stack: https://github.com/clemenko/rke2. This repo is for entertainment purposes only. There I use tools like pdsh to run parallel `ssh` sessions into the nodes to complete a few tasks. Ansible would be a good choice for this. But I am old and like bash. Sorry the script is a beast. I need to clean it up.\n\n## Conclusion\n\nAs we can see, setting up RKE2, Rancher, NeuVector and Longhorn is not that complicated. We can deploy Kubernetes, a storage layer, and a management GUI in a few minutes. Simple, right? One of the added benefits of using the SUSE / Rancher stack is that all the pieces are modular. Use only what you need, when you need it. Hope this was helpful. Please feel free to reach out, or open any issues at https://github.com/clemenko/rke_install_blog.\n\nthanks!\n\n![success](img/success.jpg)\n\n**For more fun check out my list of other content and videos at https://rfed.io/links.**\n"
  }
]