
Experiment with PDF generation

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
David Young
2022-08-19 16:40:53 +12:00
parent c051e0bdad
commit abf9309cb1
317 changed files with 124 additions and 546 deletions


@@ -0,0 +1,77 @@
---
description: Creating a Kubernetes cluster on DigitalOcean
---
# Kubernetes on DigitalOcean
IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster.
![Kubernetes on Digital Ocean](/images/kubernetes-on-digitalocean.jpg)
## Ingredients
1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_)
2. Geek-Fu required: 🐱 (easy - even has screenshots!)
## Preparation
### Create DigitalOcean Account
Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel:
![Kubernetes on Digital Ocean Screenshot #1](/images/kubernetes-on-digitalocean-screenshot-1.png){ loading=lazy }
Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**:
![Kubernetes on Digital Ocean Screenshot #2](/images/kubernetes-on-digitalocean-screenshot-2.png){ loading=lazy }
The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_ . Cleeeek it:
![Kubernetes on Digital Ocean Screenshot #3](/images/kubernetes-on-digitalocean-screenshot-3.png){ loading=lazy }
When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_).
![Kubernetes on Digital Ocean Screenshot #4](/images/kubernetes-on-digitalocean-screenshot-4.png){ loading=lazy }
That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (if you don't already have it).
![Kubernetes on Digital Ocean Screenshot #5](/images/kubernetes-on-digitalocean-screenshot-5.png){ loading=lazy }
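If you don't already have kubectl, the upstream install boils down to a couple of commands. Here's a rough sketch for Linux/amd64 (*see the [official docs](https://kubernetes.io/docs/tasks/tools/) for macOS, Windows, or other architectures*):
```bash
# Download the latest stable kubectl release for Linux/amd64,
# then install it into /usr/local/bin
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Sanity-check the install
kubectl version --client
```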
DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_).
![Kubernetes on Digital Ocean Screenshot #6](/images/kubernetes-on-digitalocean-screenshot-6.png){ loading=lazy }
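Prefer the CLI to all this clicking? The same result can be achieved with DigitalOcean's `doctl` tool. This is just a sketch (*the cluster name, region, and droplet size below are made-up examples - check `doctl kubernetes cluster create --help` for the current flags*):
```bash
# Authenticate doctl against your DigitalOcean account first (one-off)
doctl auth init

# Create a small two-node cluster; name, region, and node size are examples only
doctl kubernetes cluster create my-first-cluster \
  --region sgp1 \
  --node-pool "name=pool-1;size=s-2vcpu-4gb;count=2"

# Merge the new cluster's credentials into ~/.kube/config
doctl kubernetes cluster kubeconfig save my-first-cluster
```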
## Release the kubectl!
Save your kubeconfig file somewhere, and test it out by running `kubectl --kubeconfig=<PATH TO KUBECONFIG> get nodes` [^1]
Example output:
```bash
[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n9e Ready <none> 20s v1.13.1
[davidy:~/Downloads] %
```
In the example above, my nodes were still being deployed. Repeat the command to see your nodes spring into existence:
```bash
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n96 Ready <none> 6s v1.13.1
festive-merkle-8n9e Ready <none> 34s v1.13.1
[davidy:~/Downloads] %
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n96 Ready <none> 30s v1.13.1
festive-merkle-8n9a Ready <none> 17s v1.13.1
festive-merkle-8n9e Ready <none> 58s v1.13.1
[davidy:~/Downloads] %
```
That's it. You have a beautiful new Kubernetes cluster ready for some action!
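To save yourself typing `--kubeconfig` on every command, point the `KUBECONFIG` environment variable at the downloaded file instead. A quick sketch (*assuming the file is still sitting in your Downloads folder*):
```bash
# Tell kubectl where to find the cluster credentials for this shell session;
# adjust the path/filename to wherever you saved your kubeconfig
export KUBECONFIG=~/Downloads/penguins-are-the-sexiest-geeks-kubeconfig.yaml
kubectl get nodes
```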
[^1]: Do you live in the CLI? Install the kubectl autocompletion for [bash](https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/) or [zsh](https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-zsh/) to make your life much easier!
--8<-- "recipe-footer.md"

View File

@@ -0,0 +1,65 @@
---
title: How to choose a managed Kubernetes cluster vs build your own
description: So you want to play with Kubernetes? The first decision you need to make is how your cluster will run. Do you choose (and pay a premium for) a managed Kubernetes cloud provider, or do you "roll your own" with kubeadm on bare-metal, VMs, or k3s?
---
# Kubernetes Cluster
There is an ever-increasing number of ways to deploy and run Kubernetes. The primary distinction to be aware of is whether or not to fork out for a managed Kubernetes instance. Managed instances have some advantages, which I'll detail below, but these come at additional cost.
## Managed (Cloud Provider)
### Popular Options
Popular options are:
* [DigitalOcean](/kubernetes/cluster/digitalocean/)
* Google Kubernetes Engine (GKE)
* Amazon Elastic Kubernetes Service (EKS)
* Azure Kubernetes Service (AKS)
### Upgrades
A managed Kubernetes provider will typically provide a way to migrate to pre-tested and trusted versions of Kubernetes as they're released. This [doesn't mean that upgrades will be trouble-free](https://www.digitalocean.com/community/tech_talks/20-000-upgrades-later-lessons-from-a-year-of-managed-kubernetes-upgrades), but they're likely to be less of a PITA. With Kubernetes' 4-month release cadence, you'll want to keep an eye on updates, and avoid falling too far out-of-date.
### Horizontal Scaling
One of the key drawcards for Kubernetes is horizontal scaling. You want to be able to expand/contract your cluster as your workloads change, even if just for one day a month. Doing this on your own hardware is... awkward.
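On a managed provider, it's roughly a one-liner. A sketch using DigitalOcean's `doctl`, with made-up cluster/pool names (*check `doctl kubernetes cluster node-pool --help` for the exact flags*):
```bash
# Grow the (hypothetical) "pool-1" in "my-first-cluster" to 5 nodes for the
# monthly batch run, then shrink it back afterwards
doctl kubernetes cluster node-pool update my-first-cluster pool-1 --count 5
doctl kubernetes cluster node-pool update my-first-cluster pool-1 --count 2
```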
### Load Balancing
Even if you had enough hardware capacity to handle any unexpected scaling requirements, ensuring that traffic can reliably reach your cluster is a complicated problem. You need to present a "virtual" IP for external traffic to ingress the cluster on. There are popular solutions to provide LoadBalancer services to a self-managed cluster (*i.e., [MetalLB](/kubernetes/loadbalancer/metallb/)*), but they do represent extra complexity, and won't necessarily be resilient to outages outside of the cluster (*network devices, power, etc*).
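The workflow looks the same either way - what differs is who answers the request for an external IP. A sketch, assuming an existing deployment called `whoami`:
```bash
# Ask for a LoadBalancer service in front of an (assumed) existing deployment.
# On a managed provider this provisions a cloud load balancer; on bare-metal
# the EXTERNAL-IP stays <pending> until something like MetalLB answers it.
kubectl expose deployment whoami --type=LoadBalancer --port=80
kubectl get service whoami --watch
```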
### Storage
Cloud providers make it easy to connect their storage solutions to your cluster, but you'll pay as you scale, and in most cases, I/O on cloud block storage is throttled along with your provisioned size. (*So a 1Gi volume will have terrible IOPS compared to a 100Gi volume*)
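For illustration, provisioning a modest volume against DigitalOcean's CSI storage class might look like this sketch (*`do-block-storage` is DigitalOcean's default class; other providers ship their own*):
```bash
# Claim a 10Gi block-storage volume; remember that on most cloud providers
# the available IOPS scale with the size you provision here
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
EOF
```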
### Services
Some things just "work better" in a cloud provider environment. For example, to run a highly available Postgres instance on Kubernetes requires at least 3 nodes, and 3 x storage, plus manual failover/failback in the event of an actual issue. This can represent a huge cost if you simply need a PostgreSQL database to provide (*for example*) a backend to an authentication service like Keycloak. Cloud providers will have a range of managed database solutions which will cost far less than do-it-yourselfing, and integrate easily and securely into their Kubernetes offerings.
### Summary
Go with a managed provider if you want your infrastructure to be resilient to your own hardware/connectivity issues - i.e., where a power/network/hardware outage would have a material impact, and the cost of the managed provider is less than the cost of an outage.
## DIY (Cloud Provider, Bare Metal, VMs)
### Popular Options
Popular options are:
* Rancher's K3s
* Ubuntu's Charmed Kubernetes
### Flexible
With self-hosted Kubernetes, you're free to mix/match your configuration as you see fit. You can run a single k3s node on a raspberry pi, or a fully HA pi-cluster, or a handful of combined master/worker nodes on a bunch of proxmox VMs, or on plain bare-metal.
### Education
You'll learn more about how to care for and feed your cluster if you build it yourself. But you'll definitely spend more time on it, and it won't always be when you expect!
### Summary
Go with a self-hosted cluster if you want to learn more, you'd rather spend time than money, or you've already got significant investment in local infrastructure and technical skillz.
--8<-- "recipe-footer.md"


@@ -0,0 +1,161 @@
---
title: Quickly (and simply) create a k8s cluster with k3s
description: Creating a Kubernetes cluster on k3s
---
# Deploy your k8s cluster on k3s
If you're wanting to self-host your own Kubernetes cluster, one of the simplest and most widely-supported approaches is Rancher's [k3s](https://k3s.io/).
## Why k3s vs k8s?
!!! question "k3s vs k8s - which is better to start with?"
**Question**: If you're wanting to learn about Kubernetes, isn't it "better" to just jump into the "deep end", and use "full" k8s? Is k3s a "lite" version of k8s?
**Answer**: It depends on what you want to learn. If you want to deep-dive into the interaction between the apiserver, scheduler, etcd, and SSL certificates, then k3s will hide much of this from you, and you'd probably prefer to learn [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way). If, however, you want to learn how to **drive** Kubernetes as an operator / user, then k3s abstracts a lot of the (*unnecessary?*) complexity around cluster setup, bootstrapping, and upgrading.
Some of the "let's-just-get-started" advantages to k3s are:
* Packaged as a single binary.
* Lightweight storage backend based on sqlite3 as the default storage mechanism; etcd3, MySQL, and Postgres are also available.
* Simple but powerful “batteries-included” features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller (I prefer to leave some of these out)
## k3s requirements
!!! summary "Ingredients"
* [ ] One or more "modern" Linux hosts to serve as cluster masters. (*Using an odd number of masters is required for HA*). Additional steps are required for [Raspbian Buster](https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster), [Alpine](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup), or [RHEL/CentOS](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux).
* [ ] Ensure you have sudo access to your nodes, and that each node meets the [installation requirements](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/).
Optional:
* [ ] Additional hosts to serve as cluster agents (*assuming that not everybody gets to be a master!*)
!!! question "Which host OS to use for k8s?"
Strictly, it doesn't matter. I prefer the latest Ubuntu LTS server version, but that's because I like to standardize my toolset across different clusters / platforms - I find this makes it easier to manage the "cattle" :cow: over time!
## k3s single node setup
If you only want a single-node k3s cluster, then simply run the following to do the deployment:
```bash
MYSECRET=iambatman
curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
sh -s - --disable traefik server
```
!!! question "Why no k3s traefik?"
k3s comes with the traefik ingress "built-in", so why not deploy it? Because we'd rather deploy it **later** (*if we even want it*), using the same [deployment strategy](/kubernetes/deployment/flux/) which we use with all of our other services, so that we can easily update/configure it.
## k3s multi master setup
### Deploy first master
You may only have one node now, but it's a good idea to prepare for future expansion by bootstrapping k3s in "embedded etcd" multi master HA mode. Pick a secret to use for your server token, and run the following:
```bash
MYSECRET=iambatman
curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
sh -s - --disable traefik --disable servicelb server --cluster-init
```
!!! question "y no servicelb or k3s traefik?"
K3s includes a [rudimentary load balancer](/kubernetes/loadbalancer/k3s/) which utilizes host ports to make a given port available on all nodes. If you plan to deploy one, and only one k3s node, then this is a viable configuration, and you can leave out the `--disable servicelb` text above. If you plan for more nodes and you want to run k3s HA though, then you're better off deploying [MetalLB](/kubernetes/loadbalancer/metallb/) to do "real" loadbalancing.
You should see output which looks something like this:
```bash
root@shredder:~# curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
> sh -s - --disable traefik server --cluster-init
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 27318 100 27318 0 0 144k 0 --:--:-- --:--:-- --:--:-- 144k
[INFO] Finding release for channel stable
[INFO] Using v1.21.5+k3s2 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
root@shredder:~#
```
Provided the last line of output says `Starting k3s` and not something more troublesome-sounding... you have a cluster! Run `k3s kubectl get nodes -o wide` to confirm this, which has the useful side-effect of printing out your first master's IP address (*which we'll need for the next step*).
```bash
root@shredder:~# k3s kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
shredder Ready control-plane,etcd,master 83s v1.21.5+k3s2 192.168.39.201 <none> Ubuntu 20.04.3 LTS 5.4.0-70-generic containerd://1.4.11-k3s1
root@shredder:~#
```
!!! tip "^Z undo undo ..."
Oops! Did you mess something up? Just run `k3s-uninstall.sh` to wipe all traces of K3s, and start over!
### Deploy other k3s master nodes (optional)
Now that the first master is deployed, add additional masters (*remember to keep the total number of masters to an odd number*) by referencing the secret, and the IP address of the first master, on all the others:
```bash
MYSECRET=iambatman
curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
sh -s - server --disable servicelb --server https://<IP OF FIRST MASTER>:6443
```
Run `k3s kubectl get nodes` to see your new master node make friends with the others:
```bash
root@shredder:~# k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
bebop Ready control-plane,etcd,master 4m13s v1.21.5+k3s2
rocksteady Ready control-plane,etcd,master 4m42s v1.21.5+k3s2
shredder Ready control-plane,etcd,master 8m54s v1.21.5+k3s2
root@shredder:~#
```
### Deploy k3s worker nodes (optional)
If you have more nodes which you want _not_ to be considered masters, then run the following on each. Note that the command syntax differs slightly from the masters (*which is why k3s deploys this as k3s-agent instead*):
```bash
MYSECRET=iambatman
curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
K3S_URL=https://<IP OF FIRST MASTER>:6443 \
sh -s -
```
!!! question "y no kubectl on k3s-agent?"
If you try to run `k3s kubectl` on an agent, you'll notice that it returns an error about `localhost:8080` being refused. This is **normal**, and it happens because agents aren't necessarily "trusted" to the same degree that masters are, and so the cluster admin credentials are **not** saved to the filesystem, as they are on the masters.
!!! tip "^Z undo undo ..."
Oops! Did you mess something up? Just run `k3s-agent-uninstall.sh` to wipe all traces of K3s agent, and start over!
## Cuddle your cluster with k3s kubectl!
k3s will have saved your kubeconfig file on the masters to `/etc/rancher/k3s/k3s.yaml`. This file contains the necessary config and certificates to administer your cluster, and should be treated with the same respect and security as your root password. To interact with the cluster, you need to tell the kubectl command where to find this `KUBECONFIG` file. There are a few ways to do this...
1. Prefix your `kubectl` commands with `k3s`. i.e., `kubectl cluster-info` becomes `k3s kubectl cluster-info`
2. Update your environment variables in your shell to set `KUBECONFIG` to `/etc/rancher/k3s/k3s.yaml`
3. Copy `/etc/rancher/k3s/k3s.yaml` to `~/.kube/config`, which is the default location `kubectl` looks in (*sketched below*)
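For example, options 2 and 3 might look something like this sketch (*run on a master; if you'd rather drive the cluster from a workstation, `scp` the file over and replace `127.0.0.1` in it with the master's IP*):
```bash
# Option 2: point this shell session at the k3s-generated kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl cluster-info

# Option 3: copy it to kubectl's default location for your user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
```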
Cuddle your beautiful new cluster by running `kubectl cluster-info` [^1] - if that doesn't work, check your k3s logs[^2].
[^1]: Do you live in the CLI? Install the kubectl autocompletion for [bash](https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/) or [zsh](https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-zsh/) to make your life much easier!
[^2]: Looking for your k3s logs? Under Ubuntu LTS, run `journalctl -u k3s` to show your logs
[^3]: k3s is not the only "lightweight Kubernetes" game in town. Minikube (*virtualization-based*) and microk8s (*possibly better for Ubuntu users since it's installed in a "snap" - haha*) are also popular options. One day I'll write a "microk8s vs k3s" review, but it doesn't really matter for our cluster operations - as I understand it, microk8s makes HA clustering slightly easier than k3s, but you get slightly less "out-of-the-box" in return, so microk8s may be more suitable for experienced users / production edge deployments.
--8<-- "recipe-footer.md"