mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-13 17:56:26 +00:00
Add markdown linting (without breaking the site this time!)
@@ -42,7 +42,8 @@ DigitalOcean will provide you with a "kubeconfig" file to use to access your clu

Save your kubeconfig file somewhere, and test it out by running ```kubectl --kubeconfig=<PATH TO KUBECONFIG> get nodes```

Example output:

```bash
[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n9e   Ready    <none>   20s   v1.13.1
```

@@ -51,7 +52,7 @@ festive-merkle-8n9e Ready <none> 20s v1.13.1

In the example above, my nodes were being deployed. Repeat the command to see your nodes spring into existence:

```bash
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n96   Ready    <none>   6s    v1.13.1
```

@@ -80,7 +81,6 @@ Still with me? Good. Move on to creating your own external load balancer..

* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

[^1]: Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!

--8<-- "recipe-footer.md"

@@ -15,21 +15,21 @@ _Unlike_ the Docker Swarm design, the Kubernetes design is:

## Design Decisions

### The design and recipes are provider-agnostic

This means that:

- The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
- Custom service elements specific to individual providers are avoided

### The simplest solution to achieve the desired result will be preferred

This means that:

- Persistent volumes from the cloud provider are used for all persistent storage
- We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks (see the sketch below).
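
For example, a minimal sketch of the "_Kubernetes way_" (the resource names and values here are hypothetical, purely for illustration):

```bash
# Credentials live in a secret, not baked into an image or a bespoke config file
kubectl create secret generic db-credentials --from-literal=password=changeme

# Non-secret configuration lives in a configmap
kubectl create configmap app-config --from-literal=log_level=info
```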

### Insofar as possible, the format of recipes will align with Docker Swarm

This means that:

@@ -310,4 +310,4 @@ Feel free to talk to today's chef in the discord, or see one of his many other l

The links above are just redirect links in case anything ever changes, and they have analytics too
-->

--8<-- "recipe-footer.md"

@@ -31,7 +31,6 @@ To rapidly get Helm up and running, start with the [Quick Start Guide](https://h

See the [installation guide](https://helm.sh/docs/intro/install/) for more options,
including installing pre-releases.
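
For instance, one common route (an assumption on my part - pick whichever method from the guide suits your platform) is the official install script:

```bash
# Fetch and run the official Helm 3 install script, then confirm the install
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```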
## Serving
### Initialise Helm
@@ -44,15 +43,14 @@ That's it - not very exciting I know, but we'll need helm for the next and final

Still with me? Good. Move on to understanding Helm charts...

- [Start](/kubernetes/) - Why Kubernetes?
- [Design](/kubernetes/design/) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- Helm (this page) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

[^1]: Of course, you can have lots of fun deploying all sorts of things via Helm. Check out <https://artifacthub.io> for some examples.
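
If you'd like to kick the tyres once Helm is up, here's a sketch (assuming Helm 3 syntax; the repo and chart names are illustrative, not part of this recipe):

```bash
# Add a public chart repository, then search it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo wordpress
```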

--8<-- "recipe-footer.md"

@@ -2,6 +2,7 @@

My first introduction to Kubernetes was a children's story:

<!-- markdownlint-disable MD033 -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/4ht22ReBjno" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Wait, what?

@@ -44,7 +45,7 @@ Let's talk some definitions. Kubernetes.io provides a [glossary](https://kuberne

## Mm.. maaaaybe, how do I start?

If you're like me, and you learn by doing, either play with the examples at <https://labs.play-with-k8s.com/>, or jump right in by setting up a Google Cloud trial (_you get \$300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/cluster/).

If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available.

@@ -31,14 +31,14 @@ We **could** run our webhook as a simple HTTP listener, but really, in a world w

In my case, since I use CloudFlare, I create `/etc/webhook/letsencrypt/cloudflare.ini`:

```ini
dns_cloudflare_email=davidy@funkypenguin.co.nz
dns_cloudflare_api_key=supersekritnevergonnatellyou
```

I request my cert by running:

```bash
cd /etc/webhook/
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```

@@ -48,7 +48,7 @@ Why use a wildcard cert? So my enemies can't examine my certs to enumerate my va

I add the following as a cron command to renew my certs every day:

```bash
cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```
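
As a concrete sketch, the crontab entry might look like this (the schedule is my own arbitrary choice, and I've dropped `-ti` since cron provides no TTY):

```bash
# m h dom mon dow  command
0 4 * * * cd /etc/webhook && docker run --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```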
@@ -56,13 +56,13 @@ Once you've confirmed you've got a valid LetsEncrypt certificate stored in `/etc

### Install webhook

We're going to use <https://github.com/adnanh/webhook> to run our webhook. On some distributions (_❤️ ya, Debian!_), webhook and its associated systemd config can be installed by running `apt-get install webhook`.

### Create webhook config

We'll create a single webhook, by creating `/etc/webhook/hooks.json` as follows. Choose a nice secure random string for your `MY_TOKEN` value!

```bash
mkdir /etc/webhook
export MY_TOKEN=ilovecheese
cat << EOF > /etc/webhook/hooks.json
```

@@ -100,8 +100,8 @@ echo << EOF > /etc/webhook/hooks.json

```bash
{
    "type": "value",
    "value": "$MY_TOKEN",
    "parameter":
    {
        "source": "header",
        "name": "X-Funkypenguin-Token"
    }
```

@@ -122,7 +122,7 @@ This section is particular to Debian Stretch and its webhook package. If you're

Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran `systemctl edit webhook`, and pasted in the following:

```bash
[Service]
# Override the default (non-secure) behaviour of webhook by passing our certificate details and custom hooks.json location
ExecStart=
```
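
For reference, a completed override might look something like this (a sketch only - the binary path, certificate locations, and port are assumptions; check `webhook --help` for the exact flags on your version):

```bash
[Service]
# Clear the packaged ExecStart, then relaunch webhook in secure (TLS) mode
ExecStart=
ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -secure \
  -cert /etc/webhook/letsencrypt/live/example.com/fullchain.pem \
  -key /etc/webhook/letsencrypt/live/example.com/privkey.pem \
  -port 9000
```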
@@ -135,7 +135,7 @@ Then I restarted webhook by running `systemctl enable webhook && systemctl resta

When successfully authenticated with our top-secret token, our webhook will execute a local script, defined as follows (_yes, you should create this file_):

```bash
#!/bin/bash

NAME=$1
```

@@ -153,9 +153,9 @@ fi

```bash
# Either add or remove a service based on $ACTION
case $ACTION in
  add)
    # Create the portion of haproxy config
    cat << EOF > /etc/webhook/haproxy/$FRONTEND_PORT.inc
### >> Used to run $NAME:${FRONTEND_PORT}
frontend ${FRONTEND_PORT}_frontend
    bind *:$FRONTEND_PORT
```

@@ -170,13 +170,13 @@ backend ${FRONTEND_PORT}_backend

```bash
    server s1 $DST_IP:$BACKEND_PORT
### << Used to run $NAME:$FRONTEND_PORT
EOF
    ;;
  delete)
    rm /etc/webhook/haproxy/$FRONTEND_PORT.inc
    ;;
  *)
    echo "Invalid action $ACTION"
    exit 2
    ;;
esac

# Concatenate all the haproxy configs into a single file
```

@@ -188,8 +188,8 @@ haproxy -f /etc/webhook/haproxy/pre_validate.cfg -c

```bash
# If validation was successful, only _then_ copy it over to /etc/haproxy/haproxy.cfg, and reload
if [[ $? -gt 0 ]]
then
  echo "HAProxy validation failed, not continuing"
  exit 2
else
  # Remember what the original file looked like
  m1=$(md5sum "/etc/haproxy/haproxy.cfg")
```

@@ -212,7 +212,7 @@ fi

Create `/etc/webhook/haproxy/global` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config:

```ini
global
    log /dev/log local0
    log /dev/log local1 notice
```

@@ -256,7 +256,7 @@ defaults

### Take the bait!

Whew! We now have all the components of our automated load-balancing solution in place. Browse to your VM's FQDN at <https://whatever.it.is:9000/hooks/update-haproxy>, and you should see the text "_Hook rules were not satisfied_", with a valid SSL certificate (_You didn't send a token_).
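
You can also poke it from the command line (a sketch - substitute your own FQDN and token; the header name comes from the hooks.json above):

```bash
# With the token header present, the hook rules should now be satisfied
curl -H "X-Funkypenguin-Token: ilovecheese" https://whatever.it.is:9000/hooks/update-haproxy
```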

If you don't see the above, then check the following:

@@ -267,7 +267,7 @@ If you don't see the above, then check the following:

You'll see me use this design in any Kubernetes-based recipe which requires container-specific ports, like UniFi. Here's an excerpt of the .yml which defines the UniFi controller:

```yaml
<snip>
spec:
  containers:
```

@@ -305,7 +305,7 @@ The takeaways here are:

Here's what the webhook logs look like when the above is added to the UniFi deployment:

```bash
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Started POST /hooks/update-haproxy
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy got matched
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy hook triggered successfully
```

@@ -8,6 +8,7 @@ Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-nativ

It bears repeating though - don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Back up your stuff.

<!-- markdownlint-disable MD033 -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/miracle2k/k8s-snapshots)), running _inside_ your cluster, to trigger automated snapshots of your persistent volumes, using your cloud provider's APIs.

@@ -33,10 +34,8 @@ If you're running GKE, run the following to create a RoleBinding, allowing your

If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s-snapshots to see your PVs and friends:

```bash
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```

## Serving
@@ -45,7 +44,7 @@ kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/maste

Ready? Run the following to create a deployment into the kube-system namespace:

```bash
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
```

@@ -74,39 +73,30 @@ k8s-snapshots relies on annotations to tell it how frequently to snapshot your P

From the k8s-snapshots README:

> The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by it's and the parent generation's delta.
>
> For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.
>
> The most recent backup is always kept.
>
> The first delta is the backup interval.

To add the annotation to an existing PV, run something like this:

```bash
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
  '{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```
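
To confirm the annotation landed, something like this works (a sketch, reusing the example PV name above):

```bash
# Dump the PV's annotations and look for backup.kubernetes.io/deltas
kubectl get pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -o jsonpath='{.metadata.annotations}'
```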
To add the annotation to a _new_ PV, add the following annotation to your **PVC**:

```yaml
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```

Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
```

@@ -119,7 +109,6 @@ accessModes: - ReadWriteOnce

```yaml
  resources:
    requests:
      storage: 1Gi
```

And here's what my snapshot list looks like after a few days:
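
On GKE, for example, you could eyeball the snapshots with the Google Cloud SDK (an assumption on my part - use whatever snapshot listing your provider offers):

```bash
# List disk snapshots in the current project
gcloud compute snapshots list
```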
@@ -132,8 +121,7 @@ If you're running traditional compute instances with your cloud provider (I do t

To do so, first create a custom resource, ```SnapshotRule```:

```bash
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
```

@@ -149,13 +137,11 @@ singular: snapshotrule

```bash
    kind: SnapshotRule
    shortNames:
      - sr
EOF
```

Then identify the volume ID of your volume, and create an appropriate ```SnapshotRule```:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
```

@@ -13,13 +13,13 @@ This recipe utilises the [traefik helm chart](https://github.com/helm/charts/tre

Clone the helm charts by running:

```bash
git clone https://github.com/helm/charts
```

Change to stable/traefik:

```bash
cd charts/stable/traefik
```

@@ -29,7 +29,7 @@ The beauty of the helm approach is that all the complexity of the Kubernetes ele

These are my values; you'll need to adjust them for your own situation:

```yaml
imageTag: alpine
serviceType: NodePort
# yes, we're not listening on 80 or 443 because we don't want to pay for a loadbalancer IP to do this. I use poor-mans-k8s-lb instead
```

@@ -101,7 +101,7 @@ Since we deployed Traefik using helm, we need to take a slightly different appro

Create `phone-home.yaml` as follows:

```yaml
apiVersion: v1
kind: Pod
metadata:
```

@@ -141,7 +141,7 @@ spec:

Create your webhook token secret by running:

```bash
echo -n "imtoosecretformyshorts" > webhook_token.secret
kubectl create secret generic traefik-credentials --from-file=webhook_token.secret
```
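
A quick sanity check (just confirms the secret exists with the expected key):

```bash
kubectl describe secret traefik-credentials
```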
@@ -169,20 +169,20 @@ Run ```kubectl create -f phone-home.yaml``` to create the pod.

Run ```kubectl get pods -o wide``` to confirm that both the phone-home pod and the traefik pod are on the same node:

```bash
# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE
phonehome-traefik          1/1     Running   0          20h   10.56.2.55   gke-penguins-are-sexy-8b85ef4d-2c9g
traefik-69db67f64c-5666c   1/1     Running   0          10d   10.56.2.30   gke-penguins-are-sexy-8b85ef4d-2c9g
```

Now browse to `https://<your load balancer>`, and you should get a valid SSL cert, along with a 404 error (_you haven't deployed any other recipes yet_).

### Making changes

If you change a value in values.yaml and want to update the traefik pod, run:

```bash
helm upgrade --values values.yml traefik stable/traefik --recreate-pods
```

@@ -210,4 +210,4 @@ I'll be adding more Kubernetes versions of existing recipes soon. Check out the

[^1]: It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!

--8<-- "recipe-footer.md"