Added Kubernetes design
BIN manuscript/images/Untitled.png (new binary file, 969 KiB - not shown)
BIN manuscript/images/kubernetes-cluster-design.png (new binary file, 343 KiB - not shown)
manuscript/kubernetes/cluster.md (new file, 90 lines)
@@ -0,0 +1,90 @@
# Kubernetes on DigitalOcean

IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster.

![]()
## Ingredients

1. [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_yes, this is a referral link, making me some 💰_) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal.
## Preparation

### Create DigitalOcean Account

Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel:

![]()

Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**:

![]()

The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_. Cleeeek it:

![]()

When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_).

![]()

That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (if you don't already have it).

![]()
DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_).

![]()

## Release the kubectl!

Save your kubeconfig file somewhere, and test it out by running ```kubectl --kubeconfig=<PATH TO KUBECONFIG> get nodes```

Example output:

```
[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n9e   Ready    <none>   20s   v1.13.1
[davidy:~/Downloads] %
```

In the example above, my nodes were still being deployed. Repeat the command to see your nodes spring into existence:
```
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n96   Ready    <none>   6s    v1.13.1
festive-merkle-8n9e   Ready    <none>   34s   v1.13.1
[davidy:~/Downloads] %

[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n96   Ready    <none>   30s   v1.13.1
festive-merkle-8n9a   Ready    <none>   17s   v1.13.1
festive-merkle-8n9e   Ready    <none>   58s   v1.13.1
[davidy:~/Downloads] %
```

That's it. You have a beautiful new Kubernetes cluster ready for some action!
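Rather than passing ```--kubeconfig``` with every command, you can tell kubectl where to find your config. A quick sketch (the filename matches the example above; adjust the paths to suit):

```
# Option 1: point this shell session at the downloaded kubeconfig
export KUBECONFIG=~/Downloads/penguins-are-the-sexiest-geeks-kubeconfig.yaml
kubectl get nodes

# Option 2: make it your default config (back up any existing ~/.kube/config first!)
mkdir -p ~/.kube
cp ~/Downloads/penguins-are-the-sexiest-geeks-kubeconfig.yaml ~/.kube/config
```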
## Move on..

Still with me? Good. Move on to creating your own external load balancer..

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
## Chef's Notes

1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬
manuscript/kubernetes/design.md (new file, 137 lines)
@@ -0,0 +1,137 @@
# Design

Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:

* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
* **Secure** (_access protected with LetsEncrypt certificates_)
* **Automated** (_requires minimal care and feeding_)

*Unlike* the Docker Swarm design, the Kubernetes design is:

* **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
* **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_)
## Design Decisions

**The design and recipes are provider-agnostic**

This means that:

* The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
* Custom service elements specific to individual providers are avoided

**The simplest solution to achieve the desired result will be preferred**

This means that:

* Persistent volumes from the cloud provider are used for all persistent storage
* We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks.

**Insofar as possible, the format of recipes will align with Docker Swarm**

This means that:

* We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery (_illustrated in the sketch below_)
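Here's a rough sketch of what that looks like in practice (namespace and service names are illustrative only):

```
# One namespace per "stack", much like a Docker Swarm stack's private network
kubectl create namespace gitlab
kubectl create namespace nextcloud

# Each namespace can contain its own service named "db"; within the cluster they
# resolve independently, as db.gitlab.svc.cluster.local and db.nextcloud.svc.cluster.local
kubectl -n gitlab get services
kubectl -n nextcloud get services
```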
## Security

Under this design, the only inbound connections we're permitting to our Kubernetes cluster are:

### Network Flows

* HTTPS (TCP 443): Serves individual Docker containers via SSL-encrypted reverse proxy (_Traefik_)
* Individual additional ports we choose to expose for specific recipes (_e.g., port 8443 for [MQTT](/recipes/mqtt/)_)

### Authentication

* Applications served via Traefik will be protected with an OAuth proxy, except where an SSL-served application provides a trusted level of authentication of its own, or where the application requires public exposure.
## The challenges of external access

Because we're Cloud-Native now, it's complex to get traffic **into** our cluster from outside. We basically have 3 options:

1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but lacking in that it restricts us to one host port per-container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes lacks Docker Swarm's "routing mesh", which would allow simple load-balancing of incoming connections.

2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations:
    1. Cost is prohibitive, at roughly $US10/month per port
    2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"

3. **NodePort**: Expose our service as a high port (_between 30000-32767_) on the host which happens to be running the service. This is challenging because you might want to expose port 443, but that's not possible with NodePort.

To further complicate options #1 and #3 above, our cloud provider may, without notice, change the IP of the host running your containers (_O hai, Google!_).
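To make option #3 above a little more concrete, here's roughly what exposing a service via NodePort looks like with kubectl (_the deployment name and ports are hypothetical; the actual recipes define these properly_):

```
# Expose a deployment as a NodePort service
kubectl expose deployment mqtt --type=NodePort --port=8443 --target-port=8443 --name=mqtt-nodeport

# Kubernetes allocates a high port (30000-32767); find out which one
kubectl get service mqtt-nodeport
```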
Our solution to these challenges is a simple-but-effective one: we place an HAProxy instance in front of the services exposed by NodePort. For example, this allows us to expose a container on 443 as NodePort 30443, and to have HAProxy listen on port 443 and forward all requests to our node's IP on port 30443, after which it'll be forwarded on to our container on the original port 443.

We use a phone-home container, which calls a simple webhook on our HAProxy VM, advising HAProxy to update its backend for the calling IP. This means that when our provider changes the host's IP, we automatically update HAProxy and keep-on-truckin'!

Here's a high-level diagram:

![](images/kubernetes-cluster-design.png)

## Overview

So what's happening in the diagram above? I'm glad you asked - let's go through it!

### Setting the scene

In the diagram, we have a Kubernetes cluster comprised of 3 nodes. You'll notice that there's no visible master node. This is because most cloud providers will give you a "_free_" master node, but you don't get to access it. The master node is just a part of the Kubernetes "_as-a-service_" which you're purchasing.

Our nodes are partitioned into several namespaces, which logically separate our individual recipes. (_E.g., allowing both a "gitlab" and a "nextcloud" namespace to include a service named "db", which would be challenging without namespaces_)

Outside of our cluster (_could be anywhere on the internet_) is a single VM serving as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail [in its own section](/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**.
### 1 : The mosquitto pod

In the "mqtt" namespace, we have a single pod, running 2 containers - the mqtt broker, and a "phone-home" container.

Why 2 containers in one pod, instead of 2 independent pods? Because all the containers in a pod are **always** run on the same physical host. We're using the phone-home container as a simple way to call a webhook on the not-in-the-cluster VM.

The phone-home container calls the webhook, and tells HAProxy to listen on port 8443, and to forward any incoming requests to port 30843 (_within the NodePort range_) on the IP of the host running the container (_and because of the pod, the phone-home container is guaranteed to be on the same host as the MQTT container_).
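Once this is deployed, you can see the co-location for yourself. Something like the following (_the pod name will vary_) shows both containers living in a single pod, scheduled onto one node:

```
# READY shows 2/2 (the mqtt broker plus the phone-home container),
# and NODE shows the single host they share
kubectl -n mqtt get pods -o wide
```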
### 2 : The Traefik Ingress

In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/).

What's happening in the diagram is that a phone-home pod is tied to the Traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number.

When an inbound HTTPS request is received by Traefik, based on some internal Kubernetes elements (ingresses), Traefik provides SSL termination, and routes the request to the appropriate service (_in this case, either the GitLab UI or the UniFi UI_).
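For a feel of what such a URL-to-service mapping looks like, here's a minimal sketch of an ingress (_hostname and service name are hypothetical, the API version matches clusters of this vintage, and the individual recipes will define their own_):

```
cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: gitlab.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gitlab-ui
          servicePort: 80
EOF
```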
### 3 : The UniFi pod

What's happening in the UniFi pod is a combination of #1 and #2 above. UniFi controller provides a webUI (_typically 8443, but we serve it via Traefik on 443_), plus some extra ports for device adoption, which use a proprietary protocol and can't be proxied with Traefik.

To make both the webUI and the adoption ports work, we use a combination of an ingress for the webUI (_see #2 above_), and a phone-home container to tell HAProxy to forward port 8080 (_the adoption port_) directly to the host, using a NodePort-exposed service.

This allows us to retain the use of a single IP for all controller functions, as accessed from outside the cluster.
### 4 : The webhook

Each phone-home container is calling a webhook on the HAProxy VM, secured with a shared secret token. The phone-home container passes the desired frontend port (e.g., 443), the corresponding NodePort port (e.g., 30443), and the node's current public IP address.

The webhook uses the provided details to update HAProxy for the combination of values, validate the config, and then restart HAProxy.

### 5 : The user

Finally, the DNS for all externally-accessible services is pointed to the IP of the HAProxy VM. On receiving an inbound request (_be it port 443, 8080, or anything else configured_), HAProxy will forward the request to the IP and NodePort port learned from the phone-home container.
## Move on..

Still with me? Good. Move on to creating your cluster!

* [Start](/kubernetes/start/) - Why Kubernetes?
* Design (this page) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
## Chef's Notes

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬
manuscript/kubernetes/loadbalancer.md (new file, 165 lines)
@@ -0,0 +1,165 @@
# Load Balancer

One of the issues I encountered early on in migrating my Docker Swarm workloads to Kubernetes on GKE was how to reliably permit inbound traffic into the cluster.

There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_).

This recipe details a simple alternative design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort.

![]()
## Ingredients

1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. VPS _outside_ of your Kubernetes cluster. Perhaps a $5 [DigitalOcean](https://m.do.co/c/e33b78ad621b) droplet.. (_yes, another referral link. Mooar money for me!_)
## Preparation

### Summary

This recipe gets a little hairy. We need to use a webhook on a VPS with a predictable IP address. This webhook can receive HTTP POST transactions from containers running within our Kubernetes cluster. Each POST to the webhook will set up an HAProxy frontend/backend combination to forward a specific port to a service within Kubernetes, exposed via NodePort.

### Install webhook

On my little Debian Stretch VM, I installed the webhook Go binary by running ```apt-get install webhook```.

### Create /etc/webhook/hooks.json

I created a single webhook, by defining ```/etc/webhook/hooks.json``` as follows. Note that we're matching on a token header in the request called ```X-Funkypenguin-Token```. Set this value to a complicated random string. The secure storage of this string is all that separates you and a nefarious actor from hijacking your HAProxy for malicious purposes!
```
/etc/webhook/hooks.json
[
  {
    "id": "update-haproxy",
    "execute-command": "/etc/webhook/update-haproxy.sh",
    "command-working-directory": "/etc/webhook",
    "pass-arguments-to-command":
    [
      {
        "source": "payload",
        "name": "name"
      },
      {
        "source": "payload",
        "name": "frontend-port"
      },
      {
        "source": "payload",
        "name": "backend-port"
      },
      {
        "source": "payload",
        "name": "dst-ip"
      },
      {
        "source": "payload",
        "name": "action"
      }
    ],
    "trigger-rule":
    {
      "match":
      {
        "type": "value",
        "value": "banana",
        "parameter":
        {
          "source": "header",
          "name": "X-Funkypenguin-Token"
        }
      }
    }
  }
]
```
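You can imitate a phone-home container and sanity-check the hook with curl. This is just a sketch - the port assumes webhook's default of 9000, the token matches the placeholder value in the trigger-rule above, the IP/ports are examples, and it's plain HTTP here (the LetsEncrypt section below covers TLS):

```
curl -X POST \
  -H "X-Funkypenguin-Token: banana" \
  -H "Content-Type: application/json" \
  -d '{"name":"mqtt","frontend-port":"8443","backend-port":"30843","dst-ip":"203.0.113.10","action":"add"}' \
  http://your-vps-ip:9000/hooks/update-haproxy
```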
### Create /etc/webhook/update-haproxy.sh

When successfully authenticated with our top-secret token, our webhook will execute a local script, defined as follows (_yes, you should create this file_):
```
#!/bin/bash

NAME=$1
FRONTEND_PORT=$2
BACKEND_PORT=$3
DST_IP=$4
ACTION=$5

# Bail if we haven't received our expected parameters
if [[ "$#" -ne 5 ]]
then
  echo "illegal number of parameters"
  exit 2
fi

# Either add or remove a service based on $ACTION
case $ACTION in
  add)
    # Create the portion of haproxy config for this frontend/backend pair
    cat << EOF > /etc/webhook/haproxy/$FRONTEND_PORT.inc
### >> Used to run $NAME:${FRONTEND_PORT}
frontend ${FRONTEND_PORT}_frontend
  bind *:$FRONTEND_PORT
  mode tcp
  default_backend ${FRONTEND_PORT}_backend

backend ${FRONTEND_PORT}_backend
  mode tcp
  balance roundrobin
  stick-table type ip size 200k expire 30m
  stick on src
  server s1 $DST_IP:$BACKEND_PORT
### << Used to run $NAME:$FRONTEND_PORT
EOF
    ;;
  delete)
    rm /etc/webhook/haproxy/$FRONTEND_PORT.inc
    ;;
  *)
    echo "Invalid action $ACTION"
    exit 2
    ;;
esac

# Concatenate all the haproxy configs into a single file
cat /etc/webhook/haproxy/global /etc/webhook/haproxy/*.inc > /etc/webhook/haproxy/pre_validate.cfg

# Validate the generated config
haproxy -f /etc/webhook/haproxy/pre_validate.cfg -c

# If validation was successful, only _then_ copy it over to /etc/haproxy/haproxy.cfg, and reload
if [[ $? -gt 0 ]]
then
  echo "HAProxy validation failed, not continuing"
  exit 2
else
  echo "YES, but Not yet"
fi
```
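Note that the script above assumes a couple of things already exist - an ```/etc/webhook/haproxy/``` directory, and a ```global``` file within it holding your base HAProxy config (global/defaults sections) - and the script itself needs to be executable. A sketch of that prep:

```
mkdir -p /etc/webhook/haproxy

# Seed the "global" include with your existing base haproxy config
cp /etc/haproxy/haproxy.cfg /etc/webhook/haproxy/global

chmod +x /etc/webhook/update-haproxy.sh
```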
### Get LetsEncrypt certificate

We **could** run our webhook as a simple HTTP listener, but really, in a world where LetsEncrypt can assign you a wildcard certificate in under 30 seconds, that's unforgivable. Use the following **general** example to create a LetsEncrypt wildcard cert for your host.

In my case, since I use CloudFlare, I create /etc/webhook/letsencrypt/cloudflare.ini:
```
dns_cloudflare_email=davidy@funkypenguin.co.nz
dns_cloudflare_api_key=supersekritnevergonnatellyou
```
Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course!

I create my first cert by running:

```
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```
Add the following as a cron command to renew my certs every day (_no ```-ti``` here, since cron has no terminal_):

```
docker run --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```
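With the cert issued, the webhook listener can then be started with TLS enabled. The flags below are standard webhook options, but the cert paths are assumptions based on the volume mount used above (_certbot writes into ```letsencrypt/live/<domain>/```_):

```
webhook -hooks /etc/webhook/hooks.json \
  -secure \
  -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem \
  -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem \
  -verbose
```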
manuscript/kubernetes/snapshots.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# Snapshots

.. coming soon!
@@ -52,7 +52,19 @@ If you're the learn-by-watching type, just search for "Kubernetes introduction v
As of Jan 2019, our first (_and only!_) Kubernetes recipe is a WIP for the Mosquitto [MQTT](/recipes/mqtt/) broker. It's a good, simple starter if you're into home automation (_shoutout to [Home Assistant](/recipes/homeassistant/)!_), since it only requires a single container, and a simple NodePort service.

I'd love your [feedback](/support/) on the Kubernetes recipes, as well as suggestions for what to add next. The current rough plan is to replicate the Chef's Favorites recipes (_see the left-hand panel_) into Kubernetes first.
## Move on..

Still with me? Good. Move on to reviewing the design elements.

* Start (this page) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
## Chef's Notes
manuscript/kubernetes/traefik.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# Traefik

.. coming soon :)