mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-13 09:46:23 +00:00

Fix Dead Links (#129)

Author: Thomas
Date: 2021-01-04 16:00:48 +13:00
Committed by: GitHub
Parent: 77184f5937
Commit: 6892542f9d
51 changed files with 354 additions and 361 deletions


@@ -1,17 +1,17 @@
# Design
Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:
Like the [Docker Swarm](/ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:
* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
* **Secure** (_access protected with LetsEncrypt certificates_)
* **Automated** (_requires minimal care and feeding_)
- **Highly-available** (_can tolerate the failure of a single component_)
- **Scalable** (_can add resource or capacity as required_)
- **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
- **Secure** (_access protected with LetsEncrypt certificates_)
- **Automated** (_requires minimal care and feeding_)
*Unlike* the Docker Swarm design, the Kubernetes design is:
_Unlike_ the Docker Swarm design, the Kubernetes design is:
* **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
* **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_)
- **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
- **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_)
## Design Decisions
@@ -19,21 +19,21 @@ Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the K
This means that:
* The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
* Custom service elements specific to individual providers are avoided
- The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
- Custom service elements specific to individual providers are avoided
**The simplest solution to achieve the desired result will be preferred**
This means that:
* Persistent volumes from the cloud provider are used for all persistent storage
* We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks.
- Persistent volumes from the cloud provider are used for all persistent storage
- We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks (_see the sketch below_).
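As a minimal sketch of that "_Kubernetes way_" (hypothetical names and values, not taken from any recipe), non-sensitive settings live in a ConfigMap and credentials in a Secret, and pods consume either as environment variables or mounted files:
```
# Hypothetical example only: plain settings in a ConfigMap, credentials in a Secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  TZ: Pacific/Auckland
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DB_PASSWORD: changeme
```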
**Insofar as possible, the format of recipes will align with Docker Swarm**
This means that:
* We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery
- We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery (_see the sketch below_)
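To illustrate with a hypothetical "nextcloud" stack (the name is invented for this sketch), a namespace per stack gives each recipe its own DNS scope, much like a per-stack overlay network does under Swarm:
```
# Hypothetical sketch: one namespace per "stack". A Service named "db" here
# resolves as plain "db" for pods inside the namespace, and as
# "db.nextcloud.svc.cluster.local" from other namespaces.
apiVersion: v1
kind: Namespace
metadata:
  name: nextcloud
```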
## Security
@@ -41,12 +41,12 @@ Under this design, the only inbound connections we're permitting to our Kubernet
### Network Flows
* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_)
* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_)
- HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_)
- Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_)
### Authentication
* Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public exposure, applications served via Traefik will be protected with an OAuth proxy.
- Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public exposure, applications served via Traefik will be protected with an OAuth proxy.
## The challenges of external access
@@ -55,8 +55,9 @@ Because we're Cloud-Native now, it's complex to get traffic **into** our cluste
1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but more limited: we're restricted to one host port per container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes does not have Docker Swarm's "routing mesh", which allows simple load-balancing of incoming connections.
2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations:
1. Cost is prohibitive, at roughly $US10/month per port
2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"
1. Cost is prohibitive, at roughly \$US10/month per port
2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"
3. **NodePort**: Expose our service as a port (_between 30000-32767_) on the host which happens to be running the service. This is challenging because you might want to expose port 443, but that's not possible with NodePort (_see the example below_).
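To make option 3 concrete, here's a minimal, hypothetical NodePort Service; the name and port numbers are illustrative only:
```
# Hypothetical sketch of a NodePort Service; the nodePort must fall within
# the 30000-32767 range, which is why you can't simply ask for 443 here.
apiVersion: v1
kind: Service
metadata:
  name: webmail
spec:
  type: NodePort
  selector:
    app: webmail
  ports:
    - port: 443
      targetPort: 443
      nodePort: 30443
```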
@@ -92,7 +93,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port
### 2 : The Traefik Ingress
In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/docker-ha-swarm/traefik/).
In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/).
What's happening in the diagram is that a phone-home pod is tied to the Traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort number.
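As a rough sketch of such a rule (hostname, service name and port are made up, and the `networking.k8s.io/v1` API shown is newer than the manifests used in these recipes), an Ingress handed to Traefik looks something like this:
```
# Hypothetical example: route https://unifi.example.com via the Traefik
# ingress controller to the "unifi" Service on port 8443.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: unifi
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: unifi.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: unifi
                port:
                  number: 8443
```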
@@ -120,10 +121,10 @@ Finally, the DNS for all externally-accessible services is pointed to the IP of
Still with me? Good. Move on to creating your cluster!
* [Start](/kubernetes/start/) - Why Kubernetes?
* Design (this page) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
- [Start](/kubernetes/start/) - Why Kubernetes?
- Design (this page) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm


@@ -4,7 +4,7 @@ One of the issues I encountered early on in migrating my Docker Swarm workloads
There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_).
See further examination of the problem and possible solutions in the [Kubernetes design](kubernetes/design/#the-challenges-of-external-access) page.
See further examination of the problem and possible solutions in the [Kubernetes design](/kubernetes/design/#the-challenges-of-external-access) page.
This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort.
@@ -13,10 +13,9 @@ This recipe details a simple design to permit the exposure of as many ports as y
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [\$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_)
## Preparation
### Summary
@@ -24,7 +23,7 @@ This recipe details a simple design to permit the exposure of as many ports as y
### Create LetsEncrypt certificate
!!! warning
Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate, would you? Of course not, I didn't think so!
Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere.
@@ -38,13 +37,14 @@ dns_cloudflare_api_key=supersekritnevergonnatellyou
```
I request my cert by running:
```
cd /etc/webhook/
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```
!!! question
Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course!
I add the following as a cron command to renew my certs every day:
@@ -52,15 +52,15 @@ I add the following as a cron command to renew my certs every day:
cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```
Once you've confirmed you've got a valid LetsEncrypt certificate stored in ```/etc/webhook/letsencrypt/live/<your domain>/fullchain.pem```, proceed to the next step..
Once you've confirmed you've got a valid LetsEncrypt certificate stored in `/etc/webhook/letsencrypt/live/<your domain>/fullchain.pem`, proceed to the next step..
### Install webhook
We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```.
We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤ ya, Debian!_), webhook and its associated systemd config can be installed by running `apt-get install webhook`.
### Create webhook config
We'll create a single webhook, by creating ```/etc/webhook/hooks.json``` as follows. Choose a nice secure random string for your MY_TOKEN value!
We'll create a single webhook, by creating `/etc/webhook/hooks.json` as follows. Choose a nice secure random string for your MY_TOKEN value!
```
mkdir /etc/webhook
@@ -113,14 +113,14 @@ EOF
```
!!! note
Note that to avoid any bozo from calling our webhook, we're matching on a token header in the request called ```X-Funkypenguin-Token```. Webhook will **ignore** any request which doesn't include a matching token in the request header.
Note that to avoid any bozo from calling our webhook, we're matching on a token header in the request called `X-Funkypenguin-Token`. Webhook will **ignore** any request which doesn't include a matching token in the request header.
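The hooks file itself isn't reproduced in this diff, but for orientation, a hook definition with the token trigger-rule described above has roughly this shape (rendered here as YAML for readability; the recipe's `/etc/webhook/hooks.json` expresses the same structure in JSON, and the id, script path and token value are placeholders):
```
# Hypothetical sketch of a webhook hook definition with a token trigger-rule
- id: update-haproxy
  execute-command: /etc/webhook/update-haproxy.sh
  command-working-directory: /etc/webhook
  trigger-rule:
    match:
      type: value
      value: MY_TOKEN # replace with your own long random string
      parameter:
        source: header
        name: X-Funkypenguin-Token
```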
### Update systemd for webhook
!!! note
This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below.
Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran ```systemctl edit webhook```, and pasted in the following:
Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran `systemctl edit webhook`, and pasted in the following:
```
[Service]
@@ -129,7 +129,7 @@ ExecStart=
ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -verbose -secure -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem
```
Then I restarted webhook by running ```systemctl enable webhook && systemctl restart webhook```. I watched the subsequent logs by running ```journalctl -u webhook -f```
Then I restarted webhook by running `systemctl enable webhook && systemctl restart webhook`. I watched the subsequent logs by running `journalctl -u webhook -f`
### Create /etc/webhook/update-haproxy.sh
@@ -210,7 +210,7 @@ fi
### Create /etc/webhook/haproxy/global
Create ```/etc/webhook/haproxy/global``` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config:
Create `/etc/webhook/haproxy/global` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config:
```
global
@@ -260,7 +260,7 @@ Whew! We now have all the components of our automated load-balancing solution in
If you don't see the above, then check the following:
1. Does the webhook verbose log (```journalctl -u webhook -f```) complain about invalid arguments or missing files?
1. Does the webhook verbose log (`journalctl -u webhook -f`) complain about invalid arguments or missing files?
2. Is port 9000 open to the internet on your VM?
### Apply to pods
@@ -315,20 +315,18 @@ Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command ou
<HAProxy restarts>
```
## Move on..
Still with me? Good. Move on to setting up an ingress SSL terminating proxy with Traefik..
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* Load Balancer (this page) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
- [Start](/kubernetes/start/) - Why Kubernetes?
- [Design](/kubernetes/design/) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- Load Balancer (this page) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## Chef's Notes
1. This is the MVP of the load balancer solution. Any suggestions for improvements are welcome 😉


@@ -2,7 +2,7 @@
Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact.
Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](recipes/elkarbackup/)) to automate backups of our persistent data.
Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.
Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...
@@ -23,7 +23,7 @@ This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/
If you're running GKE, run the following to create a RoleBinding, allowing your user to grant rights-it-doesn't-currently-have to the service account responsible for creating the snapshots:
```
kubectl create clusterrolebinding your-user-cluster-admin-binding \
  --clusterrole=cluster-admin --user=<your user@yourdomain>
```
!!! question
@@ -33,8 +33,10 @@ If you're running GKE, run the following to create a RoleBinding, allowing your
If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s_snapshots to see your PVs and friends:
```
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```
## Serving
@@ -44,24 +46,25 @@ kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/maste
Ready? Run the following to create a deployment into the kube-system namespace:
```
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-snapshots
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-snapshots
    spec:
      containers:
        - name: k8s-snapshots
          image: elsdoerfer/k8s-snapshots:v2.0
EOF
```
Confirm your pod is running and happy by running ```kubectl get pods -n kube-system```, and ```kubectl -n kube-system logs k8s-snapshots<tab-to-auto-complete>```
@@ -71,7 +74,8 @@ k8s-snapshots relies on annotations to tell it how frequently to snapshot your P
From the k8s-snapshots README:
```
The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by it's and the parent generation's delta.
For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.
@@ -79,38 +83,44 @@ For example, given the deltas PT1H P1D P7D, the first generation will consist of
The most recent backup is always kept.
The first delta is the backup interval.
```
To add the annotation to an existing PV, run something like this:
```
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
'{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
'{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```
To add the annotation to a _new_ PV, add the following annotation to your **PVC**:
```
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```
Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: controller-volumeclaim
  namespace: unifi
  annotations:
    backup.kubernetes.io/deltas: P1D P7D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
And here's what my snapshot list looks like after a few days:
@@ -122,40 +132,43 @@ If you're running traditional compute instances with your cloud provider (I do t
To do so, first create a custom resource, ```SnapshotRule```:
```
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: snapshotrules.k8s-snapshots.elsdoerfer.com
spec:
  group: k8s-snapshots.elsdoerfer.com
  version: v1
  scope: Namespaced
  names:
    plural: snapshotrules
    singular: snapshotrule
    kind: SnapshotRule
    shortNames:
      - sr
EOF
```
Then identify the volume ID of your volume, and create an appropriate ```SnapshotRule```:
```
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
metadata:
  name: haproxy-badass-loadbalancer
spec:
  deltas: P1D P7D
  backend: google
  disk:
    name: haproxy2
    zone: australia-southeast1-a
EOF
```
!!! note
@@ -177,4 +190,5 @@ Still with me? Good. Move on to understanding Helm charts...
## Chef's Notes
1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).


@@ -24,27 +24,27 @@ Yes, but that's a necessary sacrifice for the maturity, power and flexibility it
Let's talk some definitions. Kubernetes.io provides a [glossary](https://kubernetes.io/docs/reference/glossary/?fundamental=true). My definitions are below:
* **Node** : A compute instance which runs docker containers, managed by a cluster master.
- **Node** : A compute instance which runs docker containers, managed by a cluster master.
* **Cluster** : One or more "worker nodes" which run containers. Very similar to a Docker Swarm node. In most cloud provider deployments, the [master node for your cluster is provided free of charge](https://www.sdxcentral.com/articles/news/google-eliminates-gke-management-fees-kubernetes-clusters/2017/11/), but you don't get to access it.
- **Cluster** : One or more "worker nodes" which run containers. Very similar to a Docker Swarm node. In most cloud provider deployments, the [master node for your cluster is provided free of charge](https://www.sdxcentral.com/articles/news/google-eliminates-gke-management-fees-kubernetes-clusters/2017/11/), but you don't get to access it.
* **Pod** : A collection of one or more containers. If a pod runs multiple containers, these containers always run on the same node.
- **Pod** : A collection of one or more containers. If a pod runs multiple containers, these containers always run on the same node.
* **Deployment** : A definition of a desired state. I.e., "I want a pod with containers A and B running". The Kubernetes master then ensures that any changes necessary to maintain the state are taken. (_I.e., if a pod crashes, but is supposed to be running, a new pod will be started_)
- **Deployment** : A definition of a desired state. I.e., "I want a pod with containers A and B running". The Kubernetes master then ensures that any changes necessary to maintain the state are taken. (_I.e., if a pod crashes, but is supposed to be running, a new pod will be started_)
* **Service** : Unlike Docker Swarm, service discovery is not _built in_ to Kubernetes. For your pods to discover each other (say, to have "webserver" talk to "database"), you create a service for each pod, and refer to these services when you want your containers (_in pods_) to talk to each other. Complicated, yes, but the abstraction allows you to do powerful things, like auto-scale-up a bunch of database "pods" behind a service called "database", or perform a rolling container image upgrade with zero impact.
- **Service** : Unlike Docker Swarm, service discovery is not _built in_ to Kubernetes. For your pods to discover each other (say, to have "webserver" talk to "database"), you create a service for each pod, and refer to these services when you want your containers (_in pods_) to talk to each other. Complicated, yes, but the abstraction allows you to do powerful things, like auto-scale-up a bunch of database "pods" behind a service called "database", or perform a rolling container image upgrade with zero impact.
* **External access** : Services not only allow pods to discover each other, but they're also the mechanism through which the outside world can talk to a container. At the simplest level, this is akin to exposing a container port on a docker host.
- **External access** : Services not only allow pods to discover each other, but they're also the mechanism through which the outside world can talk to a container. At the simplest level, this is akin to exposing a container port on a docker host.
* **Ingress** : When mapping ports to applications is inadequate (think virtual web hosts), an ingress is a sort of "inbound router" which can receive requests on one port (i.e., HTTPS), and forward them to a variety of internal pods, based on things like VHOST, etc. For us, this is the functional equivalent of what Traefik does in Docker Swarm. In fact, we use a Traefik Ingress in Kubernetes to accomplish the same.
- **Ingress** : When mapping ports to applications is inadequate (think virtual web hosts), an ingress is a sort of "inbound router" which can receive requests on one port (i.e., HTTPS), and forward them to a variety of internal pods, based on things like VHOST, etc. For us, this is the functional equivalent of what Traefik does in Docker Swarm. In fact, we use a Traefik Ingress in Kubernetes to accomplish the same.
* **Persistent Volume** : A virtual disk which is attached to a pod, storing persistent data. Meets the requirement for shared storage from Docker Swarm. I.e., if a persistent volume (PV) is bound to a pod, and the pod dies and is recreated, or gets upgraded to a new image, the PV and its data are re-bound to the new container. PVs can be "claimed" in a YAML definition, so that your Kubernetes provider will auto-create a PV when you launch your pod. PVs can be snapshotted.
- **Persistent Volume** : A virtual disk which is attached to a pod, storing persistent data. Meets the requirement for shared storage from Docker Swarm. I.e., if a persistent volume (PV) is bound to a pod, and the pod dies and is recreated, or gets upgraded to a new image, the PV and its data are re-bound to the new container. PVs can be "claimed" in a YAML definition, so that your Kubernetes provider will auto-create a PV when you launch your pod. PVs can be snapshotted.
* **Namespace** : An abstraction to separate a collection of pods, services, ingresses, etc. A "virtual cluster within a cluster". Can be used for security, or simplicity. For example, since we don't have individual docker stacks anymore, if you commonly name your database container "db", and you want to deploy two applications which both use a database container, how will you name your services? Use namespaces to keep each application ("nextcloud" vs "kanboard") separate. Namespaces also allow you to allocate resource **limits** to the aggregate of containers in a namespace, so you could, for example, limit the "nextcloud" namespace to 2.3 CPUs and 1200MB RAM.
- **Namespace** : An abstraction to separate a collection of pods, services, ingresses, etc. A "virtual cluster within a cluster". Can be used for security, or simplicity. For example, since we don't have individual docker stacks anymore, if you commonly name your database container "db", and you want to deploy two applications which both use a database container, how will you name your services? Use namespaces to keep each application ("nextcloud" vs "kanboard") separate. Namespaces also allow you to allocate resource **limits** to the aggregate of containers in a namespace, so you could, for example, limit the "nextcloud" namespace to 2.3 CPUs and 1200MB RAM. (_See the sketch below for how these pieces fit together._)
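As a minimal, hypothetical sketch of how those pieces fit together (names and image are invented): a Deployment keeps a pod running, a Service makes it discoverable, and a Namespace keeps the whole stack separate:
```
# Hypothetical example only: a "webserver" Deployment plus its Service,
# isolated in a "nextcloud" namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: nextcloud
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  namespace: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: webserver
  namespace: nextcloud
spec:
  selector:
    app: webserver
  ports:
    - port: 80
      targetPort: 80
```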
## Mm.. maaaaybe, how do I start?
If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/digitalocean/).
If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get \$300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/cluster/).
If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available.
@@ -58,10 +58,10 @@ I'd love for your [feedback](/support/) on the Kubernetes recipes, as well as su
Still with me? Good. Move on to reviewing the design elements
* Start (this page) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
- Start (this page) - Why Kubernetes?
- [Design](/kubernetes/design/) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm