---
date: 2023-06-09
categories:
  - note
tags:
  - elfhosted
title: Baby steps towards ElfHosted
description: Every journey has a beginning. This is the beginning of the ElfHosted journey
draft: true
---
# Securing the Hetzner environment

Before building out our Kubernetes cluster, I wanted to secure the environment a little. On Hetzner, each dedicated server is assigned a public IP from a huge pool, and is directly accessible over the internet. This provides quick access for administration, but I wanted to lock down access before deploying our control plane.
## Requirements

* [x] Kubernetes worker/controlplane nodes are privately addressed
* [x] Control plane (API) will be accessible only internally
* [x] Nodes can be administered directly on their private address range
## The bastion VM

I needed a "bastion" host: a small node (probably a VM) which I could secure, and then use for further ingress into my infrastructure.

I created a small cloud "Ampere" VM using Hetzner's cloud console. These cloud VMs are provisioned separately from dedicated servers, but it's possible to interconnect them with dedicated servers using vSwitches/subnets (basically VLANs).
## Connecting the bastion VM to the dedicated servers

I connected the bastion to the dedicated servers over the private vSwitch network, using Tailscale as a subnet router, following these guides:

* https://tailscale.com/kb/1150/cloud-hetzner/
* https://tailscale.com/kb/1077/secure-server-ubuntu-18-04/
* https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch
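The dedicated servers see the vSwitch as a tagged VLAN (ID 4000, matching the `.4000` interfaces used later in these notes). Here's a minimal sketch of bringing up the VLAN interface manually, per the Hetzner vSwitch docs above; the NIC name and the node's 10.0.42.2 address are my assumptions, adjust for your own server:

```bash
# Create a VLAN 4000 sub-interface on the physical NIC (name varies per server)
ip link add link enp0s31f6 name enp0s31f6.4000 type vlan id 4000
# Hetzner vSwitch traffic requires an MTU of 1400 or less
ip link set enp0s31f6.4000 mtu 1400
ip link set dev enp0s31f6.4000 up
# Assign this node's address from the private vSwitch subnet
ip addr add 10.0.42.2/24 dev enp0s31f6.4000
```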
On the bastion, I advertised the vSwitch subnet to my tailnet:

```bash
tailscale up --advertise-routes 10.0.42.0/24
```
For the advertised routes to work, the bastion also needs IP forwarding enabled via a sysctl edit, per the Tailscale guides above.
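A sketch of that sysctl change, as per Tailscale's subnet router documentation:

```bash
# Enable IPv4/IPv6 forwarding persistently, then apply immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
```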
These NAT rules (added to ufw's rules file, following the Tailscale Ubuntu guide above) masquerade the forwarded traffic:

```bash
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic through eth0 - change to match your out-interface
-A POSTROUTING -s <your tailscale ip> -j MASQUERADE

# don't delete the 'COMMIT' line or these nat table rules won't
# be processed
COMMIT
```
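Assuming those rules live in ufw's `before.rules` (as in the Tailscale Ubuntu guide), a reload applies them:

```bash
sudo ufw reload   # re-reads the rules files and applies the NAT table
```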
![Hetzner cloud console subnet routes](hetzner_cloud_console_subnet_routes.png)

![Hetzner vSwitch setup](hetzner_vswitch_setup.png)
## Secure hosts

* [ ] Create last-resort root password
* [ ] Set up non-root sudo account (ansiblize this?)
---
date: 2023-06-11
categories:
  - note
tags:
  - elfhosted
title: Kubernetes on Hetzner dedicated server
description: How to set up and secure a bare-metal Kubernetes infrastructure on Hetzner dedicated servers
draft: true
---
# Kubernetes (K3s) on Hetzner

In this post, we continue our adventure setting up an app hosting platform running on Kubernetes.

--8<-- "blog-series-elfhosted.md"

My two physical servers were "delivered" (to my inbox), along with instructions for SSHing into Hetzner's "rescue image" environment.

<!-- more -->

--8<-- "what-is-elfhosted.md"
## Secure nodes

Per the K3s docs, there are some local firewall requirements for K3s server/worker nodes:

https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-server-nodes
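A sketch of what those inbound rules might look like with ufw, restricted to the private vSwitch range; the 10.0.42.0/24 subnet and the choice of ports (API, kubelet, etcd, and the wireguard-native flannel backend, per the K3s requirements table) are my assumptions:

```bash
# Kubernetes API server (from other nodes / the bastion)
sudo ufw allow from 10.0.42.0/24 to any port 6443 proto tcp
# kubelet metrics
sudo ufw allow from 10.0.42.0/24 to any port 10250 proto tcp
# etcd, for HA servers with embedded etcd
sudo ufw allow from 10.0.42.0/24 to any port 2379:2380 proto tcp
# flannel wireguard-native backend
sudo ufw allow from 10.0.42.0/24 to any port 51820 proto udp
```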
It's aliiive!

```
root@fairy01 ~ # kubectl get nodes
NAME      STATUS   ROLES                       AGE   VERSION
elf01     Ready    <none>                      15s   v1.26.5+k3s1
fairy01   Ready    control-plane,etcd,master   96s   v1.26.5+k3s1
root@fairy01 ~ #
```
Now install Flux, according to its documented bootstrap process (the actual `flux bootstrap` command is further below)...

Since we've disabled K3s's bundled servicelb, MetalLB will provide LoadBalancer services instead:

https://metallb.org/configuration/k3s/
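For illustration, a minimal MetalLB `IPAddressPool` drawing from the private vSwitch range; the pool name and address range here are my assumptions:

```bash
# Hypothetical address pool from the private vSwitch subnet
cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vswitch-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.42.100-10.0.42.150
EOF
```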
Prepare for Longhorn's [NFS shenanigans](https://longhorn.io/docs/1.4.2/deploy/install/#installing-nfsv4-client), and install tuned while we're at it:

```
apt-get -y install nfs-common tuned
```
Performance mode!

`tuned-adm profile throughput-performance`
Taint the master(s), so that regular workloads won't schedule onto them:

```
kubectl taint node fairy01 node-role.kubernetes.io/control-plane=true:NoSchedule
```
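For reference, anything which genuinely must run on the control plane (monitoring daemonsets, etc.) would then need a matching toleration; a minimal sketch, using a hypothetical pause pod rather than anything from my actual manifests:

```bash
# Hypothetical: a pod allowed onto the tainted control-plane node(s)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
EOF
```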
Increase max pods, per these references:

* https://stackoverflow.com/questions/65894616/how-do-you-increase-maximum-pods-per-node-in-k3s
* https://gist.github.com/rosskirkpat/57aa392a4b44cca3d48dfe58b5716954

Note that `max-pods=500` is paired with `node-cidr-mask-size=22`: the default /24 CIDR per node only provides ~250 pod IPs, while a /22 provides ~1000. Install the first master with:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```
Create secondary masters using the same command:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```
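Presumably the secondary masters also need pointing at the first, so they join the existing cluster rather than initializing their own; a sketch, assuming the same token/URL environment that the worker uses below:

```bash
# Assumption: join additional servers to the existing cluster, using the
# token from /var/lib/rancher/k3s/server/token on the first master
export K3S_TOKEN=<token from master>
export K3S_URL=https://<ip of master>:6443
```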
On each master, first create the custom kubelet config which `--kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config` refers to:

```
mkdir -p /etc/rancher/k3s/
cat << EOF > /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```
And on the worker...

Ensure that `/etc/rancher/k3s` exists, to hold our custom kubelet configuration file:
```bash
mkdir -p /etc/rancher/k3s/
cat << EOF > /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```
Get the [token](https://docs.k3s.io/cli/token) from `/var/lib/rancher/k3s/server/token` on the server, and prepare the environment like this:

```bash
export K3S_TOKEN=<token from master>
export K3S_URL=https://<ip of master>:6443
```
Now join the worker using:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --flannel-iface=eno1.4000 --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --prefer-bundled-bin" sh -
```
Bootstrap Flux against the repo:

```
flux bootstrap github \
  --owner=geek-cookbook \
  --repository=geek-cookbook/elfhosted-flux \
  --path bootstrap
```
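Once bootstrapped, the Flux controllers can be sanity-checked; a quick verification, assuming the flux CLI is installed wherever your kubeconfig lives:

```bash
flux check               # confirms the controllers and CRDs are healthy
flux get kustomizations  # watches the bootstrap repo reconcile
```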
Install our own sealed-secrets key-pair, label it as the active sealing key, and restart the controller to pick it up:

```
root@fairy01:~# kubectl -n sealed-secrets create secret tls elfhosted-expires-june-2033 \
    --cert=mytls.crt --key=mytls.key
secret/elfhosted-expires-june-2033 created
root@fairy01:~# kubectl -n sealed-secrets label secret elfhosted-expires-june-2033 sealedsecrets.bitnami.com/sealed-secrets-key=active
secret/elfhosted-expires-june-2033 labeled
root@fairy01:~# kubectl rollout restart -n sealed-secrets deployment sealed-secrets
deployment.apps/sealed-secrets restarted
```
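Presumably a pair like `mytls.crt`/`mytls.key` was pre-generated for long-term use, per sealed-secrets' "bring your own certificates" approach; a sketch of how such a pair could be created with openssl (the 10-year validity matching the secret's name is my assumption):

```bash
# Hypothetical generation of the long-lived sealing key-pair
openssl req -x509 -nodes -newkey rsa:4096 -days 3650 \
  -keyout mytls.key -out mytls.crt \
  -subj "/CN=sealed-secret/O=sealed-secret"
```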
Increase inotify watchers (for Jellyfin):

```
echo fs.inotify.max_user_watches=2097152 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
echo 512 > /proc/sys/fs/inotify/max_user_instances
```
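Note that the second command only lasts until reboot; presumably it wants persisting the same way as the watches:

```bash
# Assumption: persist max_user_instances alongside max_user_watches
echo fs.inotify.max_user_instances=512 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
```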
On the dwarves (the storage nodes), taint them so that only storage workloads will schedule there:

```
k taint node dwarf01.elfhosted.com node-role.elfhosted.com/node=storage:NoSchedule
```