mirror of https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-13 01:36:23 +00:00

Add markdown linting (without breaking the site this time!)
@@ -10,7 +10,7 @@ In the design described below, our "private cloud" platform is:

## Design Decisions

### Where possible, services will be highly available

This means that:

@@ -39,8 +39,7 @@ Under this design, the only inbound connections we're permitting to our docker s

### Authentication

* Where the hosted application provides a trusted level of authentication (*e.g. [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*e.g. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
* Where the hosted application provides inadequate (*e.g. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*e.g. [Gollum](/recipes/gollum/)*), an additional layer of authentication against an OAuth provider will be required.

## High availability

@@ -78,7 +77,6 @@ When the failed (*or upgraded*) host is restored to service, the following is il

* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy

*(Image: swarm recovery)*

### Total cluster failure

@@ -91,4 +89,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast

[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

--8<-- "recipe-footer.md"

@@ -6,7 +6,7 @@ For truly highly-available services with Docker containers, we need an orchestra

!!! summary
    Existing:

    * [X] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM

@@ -19,19 +19,20 @@ For truly highly-available services with Docker containers, we need an orchestra

Add some handy bash auto-completion for docker. Without this, you'll get annoyed that you can't autocomplete `docker stack deploy <blah> -c <blah.yml>` commands.

```bash
cd /etc/bash_completion.d/
curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
```
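
Completion only applies to new shells, so either log out and back in, or source the completion script directly into your current session:

```bash
source /etc/bash_completion.d/docker
```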

Install some useful bash aliases on each host:

```bash
cd ~
curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
```
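
To pick up the aliases in your current shell (*without logging out and back in*), source your profile:

```bash
source ~/.bash_profile
```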

## Serving

### Release the swarm!

@@ -39,7 +40,7 @@ Now, to launch a swarm. Pick a target node, and run `docker swarm init`

Yeah, that was it. Seriously. Now we have a 1-node swarm.

```bash
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.
@@ -56,7 +57,7 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow

Run `docker node ls` to confirm that you have a 1-node swarm:

```bash
[root@ds1 ~]# docker node ls
ID                           HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 *  ds1.funkypenguin.co.nz  Ready   Active        Leader
@@ -67,7 +68,7 @@ Note that when you run `docker swarm init` above, the CLI output gives you a co

On the first swarm node, generate the necessary token to join another manager by running `docker swarm join-token manager`:

```bash
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
@@ -80,8 +81,7 @@ To add a manager to this swarm, run the following command:

Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of `docker node ls` (on either host) should reflect all the nodes:

```bash
[root@ds2 davidy]# docker node ls
ID                           HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8    ds1.funkypenguin.co.nz  Ready   Active        Leader
@@ -97,14 +97,14 @@ To address this, we'll run the "[meltwater/docker-cleanup](https://github.com/me

First, create `docker-cleanup.env` (_mine is under `/var/data/config/docker-cleanup`_), and exclude container images we **know** we want to keep:

```bash
KEEP_IMAGES=traefik,keepalived,docker-mailserver
DEBUG=1
```

Then create a `docker-compose.yml` as follows:

```yaml
version: "3"

services:
@@ -137,7 +137,7 @@ If your swarm runs for a long time, you might find yourself running older contai

Create `/var/data/config/shepherd/shepherd.env` as follows:

```bash
# Don't auto-update Plex or Emby (or Jellyfin), I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
BLACKLIST_SERVICES="plex_plex emby_emby jellyfin_jellyfin"
# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
@@ -146,7 +146,7 @@ SLEEP_TIME=86400

Then create `/var/data/config/shepherd/shepherd.yml` as follows:

```yaml
version: "3"

services:
@@ -175,4 +175,4 @@ What have we achieved?

* [X] [Docker swarm cluster](/ha-docker-swarm/design/)

--8<-- "recipe-footer.md"

@@ -34,7 +34,7 @@ On all nodes which will participate in keepalived, we need the "ip_vs" kernel mo

Set this up once-off for both the primary and secondary nodes, by running:

```bash
echo "modprobe ip_vs" >> /etc/modules
modprobe ip_vs
```
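
You can confirm the module is loaded by running:

```bash
lsmod | grep ip_vs
```
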
@@ -43,14 +43,13 @@ modprobe ip_vs

Assuming your IPs are as follows:

- 192.168.4.1 : Primary
- 192.168.4.2 : Secondary
- 192.168.4.3 : Virtual

Run the following on the primary:

```bash
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
@@ -60,7 +59,8 @@ docker run -d --name keepalived --restart=always \
```

And on the secondary[^2]:

```bash
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
@@ -73,7 +73,6 @@ docker run -d --name keepalived --restart=always \
That's it. Each node will talk to the other via unicast (*no need to un-firewall multicast addresses*), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.
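
To see which node currently holds the VIP, or to watch a failover happen, something like the following (*a quick sanity-check, assuming the example VIP of 192.168.4.3 above*) will do:

```bash
# Does this node currently hold the virtual IP?
ip addr | grep 192.168.4.3

# Watch keepalived's state transitions (MASTER/BACKUP) during a failover
docker logs -f keepalived
```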

## Summary

What have we achieved?

@@ -88,4 +87,4 @@ What have we achieved?

[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.

--8<-- "recipe-footer.md"

@@ -16,7 +16,6 @@ Let's start building our cluster. You can use either bare-metal machines or virt

* At least 20GB disk space (_but it'll be tight_)
* [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

## Preparation

### Permit connectivity

@@ -27,7 +26,7 @@ Most modern Linux distributions include firewall rules which only permit mi

Add something like this to `/etc/sysconfig/iptables`:

```bash
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```
@@ -38,7 +37,7 @@ And restart iptables with ```systemctl restart iptables```

Install the (*non-default*) persistent iptables tools, by running `apt-get install iptables-persistent`, establishing some default rules (*dpkg will prompt you to save the current ruleset*), and then add something like this to `/etc/iptables/rules.v4`:

```bash
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```
@@ -49,17 +48,15 @@ And refresh your running iptables rules with `iptables-restore < /etc/iptables/r

Depending on your hosting environment, you may have DNS automatically set up for your VMs. If not, it's useful to set up static entries in `/etc/hosts` for the nodes. For example, I set up the following:

- 192.168.31.11 ds1 ds1.funkypenguin.co.nz
- 192.168.31.12 ds2 ds2.funkypenguin.co.nz
- 192.168.31.13 ds3 ds3.funkypenguin.co.nz

### Set timezone

Set your local timezone, by running:

```bash
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
```
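
For example, to set the timezone to New Zealand (*substitute your own zone from `/usr/share/zoneinfo`*):

```bash
ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
```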
@@ -69,11 +66,11 @@ After completing the above, you should have:

!!! summary "Summary"
    Deployed in this recipe:

    * [X] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

--8<-- "recipe-footer.md"

@@ -18,7 +18,7 @@ The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Cu

Create `/var/data/config/registry/registry.yml` as follows:

```yaml
version: "3"

services:
@@ -48,7 +48,7 @@ We create this registry without consideration for SSL, which will fail if we att

Create `/var/data/registry/registry-mirror-config.yml` as follows:

```yaml
version: 0.1
log:
  fields:
@@ -83,7 +83,7 @@ Launch the registry stack by running `docker stack deploy registry -c <path-to-d

To tell docker to use the registry mirror, and (_while we're here_) in order to be able to watch the logs of any service from any manager node (_an experimental feature in the current Atomic docker build_), edit **/etc/docker-latest/daemon.json** on each node, and change from:

```json
{
  "log-driver": "journald",
  "signature-verification": false
@@ -92,7 +92,7 @@ To tell docker to use the registry mirror, and (_while we're here_) in order to

To:

```json
{
  "log-driver": "journald",
  "signature-verification": false,
@@ -103,11 +103,11 @@ To:

Then restart docker by running:

```bash
systemctl restart docker-latest
```
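
Once docker has restarted, you can confirm the mirror is in effect by checking `docker info`, which lists any configured mirrors under "Registry Mirrors":

```bash
docker info | grep -A1 "Registry Mirrors"
```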

!!! tip ""
    Note the extra comma required after "false" above

--8<-- "recipe-footer.md"

@@ -29,7 +29,7 @@ One of your nodes will become the cephadm "master" node. Although all nodes will

Run the following on the ==master== node:

```bash
MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
@@ -44,7 +44,7 @@ The process takes about 30 seconds, after which, you'll have a MVC (*Minimum Via

??? "Example output from a fresh cephadm bootstrap"
    ```
    root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
    root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    root@raphael:~# chmod +x cephadm
    root@raphael:~# mkdir -p /etc/ceph
@@ -130,7 +130,6 @@ The process takes about 30 seconds, after which, you'll have a MVC (*Minimum Via
    root@raphael:~#
    ```

### Prepare other nodes

It's now necessary to transfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they'll be able to mount the cephfs when we're done:

@@ -141,11 +140,10 @@ It's now necessary to transfer the following files to your ==other== nodes, so th

| `/etc/ceph/ceph.client.admin.keyring` | `/etc/ceph/ceph.client.admin.keyring` |
| `/etc/ceph/ceph.pub` | `/root/.ssh/authorized_keys` (append to anything existing) |

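One way to get these files across (*a sketch only: it assumes root SSH access between nodes, and the node names `node2`/`node3` are hypothetical placeholders for your own*) is:

```bash
# Run from the master node, once per additional node
for NODE in node2 node3; do
  ssh ${NODE} mkdir -p /etc/ceph
  scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph/
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@${NODE} # appends ceph.pub to authorized_keys
done
```
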
Back on the ==master== node, run `ceph orch host add <node-name>` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls`.

!!! question "Should we be concerned about giving cephadm root access over SSH?"
    Not really. Docker is inherently insecure at the host-level anyway (*think what would happen if you launched a global-mode stack with a malicious container image which mounted `/root/.ssh`*), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to [Kubernetes](/kubernetes/) :)

### Add OSDs
@@ -161,7 +159,7 @@ You can watch the progress by running `ceph fs ls` (to see the fs is configured)

On ==every== node, create a mountpoint for the data, by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the ceph volume:

```bash
mkdir /var/data

MYNODES="<node1>,<node2>,<node3>" # Add your own nodes here, comma-delimited
@@ -184,14 +182,13 @@ mount -a
mount -a
```
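
Once `mount -a` succeeds, you can confirm that `/var/data` is really backed by cephfs (*rather than quietly sitting on the root filesystem*) with:

```bash
df -h /var/data
```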

## Serving

### Sprinkle with tools

Although it's possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it's more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools:

```bash
curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add -
cephadm add-repo --release octopus
cephadm install ceph-common
@@ -199,9 +196,9 @@ cephadm install ceph-common

### Drool over dashboard

Ceph now includes a comprehensive dashboard, provided by the mgr daemon. The dashboard will be accessible at `https://[IP of your ceph master node]:8443`, but you'll need to run `ceph dashboard ac-user-create <username> <password> administrator` first, to create an administrator account:

```bash
root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator
{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFfo/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}
root@raphael:~#
@@ -223,11 +220,7 @@ What have we achieved?

Here's a screencast of the playbook in action. I sped up the boring parts; it actually takes ==5 min== (*you can tell by the timestamps on the prompt*):

*(Screencast of ceph install)*

[patreon]: <https://www.patreon.com/bePatron?u=6982506>
[github_sponsor]: <https://github.com/sponsors/funkypenguin>

--8<-- "recipe-footer.md"

@@ -32,7 +32,7 @@ On each host, run a variation of the following to create your bricks, adjusted for the

!!! note "The example below assumes /dev/vdb is dedicated to the gluster volume"

```bash
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
@@ -60,7 +60,7 @@ Atomic doesn't include the Gluster server components. This means we'll have to

Run the following on each host:

````bash
docker run \
  -h glusterfs-server \
  -v /etc/glusterfs:/etc/glusterfs:z \
@@ -82,7 +82,7 @@ From the node, run `gluster peer probe <other host>`.

Example output:

```bash
[root@glusterfs-server /]# gluster peer probe ds1
peer probe: success.
[root@glusterfs-server /]#
@@ -92,7 +92,7 @@ Run ```gluster peer status``` on both nodes to confirm that they're properly con

Example output:

```bash
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1

@@ -108,7 +108,7 @@ Now we create a *replicated volume* out of our individual "bricks".

Create the gluster volume by running:

```bash
gluster volume create gv0 replica 2 \
server1:/var/no-direct-write-here/brick1 \
server2:/var/no-direct-write-here/brick1
@@ -116,7 +116,7 @@ gluster volume create gv0 replica 2 \

Example output:

```bash
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/no-direct-write-here/brick1/gv0 ds3:/var/no-direct-write-here/brick1/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
@@ -124,7 +124,7 @@ volume create: gv0: success: please start the volume to access data

Start the volume by running `gluster volume start gv0`:

```bash
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
@@ -138,7 +138,7 @@ From one other host, run ```docker exec -it glusterfs-server bash``` to shell in

On the host (i.e., outside of the container - type `exit` if you're still shelled in), create a mountpoint for the data, by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume:

```bash
mkdir /var/data
MYHOST=`hostname -s`
echo '' >> /etc/fstab
@@ -149,7 +149,7 @@ mount -a

For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount:

```bash
echo -e "\n\n# Give GlusterFS 10s to start before \
mounting\nsleep 10s && mount -a" >> /etc/rc.local
systemctl enable rc-local.service
@@ -168,4 +168,4 @@ After completing the above, you should have:

1. Migration of shared storage from GlusterFS to Ceph ([#2](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/2))
2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3))

--8<-- "recipe-footer.md"

@@ -29,11 +29,11 @@ Under normal OIDC auth, you have to tell your auth provider which URLs it may re

[@thomseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "_auth host_" in your OIDC authentication, so that you can protect an unlimited number of DNS names (_with a common domain suffix_), without having to manually maintain a list.

### How does it work?

Say you're protecting **radarr.example.com**. When you first browse to **<https://radarr.example.com>**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (_KeyCloak, in this case_), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **<https://auth.example.com/_oauth>**, rather than to **<https://radarr.example.com/_oauth>**.

When you successfully authenticate against the OIDC provider, you are redirected to the "_redirect_uri_" of <https://auth.example.com>. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (_cookies have a role to play here_). Traefik-forward-auth also knows the URL of your **original** request (_thanks to the X-Forwarded-Whatever header_). Traefik-forward-auth redirects you to your original destination, and everybody is happy.
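
In practice, enabling this mode amounts to a couple of extra environment variables for traefik-forward-auth (*a sketch; `AUTH_HOST` and `COOKIE_DOMAIN` are the variable names used by thomseddon/traefik-forward-auth v2, and the values below are examples for your own domain*):

```bash
AUTH_HOST=auth.example.com
COOKIE_DOMAIN=example.com
```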
This clever workaround only works under 2 conditions:
@@ -50,4 +50,4 @@ Traefik Forward Auth needs to authenticate an incoming user against a provider.

--8<-- "recipe-footer.md"

[^1]: Authhost mode is specifically handy for Google authentication, since Google doesn't permit wildcard redirect_uris, like [KeyCloak][keycloak] does.

@@ -49,7 +49,7 @@ staticPasswords:

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.env` as follows:

```bash
DEFAULT_PROVIDER=oidc
PROVIDERS_OIDC_CLIENT_ID=foo # This is the staticClients.id value in config.yml above
PROVIDERS_OIDC_CLIENT_SECRET=bar # This is the staticClients.secret value in config.yml above
@@ -176,7 +176,7 @@ Once you redeploy traefik-forward-auth with the above, it **should** use dex as

### Test

Browse to <https://whoami.example.com> (_obviously, customized for your domain and having created a DNS record_), and all going according to plan, you'll be redirected to a CoreOS Dex login. Once successfully logged in, you'll be directed to the basic whoami page :thumbsup:

### Protect services

@@ -12,9 +12,9 @@ This recipe will illustrate how to point Traefik Forward Auth to Google, confirm

#### TL;DR

Log into <https://console.developers.google.com/>, create a new project, then search for and select "**Credentials**" in the search bar.

Fill out the "OAuth Consent Screen" tab, and then click "**Create Credentials**" > "**OAuth client ID**". Select "**Web Application**", fill in the name of your app, skip "**Authorized JavaScript origins**" and fill "**Authorized redirect URIs**" with either all the domains you will allow authentication from, appended with the url-path (*e.g. <https://radarr.example.com/_oauth>, etc*), or if you don't like frustration, use an "auth host" URL instead, like "*<https://auth.example.com/_oauth>*" (*see below for details*)

#### Monkey see, monkey do 🙈

@@ -27,7 +27,7 @@ Here's a [screencast I recorded](https://static.funkypenguin.co.nz/2021/screenca

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.env` as follows:

```bash
PROVIDERS_GOOGLE_CLIENT_ID=<your client id>
PROVIDERS_GOOGLE_CLIENT_SECRET=<your client secret>
SECRET=<a random string, make it up>
@@ -41,7 +41,7 @@ WHITELIST=you@yourdomain.com, me@mydomain.com

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.yml` as follows:

```yaml
traefik-forward-auth:
  image: thomseddon/traefik-forward-auth:2.1.0
  env_file: /var/data/config/traefik-forward-auth/traefik-forward-auth.env
@@ -77,7 +77,7 @@ Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.yml` as follo

If you're not confident that forward authentication is working, add a simple "whoami" test container to the above .yml, to help debug traefik forward auth, before attempting to add it to a more complex container.

```yaml
# This simply validates that traefik forward authentication is working
whoami:
  image: containous/whoami
@@ -114,7 +114,7 @@ Deploy traefik-forward-auth with ```docker stack deploy traefik-forward-auth -c

### Test

Browse to <https://whoami.example.com> (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you should be redirected to a Google login. Once successfully logged in, you'll be directed to the basic whoami page.

## Summary

@@ -127,4 +127,4 @@ What have we achieved? By adding an additional three simple labels to any servic

[^1]: Be sure to populate `WHITELIST` in `traefik-forward-auth.env`, else you'll happily be granting **any** authenticated Google account access to your services!

--8<-- "recipe-footer.md"

@@ -10,7 +10,7 @@ While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe

Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (_change "master" if you created a different realm_):

```bash
CLIENT_ID=<your keycloak client name>
CLIENT_SECRET=<your keycloak client secret>
OIDC_ISSUER=https://<your keycloak URL>/auth/realms/master
@@ -23,8 +23,8 @@ COOKIE_DOMAIN=<the root FQDN of your domain>

This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/ha-docker-swarm/traefik/) recipe:

```yaml
traefik-forward-auth:
  image: funkypenguin/traefik-forward-auth
  env_file: /var/data/config/traefik/traefik-forward-auth.env
  networks:
@@ -39,8 +39,8 @@ This is a small container, you can simply add the following content to the exist

If you're not confident that forward authentication is working, add a simple "whoami" test container, to help debug traefik forward auth, before attempting to add it to a more complex container.

```yaml
# This simply validates that traefik forward authentication is working
whoami:
  image: containous/whoami
  networks:
@@ -64,13 +64,13 @@ Redeploy traefik with `docker stack deploy traefik-app -c /var/data/traefik/trae

### Test

Browse to <https://whoami.example.com> (_obviously, customized for your domain and having created a DNS record_), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page.

### Protect services

To protect any other service, ensure the service itself is exposed by Traefik (_if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself_). Add the following 3 labels:

```yaml
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
@@ -89,4 +89,4 @@ What have we achieved? By adding an additional three simple labels to any servic

[^1]: KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)

--8<-- "recipe-footer.md"

@@ -36,7 +36,7 @@ While it's possible to configure traefik via docker command arguments, I prefer

Create `/var/data/traefikv2/traefik.toml` as follows:

```toml
[global]
checkNewVersion = true
@@ -87,7 +87,7 @@ Create `/var/data/traefikv2/traefik.toml` as follows:

Create `/var/data/config/traefik/traefik.yml` as follows:

```yaml
version: "3.2"

# What is this?
@@ -116,7 +116,7 @@ networks:

Create `/var/data/config/traefikv2/traefikv2.env` with the environment variables required by the provider you chose in the LetsEncrypt DNS Challenge section of `traefik.toml`. Full configuration options can be found in the [Traefik documentation](https://doc.traefik.io/traefik/https/acme/#providers). Route53 and CloudFlare examples are below.

```bash
# Route53 example
AWS_ACCESS_KEY_ID=<your-aws-key>
AWS_SECRET_ACCESS_KEY=<your-aws-secret>
@@ -185,7 +185,7 @@ networks:

Docker won't start a service with a bind-mount to a non-existent file, so prepare an empty acme.json and traefik.log (_with the appropriate permissions_) by running:

```bash
touch /var/data/traefikv2/acme.json
touch /var/data/traefikv2/traefik.log
chmod 600 /var/data/traefikv2/acme.json
@@ -205,7 +205,7 @@ Likewise with the log file.

First, launch the traefik stack, which will do nothing other than create an overlay network, by running `docker stack deploy traefik -c /var/data/config/traefik/traefik.yml`:

```bash
[root@kvm ~]# docker stack deploy traefik -c /var/data/config/traefik/traefik.yml
Creating network traefik_public
Creating service traefik_scratch
@@ -214,7 +214,7 @@ Creating service traefik_scratch

Now deploy the traefik application itself (*which will attach to the overlay network*) by running `docker stack deploy traefikv2 -c /var/data/config/traefikv2/traefikv2.yml`:

```bash
[root@kvm ~]# docker stack deploy traefikv2 -c /var/data/config/traefikv2/traefikv2.yml
Creating service traefikv2_traefikv2
[root@kvm ~]#
@@ -222,7 +222,7 @@ Creating service traefikv2_traefikv2

Confirm traefik is running with `docker stack ps traefikv2`:

```bash
root@raphael:~# docker stack ps traefikv2
ID             NAME                                      IMAGE          NODE        DESIRED STATE   CURRENT STATE           ERROR   PORTS
lmvqcfhap08o   traefikv2_app.dz178s1aahv16bapzqcnzc03p   traefik:v2.4   donatello   Running         Running 2 minutes ago           *:443->443/tcp,*:80->80/tcp
@@ -231,11 +231,11 @@ root@raphael:~#

### Check Traefik Dashboard

You should now be able to access[^1] your traefik instance on `https://traefik.<your domain>` (*if your LetsEncrypt certificate is working*), or `http://<node IP>:8080` (*if it's not*). It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :grin:

*(Screenshot of Traefik, post-deployment)*

### Summary

!!! summary
    We've achieved:

@@ -246,4 +246,4 @@ You should now be able to access[^1] your traefik instance on **https://traefik.

[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!

--8<-- "recipe-footer.md"