Mirror of https://github.com/funkypenguin/geek-cookbook/ (synced 2025-12-13 01:36:23 +00:00)

Commit: Add Komga recipe (#132)
# Docker Swarm Mode
For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (*as defined at 1.13*) is the simplest way to achieve redundancy, such that a single Docker host can be turned off without interrupting any of our services.
## Ingredients
To add a manager to this swarm, run the following command:
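
The exact join command is generated by your own swarm and includes a one-time token, so it isn't reproduced here; a sketch of retrieving and using it (*the token and manager IP below are placeholders*) looks like this:

```
# On an existing manager, ask the swarm for the manager join command:
docker swarm join-token manager

# It prints a command like the following; run it on the node you're adding:
docker swarm join --token <manager-join-token> <existing-manager-ip>:2377
```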
Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of `docker node ls` (on either host) should reflect all the nodes:
```
[root@ds2 davidy]# docker node ls
ID                           HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8    ds1.funkypenguin.co.nz   Ready    Active         Leader
xmw49jt5a1j87a6ihul76gbgy *  ds2.funkypenguin.co.nz   Ready    Active         Reachable
[root@ds2 davidy]#
```
### Set up automated cleanup
Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, unused images will gradually accumulate on your nodes and consume disk space.
To address this, we'll run the "[meltwater/docker-cleanup](https://github.com/meltwater/docker-cleanup)" container on all of our nodes. The container will clean up unused images after 30 minutes.
First, create `docker-cleanup.env` (_mine is under `/var/data/config/docker-cleanup`_), and exclude container images we **know** we want to keep:
```
KEEP_IMAGES=traefik,keepalived,docker-mailserver
```
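
The `docker-cleanup.yml` stack file itself isn't shown in this diff. Below is a minimal sketch of what it might look like, assuming the env file above, the `meltwater/docker-cleanup` image, and a global-mode service with access to the docker socket; treat the paths and contents as illustrative rather than the recipe's exact file. Global mode runs one cleanup container on every node, which is what the recipe calls for.

```
# Illustrative sketch of /var/data/config/docker-cleanup/docker-cleanup.yml:
cat > /var/data/config/docker-cleanup/docker-cleanup.yml <<'EOF'
version: "3"

services:
  docker-cleanup:
    image: meltwater/docker-cleanup
    env_file: /var/data/config/docker-cleanup/docker-cleanup.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global    # one cleanup container per node
EOF
```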
Launch the cleanup stack by running `docker stack deploy docker-cleanup -c <path to docker-cleanup.yml>`.
If your swarm runs for a long time, you might find yourself running older container images, after newer versions have been released. If you're the sort of geek who wants to live on the edge, configure [shepherd](https://github.com/djmaze/shepherd) to auto-update your container images regularly.
Create `/var/data/config/shepherd/shepherd.env` as follows:
```
# Don't auto-update Plex or Emby (or Jellyfin), I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
```
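
The rest of `shepherd.env` and the `shepherd.yml` stack file are not visible in this diff. A rough sketch, assuming shepherd's documented `BLACKLIST_SERVICES` variable, the upstream `mazzolino/shepherd` image, and the usual pattern of a single replica running on a manager node with access to the docker socket (*service names and paths are illustrative*):

```
# Illustrative only - BLACKLIST_SERVICES comes from shepherd's own README,
# and the service names below are placeholders:
echo 'BLACKLIST_SERVICES="plex_plex emby_emby jellyfin_jellyfin"' >> /var/data/config/shepherd/shepherd.env

# A minimal /var/data/config/shepherd/shepherd.yml:
cat > /var/data/config/shepherd/shepherd.yml <<'EOF'
version: "3"

services:
  shepherd:
    image: mazzolino/shepherd
    env_file: /var/data/config/shepherd/shepherd.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager    # shepherd needs a manager's docker API
EOF
```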
Launch shepherd by running `docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml`, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container on the latest available image.
## Summary
What have we achieved?
!!! summary "Summary"
    Created:

    * [X] [Docker swarm cluster](/ha-docker-swarm/design/)
--8<-- "5-min-install.md"
## Chef's Notes 📓
# Keepalived

Normally this is done using a HA loadbalancer, but since Docker Swarm already provides load-balancing across the cluster via its routing mesh, all we really need is a single virtual IP which can move between the docker nodes.
This is accomplished with the use of keepalived on at least two nodes.

## Ingredients
!!! summary "Ingredients"
    New:

    * [ ] At least 3 x IPv4 addresses (*one for each node and one for the virtual IP*[^1])
## Preparation
### Enable IPVS module
On all nodes which will participate in keepalived, we need the "ip_vs" kernel module, in order to permit services to bind to non-local interface addresses.
Set this up once-off for both the primary and secondary nodes, by running:

```
modprobe ip_vs
```
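
Note that `modprobe` only loads the module until the next reboot. One way to confirm it is loaded, and have it load automatically at boot (*the `modules-load.d` path assumes a systemd-based distro*):

```
# Confirm the module is loaded:
lsmod | grep ip_vs

# Load ip_vs automatically at boot via systemd's modules-load.d mechanism:
echo "ip_vs" > /etc/modules-load.d/ip_vs.conf
```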
Assuming your IPs are as follows:
```
* 192.168.4.1 : Primary
* 192.168.4.2 : Secondary
* 192.168.4.3 : Virtual
```
Run the following on the primary:

```
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
osixia/keepalived:2.0.20
```
And on the secondary[^2]:
```
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
osixia/keepalived:2.0.20
```
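
The environment variables that actually configure keepalived are elided from the hunks above. A fuller sketch of the primary's command, using the variables documented for the osixia/keepalived image and the example addressing scheme above, might look like the following; the interface name, priority and password are placeholders, and the secondary would be identical apart from a lower `KEEPALIVED_PRIORITY`:

```
# Sketch only - variable names come from the osixia/keepalived image docs;
# adjust the interface, password and priority to suit your environment:
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  -e KEEPALIVED_INTERFACE=eth0 \
  -e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['192.168.4.3']" \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1','192.168.4.2']" \
  -e KEEPALIVED_PRIORITY=200 \
  -e KEEPALIVED_PASSWORD=supersecret \
  osixia/keepalived:2.0.20
```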
That's it. Each node will talk to the other via unicast (*no need to un-firewall multicast addresses*), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.
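
To check which node currently holds the VIP, and that it fails over, something like the following works (*`eth0` is an assumption; substitute your interface name*):

```
# Run on each node - only the current keepalived master should list the VIP:
ip -4 addr show eth0 | grep 192.168.4.3

# Stop keepalived on the master and watch the VIP move to the other node:
docker stop keepalived
```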
## Summary
What have we achieved?
!!! summary "Summary"
    Created:

    * [X] A Virtual IP to which all cluster traffic can be forwarded externally, making it "*Highly Available*"
--8<-- "5-min-install.md"
## Chef's notes 📓
[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.