mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2026-01-02 19:39:21 +00:00
Tidy up like it's 2019
@@ -4,21 +4,20 @@ For truly highly-available services with Docker containers, we need an orchestra
 
 ## Ingredients
 
-* 3 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum would be:
-    * 1 x vCPU
-    * 1GB RAM
-    * 10GB HDD
-* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
+!!! summary
+    * [X] 3 x modern linux hosts (*bare-metal or VMs*). A reasonable minimum would be:
+        * 2 x vCPU
+        * 2GB RAM
+        * 20GB HDD
+    * [X] Hosts must be within the same subnet, and connected on a low-latency link (*i.e., no WAN links*)
 
 ## Preparation
 
 ### Release the swarm!
 
-Now, to launch my swarm:
+Now, to launch a swarm. Pick a target node, and run `docker swarm init`
 
-```docker swarm init```
-
-Yeah, that was it. Now I have a 1-node swarm.
+Yeah, that was it. Seriously. Now we have a 1-node swarm.
 
 ```
 [root@ds1 ~]# docker swarm init
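One wrinkle worth noting: on hosts with more than one network interface, `docker swarm init` can't guess which address the other managers should reach this node on, and will ask for `--advertise-addr`. A minimal sketch, assuming `192.0.2.10` is a placeholder for this node's cluster-facing IP (substitute your own):

```shell
# Placeholder - substitute the cluster-facing IP of this node.
ADVERTISE_ADDR="192.0.2.10"

# On a multi-homed host, tell swarm which address to advertise to peer managers.
INIT_CMD="docker swarm init --advertise-addr ${ADVERTISE_ADDR}"
echo "${INIT_CMD}"
```

On a single-NIC host (as in this recipe), plain `docker swarm init` is enough.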
@@ -35,7 +34,7 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow
 [root@ds1 ~]#
 ```
 
-Run ```docker node ls``` to confirm that I have a 1-node swarm:
+Run `docker node ls` to confirm that you have a 1-node swarm:
 
 ```
 [root@ds1 ~]# docker node ls
@@ -44,7 +43,7 @@ b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leade
 [root@ds1 ~]#
 ```
 
-Note that when I ran ```docker swarm init``` above, the CLI output gave me a command to run to join further nodes to my swarm. This would join the nodes as __workers__ (as opposed to __managers__). Workers can easily be promoted to managers (and demoted again), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
+Note that when you run `docker swarm init` above, the CLI output gives you a command to run to join further nodes to your swarm. This command would join the nodes as __workers__ (*as opposed to __managers__*). Workers can easily be promoted to managers (*and demoted again*), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
 
 On the first swarm node, generate the necessary token to join another manager by running ```docker swarm join-token manager```:
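The join flow this hunk describes can be sketched as follows. The token and address below are placeholders, not real values; the real join command (including its token) is printed by `docker swarm join-token manager` on the first node:

```shell
# Placeholders - the real values come from the join-token output on ds1.
TOKEN="SWMTKN-1-placeholder"
MANAGER_ADDR="ds1.example.com:2377"   # 2377 is swarm's cluster-management port

# This is the shape of the command you paste on each additional node.
JOIN_CMD="docker swarm join --token ${TOKEN} ${MANAGER_ADDR}"
echo "${JOIN_CMD}"
```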
@@ -59,7 +58,7 @@ To add a manager to this swarm, run the following command:
 [root@ds1 ~]#
 ```
 
-Run the command provided on your second node to join it to the swarm as a manager. After adding the second node, the output of ```docker node ls``` (on either host) should reflect two nodes:
+Run the command provided on your other nodes to join them to the swarm as managers. After adding a node, the output of ```docker node ls``` (on any node) should reflect all the nodes:
 
 
 ````
@@ -70,19 +69,6 @@ xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reach
 [root@ds2 davidy]#
 ````
 
-Repeat the process to add your third node.
-
-Finally, ```docker node ls``` should reflect that you have 3 reachable manager nodes, one of whom is the "Leader":
-
-```
-[root@ds3 ~]# docker node ls
-ID                          HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS
-36b4twca7i3hkb7qr77i0pr9i   ds1.example.com   Ready    Active         Reachable
-l14rfzazbmibh1p9wcoivkv1s * ds3.example.com   Ready    Active         Reachable
-tfsgxmu7q23nuo51wwa4ycpsj   ds2.example.com   Ready    Active         Leader
-[root@ds3 ~]#
-```
-
 ### Setup automated cleanup
 
 Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you'll soon find yourself losing gigabytes of disk space to old, unused images.
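The recipe's insistence on 3 managers comes from Raft quorum arithmetic: a swarm remains manageable only while a majority of managers, floor(N/2)+1, is reachable. A quick sketch of why 3 is the useful minimum:

```shell
# Raft quorum: a majority of managers must be up for the swarm to be managed.
managers=3
quorum=$(( managers / 2 + 1 ))      # floor(3/2) + 1 = 2
tolerated=$(( managers - quorum ))  # 3 - 2 = 1 manager may fail
echo "${managers} managers: quorum ${quorum}, tolerates ${tolerated} failure(s)"
```

With only 2 managers the quorum is also 2, so losing either one halts swarm management; 3 is the smallest fault-tolerant manager count.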
@@ -135,8 +121,8 @@ Create /var/data/config/shepherd/shepherd.env as follows:
 ```
 # Don't auto-update Plex or Emby, I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
 BLACKLIST_SERVICES="plex_plex emby_emby"
-# Run every 24 hours. I _really_ don't need new images more frequently than that!
-SLEEP_TIME=1440
+# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
+SLEEP_TIME=86400
 ```
 
 Then create /var/data/config/shepherd/shepherd.yml as follows:
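The `SLEEP_TIME` change above (1440 → 86400) follows from treating the value as seconds rather than minutes, as the new comment suggests; the arithmetic behind the new value:

```shell
# 24 hours expressed in seconds - the value the diff settles on.
hours=24
sleep_time=$(( hours * 60 * 60 ))
echo "SLEEP_TIME=${sleep_time}"
```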
@@ -178,7 +164,7 @@ echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
 
 ## Chef's Notes
 
-### Tip your waiter (donate) 👏
+### Tip your waiter (support me) 👏
 
 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏