
Update README

This commit is contained in:
David Young
2017-07-26 00:00:49 +12:00
parent 20d77ffb61
commit 704aa02fb3
22 changed files with 71 additions and 525 deletions


@@ -1,3 +0,0 @@
# About
This is the advanced section. It's for geeks who are comfortable working on the command line, who understand networking, and who enjoy a challenge!


@@ -1,278 +0,0 @@
# Docker Swarm
For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as defined in Docker 1.13) is the simplest way to achieve redundancy, such that a single Docker host can be turned off without interrupting any of our services.
## Ingredients
* 2 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum per host would be:
    * 1 x vCPU
    * 1GB RAM
    * 10GB HDD
* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
## Preparation
### Install CentOS Atomic hosts
I decided to use CentOS Atomic rather than full-blown CentOS7, for the following reasons:
1. I want less responsibility for maintaining the system, including ensuring regular software updates and reboots. Atomic's immutable nature means the OS is largely read-only, and updates are "atomic" (haha) procedures which can be easily rolled back if required.
2. For someone used to administering servers individually, Atomic is a PITA. You have to employ [tricky](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) [tricks](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) to get it to install in a non-cloud environment, and it's not designed for tweaking or customizing beyond what cloud-config is capable of. For my purposes, this is good, because it forces me to change my thinking - to consider every daemon as a container, and every config as code, to be checked in and version-controlled.
3. I want the design to be as "portable" as possible. While I run it on VPSs now, I may want to migrate it to a "cloud" provider in the future, and I'll want the most portable, reproducible design.
On each Atomic host, switch from the stock docker engine to docker-latest (to get a swarm-mode-capable version), and apply any pending Atomic upgrades:
```
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
atomic host upgrade
```
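To confirm the switch took effect, something like the following should show docker-latest active, with a swarm-capable server version:
```
# confirm the docker-latest engine is now the active daemon
systemctl is-active docker-latest
docker version --format '{{.Server.Version}}'
```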
## Setup Swarm
Setting up swarm really is straightforward. You need to ensure that the nodes can talk to each other.
In my case, my nodes are on a shared subnet with other VPSs, so I wanted to ensure that they were not exposed more than necessary. If I were doing this within a cloud infrastructure which provided separation of instances, I wouldn't need to be so specific:
```
# Permit Docker Swarm traffic from other nodes/managers
# 2376/tcp: Docker daemon TLS, 2377/tcp: swarm cluster management
# 7946/tcp+udp: node-to-node communication, 4789/udp: overlay (VXLAN) traffic
-A INPUT -s 202.170.164.47 -p tcp --dport 2376 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 2377 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 4789 -j ACCEPT
```
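Assuming the iptables service manages these rules via /etc/sysconfig/iptables (my assumption - adjust for your firewall tooling), reload and verify:
```
# reload the ruleset, then confirm the swarm ports are permitted
systemctl reload iptables
iptables -nL INPUT | grep -E '2377|7946|4789'
```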
Now, to launch my swarm:
```docker swarm init```
Yeah, that was it. Now I have a 1-node swarm.
```
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
202.170.164.47:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]#
```
Right, so now I have a 1-node swarm:
```
[root@ds1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leader
[root@ds1 ~]#
```
If I followed the "join" command above, I'd end up with a worker node. In my case, I actually want another manager, for full HA, so I followed the instructions and ran ```docker swarm join-token manager``` instead:
```
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
202.170.164.47:2377
[root@ds1 ~]#
```
I ran the join command on ds2, and confirmed that both nodes were now in the swarm:
````
[root@ds2 davidy]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 ds1.funkypenguin.co.nz Ready Active Leader
xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reachable
[root@ds2 davidy]#
````
## Setup GlusterFS
For shared storage across the swarm, I used GlusterFS. First, carve out a logical volume, format it with XFS, and mount it:
```
lvcreate -l 100%FREE -n gfs /dev/atomicos
mkfs.xfs -i size=512 /dev/atomicos/gfs
mkdir -p /srv/glusterfs
echo '/dev/atomicos/gfs /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
```
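A quick sanity check that the volume was created and mounted:
```
# the gfs logical volume should exist, and /srv/glusterfs should be an XFS mount
lvs atomicos/gfs
df -h /srv/glusterfs
```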
Next, run the GlusterFS server as a privileged container on each host:
```
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/srv/glusterfs:/var/srv/glusterfs \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
```
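Before proceeding, it's worth confirming the container stayed up:
```
# the glusterfs-server container should be running, with glusterd logging happily
docker ps --filter name=glusterfs-server
docker logs --tail 20 glusterfs-server
```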
Now exec into the container, and "probe" its peer, to establish the gluster cluster:
```
[root@ds1 ~]# docker exec -it glusterfs-server bash
[root@glusterfs-server /]#
```
```
[root@glusterfs-server /]# gluster peer probe ds2
peer probe: success.
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1
Hostname: ds2
Uuid: 9fbc1985-4e8d-4380-9c10-3c699ebcb10c
State: Peer in Cluster (Connected)
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Run ```gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0``` as below to create the replicated volume:
```
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
```
Run ```gluster volume start gv0``` to start it:
```
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
```
Exit out of the container:
```
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Create your mountpoint on the host, and mount the gluster volume:
```
mkdir /srv/data
HOSTNAME=`hostname -s`
echo "$HOSTNAME:/gv0 /srv/data glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a && mount
```
Alternatively, mount the volume manually. On the primary:
```
mount -t glusterfs ds1:/gv0 /srv/data/
```
And on the secondary:
```
mkdir /srv/data
mount -t glusterfs ds2:/gv0 /srv/data/
```
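As a quick sanity check of replication (hostnames as above), write a file on one node and read it on the other:
```
# on ds1
echo "hello from ds1" > /srv/data/replication-test
# on ds2 - the file should appear with the same content
cat /srv/data/replication-test
```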
Install docker-compose:
````
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
````
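Confirm the binary runs:
````
docker-compose --version
````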
### Atomic hosts
Deploy a stack (traefik, in this example) with:
```
docker stack deploy traefik -c traefik.yml
```
You'll need to deal with SELinux though :( - I had to set it to permissive to get it working. This seemed to work:
https://github.com/dpw/selinux-dockersock
````
mkdir ~/dockersock
cd ~/dockersock
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/Makefile
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/dockersock.te
make && semodule -i dockersock.pp
````
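Confirm the policy module loaded:
````
semodule -l | grep dockersock
````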
However... GlusterFS still doesn't support SELinux, so until that's sorted, you have to disable SELinux anyway with ```setenforce 0```, in order for _ANY_ containers to write to the GlusterFS FUSE partition.
I still need to add something to rc.local to make the GlusterFS volume mount at boot. _Maybe_ this works:
```
setsebool -P virt_sandbox_use_fusefs on
```
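You can check whether the boolean took effect with:
```
getsebool virt_sandbox_use_fusefs
```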
I added ```{"experimental":true}``` to /etc/docker-latest/daemon.json to enable logs of deployed services. I.e., I changed this:
```
{
    "log-driver": "journald",
    "signature-verification": false
}
```
To this:
```
{
    "log-driver": "journald",
    "signature-verification": false,
    "experimental": true
}
```
!!! note
    Note the comma after "false" above.
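After editing daemon.json, restart the engine for the flag to take effect. Then (assuming a stack named traefik is deployed - the service name below is just an example) service logs become available:
```
systemctl restart docker-latest
# tail a service's logs - use a real name from 'docker service ls'
docker service logs traefik_traefik
```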


@@ -1,226 +0,0 @@
# keepalived
## On both hosts
The design for redundant Docker hosts requires a virtual IP for high availability. To enable this, we install the "keepalived" daemon on both hosts:
````yum -y install keepalived````
Below, we'll configure a very basic primary/secondary configuration.
!!! note
    If you have a firewall on your hosts, you'll need to permit VRRP traffic, as follows (for both the INPUT and OUTPUT rules, the destination is 224.0.0.18, a multicast address):
````
# permit keepalived in
-A INPUT -i eth0 -d 224.0.0.18 -j ACCEPT
# permit keepalived out
-A OUTPUT -o eth0 -d 224.0.0.18 -j ACCEPT
````
## On the primary
Configure keepalived (note the priority, which must be higher than on the secondary):
````
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
state MASTER
interface eth0
virtual_router_id 42
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass $PASS
}
virtual_ipaddress {
$VIP
}
}
EOF
systemctl enable keepalived
systemctl start keepalived
````
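Once keepalived starts on the primary, the VIP should appear on eth0 (interface name as per the config above):
````
# look for your HA IP in the output
ip addr show eth0
````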
## On the secondary
Repeat the same on the secondary (all that changes is the priority - the priority of the secondary must be lower than that of the primary):
````
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
state MASTER
interface eth0
virtual_router_id 42
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass $PASS
}
virtual_ipaddress {
$VIP
}
}
EOF
systemctl enable keepalived
systemctl start keepalived
````
Check the state of keepalived on both nodes by running
````systemctl status keepalived````
## Confirm HA function
You should now be able to ping your HA IP address, and you can test the HA function by running ````tail -f /var/log/messages | grep Keepalived```` on the secondary node, and turning keepalived off/on on the primary node, by running ````systemctl stop keepalived && sleep 10s && systemctl start keepalived````.
## Docker
On both hosts, run:
````
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum install docker-ce
sudo systemctl start docker
sudo docker run hello-world
````
## Setup Swarm
````
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (25fw5695wkqxm8mtwqnktwykr) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-e9rw3a8pi53jhlghuyscm52bn \
202.170.161.87:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
202.170.161.87:2377
[root@ds1 ~]#
````
Added the second host to the swarm, then promoted it.
````
[root@ds2 ~]# docker swarm join \
> --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
> 202.170.161.87:2377
This node joined a swarm as a manager.
[root@ds2 ~]#
````
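At this point, both nodes should show as managers:
````
docker node ls
````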
## Setup GlusterFS
Format a dedicated partition for GlusterFS on both hosts, and mount it:
````
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /srv/glusterfs
echo '/dev/vdb1 /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
````
Then run the GlusterFS server as a privileged container on both hosts:
````
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/srv/glusterfs:/var/srv/glusterfs \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
````
From within the container on ds2, probe the peer, then create and start the replicated volume:
````
[root@ds2 ~]# gluster peer probe ds1
gluster volume create gv0 replica 2 ds1:/srv/glusterfs/gv0 ds2:/srv/glusterfs/gv0
gluster volume start gv0
````
Mount the volume on the primary:
````
[root@ds1 ~]# mkdir /srv/data
[root@ds1 ~]# mount -t glusterfs ds1:/gv0 /srv/data/
````
And on the secondary:
````
mkdir /srv/data
mount -t glusterfs ds2:/gv0 /srv/data/
````
Fstab entries to make the mounts persistent (adjust the device and hostname per node):
````
/dev/VG-vda3/gv0 /srv/glusterfs xfs defaults 1 2
ds2:/gv0 /srv/data glusterfs defaults,_netdev 0 0
````
Install docker-compose:
````
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
````
### Atomic hosts
On Atomic hosts, switch to the docker-latest engine and apply upgrades:
````
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
atomic host upgrade
````
Deploy a stack with:
````
docker stack deploy traefik -c traefik.yml
````
You'll need to deal with SELinux though :( - I had to set it to permissive to get it working. This seemed to work:
https://github.com/dpw/selinux-dockersock
I still need to add something to rc.local to make the GlusterFS volume mount at boot.
I added ```{"experimental":true}``` to /etc/docker/daemon.json to enable logs of deployed services.
echo "modprobe ip_vs" >> /etc/rc.local
for primary / secondary keepalived
````
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['202.170.164.47', '202.170.164.48']" \
  -e KEEPALIVED_VIRTUAL_IPS=202.170.164.49 \
  -e KEEPALIVED_PRIORITY=100 \
  osixia/keepalived:1.3.5
````
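Check that the containerized keepalived started cleanly, and that the VIP (202.170.164.49 above) landed on the current master:
````
docker logs keepalived
ip addr show eth0
````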


@@ -1 +0,0 @@
docker stack deploy traefik -c docker-compose.yml