mirror of https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-13 01:36:23 +00:00

Update README

.gitignore (vendored, new file, 37 lines)
```
# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so

# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip

# Logs and databases #
######################
*.log
*.sql
*.sqlite

# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
```
1. "Recipes" are sorted by degree of geekiness required to complete them. Relatively straightforward projects are "beginner", more complex projects are "intermediate", and the really fun ones are "advanced".
2. Each recipe contains enough detail in a single page to take a project to completion.
3. When there are optional add-ons/integrations possible for a project (i.e., the addition of "smart LED bulbs" to Home Assistant), this will be reflected either as a brief "Chef's note" after the recipe, or, if substantial enough, as a sub-page of the main project.

## Requirements
# About

This is the advanced section. It's for geeks who are proficient on the command line, understand networking, and enjoy a challenge!
# Docker Swarm

For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as defined at 1.13) is the simplest way to achieve redundancy, such that a single Docker host could be turned off without any of our services being interrupted.

## Ingredients
* 2 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum per host would be:
    * 1 x vCPU
    * 1GB RAM
    * 10GB HDD
* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
## Preparation

### Install CentOS Atomic hosts

I decided to use CentOS Atomic rather than full-blown CentOS 7, for the following reasons:

1. I want less responsibility for maintaining the system, including ensuring regular software updates and reboots. Atomic's idempotent nature means the OS is largely read-only, and updates/rollbacks are "atomic" (haha) procedures, which can easily be rolled back if required.
2. For someone used to administering servers individually, Atomic is a PITA. You have to employ [tricky](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) [tricks](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) to get it to install in a non-cloud environment. It's not designed for tweaking or customizing beyond what cloud-config is capable of. For my purposes, this is good, because it forces me to change my thinking - to consider every daemon as a container, and every config as code, to be checked in and version-controlled. Atomic forces this thinking on you.
3. I want the design to be as "portable" as possible. While I run it on VPSs now, I may want to migrate it to a "cloud" provider in the future, and I'll want the most portable, reproducible design.
```
# Switch the Atomic host from "docker" to "docker-latest", then update the host OS
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker

atomic host upgrade
```
## Setup Swarm

Setting up swarm really is straightforward. You need to ensure that the nodes can talk to each other.

In my case, my nodes are on a shared subnet with other VPSs, so I wanted to ensure that they were not exposed more than necessary. If I were doing this within a cloud infrastructure which provided separation of instances, I wouldn't need to be so specific:
```
# Permit Docker Swarm traffic from the other nodes/managers:
#   2376/tcp (docker daemon TLS), 2377/tcp (cluster management),
#   7946/tcp+udp (node communication), 4789/udp (overlay network)
-A INPUT -s 202.170.164.47 -p tcp --dport 2376 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 2377 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 4789 -j ACCEPT
```
Now, to launch my swarm:

```docker swarm init```

Yeah, that was it. Now I have a 1-node swarm:

```
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
    202.170.164.47:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@ds1 ~]#
```
Right, so I have a 1-node swarm:

```
[root@ds1 ~]# docker node ls
ID                           HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 *  ds1.funkypenguin.co.nz  Ready   Active        Leader
[root@ds1 ~]#
```
If I followed the "join" command above, I'd end up with a worker node. In my case, I actually want another manager, so that I have full HA, so I ran ```docker swarm join-token manager``` instead:

```
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
    202.170.164.47:2377

[root@ds1 ~]#
```
I run the command:

```
[root@ds2 davidy]# docker node ls
ID                           HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8    ds1.funkypenguin.co.nz  Ready   Active        Leader
xmw49jt5a1j87a6ihul76gbgy *  ds2.funkypenguin.co.nz  Ready   Active        Reachable
[root@ds2 davidy]#
```
```
Swarm initialized: current node (25fw5695wkqxm8mtwqnktwykr) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-e9rw3a8pi53jhlghuyscm52bn \
    202.170.161.87:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
    202.170.161.87:2377

[root@ds1 ~]#
```
Added the second host to the swarm, joining it directly as a manager with the manager token:

```
[root@ds2 ~]# docker swarm join \
>     --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
>     202.170.161.87:2377
This node joined a swarm as a manager.
[root@ds2 ~]#
```
Prepare a brick filesystem for GlusterFS on each host:

```
# Use the remaining space in the atomicos VG for a gluster brick
lvcreate -l 100%FREE -n gfs /dev/atomicos
mkfs.xfs -i size=512 /dev/atomicos/gfs
mkdir -p /srv/glusterfs
echo '/dev/atomicos/gfs /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
```
```
docker run \
  -h glusterfs-server \
  -v /etc/glusterfs:/etc/glusterfs:z \
  -v /var/lib/glusterd:/var/lib/glusterd:z \
  -v /var/log/glusterfs:/var/log/glusterfs:z \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  -v /var/srv/glusterfs:/var/srv/glusterfs \
  -d --privileged=true --net=host \
  --restart=always \
  --name="glusterfs-server" \
  gluster/gluster-centos
```
Now exec into the container, and "probe" its peer, to establish the gluster cluster:

```
[root@ds1 ~]# docker exec -it glusterfs-server bash
[root@glusterfs-server /]#
```
```
[root@glusterfs-server /]# gluster peer probe ds2
peer probe: success.
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1

Hostname: ds2
Uuid: 9fbc1985-4e8d-4380-9c10-3c699ebcb10c
State: Peer in Cluster (Connected)
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Run ```gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0``` as below to create the replicated volume:

```
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
```
Run ```gluster volume start gv0``` to start it:

```
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
```
Exit out of the container:

```
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Create your mountpoint on the host, and mount the gluster volume:

```
mkdir /srv/data
HOSTNAME=`hostname -s`
echo "$HOSTNAME:/gv0 /srv/data glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a && mount
```
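Since each node mounts the volume from itself, the `hostname -s` substitution above produces a node-local fstab entry. A quick illustration of what gets appended (hostname "ds1" assumed here purely for demonstration):

```shell
# Illustration only: show the fstab line the snippet above would append,
# assuming `hostname -s` returns "ds1" on the first node.
HOSTNAME=ds1
echo "$HOSTNAME:/gv0 /srv/data glusterfs defaults,_netdev 0 0"
# → ds1:/gv0 /srv/data glusterfs defaults,_netdev 0 0
```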
Or mount it manually:

```
# on the primary
mount -t glusterfs ds1:/gv0 /srv/data/

# on the secondary
mkdir /srv/data
mount -t glusterfs ds2:/gv0 /srv/data/
```
Install docker-compose:

```
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
### Atomic hosts
Deploy the stack:

```
docker stack deploy traefik -c traefik.yml
```
We need to deal with SELinux though :( - I had to set it to permissive to get it working.

This seemed to work: https://github.com/dpw/selinux-dockersock
```
mkdir ~/dockersock
cd ~/dockersock
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/Makefile
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/dockersock.te
make && semodule -i dockersock.pp
```
However... glusterfs still doesn't support SELinux, so until that's sorted, you have to disable SELinux anyway with "setenforce 0", in order for _ANY_ containers to write to the glusterfs fuse partition.
We need to add something to rc.local to make the glusterfs mount come up at boot.
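One possible approach (a sketch, not verified on Atomic): append a late re-mount to rc.local, so the fuse mount is retried once the network and the gluster container are up. Demonstrated against a temp file stand-in here; on the real host you'd target /etc/rc.local:

```shell
# Sketch (assumption): the line we'd append to /etc/rc.local so the
# glusterfs fuse mounts are retried late in boot.
# A temp file stands in for /etc/rc.local in this demonstration.
RC=$(mktemp)
echo 'mount -a -t glusterfs' >> "$RC"
chmod +x "$RC"
cat "$RC"
# → mount -a -t glusterfs
```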
__Maybe__ this works:

```
setsebool -P virt_sandbox_use_fusefs on
```
Added `{"experimental":true}` to /etc/docker-latest/daemon.json, to enable logs of deployed services.

I.e., I changed this:
```
{
    "log-driver": "journald",
    "signature-verification": false
}
```
To this:
```
{
    "log-driver": "journald",
    "signature-verification": false,
    "experimental": true
}
```
!!! note
    Note the comma after "false" above.
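Before restarting the daemon, it's worth checking that the edited JSON is valid - a stray comma or missing brace will stop Docker from starting. A minimal sketch (shown against a temp copy; on the host you'd point it at /etc/docker-latest/daemon.json instead):

```shell
# Write the intended daemon.json content to a temp copy, then validate it
# with Python's JSON parser before installing it for real.
cat > /tmp/daemon.json.check <<'EOF'
{
  "log-driver": "journald",
  "signature-verification": false,
  "experimental": true
}
EOF
python3 -m json.tool /tmp/daemon.json.check > /dev/null && echo "daemon.json OK"
# → daemon.json OK
```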
# keepalived
## On both hosts

The design for redundant Docker hosts requires a virtual IP for high availability. To enable this, we install the "keepalived" daemon on both hosts:
```yum -y install keepalived```
Below, we'll configure a very basic primary/secondary configuration.
!!! note
    If you have a firewall on your hosts, you'll need to permit VRRP traffic, as follows (note that for both the INPUT and OUTPUT rules, the destination is 224.0.0.18, a multicast address):
||||||
````
|
|
||||||
# permit keepalived in
|
|
||||||
-A INPUT -i eth0 -d 224.0.0.18 -j ACCEPT
|
|
||||||
|
|
||||||
# permit keepalived out
|
|
||||||
-A OUTPUT -o eth0 -d 224.0.0.18 -j ACCEPT
|
|
||||||
````
|
|
||||||
|
|
||||||
## On the primary

Configure keepalived (note the priority):

```
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
  state MASTER
  interface eth0
  virtual_router_id 42
  priority 200
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass $PASS
  }
  virtual_ipaddress {
    $VIP
  }
}
EOF
systemctl enable keepalived
systemctl start keepalived
```
## On the secondary

Repeat the same on the secondary (all that changes is the priority - the priority of the secondary must be lower than that of the primary):

```
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
  state MASTER
  interface eth0
  virtual_router_id 42
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass $PASS
  }
  virtual_ipaddress {
    $VIP
  }
}
EOF
systemctl enable keepalived
systemctl start keepalived
```
Check the state of keepalived on both nodes by running ```systemctl status keepalived```.
## Confirm HA function

You should now be able to ping your HA IP address. You can test the HA function by running ```tail -f /var/log/messages | grep Keepalived``` on the secondary node, and turning keepalived off/on on the primary node, by running ```systemctl stop keepalived && sleep 10s && systemctl start keepalived```.
## Docker

On both hosts, run:
```
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

sudo yum makecache fast
sudo yum install docker-ce

sudo systemctl start docker
sudo docker run hello-world
```
```
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /srv/glusterfs
echo '/dev/vdb1 /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
```
```
[root@ds2 ~]# gluster peer probe ds1

gluster volume create gv0 replica 2 ds1:/srv/glusterfs/gv0 ds2:/srv/glusterfs/gv0
gluster volume start gv0
```
Load the ip_vs kernel module at boot (keepalived needs it):

```
echo "modprobe ip_vs" >> /etc/rc.local
```

For primary / secondary keepalived:
```
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['202.170.164.47', '202.170.164.48']" \
  -e KEEPALIVED_VIRTUAL_IPS=202.170.164.49 \
  -e KEEPALIVED_PRIORITY=100 \
  osixia/keepalived:1.3.5
```
```
docker stack deploy traefik -c docker-compose.yml
```
# 2-Factor authentication

## What is it?

## Why do we need it?
# Beginner

The recipes in the beginner section meet the following criteria:

1. They do not require command-line skills
docs/recipies/git-docker.md (new file, 29 lines)

# Introduction
Our HA platform design relies on Atomic OS, which contains only the bare-minimum elements needed to run containers.

So how can we use git on this system, to push/pull the changes we make to config files?
```
docker run -v /var/data/git-docker/data:/root funkypenguin/git-docker ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_ed25519.
Your public key has been saved in /root/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:uZtriS7ypx7Q4kr+w++nHhHpcRfpf5MhxP3Wpx3H3hk root@a230749d8d8a
The key's randomart image is:
+--[ED25519 256]--+
| .o .            |
| . ..o .         |
| + ....       ...|
| .. + .o . .  E= |
| o .o  S . . ++B |
| . o . . . +..+  |
| .o .. ... . .   |
|o..o..+.oo       |
|...=OX+.+.       |
+----[SHA256]-----+
[root@ds3 data]#
```
```
alias git='docker run -v $PWD:/var/data -v /var/data/git-docker/data:/root funkypenguin/git-docker git'
```
mkdocs pages:

```
- Keepalived: ha-docker-swarm/keepalived.md
- Docker Swarm Mode: ha-docker-swarm/docker-swarm-mode.md
- Traefik: ha-docker-swarm/traefik.md
# - Tiny Tiny RSS:
# - Basic: advanced/tiny-tiny-rss.md
# - Plugins: advanced/tiny-tiny-rss.md
# - Themes: advanced/tiny-tiny-rss.md

# - Home Assistant:
# - About: advanced/home-assistant/basic.md
```