mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-13 01:36:23 +00:00
ElkarBackup, how you doin?
@@ -10,11 +10,10 @@

## Recent additions to work-in-progress

* [IPFS Cluster](/recipes/ipfs-cluster/), providing inter-planetary docker swarm shared storage with the inter-planetary filesystem (_29 Nov 2018_)

## Recently added recipes

* [ElkarBackup](/recipes/elkarbackup/), a beautiful GUI-based backup solution built on rsync/rsnapshot (_1 Jan 2019_)
* Added [Collabora Online](/recipes/collabora-online), a rich document editor within [NextCloud](/recipes/nextcloud/) (_think "headless LibreOffice"_)
* Added [phpIPAM](/recipes/phpipam), an IP address management tool (_18 Dec 2018_)
* Added [KeyCloak](/recipes/keycloak), an open source identity and access management solution which backends neatly into [OpenLDAP](/recipes/openldap/) (_among other providers_), providing true SSO (_13 Dec 2018_)
BIN  manuscript/images/elkarbackup-setup-1.png  (new file, 963 KiB)
BIN  manuscript/images/elkarbackup-setup-2.png  (new file, 93 KiB)
BIN  manuscript/images/elkarbackup-setup-3.png  (new file, 62 KiB)
@@ -1,26 +1,20 @@
hero: Real heroes backup their 💾

# Elkar Backup

Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.

<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

!!! important
    Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!

    [](https://www.observe.global/)

ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks to track incremental changes. I find this more reassuring than the blob of compressed (and possibly encrypted) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.
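If you've not seen the hardlink trick before, here's a quick sketch of how rsnapshot-style tools keep full-looking snapshots without duplicating unchanged data (the paths and job name below are made up for illustration):

```
# Take a second snapshot, hardlinking anything unchanged since snapshot.1
rsync -a --link-dest=/var/data/elkarbackup/backups/myjob/snapshot.1 \
      /source/data/ /var/data/elkarbackup/backups/myjob/snapshot.0/

# Unchanged files in both snapshots share an inode (link count > 1),
# so each extra snapshot only costs the space of what actually changed
ls -li /var/data/elkarbackup/backups/myjob/snapshot.{0,1}/somefile
```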

## Details

## Ingredients
@@ -36,18 +30,20 @@ What's missing from the recipe currently is:

We'll need several directories to bind-mount into our container, so create them in /var/data/elkarbackup:

```
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
mkdir -p /var/data/runtime/elkarbackup/db
mkdir -p /var/data/config/elkarbackup
```

### Prepare environment

Create /var/data/config/elkarbackup/elkarbackup.env, and populate it with the following variables:

```
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'

#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
@@ -62,6 +58,23 @@ OAUTH2_PROXY_CLIENT_SECRET=

OAUTH2_PROXY_COOKIE_SECRET=
```

Create ```/var/data/config/elkarbackup/elkarbackup-db-backup.env```, and populate it with the following, to set up the nightly database dump:

!!! note
    Running a daily database dump might be considered overkill, since ElkarBackup can be configured to backup its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.

    Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?

    No, me neither :shrug:

```
# For database backup (keep 7 days of daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:
@@ -75,16 +88,35 @@ version: "3"

services:
  db:
    image: mariadb:10.4
    env_file: /var/data/config/elkarbackup/elkarbackup.env
    networks:
      - internal
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/runtime/elkarbackup/db:/var/lib/mysql

  # Sidecar which periodically dumps the ElkarBackup database to /dump,
  # keeping only the newest BACKUP_NUM_KEEP dumps
  db-backup:
    image: mariadb:10.4
    env_file: /var/data/config/elkarbackup/elkarbackup-db-backup.env
    volumes:
      - /var/data/elkarbackup/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  app:
    image: elkarbackup/elkarbackup
    env_file: /var/data/config/elkarbackup/elkarbackup.env
    networks:
      - internal
@@ -136,11 +168,84 @@ networks:

Launch the ElkarBackup stack by running ```docker stack deploy elkarbackup -c <path-to-docker-compose.yml>```
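
For example, something like this (a quick sketch; the compose file path is just a placeholder for wherever you saved yours):

```
# Deploy (or update) the stack from your compose file
docker stack deploy elkarbackup -c /var/data/config/elkarbackup/elkarbackup.yml

# Confirm that the db, db-backup and app services all start up
docker stack services elkarbackup
```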

Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":

![ElkarBackup login screen](/images/elkarbackup-setup-1.png)

First things first: change your password, using the gear icon and the "Change Password" link:

![ElkarBackup password change](/images/elkarbackup-setup-2.png)

Have a read of the [Elkarbackup Docs](https://docs.elkarbackup.org/docs/introduction.html) - they introduce the concepts of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), and **policies** (_when data is backed up and how long it's kept_).

At the very least, you want to set up a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** ```/var/data/runtime``` and ```/var/data/elkarbackup/backups``` (_unless you **like** "backup-ception"_).
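
For reference, the values I'd plug into that job look roughly like this (a sketch only; adjust the excludes for whatever else lives under /var/data on your swarm, and note that your ElkarBackup version may label the exclude field slightly differently):

```
# Job path:
/var/data

# Job excludes (one per line):
/var/data/runtime
/var/data/elkarbackup/backups
```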

### Copying your backup data offsite

From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.

Here's a variation on the standard script, which I've employed:

```
#!/bin/bash

REPOSITORY=/var/data/elkarbackup/backups
SERVER=<target host member of docker swarm>
SERVER_USER=elkarbackup
UPLOADS=/var/data/elkarbackup/uploads
TARGET=/srv/backup/elkarbackup

echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`

# Each job lives two directory levels below the repository (clientId/jobId)
ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
  echo Backing up job $jobId
  mkdir -p $TARGET/$jobId 2>/dev/null
  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done

echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads

USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`

echo "Backup finished successfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```

!!! note
    You'll note that I don't use the script to create a mysql dump (_since Elkar is running within a container anyway_); rather, I just rely on the database dump which is made nightly into ```/var/data/elkarbackup/database-dump/```.

### Restoring data

Repeat after me: "**It's not a backup unless you've tested a restore**"

!!! note
    I had some difficulty making restores work reliably in the WebUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not being found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the WebUI or from the filesystem directly.

To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:

![ElkarBackup restore button](/images/elkarbackup-setup-3.png)

This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.
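
Since the backups are just rsnapshot-style files on disk, you can also retrieve data directly from the filesystem. A sketch under assumptions about the layout (the client/job IDs and snapshot names are illustrative; browse your own backups directory first):

```
# Explore the backup tree to find the client/job/snapshot you're after
find /var/data/elkarbackup/backups -maxdepth 3 -type d | sort

# Then copy the file(s) you need straight out of the snapshot
cp -a /var/data/elkarbackup/backups/<clientId>/<jobId>/<snapshot>/path/to/file /tmp/
```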

!!! important
    Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!

    [](https://www.observe.global/)

## Chef's Notes

1. If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You'd also need to add the traefik_public network to the app service.
2. The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

### Tip your waiter (donate) 👏
@@ -54,6 +54,7 @@ pages:
- Jackett: recipes/autopirate/jackett.md
- Heimdall: recipes/autopirate/heimdall.md
- End: recipes/autopirate/end.md
- ElkarBackup: recipes/elkarbackup.md
- Emby: recipes/emby.md
- Home Assistant:
- Start: recipes/homeassistant.md
@@ -108,7 +109,6 @@ pages:
# - KeyCloak: recipes/sso-stack/keycloak.md
- Work-in-Progress:
# - MatterMost: recipes/mattermost.md
- IPFS Cluster: recipes/ipfs-cluster.md
# - HackMD: recipes/hackmd.md
# - Mastodon: recipes/mastodon.md