
Update for leanpub preview

This commit is contained in:
AutoPenguin
2020-06-03 01:42:01 +00:00
parent 65dd34c7ea
commit 2e8e16157b
193 changed files with 12667 additions and 155 deletions

# Containers
While creating these recipes, I've often ended up building containers with my own tweaks or changes. Below is a list of all the containers I've built. All of them are automated builds, and the Dockerfiles and build logs are publicly available:
Name | Description | Badges
--|--|--
[funkypenguin/athena](https://hub.docker.com/r/funkypenguin/athena/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/athena.svg)](https://hub.docker.com/r/funkypenguin/athena/)| Athena cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/athena.svg)](https://hub.docker.com/r/funkypenguin/athena/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/athena.svg)](https://hub.docker.com/r/funkypenguin/athena/)
[funkypenguin/alertmanager-discord](https://hub.docker.com/r/funkypenguin/alertmanager-discord/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/alertmanager-discord.svg)](https://hub.docker.com/r/funkypenguin/alertmanager-discord/)| AlertManager-compatible webhook to send Prometheus alerts to a Discord channel |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/alertmanager-discord.svg)](https://hub.docker.com/r/funkypenguin/alertmanager-discord/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/alertmanager-discord.svg)](https://hub.docker.com/r/funkypenguin/alertmanager-discord/)
[funkypenguin/aeon](https://hub.docker.com/r/funkypenguin/aeon/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/aeon.svg)](https://hub.docker.com/r/funkypenguin/aeon/)| Aeon cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/aeon.svg)](https://hub.docker.com/r/funkypenguin/aeon/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/aeon.svg)](https://hub.docker.com/r/funkypenguin/aeon/)
[funkypenguin/bittube](https://hub.docker.com/r/funkypenguin/bittube/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/bittube.svg)](https://hub.docker.com/r/funkypenguin/bittube/)| BitTube cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/bittube.svg)](https://hub.docker.com/r/funkypenguin/bittube/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/bittube.svg)](https://hub.docker.com/r/funkypenguin/bittube/)
[funkypenguin/cryptonote-nodejs-pool](https://hub.docker.com/r/funkypenguin/cryptonote-nodejs-pool/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/cryptonote-nodejs-pool.svg)](https://hub.docker.com/r/funkypenguin/cryptonote-nodejs-pool/)| nodeJS-based mining pool for cryptonote-based mining pools, supporting advanced features like email/telegram notifications |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/cryptonote-nodejs-pool.svg)](https://hub.docker.com/r/funkypenguin/cryptonote-nodejs-pool/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/cryptonote-nodejs-pool.svg)](https://hub.docker.com/r/funkypenguin/cryptonote-nodejs-pool/)
[funkypenguin/conceal-core](https://hub.docker.com/r/funkypenguin/conceald/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/conceald.svg)](https://hub.docker.com/r/funkypenguin/conceald/)| Conceal cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/conceald.svg)](https://hub.docker.com/r/funkypenguin/conceald/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/conceald.svg)](https://hub.docker.com/r/funkypenguin/conceald/)
[funkypenguin/git-docker](https://hub.docker.com/r/funkypenguin/git-docker/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/git-docker.svg)](https://hub.docker.com/r/funkypenguin/git-docker/)| Git client in a docker container, for use on immutable OS (Atomic) hosts|[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/git-docker.svg)](https://hub.docker.com/r/funkypenguin/git-docker/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/git-docker.svg)](https://hub.docker.com/r/funkypenguin/git-docker/)
[funkypenguin/home-assistant](https://hub.docker.com/r/funkypenguin/home-assistant/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/home-assistant.svg)](https://hub.docker.com/r/funkypenguin/home-assistant/)| home-assistant |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/home-assistant.svg)](https://hub.docker.com/r/funkypenguin/home-assistant/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/home-assistant.svg)](https://hub.docker.com/r/funkypenguin/home-assistant/)
[funkypenguin/htpc-cron](https://hub.docker.com/r/funkypenguin/htpc-cron/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/htpc-cron.svg)](https://hub.docker.com/r/funkypenguin/htpc-cron/)| htpc-cron |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/htpc-cron.svg)](https://hub.docker.com/r/funkypenguin/htpc-cron/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/htpc-cron.svg)](https://hub.docker.com/r/funkypenguin/htpc-cron/)
[funkypenguin/kepl](https://hub.docker.com/r/funkypenguin/kepl/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/kepl.svg)](https://hub.docker.com/r/funkypenguin/kepl/)| KEPL cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/kepl.svg)](https://hub.docker.com/r/funkypenguin/kepl/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/kepl.svg)](https://hub.docker.com/r/funkypenguin/kepl/)
[funkypenguin/koson](https://hub.docker.com/r/funkypenguin/koson/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/koson.svg)](https://hub.docker.com/r/funkypenguin/koson/)| koson |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/koson.svg)](https://hub.docker.com/r/funkypenguin/koson/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/koson.svg)](https://hub.docker.com/r/funkypenguin/koson/)
[funkypenguin/loki](https://hub.docker.com/r/funkypenguin/loki/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/loki.svg)](https://hub.docker.com/r/funkypenguin/loki/)| loki |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/loki.svg)](https://hub.docker.com/r/funkypenguin/loki/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/loki.svg)](https://hub.docker.com/r/funkypenguin/loki/)
[funkypenguin/masari](https://hub.docker.com/r/funkypenguin/masari/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/masari.svg)](https://hub.docker.com/r/funkypenguin/masari/)| Masari cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/masari.svg)](https://hub.docker.com/r/funkypenguin/masari/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/masari.svg)](https://hub.docker.com/r/funkypenguin/masari/)
[funkypenguin/monero](https://hub.docker.com/r/funkypenguin/monero/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/monero.svg)](https://hub.docker.com/r/funkypenguin/monero/)| Monero cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/monero.svg)](https://hub.docker.com/r/funkypenguin/monero/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/monero.svg)](https://hub.docker.com/r/funkypenguin/monero/)
[funkypenguin/monkeytips](https://hub.docker.com/r/funkypenguin/monkeytips/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/monkeytips.svg)](https://hub.docker.com/r/funkypenguin/monkeytips/)| MonkeyTips cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/monkeytips.svg)](https://hub.docker.com/r/funkypenguin/monkeytips/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/monkeytips.svg)](https://hub.docker.com/r/funkypenguin/monkeytips/)
[funkypenguin/minio](https://hub.docker.com/r/funkypenguin/minio/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/minio.svg)](https://hub.docker.com/r/funkypenguin/minio/)| minio |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/minio.svg)](https://hub.docker.com/r/funkypenguin/minio/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/minio.svg)](https://hub.docker.com/r/funkypenguin/minio/)
[funkypenguin/mqtt-certbot-dns](https://hub.docker.com/r/funkypenguin/mqtt-certbot-dns/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/mqtt-certbot-dns.svg)](https://hub.docker.com/r/funkypenguin/mqtt-certbot-dns/)| mqtt-certbot-dns |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/mqtt-certbot-dns.svg)](https://hub.docker.com/r/funkypenguin/mqtt-certbot-dns/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/mqtt-certbot-dns.svg)](https://hub.docker.com/r/funkypenguin/mqtt-certbot-dns/)
[funkypenguin/munin-server](https://hub.docker.com/r/funkypenguin/munin-server/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/munin-server.svg)](https://hub.docker.com/r/funkypenguin/munin-server/)| munin-server |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/munin-server.svg)](https://hub.docker.com/r/funkypenguin/munin-server/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/munin-server.svg)](https://hub.docker.com/r/funkypenguin/munin-server/)
[funkypenguin/munin-node](https://hub.docker.com/r/funkypenguin/munin-node/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/munin-node.svg)](https://hub.docker.com/r/funkypenguin/munin-node/)| munin-node |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/munin-node.svg)](https://hub.docker.com/r/funkypenguin/munin-node/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/munin-node.svg)](https://hub.docker.com/r/funkypenguin/munin-node/)
[funkypenguin/mwlib](https://hub.docker.com/r/funkypenguin/mwlib/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/mwlib.svg)](https://hub.docker.com/r/funkypenguin/mwlib/)| mwlib |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/mwlib.svg)](https://hub.docker.com/r/funkypenguin/mwlib/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/mwlib.svg)](https://hub.docker.com/r/funkypenguin/mwlib/)
[funkypenguin/mqttwarn](https://hub.docker.com/r/funkypenguin/mqttwarn/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/mqttwarn.svg)](https://hub.docker.com/r/funkypenguin/mqttwarn/)| mqttwarn |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/mqttwarn.svg)](https://hub.docker.com/r/funkypenguin/mqttwarn/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/mqttwarn.svg)](https://hub.docker.com/r/funkypenguin/mqttwarn/)
[funkypenguin/nginx-proxy-letsencrypt](https://hub.docker.com/r/funkypenguin/nginx-proxy-letsencrypt/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/nginx-proxy-letsencrypt.svg)](https://hub.docker.com/r/funkypenguin/nginx-proxy-letsencrypt/)| nginx-proxy-letsencrypt |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/nginx-proxy-letsencrypt.svg)](https://hub.docker.com/r/funkypenguin/nginx-proxy-letsencrypt/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/nginx-proxy-letsencrypt.svg)](https://hub.docker.com/r/funkypenguin/nginx-proxy-letsencrypt/)
[funkypenguin/nzbdrone](https://hub.docker.com/r/funkypenguin/nzbdrone/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/nzbdrone.svg)](https://hub.docker.com/r/funkypenguin/nzbdrone/)| nzbdrone |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/nzbdrone.svg)](https://hub.docker.com/r/funkypenguin/nzbdrone/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/nzbdrone.svg)](https://hub.docker.com/r/funkypenguin/nzbdrone/)
[funkypenguin/owntracks](https://hub.docker.com/r/funkypenguin/owntracks/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/owntracks.svg)](https://hub.docker.com/r/funkypenguin/owntracks/)| Owntracks |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/owntracks.svg)](https://hub.docker.com/r/funkypenguin/owntracks/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/owntracks.svg)](https://hub.docker.com/r/funkypenguin/owntracks/)
[funkypenguin/oauth2_proxy](https://hub.docker.com/r/funkypenguin/oauth2_proxy/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/oauth2_proxy.svg)](https://hub.docker.com/r/funkypenguin/oauth2_proxy/)| OAuth2 proxy supporting self-signed upstream certs |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/oauth2_proxy.svg)](https://hub.docker.com/r/funkypenguin/oauth2_proxy/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/oauth2_proxy.svg)](https://hub.docker.com/r/funkypenguin/oauth2_proxy/)
[funkypenguin/plex](https://hub.docker.com/r/funkypenguin/plex/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/plex.svg)](https://hub.docker.com/r/funkypenguin/plex/)| plex |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/plex.svg)](https://hub.docker.com/r/funkypenguin/plex/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/plex.svg)](https://hub.docker.com/r/funkypenguin/plex/)
[funkypenguin/radarrsync](https://hub.docker.com/r/funkypenguin/radarrsync/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/radarrsync.svg)](https://hub.docker.com/r/funkypenguin/radarrsync/)| Python script to sync multiple Radarr instances |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/radarrsync.svg)](https://hub.docker.com/r/funkypenguin/radarrsync/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/radarrsync.svg)](https://hub.docker.com/r/funkypenguin/radarrsync/)
[funkypenguin/ryo-currency](https://hub.docker.com/r/funkypenguin/ryo-currency/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/ryo-currency.svg)](https://hub.docker.com/r/funkypenguin/ryo-currency/)| RYO cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/ryo-currency.svg)](https://hub.docker.com/r/funkypenguin/ryo-currency/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/ryo-currency.svg)](https://hub.docker.com/r/funkypenguin/ryo-currency/)
[funkypenguin/rtorrent](https://hub.docker.com/r/funkypenguin/rtorrent/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/rtorrent.svg)](https://hub.docker.com/r/funkypenguin/rtorrent/)| rtorrent |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/rtorrent.svg)](https://hub.docker.com/r/funkypenguin/rtorrent/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/rtorrent.svg)](https://hub.docker.com/r/funkypenguin/rtorrent/)
[funkypenguin/sabnzbd](https://hub.docker.com/r/funkypenguin/sabnzbd/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/sabnzbd.svg)](https://hub.docker.com/r/funkypenguin/sabnzbd/)| sabnzbd |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/sabnzbd.svg)](https://hub.docker.com/r/funkypenguin/sabnzbd/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/sabnzbd.svg)](https://hub.docker.com/r/funkypenguin/sabnzbd/)
[funkypenguin/turtlecoind](https://hub.docker.com/r/funkypenguin/turtlecoind/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/turtlecoind.svg)](https://hub.docker.com/r/funkypenguin/turtlecoind/)| turtlecoin |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/turtlecoind.svg)](https://hub.docker.com/r/funkypenguin/turtlecoind/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/turtlecoind.svg)](https://hub.docker.com/r/funkypenguin/turtlecoind/)
[funkypenguin/temasek](https://hub.docker.com/r/funkypenguin/temasek/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/temasek.svg)](https://hub.docker.com/r/funkypenguin/temasek/)| temasek |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/temasek.svg)](https://hub.docker.com/r/funkypenguin/temasek/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/temasek.svg)](https://hub.docker.com/r/funkypenguin/temasek/)
[funkypenguin/turtle-pool](https://hub.docker.com/r/funkypenguin/turtle-pool/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool/)| turtle-pool |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool/)
[funkypenguin/turtlecoin](https://hub.docker.com/r/funkypenguin/turtlecoin/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/)| turtlecoin |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/)
[funkypenguin/x-cash](https://hub.docker.com/r/funkypenguin/x-cash/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/)| X-CASH cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/)
[funkypenguin/xmrig-cpu](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)<br/>[![Size](https://images.microbadger.com/badges/image/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)| xmrig-cpu |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)<br/>[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)|

# Data layout
The applications deployed in the stack use a combination of data-at-rest (_static config, files, etc._) and runtime data (_live database files_). The runtime data can't be [backed up](/recipes/duplicity) with a simple file copy, so where we employ databases, we also include containers which perform a regular export of the database contents to a filesystem location.
So that we can confidently back up all our data, I've set up a data layout as follows:
## Configuration data
Configuration data goes into /var/data/config/[recipe name], and typically consists of just a docker-compose .yml file and a .env file.
## Runtime data
Runtime data (typically database files or other files-in-use) is stored in /var/data/runtime/[recipe name], and is **excluded** from backup (_it changes constantly, and cannot be safely restored from a file-level copy_).
## Static data
Static data goes into /var/data/[recipe name], and includes anything that can safely be backed up while a container is running. This includes the database exports of the runtime data above.
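For example, a hypothetical layout for the [Wekan](/recipes/wekan/) recipe might look like this (_paths and contents vary per recipe_):
```
/var/data/config/wekan/     # docker-compose .yml and .env (backed up)
/var/data/runtime/wekan/    # live database files (excluded from backup)
/var/data/wekan/            # static data, incl. database dumps (backed up)
```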

# Introduction
Our HA platform design relies on Atomic OS, which contains only the bare minimum needed to run containers.
So how can we use git on this system to push/pull the changes we make to our config files? With a container, of course!
## git-docker
I [made a simple container](https://github.com/funkypenguin/git-docker/blob/master/Dockerfile) which simply executes git in the current working directory.
To use it transparently, add an alias for the "git" command, or just download it with the rest of the [handy aliases](https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh):
```
alias git='docker run -v $PWD:/var/data -v /var/data/git-docker/data/.ssh:/root/.ssh funkypenguin/git-docker git'
```
## Setup SSH key
If you plan to actually _push_ using git, you'll need to set up an SSH keypair. You _could_ copy across whatever keypair you currently use, but it's probably more appropriate to generate a dedicated keypair for this purpose.
Generate your new SSH keypair by running:
```
mkdir -p /var/data/git-docker/data/.ssh
chmod 700 /var/data/git-docker/data/.ssh
docker run -v /var/data/git-docker/data/.ssh:/root/.ssh funkypenguin/git-docker ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519
```
The output will look something like this:
```
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_ed25519.
Your public key has been saved in /root/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:uZtriS7ypx7Q4kr+w++nHhHpcRfpf5MhxP3Wpx3H3hk root@a230749d8d8a
The key's randomart image is:
+--[ED25519 256]--+
| .o . |
| . ..o . |
| + .... ...|
| .. + .o . . E=|
| o .o S . . ++B|
| . o . . . +..+|
| .o .. ... . . |
|o..o..+.oo |
|...=OX+.+. |
+----[SHA256]-----+
```
Now add the contents of /var/data/git-docker/data/.ssh/id_ed25519.pub to your git provider (_GitHub, GitLab, etc._), and off you go - just run ```git``` from your Atomic host as usual, and pretend you have the client installed!
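Assuming the alias above is in place, a hypothetical clone-and-push workflow (_substitute your own repository_) then looks exactly like using a locally-installed git client:
```
cd /var/data/config
git clone git@github.com:yourname/my-docker-config.git
cd my-docker-config
git add docker-compose.yml
git commit -m "Tweak stack config"
git push
```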

## Terraform
We _could_ describe the manual gcloud/ssh steps required to deploy a Kubernetes cluster to Google Kubernetes Engine, but using Terraform allows us to abstract ourselves from the provider, and focus on just the infrastructure we need built.
The terraform config we produce is theoretically reusable across AWS, Azure, and OpenStack, as well as GCE.
Install terraform locally - on OSX, I used ```brew install terraform```
Confirm it's correctly installed by running ```terraform -v```. My output looks like this:
```
[davidy:~] % terraform -v
Terraform v0.11.8
[davidy:~] %
```
## Google Cloud SDK
I can't remember how I installed gcloud, but I don't think I used homebrew. Run ```curl https://sdk.cloud.google.com | bash``` for a standard install, followed by ```gcloud init``` for the first-time setup.
Alternatively, this Brewfile works:
```
cat <<-"BREWFILE" > Brewfile
cask 'google-cloud-sdk'
brew 'kubectl'
brew 'terraform'
BREWFILE
brew bundle --verbose
```
### Prepare for terraform
I followed [this guide](https://cloud.google.com/community/tutorials/managing-gcp-projects-with-terraform) to set up the following in the "best" way:
Run ```gcloud beta billing accounts list``` to get your billing account ID.
```
export TF_ADMIN=tf-admin-funkypenguin
export TF_CREDS=serviceaccount.json
export TF_VAR_org_id=250566349101
export TF_VAR_billing_account=0156AE-7AE048-1DA888
export TF_VAR_region=australia-southeast1
export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS}
gcloud projects create ${TF_ADMIN} --set-as-default
gcloud beta billing projects link ${TF_ADMIN} \
--billing-account ${TF_VAR_billing_account}
gcloud iam service-accounts create terraform \
--display-name "Terraform admin account"
Created service account [terraform].
gcloud iam service-accounts keys create ${TF_CREDS} \
--iam-account terraform@${TF_ADMIN}.iam.gserviceaccount.com
created key [c0a49832c94aa0e23278165e2d316ee3d5bad438] of type [json] as [serviceaccount.json] for [terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com]
gcloud projects add-iam-policy-binding ${TF_ADMIN} \
> --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
> --role roles/viewer
bindings:
- members:
- user:googlecloud2018@funkypenguin.co.nz
role: roles/owner
- members:
- serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com
role: roles/viewer
etag: BwV0VGSzYSU=
version: 1
gcloud projects add-iam-policy-binding ${TF_ADMIN} \
> --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
> --role roles/storage.admin
bindings:
- members:
- user:googlecloud2018@funkypenguin.co.nz
role: roles/owner
- members:
- serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com
role: roles/storage.admin
- members:
- serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com
role: roles/viewer
etag: BwV0VGZwXfM=
version: 1
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable cloudbilling.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable compute.googleapis.com
## FIXME
Enabled the Kubernetes Engine API in the tf-admin project, so that terraform can query the available engine versions
## FIXME
I had to add compute admin, service admin, and kubernetes engine admin roles to my org-level account, in order to use gcloud container clusters get-credentials
gsutil mb -p ${TF_ADMIN} gs://${TF_ADMIN}
Creating gs://funkypenguin-terraform-admin/...
[davidy:~/Documents … remix/kubernetes/terraform] master(+1/-0)* ±
[davidy:~/Documents … remix/kubernetes/terraform] master(+1/-0)* ± cat > backend.tf <<EOF
heredoc> terraform {
heredoc> backend "gcs" {
heredoc> bucket = "${TF_ADMIN}"
heredoc> path = "/terraform.tfstate"
heredoc> project = "${TF_ADMIN}"
heredoc> }
heredoc> }
heredoc> EOF
[davidy:~/Documents … remix/kubernetes/terraform] master(+1/-0)* ± gsutil versioning set on gs://${TF_ADMIN}
Enabling versioning for gs://funkypenguin-terraform-admin/...
[davidy:~/Documents … remix/kubernetes/terraform] master(+1/-0)* ± export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS}
export GOOGLE_PROJECT=${TF_ADMIN}
```
### Create Service Account
Since it's probably not a great idea to associate your own master Google Cloud account with your automation process (after all, you can't easily revoke your own credentials if they leak), create a Service Account for terraform under GCE, and grant it the "Compute Admin" role.
Download the resulting JSON, and save it wherever you're saving your code. Remember to protect this .json file like a password: add it to .gitignore if you're checking your code into git (_and if you're not checking your code into git, what's wrong with you? Just do it now!_).
### Setup provider.tf
I set up my provider like this, noting that the project name (which must already exist) came from the output of ```gcloud projects list```, and region/zone came from https://cloud.google.com/compute/docs/regions-zones/
```
# Specify the provider (GCP, AWS, Azure)
provider "google" {
credentials = "${file("serviceaccount.json")}"
project = "funkypenguin-mining-pools"
region = "australia-southeast1"
}
```
### Setup compute.tf
Just to experiment, I set up this:
```
# Create a new instance
resource "google_compute_instance" "ubuntu-xenial" {
name = "ubuntu-xenial"
machine_type = "f1-micro"
zone = "us-west1-a"
boot_disk {
initialize_params {
image = "ubuntu-1604-lts"
}
}
network_interface {
network = "default"
access_config {}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
```
### Initialize and plan (it's free)
Run ```terraform init``` to initialize Terraform.
Then run ```terraform plan``` to check that the plan looks good.
### Apply (not necessarily free)
Once your plan (above) is good, run ```terraform apply``` to put it into motion. This is the point where you may start incurring costs.
### Setup kubectl
Finally, configure kubectl to authenticate against your new cluster, by running ```gcloud container clusters get-credentials $(terraform output cluster_name) --zone $(terraform output cluster_zone) --project $(terraform output project_id)```
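Assuming your Terraform config defines the ```cluster_name```, ```cluster_zone``` and ```project_id``` outputs referenced above, you can then sanity-check that kubectl is talking to the new cluster:
```
# Confirm kubectl can reach the new GKE cluster
kubectl cluster-info
# List the worker nodes that terraform/GKE created
kubectl get nodes -o wide
```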

# Networks
To avoid IP addressing conflicts as we bring swarm networks up and down, we statically address each docker overlay network, and record the details below (_see the example after the table_):
Network | Range
--|--
[Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) | _unspecified_
[Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24
[Mail Server](https://geek-cookbook.funkypenguin.co.nz/recipes/mail/) | 172.16.1.0/24
[Gitlab](https://geek-cookbook.funkypenguin.co.nz/recipes/gitlab/) | 172.16.2.0/24
[Wekan](https://geek-cookbook.funkypenguin.co.nz/recipes/wekan/) | 172.16.3.0/24
[Piwik](https://geek-cookbook.funkypenguin.co.nz/recipes/piwik/) | 172.16.4.0/24
[Tiny Tiny RSS](https://geek-cookbook.funkypenguin.co.nz/recipes/tiny-tiny-rss/) | 172.16.5.0/24
[Huginn](https://geek-cookbook.funkypenguin.co.nz/recipes/huginn/) | 172.16.6.0/24
[Unifi](https://geek-cookbook.funkypenguin.co.nz/recipes/unifi/) | 172.16.7.0/24
[Kanboard](https://geek-cookbook.funkypenguin.co.nz/recipes/kanboard/) | 172.16.8.0/24
[Gollum](https://geek-cookbook.funkypenguin.co.nz/recipes/gollum/) | 172.16.9.0/24
[Duplicity](https://geek-cookbook.funkypenguin.co.nz/recipes/duplicity/) | 172.16.10.0/24
[Autopirate](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/) | 172.16.11.0/24
[Nextcloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/) | 172.16.12.0/24
[Portainer](https://geek-cookbook.funkypenguin.co.nz/recipes/portainer/) | 172.16.13.0/24
[Home-Assistant](https://geek-cookbook.funkypenguin.co.nz/recipes/home-assistant/) | 172.16.14.0/24
[OwnTracks](https://geek-cookbook.funkypenguin.co.nz/recipes/owntracks/) | 172.16.15.0/24
[Plex](https://geek-cookbook.funkypenguin.co.nz/recipes/plex/) | 172.16.16.0/24
[Emby](https://geek-cookbook.funkypenguin.co.nz/recipes/emby/) | 172.16.17.0/24
[Calibre-Web](https://geek-cookbook.funkypenguin.co.nz/recipes/calibre-web/) | 172.16.18.0/24
[Wallabag](https://geek-cookbook.funkypenguin.co.nz/recipes/wallabag/) | 172.16.19.0/24
[InstaPy](https://geek-cookbook.funkypenguin.co.nz/recipes/instapy/) | 172.16.20.0/24
[Turtle Pool](https://geek-cookbook.funkypenguin.co.nz/recipes/turtle-pool/) | 172.16.21.0/24
[MiniFlux](https://geek-cookbook.funkypenguin.co.nz/recipes/miniflux/) | 172.16.22.0/24
[Gitlab Runner](https://geek-cookbook.funkypenguin.co.nz/recipes/gitlab-runner/) | 172.16.23.0/24
[Munin](https://geek-cookbook.funkypenguin.co.nz/recipes/munin/) | 172.16.24.0/24
[Masari Mining Pool](https://geek-cookbook.funkypenguin.co.nz/recipes/cryptonote-mining-pool/masari/) | 172.16.25.0/24
[Athena Mining Pool](https://geek-cookbook.funkypenguin.co.nz/recipes/cryptonote-mining-pool/athena/) | 172.16.26.0/24
[Bookstack](https://geek-cookbook.funkypenguin.co.nz/recipes/bookstack/) | 172.16.33.0/24
[Swarmprom](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/) | 172.16.34.0/24
[Realms](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.35.0/24
[ElkarBackup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackup/) | 172.16.36.0/24
[Mayan EDMS](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.37.0/24
[Shaarli](https://geek-cookbook.funkypenguin.co.nz/recipes/shaarli/) | 172.16.38.0/24
[OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/recipes/openldap/) | 172.16.39.0/24
[MatterMost](https://geek-cookbook.funkypenguin.co.nz/recipes/mattermost/) | 172.16.40.0/24
[PrivateBin](https://geek-cookbook.funkypenguin.co.nz/recipes/privatebin/) | 172.16.41.0/24
[Mayan EDMS](https://geek-cookbook.funkypenguin.co.nz/recipes/mayan-edms/) | 172.16.42.0/24
[Hack MD](https://geek-cookbook.funkypenguin.co.nz/recipes/hackmd/) | 172.16.43.0/24
[FlightAirMap](https://geek-cookbook.funkypenguin.co.nz/recipes/flightairmap/) |172.16.44.0/24
[Wetty](https://geek-cookbook.funkypenguin.co.nz/recipes/wetty/) | 172.16.45.0/24
[FileBrowser](https://geek-cookbook.funkypenguin.co.nz/recipes/filebrowser/) | 172.16.46.0/24
[phpIPAM](https://geek-cookbook.funkypenguin.co.nz/recipes/phpipam/) | 172.16.47.0/24
[Dozzle](https://geek-cookbook.funkypenguin.co.nz/recipes/dozzle/) | 172.16.48.0/24
[KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/) | 172.16.49.0/24
[Sensu](https://geek-cookbook.funkypenguin.co.nz/recipes/sensu/) | 172.16.50.0/24
[Magento](https://geek-cookbook.funkypenguin.co.nz/recipes/magento/) | 172.16.51.0/24
[Graylog](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.52.0/24
[Harbor](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.53.0/24
[Harbor-Clair](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.54.0/24
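Within the recipes themselves, these subnets are normally pinned in each stack's docker-compose definition, but the equivalent CLI form illustrates the idea. A hypothetical sketch for the Wekan network above (_the network name and options are illustrative only_):
```
# Create an attachable overlay network with a fixed subnet
docker network create --driver=overlay --subnet=172.16.3.0/24 --attachable wekan_internal
```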

# OAuth proxy
Some of the platforms we use on our swarm have strong, proven security to prevent abuse - techniques such as rate-limiting (to defeat brute-force attacks), or even 2-factor authentication (Tiny Tiny RSS and Wallabag support this).
Other platforms may provide **no authentication** (Traefik's web UI, for example), or minimal, unproven UI authentication which may have been added as an afterthought.
Still other platforms may hold such sensitive data (i.e., NextCloud) that we'll feel more secure putting an additional authentication layer in front of them.
This is the role of the OAuth proxy.
## How does it work?
**Normally**, Traefik proxies web requests directly to individual web apps running in containers. The user talks directly to the webapp, and the webapp is responsible for ensuring appropriate authentication.
When employing the **OAuth proxy**, the proxy sits in the middle of this transaction - Traefik sends the web client to the OAuth proxy, the proxy authenticates the user against a 3rd-party source (_GitHub, Google, etc._), and then passes authenticated requests on to the web app in the container.
Illustrated below:
![OAuth proxy](/images/oauth_proxy.png)
The advantage of this design is additional security. If I'm deploying a web app which I expect only myself to need access to, I'll put the oauth_proxy in front of it. The overhead is negligible, and the additional layer of security is well worth it.
## Ingredients
## Preparation
### OAuth provider
OAuth Proxy currently supports the following OAuth providers:
* Google (default)
* Azure
* Facebook
* GitHub
* GitLab
* LinkedIn
* MyUSA
Follow the [instructions](https://github.com/bitly/oauth2_proxy) to setup your oauth provider. You need to setup a unique key/secret for **each** instance of the proxy you want to run, since in each case the callback URL will differ.
### Authorized emails file
There are a variety of options with oauth_proxy regarding which email addresses (authenticated against your OAuth provider) should be permitted access. You can permit access based on email domain (*@gmail.com), individual email address (batman@gmail.com), or provider-specific groups (_i.e., a GitHub organization_).
The most restrictive configuration allows access on a per-email-address basis, as illustrated below:
I created **/var/data/oauth_proxy/authenticated-emails.txt**, and added my own email address on the first line.
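A minimal sketch of creating that file, assuming you want to permit just a single address (_batman@gmail.com is the placeholder from above - substitute your own_):
```
mkdir -p /var/data/oauth_proxy
echo "batman@gmail.com" > /var/data/oauth_proxy/authenticated-emails.txt
```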
### Configure stack
You'll need to define a service for the oauth_proxy in every stack which you want to protect. Here's an example from the [Wekan](/recipes/wekan/) recipe:
```
proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/wekan/wekan.env
  networks:
    - traefik
    - internal
  deploy:
    labels:
      - traefik.frontend.rule=Host:wekan.funkypenguin.co.nz
      - traefik.docker.network=traefik
      - traefik.port=4180
  volumes:
    - /var/data/oauth_proxy/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://wekan:80
    -redirect-url=https://wekan.funkypenguin.co.nz
    -http-address=http://0.0.0.0:4180
    -email-domain=funkypenguin.co.nz
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
```
Note above how:
* Labels are required to tell Traefik to forward the traffic to the proxy, rather than the backend container running the app
* An environment file is defined (_a hypothetical example follows below_), but...
* The redirect URL must still be passed to the oauth_proxy in the command argument
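For completeness, here's a minimal sketch of what that environment file might contain for a GitHub OAuth application - the values are placeholders, and the variable names are the standard oauth2_proxy environment variables for these settings:
```
# /var/data/wekan/wekan.env (hypothetical values)
OAUTH2_PROXY_CLIENT_ID=your-github-oauth-app-client-id
OAUTH2_PROXY_CLIENT_SECRET=your-github-oauth-app-client-secret
# Generate a random value for the cookie secret, e.g. with: openssl rand -base64 32
OAUTH2_PROXY_COOKIE_SECRET=your-random-cookie-secret
```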

# OpenVPN
Sometimes you need an OpenVPN tunnel between your docker hosts and some other environment. I needed this to provide connectivity between swarm-deployed services like Home Assistant and my IoT devices within my home LAN.
OpenVPN is one application which doesn't really work in a swarm-type deployment, since each host will typically require a unique certificate/key to connect to the VPN anyway.
In my case, I needed each docker node to connect via [OpenVPN](http://www.openvpn.org) back to a [pfsense](http://www.pfsense.org) instance, but there were a few gotchas related to running OpenVPN on CentOS Atomic which I needed to address first.
## SELinux for OpenVPN
Yes, SELinux. Install a custom policy permitting a docker container to create tun interfaces, like this:
```
cat << EOF > docker-openvpn.te
module docker-openvpn 1.0;
require {
type svirt_lxc_net_t;
class tun_socket create;
}
#============= svirt_lxc_net_t ==============
allow svirt_lxc_net_t self:tun_socket create;
EOF
checkmodule -M -m -o docker-openvpn.mod docker-openvpn.te
semodule_package -o docker-openvpn.pp -m docker-openvpn.mod
semodule -i docker-openvpn.pp
```
## Insert the tun module
Even with the SELinux policy above, I still need to insert the "tun" module into the running kernel at the host-level, before a docker container can use it to create a tun interface.
Run the following to auto-insert the tun module on boot:
```
cat << EOF >> /etc/rc.d/rc.local
# Insert the "tun" module so that the vpn-client container can access /dev/net/tun
/sbin/modprobe tun
EOF
chmod 755 /etc/rc.d/rc.local
```
## Connect the VPN
Finally, I exported client credentials for each docker node (_from the pfsense OpenVPN server_), and SCP'd them over to the node, into /root/my-vpn-configs-here/. I also had to use the NET_ADMIN cap-add parameter, as illustrated below:
```
docker run -d --name vpn-client \
--restart=always --cap-add=NET_ADMIN --net=host \
--device /dev/net/tun \
-v /root/my-vpn-configs-here:/vpn:z \
ekristen/openvpn-client --config /vpn/my-host-config.ovpn
```
Now every time my node boots, it establishes a VPN tunnel back to my pfsense host and (_by using custom configuration directives in OpenVPN_) is assigned a static VPN IP.
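To sanity-check the tunnel after a reboot, something like the following works (_a sketch - the interface name depends on your OpenVPN config_):
```
# Watch the client logs for a successful connection
docker logs -f vpn-client
# Confirm the tun interface exists on the host, with its static VPN IP
ip addr show tun0
```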

# Troubleshooting
Having difficulty with a recipe? Here are some tips...
## Why is my stack not launching?
Run ```docker stack ps <stack name> --no-trunc``` for more details on why individual containers failed to launch.
## Attaching to running container
Need to debug **why** your oauth2_proxy container can't talk to its upstream app? Start by identifying which node the proxy container is running on, using ```docker stack ps <stack name>```.
SSH to that node, and attach to the container using ```docker exec -it <container id> /bin/bash``` (_substitute ```/bin/ash``` for ```/bin/bash``` in the case of an Alpine container_), then try to connect to your upstream host, as sketched below.
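A hypothetical session against the [Wekan](/recipes/wekan/) stack might look like the following (_the container ID and upstream name will differ, and not every image ships telnet, so busybox wget is a handy fallback_):
```
# On the node running the proxy task:
docker ps | grep oauth2_proxy
docker exec -it 1a2b3c4d5e6f /bin/ash
# Then, inside the container, test connectivity to the upstream app:
wget -qO- http://wekan:80 | head
```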
## Watching logs of container
Need to see what a particular container is doing? Run ```docker service logs -f <stack name>_<service name>``` to watch a particular service. As the service dies and is recreated, its logs will continue to be displayed.
## Visually monitoring containers with ctop
For a visual "top-like" display of your container's activity (_as well as a [detailed per-container view](https://github.com/bcicen/ctop/blob/master/_docs/single.md)_), try using [ctop](https://github.com/bcicen/ctop).
To execute, simply run `docker run --rm -ti --name ctop -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest`
Example:
![](https://github.com/bcicen/ctop/raw/master/_docs/img/grid.gif)