Merge with master
@@ -2,25 +2,27 @@

## Subscribe to updates

* Email: Sign up [here](http://eepurl.com/dfx95n) (double-opt-in) to receive email updates on new and improved recipes!
* Mastodon: https://mastodon.social/@geekcookbook_changes
* RSS: https://mastodon.social/@geekcookbook_changes.atom
* The #changelog channel in our [Discord server](http://chat.funkypenguin.co.nz)


## Recent additions to work-in-progress

* Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (_19 Mar 2019_)

## Recently added recipes

* Added a list of [sponsored projects](sponsored-projects/) which I regularly donate to, to keep the geeky ingredients fresh! (_8 Jun 2018_)
* [Turtle Pool](/recipies/turtle-pool/) - A mining pool for the fun, friendly, no-BS, still-in-its-infancy cryptocurrency, "[TurtleCoin](http://turtlecoin.lol)" (_7 May 2018_)
* [Wallabag](/recipies/wallabag/) - Self-hosted Read-it-Later / Annotation app (_21 Apr 2018_)
* [InstaPy](/recipies/instapy/) - Automate your Instagrammage (_17 Apr 2018_)
* [CryptoMiner](/recipies/cryto-miner/start/) - Become a cryptocurrency miner and put your GPU to work!
* [Calibre-Web](/recipies/calibre-web/) - Plex for ebooks (_8 Jan 2018_)
* [Emby](/recipies/emby/) - Geekier alternative to Plex, with improved parental controls (_28 Dec 2017_)
* Added a Kubernetes version of the [Miniflux](/recipes/kubernetes/miniflux/) recipe, a minimalist RSS reader supporting the Fever API (_26 Mar 2019_)
* Added a Kubernetes version of the [Kanboard](/recipes/kubernetes/kanboard/) recipe, a lightweight, well-supported Kanban tool for visualizing your work (_19 Mar 2019_)
* Added [Minio](/recipes/minio/), a high-performance distributed object storage server, designed for large-scale private cloud infrastructure, but perfect for simple use cases where emulating AWS S3 is useful (_27 Jan 2019_)
* Added the beginning of the **Kubernetes** design, including a getting-started guide for [Digital Ocean](/kubernetes/digitalocean/), and a WIP recipe for an [MQTT](/recipes/mqtt/) broker (_21 Jan 2019_)
* [ElkarBackup](/recipes/elkarbackup/), a beautiful GUI-based backup solution built on rsync/rsnapshot (_1 Jan 2019_)

## Recent improvements

* [Lazy Librarian](/recipies/autopirate/lazylibrarian/) component of the [autopirate](/recipies/autopirate/start/) recipe updated to include calibre-server, so that downloaded ebooks can be automatically added to a Calibre library, and then independently managed using [Calibre-Web](/recipies/calibre-web/) (_27 May 2018_)
* [Miniflux](/recipies/miniflux/) updated to version 2.0, including a PostgreSQL database and nightly DB dumps (_24 May 2018_)
* [Turtle Pool](/recipies/turtle-pool/) updated for redundant daemons plus a failsafe (_16 May 2018_)
* "Disengaged" the [AutoPirate](/recipies/autopirate/) uber-recipe into individual per-page sub-recipes, easing navigation and support / comment flow
* Switched [Emby](/recipies/emby/) to the official docker container (more up-to-date) (_27 Mar 2018_)
* [Docker Swarm Mode](/ha-docker-swarm/docker-swarm-mode/#setup-automatic-updates) setup updated for automatic container updates (Shepherd)
* [Kanboard](/recipies/kanboard/) recipe [improved](https://github.com/funkypenguin/geek-cookbook/commit/8597bcc6319b571c8138cd1b615e8c512e5f5bd5) with the inclusion of a cron container to run automated daily jobs (_22 Dec 2017_)
* Added a recipe for [automated snapshots of Kubernetes Persistent Volumes](/kubernetes/snapshots/), instructions for using [Helm](/kubernetes/helm/), and a recipe for deploying [Traefik](/kubernetes/traefik/), which completes the Kubernetes cluster design! (_9 Feb 2019_)
* Added a detailed description (_and diagram_) of our [Kubernetes design](/kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_)
* Added an [introductory/explanatory page on Kubernetes, including a children's story](/kubernetes/start/) (_29 Jan 2019_)
* [NextCloud](/recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind the Traefik reverse proxy (_12 Dec 2018_)

@@ -2,38 +2,90 @@

index.md
README.md
CHANGELOG.md
whoami.md

sections/ha-docker-swarm.md
ha-docker-swarm/design.md
ha-docker-swarm/vms.md
ha-docker-swarm/shared-storage-ceph.md
ha-docker-swarm/shared-storage-gluster.md
ha-docker-swarm/keepalived.md
ha-docker-swarm/docker-swarm-mode.md
ha-docker-swarm/traefik.md
ha-docker-swarm/registry.md
ha-docker-swarm/duplicity.md

sections/recipies.md
recipies/mail.md
recipies/gitlab.md
recipies/gitlab-runner.md
recipies/wekan.md
recipies/huginn.md
recipies/kanboard.md
recipies/miniflux.md
recipies/ghost.md
recipies/piwik.md
recipies/autopirate.md
recipies/nextcloud.md
recipies/portainer.md
recipies/turtle-pool.md
recipies/tiny-tiny-rss.md
sections/chefs-favorites-docker.md
recipes/autopirate.md
recipes/autopirate/sabnzbd.md
recipes/autopirate/nzbget.md
recipes/autopirate/rtorrent.md
recipes/autopirate/sonarr.md
recipes/autopirate/radarr.md
recipes/autopirate/mylar.md
recipes/autopirate/lazylibrarian.md
recipes/autopirate/headphones.md
recipes/autopirate/lidarr.md
recipes/autopirate/nzbhydra.md
recipes/autopirate/nzbhydra2.md
recipes/autopirate/ombi.md
recipes/autopirate/jackett.md
recipes/autopirate/heimdall.md
recipes/autopirate/end.md

recipes/elkarbackup.md
recipes/emby.md
recipes/homeassistant.md
recipes/homeassistant/ibeacon.md
recipes/huginn.md
recipes/kanboard.md
recipes/miniflux.md
recipes/munin.md
recipes/nextcloud.md
recipes/owntracks.md
recipes/phpipam.md
recipes/plex.md
recipes/privatebin.md
recipes/swarmprom.md
recipes/turtle-pool.md

sections/menu-docker.md
recipes/bookstack.md
recipes/cryptominer.md
recipes/cryptominer/mining-rig.md
recipes/cryptominer/amd-gpu.md
recipes/cryptominer/nvidia-gpu.md
recipes/cryptominer/mining-pool.md
recipes/cryptominer/wallet.md
recipes/cryptominer/exchange.md
recipes/cryptominer/minerhotel.md
recipes/cryptominer/monitor.md
recipes/cryptominer/profit.md
recipes/calibre-web.md
recipes/collabora-online.md
recipes/ghost.md
recipes/gitlab.md
recipes/gitlab-runner.md
recipes/gollum.md
recipes/instapy.md
recipes/keycloak.md
recipes/openldap.md
recipes/mail.md
recipes/minio.md
recipes/piwik.md
recipes/portainer.md
recipes/realms.md
recipes/tiny-tiny-rss.md
recipes/wallabag.md
recipes/wekan.md
recipes/wetty.md

sections/reference.md
reference/oauth_proxy.md
reference/data_layout.md
reference/networks.md
reference/containers.md
reference/git-docker.md
reference/openvpn.md
reference/troubleshooting.md

@@ -1,15 +0,0 @@

<!-- Piwik -->
<script type="text/javascript">
  var _paq = _paq || [];
  /* tracker methods like "setCustomDimension" should be called before "trackPageView" */
  _paq.push(['trackPageView']);
  _paq.push(['enableLinkTracking']);
  (function() {
    var u="//piwik.funkypenguin.co.nz/";
    _paq.push(['setTrackerUrl', u+'piwik.php']);
    _paq.push(['setSiteId', '2']);
    var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];
    g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'piwik.js'; s.parentNode.insertBefore(g,s);
  })();
</script>
<!-- End Piwik Code -->

@@ -51,7 +51,7 @@ Assuming 3 nodes, under normal circumstances the following is illustrated:

* The **traefik** service (in swarm mode) receives incoming requests (on HTTP and HTTPS), and forwards them to individual containers. Traefik knows the container names because it's able to access the docker socket.
* All 3 nodes run keepalived, at different priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (no matter which node it's on), and then on to the target backend.



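For reference, the keepalived instance on each node is configured something like the minimal sketch below (_an illustration only — the interface name, VIP, priorities and password here are assumptions, not the recipe's actual values; see the keepalived recipe for the real configuration_):

```
vrrp_instance VIP_1 {
    state MASTER              # BACKUP on the other two nodes
    interface eth0            # assumption: your LAN-facing NIC
    virtual_router_id 51
    priority 150              # e.g. 100 and 50 on the lower-priority nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass supersekrit # assumption: pick your own
    }
    virtual_ipaddress {
        192.168.31.251        # assumption: the shared VIP
    }
}
```
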
### Node failure

@@ -63,7 +63,7 @@ In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:

* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware, and updates its routing accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.



### Node restore

@@ -75,7 +75,7 @@ When the failed (or upgraded) host is restored to service, the following is illustrated:

* Keepalived VIP regains full redundancy



### Total cluster failure
@@ -6,7 +6,7 @@ While Docker Swarm is great for keeping containers running (_and restarting thos

### Why GlusterFS?

This GlusterFS recipe was my original design for shared storage, but I [found it to be flawed](shared-storage-ceph/#why-not-glusterfs), and I replaced it with a [design which employs Ceph instead](shared-storage-ceph/#why-ceph). This recipe remains as an alternative to the Ceph design, if you happen to prefer GlusterFS.

## Ingredients

@@ -21,6 +21,9 @@ To deal with these gaps, we need a front-end load-balancer, and in this design,

The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on our SELinux-enabled Atomic hosts, we need to add a custom SELinux policy.

!!! tip
    The following is only necessary if you're using SELinux!

Run the following to build and activate a policy to permit containers to access docker.sock (_the diff elides the body of this block_):

```
(…)
make && semodule -i dockersock.pp
```

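For reference, policies like this are typically built from something resembling the well-known [selinux-dockersock](https://github.com/dpw/selinux-dockersock) module. The sketch below is a hedged illustration only — the diff doesn't show the recipe's actual file, and the type names should be verified against your distribution:

```
# Sketch: build and load a minimal "dockersock" SELinux module
mkdir dockersock && cd dockersock

cat << 'EOF' > dockersock.te
module dockersock 1.0;

require {
    type docker_t;
    type docker_var_run_t;
    class sock_file write;
    class unix_stream_socket connectto;
    type svirt_lxc_net_t;
}

allow svirt_lxc_net_t docker_t:unix_stream_socket connectto;
allow svirt_lxc_net_t docker_var_run_t:sock_file write;
EOF

# Compile and install the module (equivalent to the recipe's "make && semodule -i dockersock.pp")
checkmodule -M -m -o dockersock.mod dockersock.te
semodule_package -o dockersock.pp -m dockersock.mod
semodule -i dockersock.pp
```
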
While it's possible to configure traefik via docker command arguments, I prefer to create a config file (traefik.toml). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.

Create ```/var/data/traefik/```, and then create ```traefik.toml``` inside it as follows (_the diff shows only fragments of this file_):

```
checkNewVersion = true
(…)
watch = true
swarmmode = true
```


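As a sketch of what the complete file might contain (_Traefik 1.x syntax; the email, domain and ACME settings below are assumptions — the recipe's actual file differs_):

```
checkNewVersion = true
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# LetsEncrypt certificate management
[acme]
email = "you@example.com"       # assumption
storage = "acme.json"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"

# Watch the swarm's docker socket for labelled services
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"          # assumption
watch = true
swarmmode = true
```
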
### Prepare the docker service config

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes the necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

Create /var/data/config/traefik/docker-compose.yml as follows (_again, the diff shows only fragments_):

```
version: "3"
(…)
networks:
(…)
  - subnet: 10.1.0.0/24
```

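A minimal sketch of such a compose file, under stated assumptions (_the image tag, network name and placement constraint are illustrative — the recipe's actual file differs_):

```
version: "3"

services:
  traefik:
    image: traefik:1.7          # assumption: a Traefik 1.x tag
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Traefik watches the swarm via the docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/data/traefik/traefik.toml:/traefik.toml:ro
      - /var/data/traefik/acme.json:/acme.json
    networks:
      - traefik_public
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  traefik_public:
    driver: overlay
    ipam:
      config:
        - subnet: 10.1.0.0/24
```
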
Docker won't start an image with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:

```
touch /var/data/traefik/acme.json
chmod 600 /var/data/traefik/acme.json
```

Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._)

### Launch


New binary files added in this diff:

* manuscript/images/athena-mining-pool.png (297 KiB)
* manuscript/images/bookstack.png (208 KiB)
* manuscript/images/collabora-online-in-nextcloud.png (95 KiB)
* manuscript/images/collabora-online.png (116 KiB)
* manuscript/images/collabora-traffic-flow.png (168 KiB)
* manuscript/images/common_observatory.png (4.6 KiB)
* manuscript/images/cryptonote-mining-pool.png (112 KiB)
* manuscript/images/elkarbackup-setup-1.png (963 KiB)
* manuscript/images/elkarbackup-setup-2.png (93 KiB)
* manuscript/images/elkarbackup-setup-3.png (62 KiB)
* manuscript/images/elkarbackup.png (135 KiB)
* manuscript/images/heimdall.jpg (330 KiB)
* manuscript/images/ipfs.png (175 KiB)
* manuscript/images/keycloak.png (32 KiB)
* manuscript/images/kubernetes-cluster-design.png (343 KiB)
* manuscript/images/kubernetes-helm.png (59 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-1.png (159 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-2.png (555 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-3.png (156 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-4.png (300 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-5.png (198 KiB)
* manuscript/images/kubernetes-on-digitalocean-screenshot-6.png (189 KiB)
* manuscript/images/kubernetes-on-digitalocean.jpg (80 KiB)
* manuscript/images/kubernetes-snapshots.png (246 KiB)
* manuscript/images/lidarr.png (2.4 MiB)
* manuscript/images/mattermost.png (128 KiB)
* manuscript/images/minio.png (140 KiB)
* manuscript/images/mqtt.png (310 KiB)
* manuscript/images/munin.png (83 KiB)
* manuscript/images/name.jpg (140 KiB)
* manuscript/images/nzbhydra2.png (519 KiB)
* manuscript/images/openldap.jpeg (162 KiB)
* manuscript/images/phpipam.png (3.6 MiB)
* manuscript/images/privatebin.png (104 KiB)
* manuscript/images/realms.png (111 KiB)
* manuscript/images/sso-stack-keycloak-1.png (58 KiB)
* manuscript/images/sso-stack-keycloak-2.png (48 KiB)
* manuscript/images/sso-stack-keycloak-3.png (95 KiB)
* manuscript/images/sso-stack-keycloak-4.png (83 KiB)
* manuscript/images/sso-stack-lam-1.png (93 KiB)
* manuscript/images/sso-stack-lam-2.png (86 KiB)
* manuscript/images/sso-stack-lam-3.png (62 KiB)
* manuscript/images/sso-stack-lam-4.png (88 KiB)
* manuscript/images/sso-stack-lam-5.png (66 KiB)
* manuscript/images/sso-stack-lam-6.png (68 KiB)
* manuscript/images/sso-stack-lam-7.png (251 KiB)
* manuscript/images/swarmprom.png (881 KiB)
* manuscript/images/terraform_service_accounts.png (69 KiB)
* manuscript/images/terraform_service_accounts_2.png (199 KiB)
* manuscript/images/wetty.png (153 KiB)

Four further images were moved/renamed with sizes unchanged (314 KiB, 333 KiB, 310 KiB and 140 KiB); the diff viewer doesn't show their names.
@@ -1,6 +1,6 @@

# What is this?

The "**[Geek's Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of guides for establishing your own highly-available docker container cluster (swarm). This swarm enables you to run self-hosted services such as [GitLab](/recipes/gitlab/), [Plex](/recipes/plex/), [NextCloud](/recipes/nextcloud/), etc. Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/).

## Who is this for?

@@ -16,6 +16,10 @@ So if you're familiar enough with the tools, and you've done self-hosting before

2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't.
3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "_quickly ssh into the host and restart the webserver_" doesn't cut it when your wife wants to know why her phone won't sync!

## What have you done for me lately? (CHANGELOG)

Check out recent changes at the [CHANGELOG](/CHANGELOG/).

## What do you want from me?

I want your money.

@@ -24,8 +28,14 @@ No, seriously (_but yes, I do want your money - see below_), If the above applie

### Get in touch

* Come and say hi to me and the friendly geeks in the [Discord](http://chat.funkypenguin.co.nz) chat or the [Discourse](https://discourse.geek-kitchen.funkypenguin.co.nz/) forums - say hi, ask a question, or suggest a new recipe!
* Tweet me up, I'm [@funkypenguin](https://twitter.com/funkypenguin)! 🐦
* [Contact me](https://www.funkypenguin.co.nz/contact/) by a variety of channels

### Buy my book

@@ -46,11 +56,12 @@ Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u

I also gratefully accept donations of most fine socialist/anarchist/hobbyist cryptocurrencies, including the list below (_let me know if I've left out the coin-of-the-week, and I'll happily add it_):

| Currency | Address |
| ------------- | ------------- |
| Bitcoin | 1GBJfmqARmL66gQzUy9HtNWdmAEv74nfXj |
| Ethereum | 0x19e60ec49e1f053cfdfc193560ecfb3caed928f1 |
| Litecoin | LYLEF7xTpeVbjjoZD3jGLVgvKSKTYDKbK8 |
| :turtle: TurtleCoin | TRTLv2qCKYChMbU5sNkc85hzq2VcGpQidaowbnV2N6LAYrFNebMLepKKPrdif75x5hAizwfc1pX4gi5VsR9WQbjQgYcJm21zec4 |

manuscript/kubernetes/cluster.md (new file, 92 lines)
@@ -0,0 +1,92 @@

# Kubernetes on DigitalOcean

IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster.



## Ingredients

1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_)
2. Geek-Fu required : 🐱 (_easy - even has screenshots!_)

## Preparation

### Create DigitalOcean Account

Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel:



Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**:



The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_. Cleeeek it:



When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_).



That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (_if you don't already have it_).



DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_).



## Release the kubectl!

Save your kubeconfig file somewhere, and test it out by running ```kubectl --kubeconfig=<PATH TO KUBECONFIG> get nodes```

Example output:

```
[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n9e   Ready    <none>   20s   v1.13.1
[davidy:~/Downloads] %
```

In the example above, my nodes were still being deployed. Repeat the command to see your nodes spring into existence:

```
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n96   Ready    <none>   6s    v1.13.1
festive-merkle-8n9e   Ready    <none>   34s   v1.13.1
[davidy:~/Downloads] %

[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
festive-merkle-8n96   Ready    <none>   30s   v1.13.1
festive-merkle-8n9a   Ready    <none>   17s   v1.13.1
festive-merkle-8n9e   Ready    <none>   58s   v1.13.1
[davidy:~/Downloads] %
```

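To avoid passing ```--kubeconfig``` on every command, you can point kubectl at the file via the standard ```KUBECONFIG``` environment variable (_the path below is just an example_):

```
export KUBECONFIG=~/Downloads/penguins-are-the-sexiest-geeks-kubeconfig.yaml
kubectl get nodes
```
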
That's it. You have a beautiful new Kubernetes cluster ready for some action!

## Move on..

Still with me? Good. Move on to creating your own external load balancer..

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/design.md (new file, 138 lines)
@@ -0,0 +1,138 @@

# Design

Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:

* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
* **Secure** (_access protected with LetsEncrypt certificates_)
* **Automated** (_requires minimal care and feeding_)

*Unlike* the Docker Swarm design, the Kubernetes design is:

* **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
* **Complex** (_Requires more basic elements and more verbose configuration, but provides more flexibility and customisability_)

## Design Decisions

**The design and recipes are provider-agnostic**

This means that:

* The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
* Custom service elements specific to individual providers are avoided

**The simplest solution to achieve the desired result will be preferred**

This means that:

* Persistent volumes from the cloud provider are used for all persistent storage
* We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks.

**Insofar as possible, the format of recipes will align with Docker Swarm**

This means that:

* We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery, as shown in the sketch below.

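For example (_a sketch of the concept, not a specific recipe step_), each recipe gets its own namespace, so service names only have to be unique within their own "stack":

```
kubectl create namespace gitlab
kubectl create namespace nextcloud
# Each namespace can now contain its own service named "db",
# reachable within that namespace simply as "db"
kubectl -n gitlab get all
```
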
## Security

Under this design, the only inbound connections we're permitting to our Kubernetes cluster are:

### Network Flows

* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_)
* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_)

### Authentication

* Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public exposure, applications served via Traefik will be protected with an OAuth proxy.

## The challenges of external access

Because we're Cloud-Native now, it's complex to get traffic **into** our cluster from outside. We basically have 3 options:

1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but more limited: it restricts us to one host port per container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes lacks Docker Swarm's "routing mesh", which allows simple load-balancing of incoming connections.

2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations:
    1. Cost is prohibitive, at roughly $US10/month per port
    2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"

3. **NodePort**: Expose our service as a port (_between 30000-32767_) on the host which happens to be running the service (_see the sketch below_). This is challenging because you might want to expose port 443, but that's not possible with NodePort.

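Here's a sketch of such a NodePort service (_illustrative only — it borrows the 8443/30843 MQTT port pair described in the overview below_):

```
kind: Service
apiVersion: v1
metadata:
  name: mqtt
  namespace: mqtt
spec:
  type: NodePort
  selector:
    app: mqtt              # assumption: pods labelled app=mqtt
  ports:
    - port: 8443           # the service's port within the cluster
      nodePort: 30843      # exposed on every node; must fall within 30000-32767
```
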
To further complicate options #1 and #3 above, our cloud provider may, without notice, change the IP of the host running your containers (_O hai, Google!_).

Our solution to these challenges is to employ a simple-but-effective design which places an HAProxy instance in front of the services exposed by NodePort. For example, this allows us to expose a container on 443 as NodePort 30443: HAProxy listens on port 443 and forwards all requests to our node's IP on port 30443, from where they're forwarded on to our container on the original port 443.

We use a phone-home container, which calls a simple webhook on our HAProxy VM, advising HAProxy to update its backend for the calling IP. This means that when our provider changes the host's IP, we automatically update HAProxy and keep-on-truckin'!

Here's a high-level diagram:



## Overview

So what's happening in the diagram above? I'm glad you asked - let's go through it!

### Setting the scene

In the diagram, we have a Kubernetes cluster comprised of 3 nodes. You'll notice that there's no visible master node. This is because most cloud providers will give you a "_free_" master node, but you don't get to access it. The master node is just a part of the Kubernetes "_as-a-service_" which you're purchasing.

Our nodes are partitioned into several namespaces, which logically separate our individual recipes. (_I.e., allowing both a "gitlab" and a "nextcloud" namespace to include a service named "db", which would be challenging without namespaces_)

Outside of our cluster (_it could be anywhere on the internet_) is a single VM serving as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail [in its own section](/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**.

### 1 : The mosquitto pod

In the "mqtt" namespace, we have a single pod, running 2 containers - the mqtt broker, and a "phone-home" container.

Why 2 containers in one pod, instead of 2 independent pods? Because all the containers in a pod are **always** run on the same physical host. We're using the phone-home container as a simple way to call a webhook on the not-in-the-cluster VM.

The phone-home container calls the webhook, and tells HAProxy to listen on port 8443, and to forward any incoming requests to port 30843 (_within the NodePort range_) on the IP of the host running the container (_and because of the pod, the phone-home container is guaranteed to be on the same host as the MQTT container_).

### 2 : The Traefik Ingress

In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/).

What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number.

When an inbound HTTPS request is received by Traefik, based on some internal Kubernetes elements (ingresses), Traefik provides SSL termination, and routes the request to the appropriate service (_in this case, either the GitLab UI or the UniFi UI_)

### 3 : The UniFi pod

What's happening in the UniFi pod is a combination of #1 and #2 above. UniFi controller provides a webUI (_typically 8443, but we serve it via Traefik on 443_), plus some extra ports for device adoption, which use a proprietary protocol and can't be proxied with Traefik.

To make both the webUI and the adoption ports work, we use a combination of an ingress for the webUI (_see #2 above_), and a phone-home container to tell HAProxy to forward port 8080 (_the adoption port_) directly to the host, using a NodePort-exposed service.

This allows us to retain the use of a single IP for all controller functions, as accessed from outside the cluster.

### 4 : The webhook

Each phone-home container calls a webhook on the HAProxy VM, secured with a shared secret token. The phone-home container passes the desired frontend port (i.e., 443), the corresponding NodePort port (i.e., 30443), and the node's current public IP address.

The webhook uses the provided details to update HAProxy for the combination of values, validate the config, and then restart HAProxy.

### 5 : The user

Finally, the DNS for all externally-accessible services is pointed to the IP of the HAProxy VM. On receiving an inbound request (_be it port 443, 8080, or anything else configured_), HAProxy will forward the request to the IP and NodePort port learned from the phone-home container.

## Move on..

Still with me? Good. Move on to creating your cluster!

* [Start](/kubernetes/start/) - Why Kubernetes?
* Design (this page) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/helm.md (new file, 68 lines)
@@ -0,0 +1,68 @@

# Helm

[Helm](https://github.com/helm/helm) is a tool for managing Kubernetes "charts" (_think of a chart as an uber-polished collection of recipes_). Using one simple command, and by tweaking one simple config file (values.yaml), you can launch a complex stack. There are many publicly available helm charts for popular packages like [elasticsearch](https://github.com/helm/charts/tree/master/stable/elasticsearch), [ghost](https://github.com/helm/charts/tree/master/stable/ghost), [grafana](https://github.com/helm/charts/tree/master/stable/grafana), [mediawiki](https://github.com/helm/charts/tree/master/stable/mediawiki), etc.



!!! note
    Given enough interest, I may provide a helm-compatible version of the pre-mix repository for [supporters](/support/). [Hit me up](/whoami/#contact-me) if you're interested!

## Ingredients

1. [Kubernetes cluster](/kubernetes/cluster/)
2. Geek-Fu required : 🐤 (_easy - copy and paste_)

## Preparation

### Install Helm

This section is from the Helm README:

Binary downloads of the Helm client can be found on [the Releases page](https://github.com/helm/helm/releases/latest).

Unpack the `helm` binary, add it to your PATH, and you are good to go!

If you want to use a package manager:

- [Homebrew](https://brew.sh/) users can use `brew install kubernetes-helm`.
- [Chocolatey](https://chocolatey.org/) users can use `choco install kubernetes-helm`.
- [Scoop](https://scoop.sh/) users can use `scoop install helm`.
- [GoFish](https://gofi.sh/) users can use `gofish install helm`.

To rapidly get Helm up and running, start with the [Quick Start Guide](https://docs.helm.sh/using_helm/#quickstart-guide).

See the [installation guide](https://docs.helm.sh/using_helm/#installing-helm) for more options, including installing pre-releases.

## Serving

### Initialise Helm

After installing Helm, initialise it by running ```helm init```. This installs the "tiller" pod into your cluster, which works with the locally installed helm binary to launch/update/delete Kubernetes elements based on helm charts.

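You can confirm tiller came up successfully with something like the following (_standard Helm 2.x behaviour; the label selector is an assumption based on the default tiller deployment_):

```
helm init
kubectl -n kube-system get pods -l app=helm   # expect a tiller-deploy pod in Running state
helm version                                  # should report both Client and Server versions
```
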
That's it - not very exciting, I know, but we'll need helm for the next and final step in building our Kubernetes cluster - deploying the [Traefik ingress controller (via helm)](/kubernetes/traefik/)!

## Move on..

Still with me? Good. Move on to understanding Helm charts...

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* Helm (this page) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples.

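As a taste of what's possible, launching a chart from the stable repository is a one-liner (_Helm 2.x syntax; "myblog" is an arbitrary example release name_):

```
helm install --name myblog stable/ghost
```
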
### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/loadbalancer.md (new file, 340 lines)
@@ -0,0 +1,340 @@

# Load Balancer

One of the issues I encountered early on in migrating my Docker Swarm workloads to Kubernetes on GKE was how to reliably permit inbound traffic into the cluster.

There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_).

See further examination of the problem and possible solutions on the [Kubernetes design](/kubernetes/design/#the-challenges-of-external-access) page.

This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort.



## Ingredients

1. [Kubernetes cluster](/kubernetes/cluster/)
2. VM _outside_ of the Kubernetes cluster, with a fixed IP address. Perhaps on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_)

## Preparation

### Create LetsEncrypt certificate

!!! warning
    Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate, would you? Of course not, I didn't think so!

Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere.

We **could** run our webhook as a simple HTTP listener, but really, in a world where LetsEncrypt can assign you a wildcard certificate in under 30 seconds, that's unforgivable. Use the following **general** example to create a LetsEncrypt wildcard cert for your host.

In my case, since I use CloudFlare, I create /etc/webhook/letsencrypt/cloudflare.ini:

```
dns_cloudflare_email=davidy@funkypenguin.co.nz
dns_cloudflare_api_key=supersekritnevergonnatellyou
```

I request my cert by running:

```
cd /etc/webhook/
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```

!!! question
    Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course!

I add the following as a cron command to renew my certs every day:

```
cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```

Once you've confirmed you've got a valid LetsEncrypt certificate stored in ```/etc/webhook/letsencrypt/live/<your domain>/fullchain.pem```, proceed to the next step..

### Install webhook

We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤️ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```.

### Create webhook config

We'll create a single webhook, by creating ```/etc/webhook/hooks.json``` as follows. Choose a nice, secure random string for your MY_TOKEN value!

```
mkdir /etc/webhook
export MY_TOKEN=ilovecheese
cat << EOF > /etc/webhook/hooks.json
[
  {
    "id": "update-haproxy",
    "execute-command": "/etc/webhook/update-haproxy.sh",
    "command-working-directory": "/etc/webhook",
    "pass-arguments-to-command":
    [
      { "source": "payload", "name": "name" },
      { "source": "payload", "name": "frontend-port" },
      { "source": "payload", "name": "backend-port" },
      { "source": "payload", "name": "dst-ip" },
      { "source": "payload", "name": "action" }
    ],
    "trigger-rule":
    {
      "match":
      {
        "type": "value",
        "value": "$MY_TOKEN",
        "parameter":
        {
          "source": "header",
          "name": "X-Funkypenguin-Token"
        }
      }
    }
  }
]
EOF
```

!!! note
    To avoid any bozo calling our webhook, we're matching on a token header in the request, called ```X-Funkypenguin-Token```. Webhook will **ignore** any request which doesn't include a matching token in the request header.

### Update systemd for webhook

!!! note
    This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below.

Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_), I ran ```systemctl edit webhook```, and pasted in the following:

```
[Service]
# Override the default (non-secure) behaviour of webhook by passing our certificate details and custom hooks.json location
ExecStart=
ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -verbose -secure -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem
```

Then I restarted webhook by running ```systemctl enable webhook && systemctl restart webhook```, and watched the subsequent logs by running ```journalctl -u webhook -f```.

### Create /etc/webhook/update-haproxy.sh

When successfully authenticated with our top-secret token, our webhook will execute a local script, defined as follows (_yes, you should create this file_):

```
#!/bin/bash

NAME=$1
FRONTEND_PORT=$2
BACKEND_PORT=$3
DST_IP=$4
ACTION=$5

# Bail if we haven't received our expected parameters
if [[ "$#" -ne 5 ]]
then
    echo "illegal number of parameters"
    exit 2
fi

# Either add or remove a service based on $ACTION
case $ACTION in
    add)
        # Create the portion of haproxy config
        cat << EOF > /etc/webhook/haproxy/$FRONTEND_PORT.inc
### >> Used to run $NAME:${FRONTEND_PORT}
frontend ${FRONTEND_PORT}_frontend
    bind *:$FRONTEND_PORT
    mode tcp
    default_backend ${FRONTEND_PORT}_backend

backend ${FRONTEND_PORT}_backend
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 $DST_IP:$BACKEND_PORT
### << Used to run $NAME:$FRONTEND_PORT
EOF
        ;;
    delete)
        rm /etc/webhook/haproxy/$FRONTEND_PORT.inc
        ;;
    *)
        echo "Invalid action $ACTION"
        exit 2
esac

# Concatenate all the haproxy configs into a single file
cat /etc/webhook/haproxy/global /etc/webhook/haproxy/*.inc > /etc/webhook/haproxy/pre_validate.cfg

# Validate the generated config
haproxy -f /etc/webhook/haproxy/pre_validate.cfg -c

# Only if validation was successful do we copy it over to /etc/haproxy/haproxy.cfg and reload
if [[ $? -gt 0 ]]
then
    echo "HAProxy validation failed, not continuing"
    exit 2
else
    # Remember what the original file looked like
    m1=$(md5sum "/etc/haproxy/haproxy.cfg")

    # Overwrite the original file
    cp /etc/webhook/haproxy/pre_validate.cfg /etc/haproxy/haproxy.cfg

    # Get the MD5 of the new file
    m2=$(md5sum "/etc/haproxy/haproxy.cfg")

    # Only if the file has changed do we need to reload haproxy
    if [ "$m1" != "$m2" ] ; then
        echo "HAProxy config has changed, reloading"
        systemctl reload haproxy
    fi
fi
```

### Create /etc/webhook/haproxy/global

Create ```/etc/webhook/haproxy/global``` and populate it with something like the following. This will be the non-dynamically-generated part of our HAProxy config:

```
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 5000000
    timeout server 5000000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
```

## Serving

### Take the bait!

Whew! We now have all the components of our automated load-balancing solution in place. Browse to your VM's FQDN at https://whatever.it.is:9000/hooks/update-haproxy, and you should see the text "_Hook rules were not satisfied_", served with a valid SSL certificate (_you didn't send a token_).

If you don't see the above, then check the following:

1. Does the webhook verbose log (```journalctl -u webhook -f```) complain about invalid arguments or missing files?
2. Is port 9000 open to the internet on your VM?

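You can also exercise the full path yourself with curl (_a sketch — the token and payload values here are just the examples used elsewhere on this page_):

```
curl -X POST https://whatever.it.is:9000/hooks/update-haproxy \
     -H "Content-Type: application/json" \
     -H "X-Funkypenguin-Token: ilovecheese" \
     -d '{"name":"test","frontend-port":"8443","backend-port":"30843","dst-ip":"192.0.2.10","action":"add"}'
```

Watch ```journalctl -u webhook -f``` to see update-haproxy.sh fire with those arguments.
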
### Apply to pods

You'll see me use this design in any Kubernetes-based recipe which requires container-specific ports, like UniFi. Here's an excerpt of the .yml which defines the UniFi controller:

```
<snip>
spec:
  containers:
  - image: linuxserver/unifi
    name: controller
    volumeMounts:
    - name: controller-volumeclaim
      mountPath: /config
  - image: funkypenguin/poor-mans-k8s-lb
    imagePullPolicy: Always
    name: 8080-phone-home
    env:
    - name: REPEAT_INTERVAL
      value: "600"
    - name: FRONTEND_PORT
      value: "8080"
    - name: BACKEND_PORT
      value: "30808"
    - name: NAME
      value: "unifi-adoption"
    - name: WEBHOOK
      value: "https://my-secret.url.wouldnt.ya.like.to.know:9000/hooks/update-haproxy"
    - name: WEBHOOK_TOKEN
      valueFrom:
        secretKeyRef:
          name: unifi-credentials
          key: webhook_token.secret
<snip>
```

The takeaways here are:

1. We add the funkypenguin/poor-mans-k8s-lb container to any pod which has special port requirements, forcing the container to run on the same node as the other containers in the pod (_in this case, the UniFi controller_)
2. We use a Kubernetes secret for the webhook token, so that our .yml can be shared without exposing sensitive data

Here's what the webhook logs look like when the above is added to the UniFi deployment:

```
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Started POST /hooks/update-haproxy
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy got matched
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy hook triggered successfully
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Completed 200 OK in 2.123921ms
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 executing /etc/webhook/update-haproxy.sh (/etc/webhook/update-haproxy.sh) with arguments ["/etc/webhook/update-haproxy.sh" "unifi-adoption" "8080" "30808" "35.244.91.178" "add"] and environment [] using /etc/webhook as cwd
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command output: Configuration file is valid
<HAProxy restarts>
```

## Move on..

Still with me? Good. Move on to setting up an ingress SSL-terminating proxy with Traefik..

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* Load Balancer (this page) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

1. This is an MVP of the load-balancer solution. Any suggestions for improvements are welcome 😉

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/snapshots.md (new file, 187 lines)
@@ -0,0 +1,187 @@

# Snapshots

Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safeguarding your data from accidental loss, as well as malicious impact.

Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.

Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...

It bears repeating though - don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.

<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/miracle2k/k8s-snapshots)), running _inside_ your cluster, to trigger automated snapshots of your persistent volumes, using your cloud provider's APIs.

## Ingredients

1. [Kubernetes cluster](/kubernetes/cluster/) on either AWS or GKE (_currently - other providers are apparently [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py)_)
2. Geek-Fu required : 🐒🐒 (_medium - minor adjustments may be required_)

## Preparation

### Create RoleBinding (GKE only)

If you're running GKE, run the following to create a RoleBinding, allowing your user to grant rights-it-doesn't-currently-have to the service account responsible for creating the snapshots:

```
kubectl create clusterrolebinding your-user-cluster-admin-binding \
  --clusterrole=cluster-admin --user=<your user@yourdomain>
```

!!! question
    Why do we have to do this? Check [this blog post](https://www.funkypenguin.co.nz/workaround-blocked-attempt-to-grant-extra-privileges-on-gke/) for details
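
Not sure whether your account already holds the necessary rights? You can ask the cluster directly first (_a quick sanity check using standard kubectl, nothing specific to k8s-snapshots_):

```
kubectl auth can-i create clusterrolebindings
```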

### Apply RBAC

If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s-snapshots to see your PVs and friends:

```
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```

## Serving

### Deploy the pod

Ready? Run the following to create a deployment in the kube-system namespace:

```
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-snapshots
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-snapshots
    spec:
      containers:
      - name: k8s-snapshots
        image: elsdoerfer/k8s-snapshots:v2.0
EOF
```

Confirm your pod is running and happy by running ```kubectl get pods -n kube-system```, and ```kubectl -n kube-system logs k8s-snapshots<tab-to-auto-complete>```

### Pick PVs to snapshot

k8s-snapshots relies on annotations to tell it how frequently to snapshot your PVs. A PV requires the ```backup.kubernetes.io/deltas``` annotation in order to be snapshotted.

From the k8s-snapshots README:

```
The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by its and the parent generation's delta.

For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.

The most recent backup is always kept.

The first delta is the backup interval.
```
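
To make that concrete, here's how I read the ```P1D P30D P360D``` example used below (_my own annotation of the README's semantics above, not output from the tool_):

```
backup.kubernetes.io/deltas: "P1D P30D P360D"
# P1D   : take a snapshot every day
# P30D  : keep ~30 of those dailies (one per P1D, until each is 30 days old)
# P360D : then keep ~12 "monthlies" (one per P30D, until each is 360 days old)
# Anything older than 360 days is discarded; the most recent snapshot is always kept.
```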

To add the annotation to an existing PV, run something like this:

```
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
  '{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```
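
To confirm the annotation took (_a simple sanity check - the PV name is just the example one from above_):

```
kubectl get pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -o yaml | grep deltas
```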

To add the annotation to a _new_ PV, add the following annotation to your **PVC**:

```
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```

Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: controller-volumeclaim
  namespace: unifi
  annotations:
    backup.kubernetes.io/deltas: P1D P7D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

And here's what my snapshot list looks like after a few days:

![Screenshot of GCP snapshots](/images/kubernetes-snapshots.png)

### Snapshot a non-Kubernetes volume (optional)

If you're running traditional compute instances with your cloud provider (I do this for my poor man's load balancer), you might want to backup _these_ volumes as well.

To do so, first create a custom resource definition (CRD) for ```SnapshotRule```:

```
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: snapshotrules.k8s-snapshots.elsdoerfer.com
spec:
  group: k8s-snapshots.elsdoerfer.com
  version: v1
  scope: Namespaced
  names:
    plural: snapshotrules
    singular: snapshotrule
    kind: SnapshotRule
    shortNames:
    - sr
EOF
```

Then identify the volume ID of your volume, and create an appropriate ```SnapshotRule```:

```
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
metadata:
  name: haproxy-badass-loadbalancer
spec:
  deltas: P1D P7D
  backend: google
  disk:
    name: haproxy2
    zone: australia-southeast1-a
EOF
```
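
You can confirm your rule was accepted, using the handy ```sr``` shortname we defined in the CRD above:

```
kubectl get sr
```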

!!! note
    Example syntaxes for the SnapshotRule for different providers can be found at https://github.com/miracle2k/k8s-snapshots/tree/master/examples

## Move on..

Still with me? Good. Move on to understanding Helm charts...

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* Snapshots (this page) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/start.md
@@ -0,0 +1,76 @@

# Why Kubernetes?

My first introduction to Kubernetes was a children's story:

<iframe width="560" height="315" src="https://www.youtube.com/embed/4ht22ReBjno" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Wait, what?

Why would you want to use Kubernetes for your self-hosted recipes over simple Docker Swarm? Here's my personal take..

I use Docker Swarm both at home (_on a single-node swarm_), and on a trio of Ubuntu 16.04 VPSs in a shared lab OpenStack environment.

In both cases above, I'm responsible for maintaining the infrastructure supporting Docker - either the physical host, or the VPS operating systems.

I started experimenting with Kubernetes as a plan to improve the reliability of my cryptocurrency mining pools (_the contended lab VPSs negatively impacted the likelihood of finding a block_), and as a long-term replacement for my aging home server.

What I enjoy about building recipes and self-hosting is **not** the operating system maintenance, it's the tools and applications that I can quickly launch in my swarms. If I could **only** play with the applications, and not bother with the maintenance, I totally would.

Kubernetes (_on a cloud provider, mind you!_) does this for me. I feed Kubernetes a series of YAML files, and it takes care of all the rest, including version upgrades, node failures/replacements, disk attach/detachments, etc.

## Uggh, it's so complicated!

Yes, but that's a necessary sacrifice for the maturity, power and flexibility it offers. Like docker-compose syntax, Kubernetes uses YAML to define its various, interworking components.

Let's talk some definitions. Kubernetes.io provides a [glossary](https://kubernetes.io/docs/reference/glossary/?fundamental=true). My definitions are below:

* **Node** : A compute instance which runs docker containers, managed by a cluster master.

* **Cluster** : One or more "worker nodes" which run containers. Very similar to a Docker Swarm node. In most cloud provider deployments, the [master node for your cluster is provided free of charge](https://www.sdxcentral.com/articles/news/google-eliminates-gke-management-fees-kubernetes-clusters/2017/11/), but you don't get to access it.

* **Pod** : A collection of one or more containers. If a pod runs multiple containers, these containers always run on the same node.

* **Deployment** : A definition of a desired state. I.e., "I want a pod with containers A and B running". The Kubernetes master then makes whatever changes are necessary to maintain that state. (_I.e., if a pod crashes, but is supposed to be running, a new pod will be started_)

* **Service** : Unlike Docker Swarm, service discovery is not _built in_ to Kubernetes. For your pods to discover each other (say, to have "webserver" talk to "database"), you create a service for each pod, and refer to these services when you want your containers (_in pods_) to talk to each other (_see the example manifest after this list_). Complicated, yes, but the abstraction allows you to do powerful things, like auto-scale-up a bunch of database "pods" behind a service called "database", or perform a rolling container image upgrade with zero impact.

* **External access** : Services not only allow pods to discover each other, but they're also the mechanism through which the outside world can talk to a container. At the simplest level, this is akin to exposing a container port on a docker host.

* **Ingress** : When mapping ports to applications is inadequate (think virtual web hosts), an ingress is a sort of "inbound router" which can receive requests on one port (i.e., HTTPS), and forward them to a variety of internal pods, based on things like VHOST, etc. For us, this is the functional equivalent of what Traefik does in Docker Swarm. In fact, we use a Traefik Ingress in Kubernetes to accomplish the same.

* **Persistent Volume** : A virtual disk which is attached to a pod, storing persistent data. Meets the requirement for shared storage from Docker Swarm. I.e., if a persistent volume (PV) is bound to a pod, and the pod dies and is recreated, or gets upgraded to a new image, the PV (_and its data_) is re-bound to the new container. PVs can be "claimed" in a YAML definition, so that your Kubernetes provider will auto-create a PV when you launch your pod. PVs can be snapshotted.

* **Namespace** : An abstraction to separate a collection of pods, services, ingresses, etc. A "virtual cluster within a cluster". Can be used for security, or simplicity. For example, since we don't have individual docker stacks anymore, if you commonly name your database container "db", and you want to deploy two applications which both use a database container, how will you name your services? Use namespaces to keep each application ("nextcloud" vs "kanboard") separate. Namespaces also allow you to allocate resource **limits** to the aggregate of containers in a namespace, so you could, for example, limit the "nextcloud" namespace to 2.3 CPUs and 1200MB RAM.
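
Here's a minimal sketch of what a Service manifest looks like (_illustrative only - the "webserver" name and the app=webserver label are hypothetical, not from any particular recipe_):

```
apiVersion: v1
kind: Service
metadata:
  name: webserver          # other pods in the namespace can now resolve "webserver"
spec:
  selector:
    app: webserver         # route traffic to any pod labelled app=webserver
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # port the container actually listens on
```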

## Mm.. maaaaybe, how do I start?

If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/digitalocean/).

If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available.

## I'm ready, gimme some recipes!

As of Jan 2019, our first (_and only!_) Kubernetes recipe is a WIP for the Mosquitto [MQTT](/recipes/mqtt/) broker. It's a good, simple starter if you're into home automation (_shoutout to [Home Assistant](/recipes/homeassistant/)!_), since it only requires a single container, and a simple NodePort service.

I'd love your [feedback](/support/) on the Kubernetes recipes, as well as suggestions for what to add next. The current rough plan is to replicate the Chef's Favorites recipes (_see the left-hand panel_) into Kubernetes first.

## Move on..

Still with me? Good. Move on to reviewing the design elements...

* Start (this page) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

manuscript/kubernetes/traefik.md
@@ -0,0 +1,220 @@

# Traefik

This recipe utilises the [traefik helm chart](https://github.com/helm/charts/tree/master/stable/traefik) to provide LetsEncrypt-secured HTTPS access to multiple containers within your cluster.

## Ingredients

1. [Kubernetes cluster](/kubernetes/cluster/)
2. [Helm](/kubernetes/helm/) installed and initialised in your cluster

## Preparation

### Clone helm charts

Clone the helm charts, by running:

```
git clone https://github.com/helm/charts
```

Change to stable/traefik:

```
cd charts/stable/traefik
```

### Edit values.yaml

The beauty of the helm approach is that all the complexity of the Kubernetes elements' YAML is hidden from you (_created using templates_), and all your changes go into values.yaml.

These are my values; you'll need to adjust for your own situation:

```
imageTag: alpine
serviceType: NodePort
# yes, we're not listening on 80 or 443, because we don't want to pay for a loadbalancer IP to do this. I use poor-mans-k8s-lb instead
service:
  nodePorts:
    http: 30080
    https: 30443
cpuRequest: 1m
memoryRequest: 100Mi
cpuLimit: 1000m
memoryLimit: 500Mi

ssl:
  enabled: true
  enforced: true
debug:
  enabled: false

rbac:
  enabled: true
dashboard:
  enabled: true
  domain: traefik.funkypenguin.co.nz
kubernetes:
  # set these to all the namespaces you intend to use. I standardize on one-per-stack. You can always add more later
  namespaces:
    - kube-system
    - unifi
    - kanboard
    - nextcloud
    - huginn
    - miniflux
accessLogs:
  enabled: true
acme:
  persistence:
    enabled: true
    # Add the necessary annotation to backup the ACME store with k8s-snapshots
    annotations: { "backup.kubernetes.io/deltas": "P1D P7D" }
  staging: false
  enabled: true
  logging: true
  email: "<my letsencrypt email>"
  challengeType: "dns-01"
  dnsProvider:
    name: cloudflare
    cloudflare:
      CLOUDFLARE_EMAIL: "<my cloudflare email>"
      CLOUDFLARE_API_KEY: "<my cloudflare API key>"
  domains:
    enabled: true
    domainsList:
      - main: "*.funkypenguin.co.nz" # name of the wildcard domain name for the certificate
      - sans:
        - "funkypenguin.co.nz"
metrics:
  prometheus:
    enabled: true
```

!!! note
    The helm chart doesn't enable the Traefik dashboard by default. I intend to add an oauth_proxy pod to secure this, in a future recipe update.

### Prepare phone-home pod

[Remember](/kubernetes/loadbalancer/) how our load balancer design ties a phone-home container to another container using a pod, so that the phone-home container can tell our external load balancer (_using a webhook_) where to send our traffic?

Since we deployed Traefik using helm, we need to take a slightly different approach, so we'll create a pod with an affinity which ensures it runs on the same host which runs the Traefik container (_more precisely, containers with the label app=traefik_).

Create phone-home.yaml as follows:

```
apiVersion: v1
kind: Pod
metadata:
  name: phonehome-traefik
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - traefik
        topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - image: funkypenguin/poor-mans-k8s-lb
    imagePullPolicy: Always
    name: phonehome-traefik
    env:
    - name: REPEAT_INTERVAL
      value: "600"
    - name: FRONTEND_PORT
      value: "443"
    - name: BACKEND_PORT
      value: "30443"
    - name: NAME
      value: "traefik"
    - name: WEBHOOK
      value: "https://<your loadbalancer hostname>:9000/hooks/update-haproxy"
    - name: WEBHOOK_TOKEN
      valueFrom:
        secretKeyRef:
          name: traefik-credentials
          key: webhook_token.secret
```

Create your webhook token secret by running:

```
echo -n "imtoosecretformyshorts" > webhook_token.secret
kubectl create secret generic traefik-credentials --from-file=webhook_token.secret
```

!!! warning
    Yes, the "-n" in the echo statement is needed. [Read here for why](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/).
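
If you want to double-check that no sneaky newline made it into the stored secret, you can decode what the cluster actually holds and count the bytes (_the backslash escapes the literal dot in the key name; the count should exactly match the length of your token_):

```
kubectl get secret traefik-credentials -o jsonpath='{.data.webhook_token\.secret}' | base64 -d | wc -c
```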

## Serving

### Install the chart

To install the chart, simply run ```helm install stable/traefik --name traefik --namespace kube-system```

That's it, traefik is running.

You can confirm this by running ```kubectl get pods -n kube-system```, and even watch the traefik logs, by running ```kubectl -n kube-system logs -f traefik<tab-to-autocomplete>```
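
You can also confirm that the chart created the NodePort service we asked for in values.yaml (_you should see ports 30080 and 30443 listed_):

```
kubectl -n kube-system get svc traefik
```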

### Deploy the phone-home pod

We still can't access traefik yet, since it's listening on port 30443 on whichever node it happens to be running on. We'll launch our phone-home pod, to tell our [load balancer](/kubernetes/loadbalancer/) where to send incoming traffic on port 443.

Optionally, on your loadbalancer VM, run ```journalctl -u webhook -f``` to watch for the container calling the webhook.

Run ```kubectl create -f phone-home.yaml``` to create the pod.

Run ```kubectl get pods -o wide``` to confirm that both the phone-home pod and the traefik pod are on the same node:

```
# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE
phonehome-traefik          1/1     Running   0          20h   10.56.2.55   gke-penguins-are-sexy-8b85ef4d-2c9g
traefik-69db67f64c-5666c   1/1     Running   0          10d   10.56.2.30   gke-penguins-are-sexy-8b85ef4d-2c9g
```

Now browse to https://<your load balancer>, and you should get a valid SSL cert, along with a 404 error (_you haven't deployed any other recipes yet_)

### Making changes

If you change a value in values.yaml, and want to update the traefik pod, run:

```
helm upgrade --values values.yaml traefik stable/traefik --recreate-pods
```
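
To confirm your overrides were applied to the release, you can ask helm to show the user-supplied values:

```
helm get values traefik
```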

## Review

We're doneburgers! 🍔 We now have all the pieces to safely deploy recipes into our Kubernetes cluster, knowing:

1. Our HTTPS traffic will be secured with LetsEncrypt (thanks Traefik!)
2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using a free-to-scale [external load balancer](/kubernetes/loadbalancer/)
3. Our persistent data will be [automatically backed up](/kubernetes/snapshots/)

Here's a recap:

* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* Traefik (this page) - Traefik Ingress via Helm

## Where to next?

I'll be adding more Kubernetes versions of existing recipes soon. Check out the [MQTT](/recipes/mqtt/) recipe for a start!

## Chef's Notes

1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

@@ -18,12 +18,13 @@ Tools included in the AutoPirate stack are:

* **[NZBGet](https://nzbget.net/)** : downloads data from usenet servers based on .nzb definitions, but written in C++ and designed with performance in mind, to achieve maximum download speed while using very little system resources (_this is a popular alternative to SABnzbd_)
* **[RTorrent](https://github.com/rakshasa/rtorrent/wiki)** is a CLI-based torrent client, which when combined with **[ruTorrent](https://github.com/Novik/ruTorrent)** becomes a powerful and fully browser-managed torrent client. (_Yes, it's not Usenet, but Sonarr/Radarr will fulfill your watchlist using either Usenet **or** torrents, so it's worth including_)
* **[NZBHydra](https://github.com/theotherp/nzbhydra)** : acts as a "meta-indexer", so that your downloading tools (_radarr, sonarr, etc_) only need to be setup for a single indexer. Also produces interesting stats on indexers, which helps when evaluating which indexers are performing well.
* **[NZBHydra2](https://github.com/theotherp/nzbhydra2)** : is a high-performance rewrite of the original NZBHydra, with extra features. While still in beta, NZBHydra2 will eventually supersede NZBHydra
* **[Sonarr](https://sonarr.tv)** : finds, downloads and manages TV shows
* **[Radarr](https://radarr.video)** : finds, downloads and manages movies
* **[Mylar](https://github.com/evilhero/mylar)** : finds, downloads and manages comic books
* **[Headphones](https://github.com/rembo10/headphones)** : finds, downloads and manages music
* **[Lazy Librarian](https://github.com/itsmegb/LazyLibrarian)** : finds, downloads and manages ebooks
* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](/recipies/plex/)/[Emby](/recipies/emby/) library using the above tools
* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](/recipes/plex/)/[Emby](/recipes/emby/) library using the above tools
* **[Jackett](https://github.com/Jackett/Jackett)** : Provides a local, caching, API-based interface to torrent trackers, simplifying the way your tools search for torrents.

Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly.

@@ -110,18 +111,19 @@ networks:

Now work your way through the list of tools below, adding whichever tools you want to use, and finishing with the **end** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [Mylar](/recipies/autopirate/mylar/)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Headphones](/recipies/autopirate/headphones/)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [End](/recipes/autopirate/end/) (launch the stack)

### Tip your waiter (donate)

@@ -1,5 +1,5 @@

!!! warning
    This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

### Launch Autopirate stack

@@ -1,7 +1,7 @@

hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Headphones

@@ -11,7 +11,7 @@ hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and

## Inclusion into AutoPirate

To include Headphones in your [AutoPirate](/recipies/autopirate/) stack, include the following in your autopirate.yml stack definition file:
To include Headphones in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
headphones:
@@ -24,9 +24,8 @@ headphones:
  - internal

headphones_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/headphones.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -52,20 +51,23 @@ headphones_proxy:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](https://github.com/evilhero/mylar)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* Headphones (this page)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓
manuscript/recipes/autopirate/heimdall.md
@@ -0,0 +1,88 @@

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Heimdall

[Heimdall Application Dashboard](https://heimdall.site/) is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like.

Heimdall is an elegant solution to organise all your web applications. It’s dedicated to this purpose so you won’t lose your links in a sea of bookmarks.

Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), and friends.

![Screenshot of Heimdall](/images/heimdall.png)

## Inclusion into AutoPirate

To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

````
heimdall:
  image: linuxserver/heimdall:latest
  env_file: /var/data/config/autopirate/heimdall.env
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/heimdall:/config
  networks:
    - internal

heimdall_proxy:
  image: funkypenguin/oauth2_proxy:latest
  env_file : /var/data/config/autopirate/heimdall.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:heimdall.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://heimdall:80
    -redirect-url=https://heimdall.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
````

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* Heimdall (this page)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

@@ -1,5 +1,5 @@

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Jackett

@@ -11,7 +11,7 @@ This allows for getting recent uploads (like RSS) and performing searches. Jacke

## Inclusion into AutoPirate

To include Jackett in your [AutoPirate](/recipies/autopirate/) stack, include the following in your autopirate.yml stack definition file:
To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
jackett:
@@ -23,9 +23,8 @@ jackett:
  - internal

jackett_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/jackett.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -52,20 +51,23 @@ jackett_proxy:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [Mylar](/recipies/autopirate/mylarr/)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Headphones](/recipies/autopirate/headphones/)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* Jackett (this page)
* [End](/recipies/autopirate/end/) (launch the stack)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

@@ -1,5 +1,5 @@

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# LazyLibrarian

@@ -15,7 +15,7 @@

## Inclusion into AutoPirate

To include LazyLibrarian in your [AutoPirate](/recipies/autopirate/) stack, include the following in your autopirate.yml stack definition file:
To include LazyLibrarian in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
lazylibrarian:
@@ -28,9 +28,8 @@ lazylibrarian:
  - internal

lazylibrarian_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/lazylibrarian.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -64,25 +63,28 @@ calibre-server:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](https://github.com/evilhero/mylar)
* Lazy Librarian (this page)
* [Headphones](https://github.com/rembo10/headphones)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipies/calibre-web) recipe.
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

### Tip your waiter (donate)
manuscript/recipes/autopirate/lidarr.md
@@ -0,0 +1,83 @@

hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Lidarr

[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](/recipes/autopirate/headphones/), but is written using the same(ish) codebase as [Radarr](/recipes/autopirate/radarr/) and [Sonarr](/recipes/autopirate/sonarr/). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](/recipes/autopirate/sabnzbd/), [NZBGet](/recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_)

![Screenshot of Lidarr](/images/lidarr.png)

## Inclusion into AutoPirate

To include Lidarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

````
lidarr:
  image: linuxserver/lidarr:latest
  env_file : /var/data/config/autopirate/lidarr.env
  volumes:
    - /var/data/autopirate/lidarr:/config
    - /var/data/media:/media
  networks:
    - internal

lidarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/lidarr.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:lidarr.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://lidarr:8181
    -redirect-url=https://lidarr.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
````

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](https://github.com/evilhero/mylar)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* Lidarr (this page)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

@@ -1,5 +1,5 @@

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Mylar

@@ -9,7 +9,7 @@

## Inclusion into AutoPirate

To include Mylar in your [AutoPirate](/recipies/autopirate/) stack, include the following in your autopirate.yml stack definition file:
To include Mylar in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
mylar:
@@ -22,9 +22,8 @@ mylar:
  - internal

mylar_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/mylar.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -50,20 +49,23 @@ mylar_proxy:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* Mylar (this page)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Headphones](/recipies/autopirate/headphones/)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

@@ -1,18 +1,18 @@

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# NZBGet

## Introduction

NZBGet performs the same function as [SABnzbd](/recipies/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_).
NZBGet performs the same function as [SABnzbd](/recipes/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_).

![Screenshot of NZBGet](/images/nzbget.png)

## Inclusion into AutoPirate

To include NZBGet in your [AutoPirate](/recipies/autopirate/) stack
(_The only reason you **wouldn't** use NZBGet, would be if you were using [SABnzbd](/recipies/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file:
To include NZBGet in your [AutoPirate](/recipes/autopirate/) stack
(_The only reason you **wouldn't** use NZBGet, would be if you were using [SABnzbd](/recipes/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file:

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy```

@@ -28,9 +28,8 @@ nzbget:
  - internal

nzbget_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/nzbget.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -57,20 +56,23 @@ nzbget_proxy:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* NZBGet (this page)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [Mylar](/recipies/autopirate/mylar/)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Headphones](/recipies/autopirate/headphones/)
* [NZBHydra](/recipies/autopirate/nzbhydra/)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

@@ -1,5 +1,5 @@

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# NZBHydra

@@ -16,7 +16,7 @@

## Inclusion into AutoPirate

To include NZBHydra in your [AutoPirate](/recipies/autopirate/) stack, include the following in your autopirate.yml stack definition file:
To include NZBHydra in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
nzbhydra:
@@ -28,9 +28,8 @@ nzbhydra:
  - internal

nzbhydra_proxy:
  image: zappi/oauth2_proxy
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/nzbhydra.env
  dns_search: myswarm.example.com
  networks:
    - internal
    - traefik_public
@@ -56,20 +55,23 @@ nzbhydra_proxy:

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipies/autopirate/end/)** section:
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipies/autopirate/sabnzbd.md)
* [NZBGet](/recipies/autopirate/nzbget.md)
* [RTorrent](/recipies/autopirate/rtorrent/)
* [Sonarr](/recipies/autopirate/sonarr/)
* [Radarr](/recipies/autopirate/radarr/)
* [Mylar](/recipies/autopirate/mylar/)
* [Lazy Librarian](/recipies/autopirate/lazylibrarian/)
* [Headphones](/recipies/autopirate/headphones/)
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* NZBHydra (this page)
* [Ombi](/recipies/autopirate/ombi/)
* [Jackett](/recipies/autopirate/jackett/)
* [End](/recipies/autopirate/end/) (launch the stack)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓
101
manuscript/recipes/autopirate/nzbhydra2.md
Normal file
@@ -0,0 +1,101 @@
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
|
||||
# NZBHydra 2
|
||||
|
||||
[NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.
|
||||
|
||||
!!! note
|
||||
NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhybra/). It's currently in Beta. It works mostly fine but some functions might not be completely done and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively.
|
||||
|
||||


Features include:

* Searches Anizb, BinSearch, NZBIndex and any newznab-compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates, and returns them all in one place
* Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/)
* Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
* Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID, or if no results were found
* Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc.
* Search and download history and extensive stats, e.g. indexer response times, download shares, NZB age, etc.
* Authentication and multi-user support
* Automatic update of NZB download status, by querying the configured downloaders
* RSS support with configurable cache times
* Torrent support (_although I prefer [Jackett](/recipes/autopirate/jackett/) for this_):
    * For GUI searches, allowing you to download torrents to a blackhole folder
    * A separate Torznab-compatible endpoint for API requests, allowing you to merge multiple trackers
* Extensive configurability
* Migration of database and settings from v1

## Inclusion into AutoPirate

To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

````
nzbhydra2:
  image: linuxserver/hydra2:latest
  env_file : /var/data/config/autopirate/nzbhydra2.env
  volumes:
    - /var/data/autopirate/nzbhydra2:/config
  networks:
    - internal

nzbhydra2_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/nzbhydra2.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:nzbhydra2.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://nzbhydra2:5076
    -redirect-url=https://nzbhydra2.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
````

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* NZBHydra2 (this page)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

1. In many cases, tools will integrate with each other. For example, Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc.) to use NZBHydra2, you'll need to change both the target hostname (_to "nzbhydra2", the service name in the stack definition above_) and the target port (_to 5076_).
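
As a quick sanity-check of the swarm DNS plumbing described above - a sketch only, where the container name and API key are illustrative, and which assumes curl is available in the image:

```
# Open a shell inside any running container in the stack (e.g. radarr):
docker exec -it <radarr-container> /bin/bash

# NZBHydra2 answers newznab-style queries on port 5076;
# "t=caps" asks it to describe its capabilities:
curl "http://nzbhydra2:5076/api?t=caps&apikey=YOUR-API-KEY"
```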

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Ombi

[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate](/recipes/autopirate/) stack. Features include:

* Lets users request Movies and TV Shows (_whether it be the entire series, an entire season, or even single episodes_)
* Easily manage your requests
* Automatically updates the status of requests when they are available on Plex/Emby

## Inclusion into AutoPirate

To include Ombi in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
ombi:
  # ...
  networks:
    - internal

ombi_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/ombi.env
  networks:
    - internal
    - traefik_public
  # ...
```

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* Ombi (this page)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Radarr

!!! tip "Sponsored Project"
    Radarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁

## Inclusion into AutoPirate

To include Radarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
radarr:
  # ...
  networks:
    - internal

radarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/radarr.env
  networks:
    - internal
    - traefik_public
  # ...
```

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* Radarr (this page)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# RTorrent / ruTorrent

## Inclusion into AutoPirate

To include ruTorrent in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
rtorrent:
  # ...

rtorrent_proxy:
  image: skippy/oauth2_proxy
  env_file : /var/data/config/autopirate/rtorrent.env
  networks:
    - internal
    - traefik_public
  # ...
```

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* RTorrent (this page)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# SABnzbd

## Introduction

SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually, or from other [autopirate](/recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files.

!!! tip "Sponsored Project"
    SABnzbd is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. It's not sexy, but it's consistent and reliable, and I enjoy the fruits of its labor near-daily.

## Inclusion into AutoPirate

To include SABnzbd in your [AutoPirate](/recipes/autopirate/) stack (_the only reason you **wouldn't** use SABnzbd would be if you were using [NZBGet](/recipes/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file:

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

````
sabnzbd:
  image: linuxserver/sabnzbd:latest
  env_file : /var/data/config/autopirate/sabnzbd.env
  volumes:
    - /var/data/autopirate/sabnzbd:/config
    - /var/data/media:/media
  networks:
    - internal

sabnzbd_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/sabnzbd.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:sabnzbd.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://sabnzbd:8080
    -redirect-url=https://sabnzbd.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
````

!!! warning "Important Note re hostname validation"
    (**Updated 10 June 2018**) : In SABnzbd [2.3.3](https://sabnzbd.org/wiki/extra/hostname-check.html), hostname verification was added as a mandatory check. SABnzbd will refuse inbound connections which weren't addressed to its own (_initially, autodetected_) hostname. This presents a problem within Docker Swarm, where container hostnames are random and disposable.

    You'll need to edit sabnzbd.ini (_only created after your first launch_), and **replace** the value of the ```host_whitelist``` configuration (_it's comma-separated_) with the name of your service within the swarm definition, as well as your FQDN as accessed via traefik.

    For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```
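
If you prefer a one-liner, something like this achieves the same edit - a sketch only, which assumes the bind-mounted config path used in the stack definition above, and your own FQDN/service name:

```
# Rewrite the host_whitelist line in the bind-mounted sabnzbd.ini
# (stop the sabnzbd service first, so it doesn't overwrite the change on exit)
sed -i 's/^host_whitelist.*/host_whitelist = sabnzbd.example.com, sabnzbd/' \
  /var/data/autopirate/sabnzbd/sabnzbd.ini
```
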
## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* SABnzbd (this page)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

1. In many cases, tools will integrate with each other. For example, Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Sonarr

!!! tip "Sponsored Project"
    Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁

## Inclusion into AutoPirate

To include Sonarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
sonarr:
  # ...
  networks:
    - internal

sonarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file : /var/data/config/autopirate/sonarr.env
  networks:
    - internal
    - traefik_public
  # ...
```

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* Sonarr (this page)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

hero: Heroic Hero

# BookStack

BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.

A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](/recipes/gollum/), BookStack relies on a database backend (so searching and versioning are easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing.

I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oauth_proxy), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense.

## Ingredients

1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our containers, so create them under /var/data/bookstack (for the database dumps) and /var/data/runtime/bookstack (for the database itself):

```
mkdir -p /var/data/bookstack/database-dump
mkdir -p /var/data/runtime/bookstack/db
```

### Prepare environment

Create bookstack.env, and populate it with the following variables. Set the [oauth_proxy](/reference/oauth_proxy) variables provided by your OAuth provider (if applicable).

```
# For oauth-proxy (optional)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=

# For MariaDB/MySQL database
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_DATABASE=bookstack
MYSQL_USER=bookstack
MYSQL_PASSWORD=secret

# Bookstack-specific variables
DB_HOST=bookstack_db:3306
DB_DATABASE=bookstack
DB_USERNAME=bookstack
DB_PASSWORD=secret
```
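
Note that the db-backup service in the stack definition below also references ```BACKUP_NUM_KEEP``` and ```BACKUP_FREQUENCY```, which aren't defined above - add a couple of lines like the following (the values here are illustrative defaults):

```
# For db-backup (how many dumps to keep, and how often to dump)
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```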

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

```
version: '3'

services:

  db:
    image: mariadb:10
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal
    volumes:
      - /var/data/runtime/bookstack/db:/var/lib/mysql

  proxy:
    image: a5huynh/oauth2_proxy
    env_file : /var/data/config/bookstack/bookstack.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:bookstack.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/bookstack/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://app
      -redirect-url=https://bookstack.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

  app:
    image: solidnerd/bookstack
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal

  db-backup:
    image: mariadb:10
    env_file: /var/data/config/bookstack/bookstack.env
    volumes:
      - /var/data/bookstack/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.33.0/24
```

!!! note
    Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.

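Before picking a new subnet, you can list what your existing networks already claim - a quick sketch, assuming a reasonably recent Docker CLI:

```
# Print each network's name followed by its IPAM subnet(s)
docker network inspect $(docker network ls -q) \
  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
```
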
## Serving

### Launch Bookstack stack

Launch the BookStack stack by running ```docker stack deploy bookstack -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_proxy, and then login with username 'admin@admin.com' and password 'password'.

## Chef's Notes

1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container - a sketch of this variation follows below.
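
Here's a minimal sketch of that variation, reusing the example hostname from above. BookStack answers on port 80 (per the proxy's ```-upstream=http://app``` setting), so the traefik.port label changes accordingly:

```
  app:
    image: solidnerd/bookstack
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:bookstack.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
```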

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

hero: Manage your ebook collection. Like a BOSS.

# Calibre-Web

The [AutoPirate](/recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them.

[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](/recipes/plex/) (or [Emby](/recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library.

Within the stack definition, the oauth2_proxy service looks like this:

```
services:
  # ...

  proxy:
    image: a5huynh/oauth2_proxy
    env_file : /var/data/config/calibre-web/calibre-web.env
    dns_search: hq.example.com
    networks:
      # ...
```

Log into your new instance at https://**YOUR-FQDN**.

## Chef's Notes

1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.

### Tip your waiter (donate)

# Collabora Online

!!! important
    Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!

Collabora Online Development Edition (or "[CODE](https://www.collaboraoffice.com/code/#what_is_code)"), is the lightweight, or "home" edition of the commercially-supported [Collabora Online](https://www.collaboraoffice.com/collabora-online/) platform.

It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app - it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](/recipes/nextcloud/)_).

## Ingredients

1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
4. [NextCloud](/recipes/nextcloud/) installed and operational
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm

## Preparation

### Explanation for complexity

Due to the clever magic that Collabora does to present a "headless" LibreOffice UI to the browser, the CODE docker container requires system capabilities which cannot be granted under Docker Swarm (_specifically, MKNOD_).

So we have to run Collabora itself in the next best thing to Docker Swarm - a docker-compose stack. Using docker-compose will at least provide us with consistent and version-able configuration files.

This presents another problem though - Docker Swarm with Traefik is superb at making all our stacks "just work" with ingress routing and LetsEncrypt certificates. We don't want to have to do this manually (_like a cave-man_), so we engage in some trickery to allow us to still use our swarmed Traefik to terminate SSL.

We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (_the port exposed by the CODE container_).

We attach the necessary labels to the Nginx container to instruct Traefik to setup a front/backend for collabora.<ourdomain\>. Now incoming requests to **https://collabora.<ourdomain\>** will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.

What if we're running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (_example below_), or just launch an instance of Collabora on _every_ node. It's just a rendering / GUI engine, after all - it doesn't hold any persistent data.

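The nginx.conf we create further down assumes the default docker0 IP of 172.17.0.1. A quick way to confirm yours, on each node:

```
# Show the IPv4 address assigned to the docker0 bridge
ip -4 addr show docker0 | grep inet
```
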
### Setup data locations

We'll need a directory for holding config to bind-mount into our containers, so create ```/var/data/collabora```, and ```/var/data/config/collabora``` for holding the docker/swarm config:

```
mkdir /var/data/collabora/
mkdir /var/data/config/collabora/
```

### Prepare environment

Create /var/data/config/collabora/collabora.env, and populate it with the following variables, customized for your installation.

!!! warning
    Note the following:

    1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container.
    2. Set domain to your [NextCloud](/recipes/nextcloud/) domain, and escape all the periods as per the example.
    3. Set your server_name to collabora.<yourdomain\>. Escaping periods is unnecessary.
    4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥

```
username=admin
password=ilovemypassword
domain=nextcloud\.batcave\.com
server_name=collabora.batcave.com
termination=true
```

### Create docker-compose.yml

Create ```/var/data/config/collabora/docker-compose.yml``` as follows:

```
version: "3.0"

services:
  local-collabora:
    image: funkypenguin/collabora
    # the funkypenguin version has a patch to include "termination" behind an
    # SSL-terminating reverse proxy (traefik), see CODE PR #50.
    # Once merged, the official container can be used again.
    #image: collabora/code
    env_file: /var/data/config/collabora/collabora.env
    volumes:
      - /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
    cap_add:
      - MKNOD
    ports:
      - 9980:9980
```

### Create nginx.conf

Create ```/var/data/collabora/nginx.conf``` as follows, changing the ```server_name``` value to match the environment variable you established above.

```
upstream collabora-upstream {
  # Run collabora under docker-compose, since it needs the MKNOD capability,
  # which can't be provided by Docker Swarm.
  # The IP here is the typical IP of docker0 - change if yours is different.
  server 172.17.0.1:9980;
}

server {
  listen 80;
  server_name collabora.batcave.com;

  # static files
  location ^~ /loleaflet {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }

  # WOPI discovery URL
  location ^~ /hosting/discovery {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }

  # Main websocket
  location ~ /lool/(.*)/ws$ {
    proxy_pass http://collabora-upstream;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
  }

  # Admin Console websocket
  location ^~ /lool/adminws {
    proxy_buffering off;
    proxy_pass http://collabora-upstream;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
  }

  # download, presentation and image upload
  location ~ /lool {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }
}
```

### Create loolwsd.xml

[Until we understand](https://github.com/CollaboraOnline/Docker-CODE/pull/50) how to [pass trusted network parameters to the entrypoint script using environment variables](https://github.com/CollaboraOnline/Docker-CODE/issues/49), we have to maintain a manually edited version of ```loolwsd.xml```, and bind-mount it into our collabora container.

The way we do this: we initially mount ```/var/data/collabora/loolwsd.xml``` into the container as ```/etc/loolwsd/loolwsd.xml-new```, and allow the container to create its default ```/etc/loolwsd/loolwsd.xml```. We then copy that default **over** our (_empty_) bind-mounted file, and finally update the container to mount **our** ```/var/data/collabora/loolwsd.xml``` as ```/etc/loolwsd/loolwsd.xml``` instead (_confused yet?_)

Create an empty ```/var/data/collabora/loolwsd.xml``` by running ```touch /var/data/collabora/loolwsd.xml```. We'll populate this in the next section...

### Setup Docker Swarm

Create ```/var/data/config/collabora/collabora.yml``` as follows, changing the traefik frontend_rule as necessary:

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

```
version: "3.0"

services:
  nginx:
    image: nginx:latest
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:collabora.batcave.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
        - traefik.frontend.passHostHeader=true
      # uncomment these lines if you want to force nginx to always run on one
      # node (i.e., the one running collabora)
      #placement:
      #  constraints:
      #    - node.hostname == ds1
    volumes:
      - /var/data/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro

networks:
  traefik_public:
    external: true
```

## Serving

### Generate loolwsd.xml

Well. This is awkward. There's no documented way to make Collabora work with Docker Swarm, so we're doing a bit of a hack here, until I understand [how to pass these arguments](https://github.com/CollaboraOnline/Docker-CODE/issues/49) via environment variables.

Launching Collabora is (_for now_) a 2-step process. First, we launch collabora itself, by running:

```
cd /var/data/config/collabora/
docker-compose up -d
```

Output looks something like this:

```
root@ds1:/var/data/config/collabora# docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Pulling local-collabora (funkypenguin/collabora:latest)...
latest: Pulling from funkypenguin/collabora
7b8b6451c85f: Pull complete
ab4d1096d9ba: Pull complete
e6797d1788ac: Pull complete
e25c5c290bde: Pull complete
4b8e1b074e06: Pull complete
f51a3d1fb75e: Pull complete
8b826e2ae5ad: Pull complete
Digest: sha256:6cd38cb5cbd170da0e3f0af85cecf07a6bc366e44555c236f81d5b433421a39d
Status: Downloaded newer image for funkypenguin/collabora:latest
Creating collabora_local-collabora_1 ...
Creating collabora_local-collabora_1 ... done
root@ds1:/var/data/config/collabora#
```

Now exec into the container (_from another shell session_), by running ```docker exec -it <container name> /bin/bash```. Make a copy of the default config, by running ```cp /etc/loolwsd/loolwsd.xml /etc/loolwsd/loolwsd.xml-new```, and then exit the container with ```exit```.
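
Alternatively, the same copy can be made from the host in one step with ```docker cp``` - a sketch, assuming the container name shown in the docker-compose output above:

```
# Copy the container's default config over the (empty) bind-mounted file
docker cp collabora_local-collabora_1:/etc/loolwsd/loolwsd.xml /var/data/collabora/loolwsd.xml
```
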
Stop and remove the collabora container by running ```docker-compose stop```, then ```docker-compose rm```, and then alter this line in docker-compose.yml:

```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
```

To this:

```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
```

Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** section, and add lines like these to the existing allow rules (_to allow IPv6-enabled hosts to still connect with their IPv4 addresses_):

```
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```

Find the **net.post_allow** section, and add lines like these:

```
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```

Find these 2 lines:

```
<ssl desc="SSL settings">
    <enable type="bool" default="true">true</enable>
```

And change them to:

```
<ssl desc="SSL settings">
    <enable type="bool" default="true">false</enable>
```

Now re-launch collabora (_with the correct loolwsd.xml_) under docker-compose, by running:

```
docker-compose up -d
```

Once collabora is up, we launch the swarm stack, by running:

```
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
```

Visit **https://collabora.<yourdomain\>/loleaflet/dist/admin/admin.html** and confirm you can login with the user/password you specified in collabora.env.

### Integrate into NextCloud

In NextCloud, install the **Collabora Online** app (https://apps.nextcloud.com/apps/richdocuments), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
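
If you'd rather script this than click through the UI, NextCloud's occ tool can apply the same setting - a sketch, assuming your NextCloud container's name, and that occ runs as the www-data user:

```
# Point NextCloud's richdocuments app at your Collabora instance
docker exec -u www-data <nextcloud container> \
  php occ config:app:set richdocuments wopi_url --value="https://collabora.<your domain>"
```
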
Now browse your NextCloud files. Click the plus (+) sign to create a new document, spreadsheet, or presentation. Name your document, and then click on it. If Collabora is setup correctly, you'll shortly enter into the rich editing interface provided by Collabora :)

!!! important
    Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!

## Chef's Notes

1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using open-source rich document editing in the browser, vs using something like Google Docs. Once it works, though, it works impressively well. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

### Tip your waiter (donate) 👏

Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

### Your comments? 💬

This is a diversion from my usual recipes - recently I've become interested in cryptocurrency mining.

I honestly didn't expect to enjoy the mining process as much as I did. Part of the enjoyment was getting my hands dirty with hardware.

Since a [mining rig](/recipes/cryptominer/mining-rig/) relies on hardware, we can't really use a docker swarm for this one!

This recipe isn't for everyone - if you just want to make some money from cryptocurrency, then you're better off learning to [invest](https://www.reddit.com/r/CryptoCurrency/) or [trade](https://www.reddit.com/r/CryptoMarkets/). However, if you want to (_ideally_) make money **and** you like tinkering, playing with hardware, optimising and monitoring, read on!

## Ingredients

1. Suitable system guts (_CPU, motherboard, RAM, PSU_) for your [mining rig](/recipes/cryptominer/mining-rig/)
2. [AMD](/recipes/cryptominer/amd-gpu/) / [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs (_yes, plural - although you **can** start with just one, you'll soon get hooked!_)
3. A friendly operating system - [Ubuntu](https://www.ubuntu.com/), [Debian](https://www.debian.org/) and [CentOS7](https://www.centos.org/download/) are known to work
4. Patience and time

## Preparation

For readability, I've split this recipe into multiple sub-recipes, which can be found below, or in the navigation links on the right-hand side:

1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰

## Chef's Notes

!!! warning
    This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# AMD GPU

Now, continue to the next stage of your grand mining adventure:

1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your AMD (_this page_) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰

## Chef's Notes

!!! warning
    This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Exchange

You may be mining a particular coin, and want to hold onto it, in the hopes of long-term growth. In that case, stick it in a [wallet](/recipes/cryptominer/wallet/) and be done with it.

You may also not care too much about the coin (you're mining for money, right?), in which case you want to "cash out" your coins into something you can spend.

Now, continue to the next stage of your grand mining adventure:

1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to exchanges (_this page_) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰

## Chef's Notes

# Minerhotel

!!! warning
    This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

So, you have GPUs. You can mine cryptocurrency. But **what** cryptocurrency should you mine?

1. You could manually keep track of [whattomine](http://whattomine.com/), and launch/stop miners based on profitability/convenience, as you see fit.
2. You can automate the process of mining the most profitable coin based on your GPUs' capabilities and the current market prices, and do better things with your free time! (_[receiving alerts](/recipes/cryptominer/monitor/), of course, if anything stops working!_)

This recipe covers option #2 😁

To make whattomine start automatically in future, run ```systemctl enable minerhotel```.

Now, continue to the next stage of your grand mining adventure:

1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with Miner Hotel 🏨 (_this page_)
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰

## Chef's Notes

!!! warning
    This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Mining Pools

You and your puny GPUs don't have a snowball's chance of mining a block on your own.

This'll save you some frustration later... Next time you're watching a movie or doing something mindless, visit http://whattomine.com/, and take note of the 10-15 most profitable coins for your GPU type(s).

On your [exchanges](/recipes/cryptominer/exchange/), identify the "_deposit address_" for each popular coin, and note them down for the next step.

!!! note
    If you're wanting to mine directly to a wallet for long-term holding, then substitute your wallet's public address for this deposit address.
@@ -20,7 +20,7 @@ Now work your way through the following list of pools, creating an account on ea
|
||||
|
||||
* [Mining Pool Hub](https://miningpoolhub.com/) (Lots of coins)
|
||||
* [NiceHash](https://nicehash.com) (Ethereum, Decred)
|
||||
* [suprnova](https://suprnova.cc/) - Lots of coins, but, you generally need a separate login for each pool. You _also_ need to create a worker in each pool with a common username and password, for [Minerhotel](/recipies/crytominer/minerhotel/).
|
||||
* [suprnova](https://suprnova.cc/) - Lots of coins, but, you generally need a separate login for each pool. You _also_ need to create a worker in each pool with a common username and password, for [Minerhotel](/recipes/crytominer/minerhotel/).
|
||||
* [nanopool](https://nanopool.org/) (Ethereum, Ethereum Classic, SiaCoin, ZCash, Monero, Pascal and Electroneum)
|
||||
* [slushpool](https://slushpool.com/home/) (BTC and ZCash)
|
||||
|
||||
@@ -40,6 +40,7 @@ As noted by IronicBadger [here](https://www.linuxserver.io/2018/01/20/how-to-bui
Now, continue to the next stage of your grand mining adventure:

<<<<<<< HEAD:manuscript/recipies/cryptominer/mining-pool.md
1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
@@ -47,6 +48,15 @@ Now, continue to the next stage of your grand mining adventure:
5. Send your coins to exchanges (_This page_) or [wallets](/recipies/cryptominer/wallet/) 💹
6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipies/cryptominer/profit/)!
=======
1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to exchanges (_This page_) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰
>>>>>>> master:manuscript/recipes/cryptominer/mining-pool.md

## Chef's Notes
@@ -1,5 +1,5 @@
!!! warning
This is not a complete recipe - it's a component of the [cryptominer](/recipies/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.
This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Mining Rig

@@ -22,7 +22,7 @@ You don't need anything fancy. Here's a photo of the rig my wife built me:

I recommend this design (_with the board with little holes in it_) - it takes up more space, but I have more room to place extra components (_PSUs, hard drives, etc_), as illustrated below:

!!! note
You'll note the hard drives in the picture - that's not part of the mining requirements, it's because my rig doubles as my [Plex](/recipies/plex/) server ;)
You'll note the hard drives in the picture - that's not part of the mining requirements, it's because my rig doubles as my [Plex](/recipes/plex/) server ;)
@@ -31,12 +31,21 @@ I recommend this design (_with the board with little holes in it_) - it takes up
Now, continue to the next stage of your grand mining adventure:

1. Build your mining rig 💻 (This page)
<<<<<<< HEAD:manuscript/recipies/cryptominer/mining-rig.md
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipies/cryptominer/profit/)!
=======
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰
>>>>>>> master:manuscript/recipes/cryptominer/mining-rig.md
@@ -1,7 +1,7 @@
# Monitor

!!! warning
This is not a complete recipe - it's a component of the [cryptominer](/recipies/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.
This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

So, you're a miner! But if you're not **actively** mining, are you still a miner? This page details how to **measure** your mining activity, and how to raise an alert when an issue affects your miners' profitability.

@@ -18,7 +18,7 @@ So, you're a miner! But if you're not **actively** mining, are you still a miner

Since [Minerhotel](/recipies/crytominer/minerhotel/) switches currency based on what's most profitable in the moment, it's hard to gauge the impact of changes (overclocking, tweaking, mining pools) over time.
Since [Minerhotel](/recipes/cryptominer/minerhotel/) switches currency based on what's most profitable in the moment, it's hard to gauge the impact of changes (overclocking, tweaking, mining pools) over time.
I hacked up a bash script which grabs performance data from the output of the miners, and throws it into an InfluxDB database, which can then be visualized using Grafana.
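That script isn't reproduced here, but the heart of the approach can be sketched in a few lines of bash. The database name, measurement name, and log-scraping pattern below are illustrative assumptions, not the actual script:

```
#!/usr/bin/env bash
# Sketch: push one hashrate reading into InfluxDB's (v1.x) HTTP write API.
# "mining" (database), "hashrate" (measurement), and the miner.log pattern
# are invented for illustration - adapt them to your own miner's output.
RIG="rig1"
# Pull the most recent hashrate figure (in MH/s) from the miner's log:
HASHRATE=$(grep -oE '[0-9]+\.[0-9]+ MH/s' /var/log/miner.log | tail -1 | cut -d' ' -f1)
# Write it as a point in InfluxDB line protocol, tagged with the rig name:
curl -s -XPOST "http://localhost:8086/write?db=mining" \
  --data-binary "hashrate,rig=${RIG} value=${HASHRATE}"
```

Run from cron every minute or so, this gives Grafana a steady stream of points to graph.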
@@ -49,7 +49,7 @@ I've tried several iOS apps for monitoring my performance across various. The mo
### Track your portfolio

Now that you've got your coins happily cha-chinging into you [wallets](/recipies/cryptominer/wallet/) (_and potentially various [exchanges](/recipies/cryptominer/exchange/)_), you'll want to monitor the performance of your portfolio over time.
Now that you've got your coins happily cha-chinging into your [wallets](/recipes/cryptominer/wallet/) (_and potentially various [exchanges](/recipes/cryptominer/exchange/)_), you'll want to monitor the performance of your portfolio over time.

#### Web Apps
@@ -74,13 +74,17 @@ I've found the following iOS apps to be useful in tracking my portfolio (_really
Now, continue to the next stage of your grand mining adventure:

1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. Monitor your empire :heartbeat: (_this page_)
<<<<<<< HEAD:manuscript/recipies/cryptominer/monitor.md
7. [Profit](/recipies/cryptominer/profit/)!
=======
7. [Profit](/recipes/cryptominer/profit/)! 💰
>>>>>>> master:manuscript/recipes/cryptominer/monitor.md

## Chef's Notes
@@ -1,7 +1,7 @@
# NVidia GPU

!!! warning
This is not a complete recipe - it's a component of the [cryptominer](/recipies/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.
This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

## Ingredients
@@ -146,6 +146,7 @@ Play with changing your settings.conf file until you break it, and then go back
Now, continue to the next stage of your grand mining adventure:

<<<<<<< HEAD:manuscript/recipies/cryptominer/nvidia-gpu.md
1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or Nvidia (_this page_) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
@@ -153,6 +154,15 @@ Now, continue to the next stage of your grand mining adventure:
5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipies/cryptominer/profit/)!
=======
1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or Nvidia (_this page_) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰
>>>>>>> master:manuscript/recipes/cryptominer/nvidia-gpu.md
## Chef's Notes
@@ -6,6 +6,7 @@ Well, that's it really. You're a cryptominer. Welcome to the party.
To recap, you did all this:

<<<<<<< HEAD:manuscript/recipies/cryptominer/profit.md
1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
@@ -13,6 +14,15 @@ To recap, you did all this:
5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
7. Profit! (_This page_)
=======
1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. Profit! (_This page_) 💰
>>>>>>> master:manuscript/recipes/cryptominer/profit.md
## What next?
@@ -1,5 +1,5 @@
!!! warning
This is not a complete recipe - it's a component of the [cryptominer](/recipies/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.
This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Wallet

@@ -23,6 +23,7 @@ I mine most of my coins to Exchanges, but I do have the following wallets:
Now, continue to the next stage of your grand mining adventure:

<<<<<<< HEAD:manuscript/recipies/cryptominer/wallet.md
1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
@@ -30,6 +31,15 @@ Now, continue to the next stage of your grand mining adventure:
5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or wallets (_This page_) 💹
6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipies/cryptominer/profit/)!
=======
1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or wallets (_This page_) 💹
6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
7. [Profit](/recipes/cryptominer/profit/)! 💰
>>>>>>> master:manuscript/recipes/cryptominer/wallet.md
## Chef's Notes
manuscript/recipes/cryptonote-mining-pool.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# CryptoNote Mining Pool
[Cryptocurrency miners](/recipes/cryptominer) "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that the combined effort of all the miners earns the pool a reward for each block "mined" into the blockchain; this reward is then distributed among the miners in proportion to their contribution (_see the toy example below_).
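All the numbers in this sketch are invented: a miner who submitted 4.2% of the pool's shares toward a 29,000-coin block reward, at a 1% pool fee, earns just under 1,206 coins.

```
# Toy proportional-payout calculation; every figure is illustrative.
awk 'BEGIN {
  reward = 29000; fee = 0.01            # block reward and pool fee (invented)
  my_shares = 420; total_shares = 10000 # 4.2% of the pool effort (invented)
  payout = reward * (my_shares / total_shares) * (1 - fee)
  printf "payout: %.2f coins\n", payout # -> payout: 1205.82 coins
}'
```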
[CryptoNote](https://cryptonote.org/) is an open-source toolset designed to facilitate the creation of new privacy-focused [cryptocurrencies](https://cryptonote.org/coins).
(_CryptoNote = 'Kryptonite'. In a pool. Get it?_)
The fact that all these currencies share a common ancestry means that a common mining pool platform can be used for miners. The following recipes all use variations of [Dvandal's cryptonote-nodejs-pool](https://github.com/dvandal/cryptonote-nodejs-pool), as sketched below.
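Standing up such a pool broadly follows the shape below - a sketch under the assumption that the upstream's documented `config_example.json` / `node init.js` pattern applies; check the repo's README for the real procedure.

```
# Sketch only; consult https://github.com/dvandal/cryptonote-nodejs-pool for specifics.
# Assumes Node.js and a local Redis instance are already installed and running.
git clone https://github.com/dvandal/cryptonote-nodejs-pool.git
cd cryptonote-nodejs-pool
npm install                          # fetch the pool's dependencies
cp config_example.json config.json   # assumption: adapt the shipped example to your coin
node init.js                         # start the pool (stratum, API, payouts)
```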
## Mining Pool Recipes
* [TurtleCoin](/recipes/turtle-pool/), the no-BS, fun baby cryptocurrency
* [Athena](/recipes/cryptonote-mining-pool/athena/), TurtleCoin's newborn baby sister