diff --git a/manuscript/CHANGELOG.md b/manuscript/CHANGELOG.md
index 39fc5c8..ae9be89 100644
--- a/manuscript/CHANGELOG.md
+++ b/manuscript/CHANGELOG.md
@@ -12,15 +12,15 @@
 * Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (_19 Mar 2019_)

 ## Recently added recipes
-* Added recipe for making your own [DIY Kubernetes Cluster](/kubernetes/diycluster/) (_14 December 2019_)
-* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_)
-* Added [Bitwarden](/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_)
-* Added [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](/recipes/keycloak/), and other OIDC providers (_10 May 2019_)
-* Added Kubernetes version of [Miniflux](/recipes/kubernetes/miniflux/) recipe, a minimalistic RSS reader supporting the Fever API (_26 Mar 2019_)
+* Overhauled [Ceph (Shared Storage)](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) recipe for Ceph Octopus (v15) (_25 May 2020_)
+* Added recipe for making your own [DIY Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/diycluster/) (_14 December 2019_)
+* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_)
+* Added [Bitwarden](https://geek-cookbook.funkypenguin.co.nz/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_)
+* Added [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](https://geek-cookbook.funkypenguin.co.nz/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/), and other OIDC providers (_10 May 2019_)

 ## Recent improvements
-* Added recipe for [automated snapshots of Kubernetes Persistent Volumes](/kubernetes/snapshots/), instructions for using [Helm](/kubernetes/helm/), and recipe for deploying [Traefik](/kubernetes/traefik/), which completes the Kubernetes cluster design! (_9 Feb 2019_)
-* Added detailed description (_and diagram_) of our [Kubernetes design](/kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_)
-* Added an [introductory/explanatory page, including a children's story, on Kubernetes](/kubernetes/start/) (_29 Jan 2019_)
-* [NextCloud](/recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (_12 Dec 2018_)
+* Added recipe for [automated snapshots of Kubernetes Persistent Volumes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/), instructions for using [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/), and recipe for deploying [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/), which completes the Kubernetes cluster design!
(_9 Feb 2019_) +* Added detailed description (_and diagram_) of our [Kubernetes design](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_) +* Added an [introductory/explanatory page, including a children's story, on Kubernetes](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start/) (_29 Jan 2019_) +* [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (_12 Dec 2018_) diff --git a/manuscript/Gemfile.lock b/manuscript/Gemfile.lock index f8d4775..ac8ee0e 100644 --- a/manuscript/Gemfile.lock +++ b/manuscript/Gemfile.lock @@ -1,7 +1,7 @@ GEM remote: https://rubygems.org/ specs: - activesupport (5.2.3) + activesupport (5.2.4.3) concurrent-ruby (~> 1.0, >= 1.0.2) i18n (>= 0.7, < 2) minitest (~> 5.1) @@ -9,7 +9,7 @@ GEM addressable (2.6.0) public_suffix (>= 2.0.2, < 4.0) colorize (0.8.1) - concurrent-ruby (1.1.5) + concurrent-ruby (1.1.6) ethon (0.12.0) ffi (>= 1.3.0) ffi (1.10.0) @@ -22,19 +22,19 @@ GEM parallel (~> 1.3) typhoeus (~> 1.3) yell (~> 2.0) - i18n (1.6.0) + i18n (1.8.2) concurrent-ruby (~> 1.0) mercenary (0.3.6) mini_portile2 (2.4.0) - minitest (5.11.3) - nokogiri (1.10.5) + minitest (5.14.1) + nokogiri (1.10.9) mini_portile2 (~> 2.4.0) parallel (1.17.0) public_suffix (3.0.3) thread_safe (0.3.6) typhoeus (1.3.1) ethon (>= 0.9.0) - tzinfo (1.2.5) + tzinfo (1.2.7) thread_safe (~> 0.1) yell (2.1.0) diff --git a/manuscript/advanced/tiny-tiny-rss.md b/manuscript/advanced/tiny-tiny-rss.md deleted file mode 100644 index 471e0ef..0000000 --- a/manuscript/advanced/tiny-tiny-rss.md +++ /dev/null @@ -1,113 +0,0 @@ - -# Introduction - -[Tiny Tiny RSS][ttrss] is a self-hosted, AJAX-based RSS reader, which rose to popularity as a replacement for Google Reader. It supports advanced features, such as: - -* Plugins and themeing in a drop-in fashion -* Filtering (discard all articles with title matching "trump") -* Sharing articles via a unique public URL/feed - -Tiny Tiny RSS requires a database and a webserver - this recipe provides both using docker, exposed to the world via LetsEncrypt. - -# Ingredients - -**Required** - -1. Webserver (nginx container) -2. Database (postgresql container) -3. TTRSS (ttrss container) -3. Nginx reverse proxy with LetsEncrypt - - -**Optional** - -1. Email server (if you want to email articles from TTRSS) - -# Preparation - -**Setup filesystem location** - -I setup a directory for the ttrss data, at /data/ttrss. 
- -I created docker-compose.yml, as follows: - -``` -rproxy: - image: nginx:1.13-alpine - ports: - - "34804:80" - environment: - - DOMAIN_NAME=ttrss.funkypenguin.co.nz - - VIRTUAL_HOST=ttrss.funkypenguin.co.nz - - LETSENCRYPT_HOST=ttrss.funkypenguin.co.nz - - LETSENCRYPT_EMAIL=davidy@funkypenguin.co.nz - volumes: - - ./nginx.conf:/etc/nginx/nginx.conf:ro - volumes_from: - - ttrss - links: - - ttrss:ttrss - -ttrss: - image: tkaefer/docker-ttrss - restart: always - links: - - postgres:database - environment: - - DB_USER=ttrss - - DB_PASS=uVL53xfmJxW - - SELF_URL_PATH=https://ttrss.funkypenguin.co.nz - volumes: - - ./plugins.local:/var/www/plugins.local - - ./themes.local:/var/www/themes.local - - ./reader:/var/www/reader - -postgres: - image: postgres:latest - volumes: - - /srv/ssd-data/ttrss/db:/var/lib/postgresql/data - restart: always - environment: - - POSTGRES_USER=ttrss - - POSTGRES_PASSWORD=uVL53xfmJxW - -gmailsmtp: - image: softinnov/gmailsmtp - restart: always - environment: - - user=davidy@funkypenguin.co.nz - - pass=eqknehqflfbufzbh - - DOMAIN_NAME=gmailsmtp.funkypenguin.co.nz -``` - -Run ```docker-compose up``` in the same directory, and watch the output. PostgreSQL container will create the "ttrss" database, and ttrss will start using it. - - -# Login to UI - -Log into https://\. Default user is "admin" and password is "password" - -# Optional - Enable af_psql_trgm plugin for similar post detection - -One of the native plugins enables the detection of "similar" articles. This requires the pg_trgm extension enabled in your database. - -From the working directory, use ```docker exec``` to get a shell within your postgres container, and run "postgres" as the postgres user: -``` -[root@kvm nginx]# docker exec -it ttrss_postgres_1 /bin/sh -# su - postgres -No directory, logging in with HOME=/ -$ psql -psql (9.6.3) -Type "help" for help. -``` - -Add the trgm extension to your ttrss database: -``` -postgres=# \c ttrss -You are now connected to database "ttrss" as user "postgres". 
-ttrss=# CREATE EXTENSION pg_trgm; -CREATE EXTENSION -ttrss=# \q -``` - -[ttrss]:https://tt-rss.org/ diff --git a/manuscript/book.txt b/manuscript/book.txt index fd7468d..8988ec8 100644 --- a/manuscript/book.txt +++ b/manuscript/book.txt @@ -48,21 +48,10 @@ recipes/phpipam.md recipes/plex.md recipes/privatebin.md recipes/swarmprom.md -recipes/turtle-pool.md sections/menu-docker.md recipes/bitwarden.md recipes/bookstack.md -recipes/cryptominer.md -recipes/cryptominer/mining-rig.md -recipes/cryptominer/amd-gpu.md -recipes/cryptominer/nvidia-gpu.md -recipes/cryptominer/mining-pool.md -recipes/cryptominer/wallet.md -recipes/cryptominer/exchange.md -recipes/cryptominer/minerhotel.md -recipes/cryptominer/monitor.md -recipes/cryptominer/profit.md recipes/calibre-web.md recipes/collabora-online.md recipes/ghost.md diff --git a/manuscript/extras/javascript/auto-expand-nav.js b/manuscript/extras/javascript/auto-expand-nav.js new file mode 100644 index 0000000..00c64e3 --- /dev/null +++ b/manuscript/extras/javascript/auto-expand-nav.js @@ -0,0 +1,27 @@ +document.addEventListener("DOMContentLoaded", function() { + load_navpane(); +}); + +function load_navpane() { + var width = window.innerWidth; + if (width <= 1200) { + return; + } + + var nav = document.getElementsByClassName("md-nav"); + for(var i = 0; i < nav.length; i++) { + if (typeof nav.item(i).style === "undefined") { + continue; + } + + if (nav.item(i).getAttribute("data-md-level") && nav.item(i).getAttribute("data-md-component")) { + nav.item(i).style.display = 'block'; + nav.item(i).style.overflow = 'visible'; + } + } + + var nav = document.getElementsByClassName("md-nav__toggle"); + for(var i = 0; i < nav.length; i++) { + nav.item(i).checked = true; + } +} \ No newline at end of file diff --git a/manuscript/extras/javascript/discord.js b/manuscript/extras/javascript/discord.js index 0b455f7..6f377e2 100644 --- a/manuscript/extras/javascript/discord.js +++ b/manuscript/extras/javascript/discord.js @@ -2,7 +2,7 @@ const button = new Crate({ server: '396055506072109067', channel: '456689991326760973', - shard: 'https://disweb.deploys.io', + shard: 'https://e.widgetbot.io', color: '#795548', indicator: false, notifications: true diff --git a/manuscript/ha-docker-swarm/design.md b/manuscript/ha-docker-swarm/design.md index 3333394..f358575 100644 --- a/manuscript/ha-docker-swarm/design.md +++ b/manuscript/ha-docker-swarm/design.md @@ -5,7 +5,7 @@ In the design described below, our "private cloud" platform is: * **Highly-available** (_can tolerate the failure of a single component_) * **Scalable** (_can add resource or capacity as required_) * **Portable** (_run it on your garage server today, run it in AWS tomorrow_) -* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_) +* **Secure** (_access protected with [LetsEncrypt certificates](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/) and optional [OIDC with 2FA](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/)_) * **Automated** (_requires minimal care and feeding_) ## Design Decisions @@ -15,7 +15,7 @@ In the design described below, our "private cloud" platform is: This means that: * At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure. -* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for share storage, because it too can be made tolerant of a single failure. 
+* [Ceph](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.

 !!! note
     An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.
@@ -38,8 +38,8 @@ Under this design, the only inbound connections we're permitting to our docker s

 ### Authentication

-* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
-* Where the hosted application provides inadequate (*i.e. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required.
+* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](https://geek-cookbook.funkypenguin.co.nz/recipes/privatebin/)*), no additional layer of authentication will be required.
+* Where the hosted application provides inadequate (*i.e. [NZBGet](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](https://geek-cookbook.funkypenguin.co.nz/recipes/gollum/)*), a further authentication against an OAuth provider will be required.

 ## High availability

@@ -92,4 +92,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast

 [^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

-## Chef's Notes 📓
+## Chef's Notes 📓
\ No newline at end of file
diff --git a/manuscript/ha-docker-swarm/docker-swarm-mode.md b/manuscript/ha-docker-swarm/docker-swarm-mode.md
index 9a007e7..8cec755 100644
--- a/manuscript/ha-docker-swarm/docker-swarm-mode.md
+++ b/manuscript/ha-docker-swarm/docker-swarm-mode.md
@@ -128,7 +128,7 @@ networks:
 ```
 !!! note
-    Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
+    Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/reference/networks/) here.

 Launch the cleanup stack by running ```docker stack deploy docker-cleanup -c ```

@@ -167,10 +167,9 @@ Launch shepherd by running ```docker stack deploy shepherd -c /var/data/config/s

 ### Summary

-!!!
summary - Created +After completing the above, you should have: - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) +* [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) -## Chef's Notes 📓 +## Chef's Notes 📓 \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/keepalived.md b/manuscript/ha-docker-swarm/keepalived.md index 33524ee..84b9d77 100644 --- a/manuscript/ha-docker-swarm/keepalived.md +++ b/manuscript/ha-docker-swarm/keepalived.md @@ -68,4 +68,4 @@ That's it. Each node will talk to the other via unicast (no need to un-firewall ## Chef's notes 📓 1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targetted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections. -2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master. +2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master. \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/nodes.md b/manuscript/ha-docker-swarm/nodes.md index 373045c..c989ab9 100644 --- a/manuscript/ha-docker-swarm/nodes.md +++ b/manuscript/ha-docker-swarm/nodes.md @@ -3,7 +3,7 @@ Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on. !!! note - In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation. + In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](https://geek-cookbook.funkypenguin.co.nz/)recipes/plex/)), [Swarmprom](https://geek-cookbook.funkypenguin.co.nz/)recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation. ## Ingredients diff --git a/manuscript/ha-docker-swarm/registry.md b/manuscript/ha-docker-swarm/registry.md index 75f8d7e..3cb32e1 100644 --- a/manuscript/ha-docker-swarm/registry.md +++ b/manuscript/ha-docker-swarm/registry.md @@ -10,8 +10,8 @@ The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Cu ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. 
[Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP @@ -110,4 +110,4 @@ systemctl restart docker-latest !!! tip "" Note the extra comma required after "false" above -## Chef's notes 📓 +## Chef's notes 📓 \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/shared-storage-ceph.md b/manuscript/ha-docker-swarm/shared-storage-ceph.md index 717acd8..70fcecb 100644 --- a/manuscript/ha-docker-swarm/shared-storage-ceph.md +++ b/manuscript/ha-docker-swarm/shared-storage-ceph.md @@ -2,196 +2,217 @@ While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node. -## Design - -### Why not GlusterFS? -I originally provided shared storage to my nodes using GlusterFS (see the next recipe for details), but found it difficult to deal with because: - -1. GlusterFS requires (n) "bricks", where (n) **has** to be a multiple of your replica count. I.e., if you want 2 copies of everything on shared storage (the minimum to provide redundancy), you **must** have either 2, 4, 6 (etc..) bricks. The HA swarm design calls for minimum of 3 nodes, and so under GlusterFS, my third node can't participate in shared storage at all, unless I start doubling up on bricks-per-node (which then impacts redundancy) -2. GlusterFS turns out to be a giant PITA when you want to restore a failed node. There are at [least 14 steps to follow](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html) to replace a brick. -3. I'm pretty sure I messed up the 14-step process above anyway. My replaced brick synced with my "original" brick, but produced errors when querying status via the CLI, and hogged 100% of 1 CPU on the replaced node. Inexperienced with GlusterFS, and unable to diagnose the fault, I switched to a Ceph cluster instead. - -### Why Ceph? - -1. I'm more familiar with Ceph - I use it in the OpenStack designs I manage -2. Replacing a failed node is **easy**, provided you can put up with the I/O load of rebalancing OSDs after the replacement. -3. CentOS Atomic includes the ceph client in the OS, so while the Ceph OSD/Mon/MSD are running under containers, I can keep an eye (and later, automatically monitor) the status of Ceph from the base OS. +![Ceph Screenshot](../images/ceph.png) ## Ingredients !!! summary "Ingredients" 3 x Virtual Machines (configured earlier), each with: - * [X] CentOS/Fedora Atomic + * [X] Support for "modern" versions of Python and LVM * [X] At least 1GB RAM * [X] At least 20GB disk space (_but it'll be tight_) * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_) - * [ ] A second disk dedicated to the Ceph OSD + * [X] A second disk dedicated to the Ceph OSD + * [X] Each node should have the IP of every other participating node hard-coded in /etc/hosts (*including its own IP*) ## Preparation -### SELinux +!!! tip "No more [foolish games](https://www.youtube.com/watch?v=UNoouLa7uxA)" + Earlier iterations of this recipe (*based on [Ceph Jewel](https://docs.ceph.com/docs/master/releases/jewel/)*) required significant manual effort to install Ceph in a Docker environment. 
In the 2+ years since Jewel was released, significant improvements have been made to the ceph "deploy-in-docker" process, including the [introduction of the cephadm tool](https://ceph.io/ceph-management/introducing-cephadm/). Cephadm is the tool which now does all the heavy lifting, below, for the current version of ceph, codenamed "[Octopus](https://www.youtube.com/watch?v=Gi58pN8W3hY)". -Since our Ceph components will be containerized, we need to ensure the SELinux context on the base OS's ceph files is set correctly: +### Pick a master node + +One of your nodes will become the cephadm "master" node. Although all nodes will participate in the Ceph cluster, the master node will be the node which we bootstrap ceph on. It's also the node which will run the Ceph dashboard, and on which future upgrades will be processed. It doesn't matter _which_ node you pick, and the cluster itself will operate in the event of a loss of the master node (although you won't see the dashboard) + +### Install cephadm on master node + +Run the following on the ==master== node: ``` -mkdir /var/lib/ceph -chcon -Rt svirt_sandbox_file_t /etc/ceph -chcon -Rt svirt_sandbox_file_t /var/lib/ceph -``` -### Setup Monitors - -Pick a node, and run the following to stand up the first Ceph mon. Be sure to replace the values for **MON_IP** and **CEPH_PUBLIC_NETWORK** to those specific to your deployment: - -``` -docker run -d --net=host \ ---restart always \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ --e MON_IP=192.168.31.11 \ --e CEPH_PUBLIC_NETWORK=192.168.31.0/24 \ ---name="ceph-mon" \ -ceph/daemon mon +MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'` +curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm +chmod +x cephadm +mkdir -p /etc/ceph +./cephadm bootstrap --mon-ip $MYIP ``` -Now **copy** the contents of /etc/ceph on this first node to the remaining nodes, and **then** run the docker command above (_customizing MON_IP as you go_) on each remaining node. You'll end up with a cluster with 3 monitors (odd number is required for quorum, same as Docker Swarm), and no OSDs (yet) +The process takes about 30 seconds, after which, you'll have a MVC (*Minimum Viable Cluster*)[^1], encompassing a single monitor and mgr instance on your chosen node. Here's the complete output from a fresh install: -### Setup Managers +??? "Example output from a fresh cephadm bootstrap" + ``` + root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'` + root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm -Since Ceph v12 ("Luminous"), some of the non-realtime cluster management responsibilities are delegated to a "manager". Run the following on every node - only one node will be __active__, the others will be in standby: + root@raphael:~# chmod +x cephadm + root@raphael:~# mkdir -p /etc/ceph + root@raphael:~# ./cephadm bootstrap --mon-ip $MYIP + INFO:cephadm:Verifying podman|docker is present... + INFO:cephadm:Verifying lvm2 is present... + INFO:cephadm:Verifying time synchronization is in place... + INFO:cephadm:Unit systemd-timesyncd.service is enabled and running + INFO:cephadm:Repeating the final host check... 
+ INFO:cephadm:podman|docker (https://geek-cookbook.funkypenguin.co.nz/)usr/bin/docker) is present + INFO:cephadm:systemctl is present + INFO:cephadm:lvcreate is present + INFO:cephadm:Unit systemd-timesyncd.service is enabled and running + INFO:cephadm:Host looks OK + INFO:root:Cluster fsid: bf3eff78-9e27-11ea-b40a-525400380101 + INFO:cephadm:Verifying IP 192.168.38.101 port 3300 ... + INFO:cephadm:Verifying IP 192.168.38.101 port 6789 ... + INFO:cephadm:Mon IP 192.168.38.101 is in CIDR network 192.168.38.0/24 + INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container... + INFO:cephadm:Extracting ceph user uid/gid from container image... + INFO:cephadm:Creating initial keys... + INFO:cephadm:Creating initial monmap... + INFO:cephadm:Creating mon... + INFO:cephadm:Waiting for mon to start... + INFO:cephadm:Waiting for mon... + INFO:cephadm:mon is available + INFO:cephadm:Assimilating anything we can from ceph.conf... + INFO:cephadm:Generating new minimal ceph.conf... + INFO:cephadm:Restarting the monitor... + INFO:cephadm:Setting mon public_network... + INFO:cephadm:Creating mgr... + INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring + INFO:cephadm:Wrote config to /etc/ceph/ceph.conf + INFO:cephadm:Waiting for mgr to start... + INFO:cephadm:Waiting for mgr... + INFO:cephadm:mgr not available, waiting (1/10)... + INFO:cephadm:mgr not available, waiting (2/10)... + INFO:cephadm:mgr not available, waiting (3/10)... + INFO:cephadm:mgr is available + INFO:cephadm:Enabling cephadm module... + INFO:cephadm:Waiting for the mgr to restart... + INFO:cephadm:Waiting for Mgr epoch 5... + INFO:cephadm:Mgr epoch 5 is available + INFO:cephadm:Setting orchestrator backend to cephadm... + INFO:cephadm:Generating ssh key... + INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub + INFO:cephadm:Adding key to root@localhost's authorized_keys... + INFO:cephadm:Adding host raphael... + INFO:cephadm:Deploying mon service with default placement... + INFO:cephadm:Deploying mgr service with default placement... + INFO:cephadm:Deploying crash service with default placement... + INFO:cephadm:Enabling mgr prometheus module... + INFO:cephadm:Deploying prometheus service with default placement... + INFO:cephadm:Deploying grafana service with default placement... + INFO:cephadm:Deploying node-exporter service with default placement... + INFO:cephadm:Deploying alertmanager service with default placement... + INFO:cephadm:Enabling the dashboard module... + INFO:cephadm:Waiting for the mgr to restart... + INFO:cephadm:Waiting for Mgr epoch 13... + INFO:cephadm:Mgr epoch 13 is available + INFO:cephadm:Generating a dashboard self-signed certificate... + INFO:cephadm:Creating initial admin user... + INFO:cephadm:Fetching dashboard port number... 
+ INFO:cephadm:Ceph Dashboard is now available at: -``` -docker run -d --net=host \ ---privileged=true \ ---pid=host \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ ---name="ceph-mgr" \ ---restart=always \ -ceph/daemon mgr -``` + URL: https://raphael:8443/ + User: admin + Password: mid28k0yg5 -### Setup OSDs + INFO:cephadm:You can access the Ceph CLI with: -Since we have a OSD-less mon-only cluster currently, prepare for OSD creation by dumping the auth credentials for the OSDs into the appropriate location on the base OS: + sudo ./cephadm shell --fsid bf3eff78-9e27-11ea-b40a-525400380101 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -``` -ceph auth get client.bootstrap-osd -o \ -/var/lib/ceph/bootstrap-osd/ceph.keyring -``` + INFO:cephadm:Please consider enabling telemetry to help improve Ceph: -On each node, you need a dedicated disk for the OSD. In the example below, I used _/dev/vdd_ (the entire disk, no partitions) for the OSD. + ceph telemetry on -Run the following command on every node: + For more information see: -``` -docker run -d --net=host \ ---privileged=true \ ---pid=host \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ --v /dev/:/dev/ \ --e OSD_FORCE_ZAP=1 \ --e OSD_DEVICE=/dev/vdd \ --e OSD_TYPE=disk \ ---name="ceph-osd" \ ---restart=always \ -ceph/daemon osd_ceph_disk -``` + https://docs.ceph.com/docs/master/mgr/telemetry/ -Watch the output by running ```docker logs ceph-osd -f```, and confirm success. - -!!! warning "Zapping the device" - The Ceph OSD container will normally refuse to destroy a partition containing existing data, but above we are instructing ceph to zap (destroy) whatever is on the partition currently. Don't run this against a device you care about, and if you're unsure, omit the "OSD_FORCE_ZAP" variable - -### Setup MDSs - -In order to mount our ceph pools as filesystems, we'll need Ceph MDS(s). Run the following on each node: - -``` -docker run -d --net=host \ ---name ceph-mds \ ---restart always \ --v /var/lib/ceph/:/var/lib/ceph/ \ --v /etc/ceph:/etc/ceph \ --e CEPHFS_CREATE=1 \ --e CEPHFS_DATA_POOL_PG=256 \ --e CEPHFS_METADATA_POOL_PG=256 \ -ceph/daemon mds -``` -### Apply tweaks - -The ceph container seems to configure a pool default of 3 replicas (3 copies of each block are retained), which is one too many for our cluster (we are only protecting against the failure of a single node). - -Run the following on any node to reduce the size of the pool to 2 replicas: - -``` -ceph osd pool set cephfs_data size 2 -ceph osd pool set cephfs_metadata size 2 -``` - -Disabled "scrubbing" (which can be IO-intensive, and is unnecessary on a VM) with: - -``` -ceph osd set noscrub -ceph osd set nodeep-scrub -``` + INFO:cephadm:Bootstrap complete. + root@raphael:~# + ``` -### Create credentials for swarm +### Prepare other nodes -In order to mount the ceph volume onto our base host, we need to provide cephx authentication credentials. 
+It's now necessary to transfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they'll be able to mount the cephfs when we're done:

-On **one** node, create a client for the docker swarm:

+Path on master | Path on non-master
+--------------- | -----
+`/etc/ceph/ceph.conf` | `/etc/ceph/ceph.conf`
+`/etc/ceph/ceph.client.admin.keyring` | `/etc/ceph/ceph.client.admin.keyring`
+`/etc/ceph/ceph.pub` | `/root/.ssh/authorized_keys` (append to anything existing)

-```
-ceph auth get-or-create client.dockerswarm osd \
-'allow rw' mon 'allow r' mds 'allow' > /etc/ceph/keyring.dockerswarm
-```

-Grab the secret associated with the new user (you'll need this for the /etc/fstab entry below) by running:

+Back on the ==master== node, run `ceph orch host add <node-name>` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls`.

-```
-ceph-authtool /etc/ceph/keyring.dockerswarm -p -n client.dockerswarm
-```

+!!! question "Should we be concerned about giving cephadm root access over SSH?"
+    Not really. Docker is inherently insecure at the host-level anyway (*think what would happen if you launched a global-mode stack with a malicious container image which mounted `/root/.ssh`*), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to [Kubernetes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) :)

-### Mount MDS volume

+### Add OSDs

-On each node, create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume:

+Now the best improvement since the days of ceph-deploy and manual disks.. on the ==master== node, run `ceph orch apply osd --all-available-devices`. This will identify any unloved (*unpartitioned, unmounted*) disks attached to each participating node, and configure these disks as OSDs.
+
+### Setup CephFS
+
+On the ==master== node, create a cephfs volume in your cluster, by running `ceph fs volume create data`. Ceph will handle the necessary orchestration itself, creating the necessary pool, mds daemon, etc.
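For reference, here's a consolidated sketch of the node-join, OSD and CephFS steps described above, all run from the ==master== node. The hostnames `donatello` and `leonardo` are just this recipe's example nodes - substitute your own:

```
# Copy the cluster config and keyring to an additional node (repeat for every node)
ssh donatello mkdir -p /etc/ceph
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring donatello:/etc/ceph/
ssh-copy-id -f -i /etc/ceph/ceph.pub root@donatello

# Join the other nodes to the cluster, then confirm they're all listed
ceph orch host add donatello
ceph orch host add leonardo
ceph orch host ls

# Turn every unused, unpartitioned disk on every participating node into an OSD
ceph orch apply osd --all-available-devices

# Create the cephfs volume (pool and MDS daemons are orchestrated automatically)
ceph fs volume create data
```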
+
+You can watch the progress by running `ceph fs ls` (to see the fs is configured), and `ceph -s` to wait for `HEALTH_OK`.
+
+### Mount CephFS volume
+
+On ==every== node, create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the cephfs volume:
 ```
 mkdir /var/data
-MYHOST=`hostname -s`
+MYNODES="raphael,donatello,leonardo" # Add your own nodes here, comma-delimited
+MYHOST=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
 echo -e "
 # Mount cephfs volume \n
-$MYHOST:6789:/ /var/data/ ceph \
-name=dockerswarm\
-,secret=\
-,noatime,_netdev,context=system_u:object_r:svirt_sandbox_file_t:s0 \
-0 2" >> /etc/fstab
+$MYNODES:/ /var/data ceph name=admin,noatime,_netdev 0 0" >> /etc/fstab
 mount -a
 ```
-### Install docker-volume plugin
-
-Upstream bug for docker-latest reported at https://bugs.centos.org/view.php?id=13609
-
-And the alpine fault:
-https://github.com/gliderlabs/docker-alpine/issues/317
-
 ## Serving

-After completing the above, you should have:
+### Sprinkle with tools
+
+Although it's possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it's more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools:
 ```
-[X] Persistent storage available to every node
-[X] Resiliency in the event of the failure of a single node
+curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add -
+cephadm add-repo --release octopus
+cephadm install ceph-common
 ```
+### Drool over dashboard
+
+Ceph now includes a comprehensive dashboard, provided by the mgr daemon. The dashboard will be accessible at https://[IP of your ceph master node]:8443, but you'll need to run `ceph dashboard ac-user-create <username> <password> administrator` first, to create an administrator account:
+
+```
+root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator
+{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFfo/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}
+root@raphael:~#
+```
+
+## Summary
+
+What have we achieved?
+
+!!! summary "Summary"
+    Created:
+
+    * [X] Persistent storage available to every node
+    * [X] Resiliency in the event of the failure of a single node
+    * [X] Beautiful dashboard
+
+## The easy, 5-minute install
+
+I share (_with [sponsors][github_sponsor] and [patrons][patreon]_) a private "_premix_" GitHub repository, which includes an ansible playbook for deploying the entire Geek's Cookbook stack, automatically. This means that members can create the entire environment with just a ```git pull``` and an ```ansible-playbook deploy.yml```
+
+Here's a screencast of the playbook in action. I sped up the boring parts, it actually takes ==5 min== (*you can tell by the timestamps on the prompt*):
+
+![Screencast of ceph install via ansible](https://static.funkypenguin.co.nz/ceph_install_via_ansible_playbook.gif)
+[patreon]: https://www.patreon.com/bePatron?u=6982506
+[github_sponsor]: https://github.com/sponsors/funkypenguin
+
 ## Chef's Notes 📓
-Future enhancements to this recipe include:
-
-1.
Rather than pasting a secret key into /etc/fstab (which feels wrong), I'd prefer to be able to set "secretfile" in /etc/fstab (which just points ceph.mount to a file containing the secret), but under the current CentOS Atomic, we're stuck with "secret", per https://bugzilla.redhat.com/show_bug.cgi?id=1030402 -2. This recipe was written with Ceph v11 "Jewel". Ceph have subsequently releaesd v12 "Kraken". I've updated the recipe for the addition of "Manager" daemons, but it should be noted that the [only reader so far](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley) to attempt a Ceph install using CentOS Atomic and Ceph v12 had issues with OSDs, which lead him to [move to Ubuntu 1604](https://discourse.geek-kitchen.funkypenguin.co.nz/t/shared-storage-ceph-funky-penguins-geek-cookbook/47/24?u=funkypenguin) instead. +[^1]: Minimum Viable Cluster acronym copyright, trademark, and whatever else, to Funky Penguin for 1,000,000 years. diff --git a/manuscript/ha-docker-swarm/shared-storage-gluster.md b/manuscript/ha-docker-swarm/shared-storage-gluster.md index 3ff5fa3..da238e6 100644 --- a/manuscript/ha-docker-swarm/shared-storage-gluster.md +++ b/manuscript/ha-docker-swarm/shared-storage-gluster.md @@ -3,7 +3,7 @@ While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node. !!! warning - This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef + This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef ## Design @@ -154,14 +154,12 @@ For non-gluster nodes, you'll need to replace $MYHOST above with the name of one After completing the above, you should have: -``` -[X] Persistent storage available to every node -[X] Resiliency in the event of the failure of a single (gluster) node -``` +* [X] Persistent storage available to every node +* [X] Resiliency in the event of the failure of a single (gluster) node ## Chef's Notes 📓 Future enhancements to this recipe include: 1. Migration of shared storage from GlusterFS to Ceph ()[#2](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/2)) -2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3)) +2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3)) \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/traefik-forward-auth.md b/manuscript/ha-docker-swarm/traefik-forward-auth.md index 10f7ef6..66803d4 100644 --- a/manuscript/ha-docker-swarm/traefik-forward-auth.md +++ b/manuscript/ha-docker-swarm/traefik-forward-auth.md @@ -2,28 +2,28 @@ Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let's pause to consider that we may not _want_ some services exposed directly to the internet... -..Wait, why not? 
Well, Traefik doesn't provide any form of authentication, it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](/recipes/autopirate/radarr/) or [Sonarr](/recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! Even services which _may_ have a layer of authentication **might** not be safe to expose publically - often open source projects may be maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*". +..Wait, why not? Well, Traefik doesn't provide any form of authentication, it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) or [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! Even services which _may_ have a layer of authentication **might** not be safe to expose publically - often open source projects may be maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*". -To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](/recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account. +To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account. ## Ingredients !!! summary "Ingredients" Existing: - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph) - * [X] [Traefik](/ha-docker-swarm/traefik/) configured per design + * [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph) + * [X] [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/) configured per design New: - * [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](/recipes/keycloak/), Microsoft, etc..) + * [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/), Microsoft, etc..) ## Preparation ### Obtain OAuth credentials !!! note - This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](/recipes/keycloak/setup-oidc-provider/) recipe for more details! 
+ This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/setup-oidc-provider/) recipe for more details! Log into https://console.developers.google.com/, create a new project then search for and select "Credentials" in the search bar. @@ -48,7 +48,7 @@ COOKIE_DOMAINS=example.com ### Prepare the docker service config -This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe: +This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](https://geek-cookbook.funkypenguin.co.nz/)recipes/traefik/) recipe: ``` traefik-forward-auth: @@ -83,7 +83,7 @@ If you're not confident that forward authentication is working, add a simple "wh ``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` @@ -110,7 +110,7 @@ What have we achieved? By adding an additional three simple labels to any servic ## Chef's Notes 📓 -1. Traefik forward auth replaces the use of [oauth_proxy containers](/reference/oauth_proxy/) found in some of the existing recipes +1. Traefik forward auth replaces the use of [oauth_proxy containers](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) found in some of the existing recipes 2. [@thomaseddon's original version](https://github.com/thomseddon/traefik-forward-auth) of traefik-forward-auth only works with Google currently, but I've created a [fork](https://www.github.com/funkypenguin/traefik-forward-auth) of a [fork](https://github.com/noelcatt/traefik-forward-auth), which implements generic OIDC providers. 3. I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon's go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider. 4. No, not github natively, but you can ferderate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider. diff --git a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md index 126eaf8..031b4cd 100644 --- a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md +++ b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md @@ -1,13 +1,13 @@ # Using Traefik Forward Auth with KeyCloak -While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain. 
+While the [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain. ## Ingredients !!! Summary Existing: - * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/) + * [X] [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) recipe deployed successfully, with a [local user](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/create-user/) and an [OIDC client](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/setup-oidc-provider/) New: @@ -48,7 +48,7 @@ COOKIE_DOMAIN= ### Prepare the docker service config -This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe: +This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](https://geek-cookbook.funkypenguin.co.nz/)recipes/traefik/) recipe: ``` traefik-forward-auth: @@ -82,7 +82,7 @@ If you're not confident that forward authentication is working, add a simple "wh ``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ## Serving diff --git a/manuscript/ha-docker-swarm/traefik.md b/manuscript/ha-docker-swarm/traefik.md index 0732574..ea28883 100644 --- a/manuscript/ha-docker-swarm/traefik.md +++ b/manuscript/ha-docker-swarm/traefik.md @@ -18,7 +18,7 @@ To deal with these gaps, we need a front-end load-balancer, and in this design, !!! summary "You'll need" Existing - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph) + * [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph) New @@ -123,7 +123,7 @@ networks: ``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` Create `/var/data/config/traefik/traefik-app.yml` as follows: @@ -222,7 +222,7 @@ ID NAME IMAGE You should now be able to access your traefik instance on http://:8080 - It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :) -![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png) +![Screenshot of Traefik, post-launch](https://geek-cookbook.funkypenguin.co.nz/)images/traefik-post-launch.png) ### Summary @@ -236,4 +236,4 @@ You should now be able to access your traefik instance on http://:8080 ## Chef's Notes 📓 -1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)! +1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/)! \ No newline at end of file diff --git a/manuscript/images/ceph.png b/manuscript/images/ceph.png new file mode 100644 index 0000000..1012955 Binary files /dev/null and b/manuscript/images/ceph.png differ diff --git a/manuscript/images/kubernetes-dashboard.png b/manuscript/images/kubernetes-dashboard.png new file mode 100644 index 0000000..8d842ad Binary files /dev/null and b/manuscript/images/kubernetes-dashboard.png differ diff --git a/manuscript/images/site-logo.svg b/manuscript/images/site-logo.svg new file mode 100644 index 0000000..970a3a5 --- /dev/null +++ b/manuscript/images/site-logo.svg @@ -0,0 +1,49 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/manuscript/index.md b/manuscript/index.md index 56d3eab..a9771f3 100644 --- a/manuscript/index.md +++ b/manuscript/index.md @@ -1,21 +1,21 @@ # What is this? -Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/start/). +Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) or [Kubernetes](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start/). 
-Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex](/recipes/plex/), [NextCloud](/recipes/nextcloud/), and includes elements such as: +Running such a platform enables you to run self-hosted tools such as [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex][plex], [NextCloud][nextcloud], and includes elements such as: -* [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*) -* [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services -* [Automated backup](/recipes/elkarbackup/) of configuration and data -* [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting +* [Automatic SSL-secured access](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*) +* [SSO / authentication layer](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services +* [Automated backup](https://geek-cookbook.funkypenguin.co.nz/)recipes/elkarbackup/) of configuration and data +* [Monitoring and metrics](https://geek-cookbook.funkypenguin.co.nz/)recipes/swarmprom/) collection, graphing and alerting -Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz). +Recent updates and additions are posted on the [CHANGELOG](https://geek-cookbook.funkypenguin.co.nz/)CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz). ## Who is this for? -You already have a familiarity with concepts such as [virtual](https://libvirt.org/) [machines](https://www.virtualbox.org/), [Docker](https://www.docker.com/) containers, [LetsEncrypt SSL certificates](https://letsencrypt.org/), databases, and command-line interfaces. +You already have a familiarity with concepts such as virtual machines, [Docker](https://www.docker.com/) containers, [LetsEncrypt SSL certificates](https://letsencrypt.org/), databases, and command-line interfaces. -You've probably played with self-hosting some mainstream apps yourself, like [Plex](https://www.plex.tv/), [OwnCloud](https://owncloud.org/), [Wordpress](https://wordpress.org/) or even [SandStorm](https://sandstorm.io/). +You've probably played with self-hosting some mainstream apps yourself, like [Plex][plex], [NextCloud][nextcloud], [Wordpress][wordpress] or [Ghost][ghost]. ## Why should I read this? @@ -25,32 +25,29 @@ So if you're familiar enough with the concepts above, and you've done self-hosti 2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't. 3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "*quickly ssh into the basement server and restart plex*" doesn't cut it when you finally convince your wife to sit down with you to watch sci-fi. +!!! quote "...how useful the recipes are for people just getting started with containers..." + + + + ## What have you done for me lately? 
(CHANGELOG) -Check out recent change at [CHANGELOG](/CHANGELOG/) +Check out recent change at [CHANGELOG](https://geek-cookbook.funkypenguin.co.nz/)CHANGELOG/) ## What do you want from me? -I want your [patronage](https://www.patreon.com/bePatron?u=6982506), either in the financial sense, or as a member of our [friendly geek community](http://chat.funkypenguin.co.nz) (*or both!*) +I want your [support][github_sponsor], either in the [financial][github_sponsor] sense, or as a member of our [friendly geek community][discord] (*or both!*) ### Get in touch -<<<<<<< HEAD -* Tweet me up, I'm [@funkypenguin](https://twitter.com/funkypenguin)! -* or better yet, come into the [kitchen](https://discourse.geek-kitchen.funkypenguin.co.nz/) (discussion forums) to say hi, ask a question, or suggest a new recipe! -======= -* Come and say hi to me and the friendly geeks in the [Discord](http://chat.funkypenguin.co.nz) chat or the [Discourse](https://discourse.geek-kitchen.funkypenguin.co.nz/) forums - say hi, ask a question, or suggest a new recipe! -* Tweet me up, I'm [@funkypenguin](https://twitter.com/funkypenguin)! 🐦 -* [Contact me](https://www.funkypenguin.co.nz/contact/) by a variety of channels ->>>>>>> master +* Come and say hi to me and the friendly geeks in the [Discord][discord] chat or the [Discourse][discourse] forums - say hi, ask a question, or suggest a new recipe! +* Tweet me up, I'm [@funkypenguin][twitter]! +* [Contact me][contact] by a variety of channels -### Buy my book -I'm also publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Buy it for as little as $5 (_which is really just a token gesture of support, since all the content is available online anyway!_) or pay what you think it's worth! +### [Sponsor][github_sponsor] / [Patronize][patreon] me -### Donate / [Support me ](https://www.patreon.com/funkypenguin) - -The best way to support this work is to become a [Patreon patron](https://www.patreon.com/bePatron?u=6982506) (_for as little as $1/month!_) - You get : +The best way to support this work is to become a [GitHub Sponsor](https://github.com/sponsors/funkypenguin) / [Patreon patron][patreon]. You get: * warm fuzzies, * access to the pre-mix repo, @@ -59,9 +56,38 @@ The best way to support this work is to become a [Patreon patron](https://www.pa .. and I get some pocket money every month to buy wine, cheese, and cryptocurrency! -Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u=6982506)** to patronize me, or instead thoughtfully and analytically review my Patreon page / history **[here](https://www.patreon.com/funkypenguin)** and make up your own mind. +Impulsively **[click here (NOW quick do it!)][github_sponsor]** to [sponsor me][github_sponsor] via GitHub, or [patronize me via Patreon][patreon]! -### Hire me +### Work with me 🤝 -Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574) consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Contact](https://www.funkypenguin.co.nz/contact/) me and let's talk! +Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified][aws_cert] consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Get in touch][contact], and let's talk business! 
+ +[plex]: https://www.plex.tv/ +[nextcloud]: https://nextcloud.com/ +[wordpress]: https://wordpress.org/ +[ghost]: https://ghost.io/ +[discord]: http://chat.funkypenguin.co.nz +[patreon]: https://www.patreon.com/bePatron?u=6982506 +[github_sponsor]: https://github.com/sponsors/funkypenguin +[github]: https://github.com/sponsors/funkypenguin +[discourse]: https://discourse.geek-kitchen.funkypenguin.co.nz/ +[twitter]: https://twitter.com/funkypenguin +[contact]: https://www.funkypenguin.co.nz +[aws_cert]: https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574 + +!!! quote "He unblocked me on all the technical hurdles to launching my SaaS in GKE!" + + By the time I had enlisted Funky Penguin's help, I'd architected myself into a bit of a nightmare with Kubernetes. I knew what I wanted to achieve, but I'd made a mess of it. Funky Penguin (David) was able to jump right in and offer a vital second-think on everything I'd done, pointing out where things could be simplified and streamlined, and better alternatives. + + He unblocked me on all the technical hurdles to launching my SaaS in GKE! + + With him delivering the container/Kubernetes architecture and helm CI/CD workflow, I was freed up to focus on coding and design, which fast-tracked me to launching on time. And now I have a simple deployment process that is easy for me to execute and maintain as a solo founder. + + I have no hesitation in recommending him for your project, and I'll certainly be calling on him again in the future. + + -- John McDowall, Founder, [kiso.io](https://kiso.io) + +### Buy my book + +I'm publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Check it out! \ No newline at end of file diff --git a/manuscript/kubernetes/cluster.md b/manuscript/kubernetes/cluster.md index f38fe4f..7da75bb 100644 --- a/manuscript/kubernetes/cluster.md +++ b/manuscript/kubernetes/cluster.md @@ -2,11 +2,11 @@ IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster. -![Kubernetes on Digital Ocean](/images/kubernetes-on-digitalocean.jpg) +![Kubernetes on Digital Ocean](https://geek-cookbook.funkypenguin.co.nz/)images/kubernetes-on-digitalocean.jpg) ## Ingredients -1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_) +1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some to buy _) 2. Geek-Fu required : 🐱 (easy - even has screenshots!) 
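Once your cluster has built (the Preparation steps below walk through it), DigitalOcean hands you a "kubeconfig" file. As a rough sketch of how you'll point `kubectl` at it (the filename here is an assumption; use whatever the control panel gives you):

```bash
# Point kubectl at the kubeconfig downloaded from DigitalOcean
# (the filename below is illustrative only)
export KUBECONFIG=~/Downloads/my-first-cluster-kubeconfig.yaml

# Sanity-check that the API responds and the node pool has registered
kubectl cluster-info
kubectl get nodes
```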
## Preparation @@ -15,27 +15,27 @@ IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean]( Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel: -![Kubernetes on Digital Ocean Screenshot #1](/images/kubernetes-on-digitalocean-screenshot-1.png) +![Kubernetes on Digital Ocean Screenshot #1](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-1.png) Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**: -![Kubernetes on Digital Ocean Screenshot #2](/images/kubernetes-on-digitalocean-screenshot-2.png) +![Kubernetes on Digital Ocean Screenshot #2](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-2.png) The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_ . Cleeeek it: -![Kubernetes on Digital Ocean Screenshot #3](/images/kubernetes-on-digitalocean-screenshot-3.png) +![Kubernetes on Digital Ocean Screenshot #3](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-3.png) When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_) -![Kubernetes on Digital Ocean Screenshot #4](/images/kubernetes-on-digitalocean-screenshot-4.png) +![Kubernetes on Digital Ocean Screenshot #4](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-4.png) -That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to setup kubectl (if you don't already have it) +That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (if you don't already have it) -![Kubernetes on Digital Ocean Screenshot #5](/images/kubernetes-on-digitalocean-screenshot-5.png) +![Kubernetes on Digital Ocean Screenshot #5](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-5.png) DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_). -![Kubernetes on Digital Ocean Screenshot #6](/images/kubernetes-on-digitalocean-screenshot-6.png) +![Kubernetes on Digital Ocean Screenshot #6](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-6.png) ## Release the kubectl! @@ -72,21 +72,15 @@ That's it. You have a beautiful new kubernetes cluster ready for some action! Still with me? Good. Move on to creating your own external load balancer.. -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come! - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come! \ No newline at end of file diff --git a/manuscript/kubernetes/design.md b/manuscript/kubernetes/design.md index d52ad3e..645a70f 100644 --- a/manuscript/kubernetes/design.md +++ b/manuscript/kubernetes/design.md @@ -42,7 +42,7 @@ Under this design, the only inbound connections we're permitting to our Kubernet ### Network Flows * HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_) -* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_) +* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](https://geek-cookbook.funkypenguin.co.nz/recipes/mqtt/)_) ### Authentication @@ -68,7 +68,7 @@ We use a phone-home container, which calls a simple webhook on our haproxy VM, a Here's a high-level diagram: -![Kubernetes Design](/images/kubernetes-cluster-design.png) +![Kubernetes Design](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-cluster-design.png) ## Overview @@ -80,7 +80,7 @@ In the diagram, we have a Kubernetes cluster comprised of 3 nodes. You'll notice Our nodes are partitioned into several namespaces, which logically separate our individual recipes. (_I.e., allowing both a "gitlab" and a "nextcloud" namespace to include a service named "db", which would be challenging without namespaces_) -Outside of our cluster (_could be anywhere on the internet_) is a single VM servicing as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail, [in its own section](/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**. +Outside of our cluster (_could be anywhere on the internet_) is a single VM serving as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail, [in its own section](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**.
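The numbered sections below all rely on pods "phoning home" to the webhook service on this load-balancer VM. Purely to illustrate the idea, here's roughly what such a call might look like; the hook name, port and parameters are assumptions rather than the recipe's actual contract, which the load balancer recipe defines:

```bash
# Hypothetical phone-home call from a pod to the webhook running on the HAProxy VM.
# adnanh/webhook listens on port 9000 by default and exposes hooks under /hooks/<id>;
# the hook id and parameters below are illustrative only.
curl -s "https://loadbalancer.example.com:9000/hooks/update-haproxy" \
  --data-urlencode "name=mqtt" \
  --data-urlencode "frontend_port=8883" \
  --data-urlencode "node_port=30883" \
  --data-urlencode "node_ip=$(hostname -i)"
```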
### 1 : The mosquitto pod @@ -92,7 +92,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port ### 2 : The Traefik Ingress -In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/docker-ha-swarm/traefik/). +In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](https://geek-cookbook.funkypenguin.co.nz/)docker-ha-swarm/traefik/). What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAproxy to send any HTTPs traffic to its calling address and customer NodePort port number. @@ -120,19 +120,10 @@ Finally, the DNS for all externally-accessible services is pointed to the IP of Still with me? Good. Move on to creating your cluster! -* [Start](/kubernetes/start/) - Why Kubernetes? +* [Start](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start/) - Why Kubernetes? * Design (this page) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) - Traefik Ingress via Helm \ No newline at end of file diff --git a/manuscript/kubernetes/diycluster.md b/manuscript/kubernetes/diycluster.md index ad88036..4d4620d 100644 --- a/manuscript/kubernetes/diycluster.md +++ b/manuscript/kubernetes/diycluster.md @@ -6,7 +6,7 @@ After all, DIY its in our DNA. ## Ingredients -1. Basic knowledge of Kubernetes terms (Will come in handy) [Start](/kubernetes/start) +1. Basic knowledge of Kubernetes terms (Will come in handy) [Start](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start) 2. Some Linux machines (Depends on what recipe you follow) ## Minikube @@ -118,7 +118,7 @@ From your PC,run `ssh-keygen` to generate a public and private key pair ```sh $ ssh-keygen Generating public/private rsa key pair. 
-Enter file in which to save the key (/home/thomas/.ssh/id_rsa): [enter] +Enter file in which to save the key (/home/thomas/.ssh/id_rsa): [enter] Enter passphrase (empty for no passphrase): [password] Enter same passphrase again: [password] Your identification has been saved in /home/thomas/.ssh/id_rsa. @@ -275,7 +275,7 @@ thomas-k3s-node3 Ready 487m v1.16.3-k3s.2 ``` That is all! You have yourself a Kubernetes cluster for you and your dog to enjoy. @@ -290,13 +290,13 @@ This section is WIP, instead, try using the K3S guide above 🙂 Now that you have wasted half a lifetime on installing your very own cluster, you can install more to it. Like a load balancer! -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together? * Cluster (this page) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/) - Traefik Ingress via Helm ## About your Chef diff --git a/manuscript/kubernetes/helm.md b/manuscript/kubernetes/helm.md index c09834a..a255b0d 100644 --- a/manuscript/kubernetes/helm.md +++ b/manuscript/kubernetes/helm.md @@ -2,14 +2,14 @@ [Helm](https://github.com/helm/helm) is a tool for managing Kubernetes "charts" (_think of it as an uber-polished collection of recipes_). Using one simple command, and by tweaking one simple config file (values.yaml), you can launch a complex stack. There are many publicly available helm charts for popular packages like [elasticsearch](https://github.com/helm/charts/tree/master/stable/elasticsearch), [ghost](https://github.com/helm/charts/tree/master/stable/ghost), [grafana](https://github.com/helm/charts/tree/master/stable/grafana), [mediawiki](https://github.com/helm/charts/tree/master/stable/mediawiki), etc. -![Kubernetes Snapshots](/images/kubernetes-helm.png) +![Kubernetes Helm](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-helm.png) !!! note - Given enough interest, I may provide a helm-compatible version of the pre-mix repository for [supporters](/support/). [Hit me up](/whoami/#contact-me) if you're interested! + Given enough interest, I may provide a helm-compatible version of the pre-mix repository for [supporters](https://geek-cookbook.funkypenguin.co.nz/support/). [Hit me up](https://geek-cookbook.funkypenguin.co.nz/whoami/#contact-me) if you're interested! ## Ingredients -1. [Kubernetes cluster](/kubernetes/cluster/) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/cluster/) 2. Geek-Fu required : 🐤 (_easy - copy and paste_) ## Preparation @@ -41,28 +41,22 @@ including installing pre-releases. After installing Helm, initialise it by running ```helm init```.
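If your cluster has RBAC enabled (GKE and DigitalOcean clusters typically do), `helm init` first wants a service account for Tiller. A minimal sketch, assuming Helm 2 and accepting a lazy cluster-admin binding:

```bash
# Helm 2 era: create a service account for Tiller and bind it to cluster-admin
kubectl --namespace kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Initialise Helm, telling Tiller to run under that service account
helm init --service-account tiller

# Watch for the tiller-deploy pod to become Ready
kubectl --namespace kube-system get pods --watch
```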
This will install "tiller" pod into your cluster, which works with the locally installed helm binaries to launch/update/delete Kubernetes elements based on helm charts. -That's it - not very exciting I know, but we'll need helm for the next and final step in building our Kubernetes cluster - deploying the [Traefik ingress controller (via helm)](/kubernetes/traefik/)! +That's it - not very exciting I know, but we'll need helm for the next and final step in building our Kubernetes cluster - deploying the [Traefik ingress controller (via helm)](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/)! ## Move on.. Still with me? Good. Move on to understanding Helm charts... -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data +* [Start](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) - How does it fit together? +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/) Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) - Automatically backup your persistent data * Helm (this page) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples. \ No newline at end of file diff --git a/manuscript/kubernetes/loadbalancer.md b/manuscript/kubernetes/loadbalancer.md index 8572424..b3ae322 100644 --- a/manuscript/kubernetes/loadbalancer.md +++ b/manuscript/kubernetes/loadbalancer.md @@ -8,12 +8,12 @@ See further examination of the problem and possible solutions in the [Kubernetes This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort. -![Kubernetes Design](/images/kubernetes-cluster-design.png) +![Kubernetes Design](https://geek-cookbook.funkypenguin.co.nz/)images/kubernetes-cluster-design.png) ## Ingredients -1. [Kubernetes cluster](/kubernetes/cluster/) -2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) +2. VM _outside_ of Kubernetes cluster, with a fixed IP address. 
Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_) 3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_) @@ -24,7 +24,7 @@ This recipe details a simple design to permit the exposure of as many ports as y ### Create LetsEncrypt certificate !!! warning - Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internte without first securing it with a valid SSL certificate? Of course not, I didn't think so! + Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate? Of course not, I didn't think so! Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere. @@ -56,7 +56,7 @@ Once you've confirmed you've got a valid LetsEncrypt certificate stored in ```/e ### Install webhook -We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤️ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```. +We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤️ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```. ### Create webhook config @@ -310,7 +310,7 @@ Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Started PO Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy got matched Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy hook triggered successfully Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Completed 200 OK in 2.123921ms -Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 executing /etc/webhook/update-haproxy.sh (/etc/webhook/update-haproxy.sh) with arguments ["/etc/webhook/update-haproxy.sh" "unifi-adoption" "8080" "30808" "35.244.91.178" "add"] and environment [] using /etc/webhook as cwd +Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 executing /etc/webhook/update-haproxy.sh (/etc/webhook/update-haproxy.sh) with arguments ["/etc/webhook/update-haproxy.sh" "unifi-adoption" "8080" "30808" "35.244.91.178" "add"] and environment [] using /etc/webhook as cwd Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command output: Configuration file is valid ``` @@ -320,21 +320,15 @@ Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command ou Still with me? Good. Move on to setting up an ingress SSL terminating proxy with Traefik.. -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together?
+* [Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) - Setup a basic cluster * Load Balancer (this page) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome 😉 - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome 😉 \ No newline at end of file diff --git a/manuscript/kubernetes/snapshots.md b/manuscript/kubernetes/snapshots.md index b41f1cf..25aca3b 100644 --- a/manuscript/kubernetes/snapshots.md +++ b/manuscript/kubernetes/snapshots.md @@ -2,7 +2,7 @@ Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact. -Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](recipes/elkarbackup/)) to automate backups of our persistent data. +Under [Docker Swarm](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/), we used [shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph/) with [Duplicity](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity/) (or [ElkarBackup](recipes/elkarbackup/)) to automate backups of our persistent data. Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution... @@ -14,7 +14,7 @@ This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/ ## Ingredients -1. [Kubernetes cluster](/kubernetes/cluster/) with either AWS or GKE (currently, but apparently other providers are [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py)) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) with either AWS or GKE (currently, but apparently other providers are [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py)) 2. Geek-Fu required : 🐒🐒 (_medium - minor adjustments may be required_) ## Preparation @@ -114,7 +114,7 @@ spec: And here's what my snapshot list looks like after a few days: -![Kubernetes Snapshots](/images/kubernetes-snapshots.png) +![Kubernetes Snapshots](https://geek-cookbook.funkypenguin.co.nz/)images/kubernetes-snapshots.png) ### Snapshot a non-Kubernetes volume (optional) @@ -165,23 +165,16 @@ EOF Still with me? Good. Move on to understanding Helm charts... -* [Start](/kubernetes/start/) - Why Kubernetes? 
-* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) Setup inbound access +* [Start](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) - How does it fit together? +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/) Setup inbound access * Snapshots (this page) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Helm](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74). - - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74). \ No newline at end of file diff --git a/manuscript/kubernetes/start.md b/manuscript/kubernetes/start.md index cebc09e..314d0b1 100644 --- a/manuscript/kubernetes/start.md +++ b/manuscript/kubernetes/start.md @@ -44,33 +44,24 @@ Let's talk some definitions. Kubernetes.io provides a [glossary](https://kuberne ## Mm.. maaaaybe, how do I start? -If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/digitalocean/). +If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/digitalocean/). If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available. ## I'm ready, gimme some recipes! -As of Jan 2019, our first (_and only!_) Kubernetes recipe is a WIP for the Mosquitto [MQTT](/recipes/mqtt/) broker. It's a good, simple starter if you're into home automation (_shoutout to [Home Assistant](/recipes/homeassistant/)!_), since it only requires a single container, and a simple NodePort service. +As of Jan 2019, our first (_and only!_) Kubernetes recipe is a WIP for the Mosquitto [MQTT](https://geek-cookbook.funkypenguin.co.nz/)recipes/mqtt/) broker. 
It's a good, simple starter if you're into home automation (_shoutout to [Home Assistant](https://geek-cookbook.funkypenguin.co.nz/)recipes/homeassistant/)!_), since it only requires a single container, and a simple NodePort service. -I'd love for your [feedback](/support/) on the Kubernetes recipes, as well as suggestions for what to add next. The current rough plan is to replicate the Chef's Favorites recipes (_see the left-hand panel_) into Kubernetes first. +I'd love for your [feedback](https://geek-cookbook.funkypenguin.co.nz/)support/) on the Kubernetes recipes, as well as suggestions for what to add next. The current rough plan is to replicate the Chef's Favorites recipes (_see the left-hand panel_) into Kubernetes first. ## Move on.. Still with me? Good. Move on to reviewing the design elements * Start (this page) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +* [Design](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) - How does it fit together? +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) - Traefik Ingress via Helm \ No newline at end of file diff --git a/manuscript/kubernetes/traefik.md b/manuscript/kubernetes/traefik.md index a728f15..0862fb5 100644 --- a/manuscript/kubernetes/traefik.md +++ b/manuscript/kubernetes/traefik.md @@ -4,8 +4,8 @@ This recipe utilises the [traefik helm chart](https://github.com/helm/charts/tre ## Ingredients -1. [Kubernetes cluster](/kubernetes/cluster/) -2. [Helm](/kubernetes/helm/) installed and initialised in your cluster +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/cluster/) +2. [Helm](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/helm/) installed and initialised in your cluster ## Preparation @@ -95,7 +95,7 @@ metrics: ### Prepare phone-home pod -[Remember](/kubernetes/loadbalancer/) how our load balancer design ties a phone-home container to another container using a pod, so that the phone-home container can tell our external load balancer (_using a webhook_) where to send our traffic? +[Remember](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/) how our load balancer design ties a phone-home container to another container using a pod, so that the phone-home container can tell our external load balancer (_using a webhook_) where to send our traffic? 
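If "affinity" is a new concept, here's a rough illustration of pinning a throwaway pod to whichever node runs a pod labelled `app: traefik`. It's an assumption-laden sketch (the pod name, namespace and image are mine, not the recipe's); the phone-home pod this section goes on to define is the one to actually deploy:

```bash
# Illustrative only - demonstrates podAffinity against pods labelled app=traefik
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: phone-home-example
  namespace: kube-system
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: traefik
        topologyKey: kubernetes.io/hostname
  containers:
  - name: phone-home
    image: alpine:3.11
    command: ["sleep", "3600"]
EOF
```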
Since we deployed Traefik using helm, we need to take a slightly different approach, so we'll create a pod with an affinity which ensures it runs on the same host which runs the Traefik container (_more precisely, containers with the label app=traefik_). @@ -161,7 +161,7 @@ You can confirm this by running ```kubectl get pods```, and even watch the traef ### Deploy the phone-home pod -We still can't access traefik yet, since it's listening on port 30443 on node it happens to be running on. We'll launch our phone-home pod, to tell our [load balancer](/kubernetes/loadbalancer/) where to send incoming traffic on port 443. +We still can't access traefik yet, since it's listening on port 30443 on the node it happens to be running on. We'll launch our phone-home pod, to tell our [load balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) where to send incoming traffic on port 443. Optionally, on your loadbalancer VM, run ```journalctl -u webhook -f``` to watch for the container calling the webhook. @@ -191,30 +191,24 @@ helm upgrade --values values.yml traefik stable/traefik --recreate-pods We're doneburgers! 🍔 We now have all the pieces to safely deploy recipes into our Kubernetes cluster, knowing: 1. Our HTTPS traffic will be secured with LetsEncrypt (thanks Traefik!) -2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using an free-to-scale [external load balancer](/kubernetes/loadbalancer/) -3. Our persistent data will be [automatically backed up](/kubernetes/snapshots/) +2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using a free-to-scale [external load balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) +3. Our persistent data will be [automatically backed up](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) Here's a recap: -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together? +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks * Traefik (this page) - Traefik Ingress via Helm ## Where to next? -I'll be adding more Kubernetes versions of existing recipes soon. Check out the [MQTT](/recipes/mqtt/) recipe for a start! +I'll be adding more Kubernetes versions of existing recipes soon. Check out the [MQTT](https://geek-cookbook.funkypenguin.co.nz/recipes/mqtt/) recipe for a start! ## Chef's Notes -1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting! - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy?
(_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting! \ No newline at end of file diff --git a/manuscript/recipes/autopirate.md b/manuscript/recipes/autopirate.md index b7149a4..71fff73 100644 --- a/manuscript/recipes/autopirate.md +++ b/manuscript/recipes/autopirate.md @@ -1,4 +1,4 @@ -hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 +hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media # AutoPirate @@ -24,7 +24,7 @@ Tools included in the AutoPirate stack are: * **[Mylar](https://github.com/evilhero/mylar)** : finds, downloads and manages comic books * **[Headphones](https://github.com/rembo10/headphones)** : finds, downloads and manages music * **[Lazy Librarian](https://github.com/itsmegb/LazyLibrarian)** : finds, downloads and manages ebooks -* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](/recipes/plex/)/[Emby](/recipes/emby/) library using the above tools +* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](https://geek-cookbook.funkypenguin.co.nz/)recipes/plex/)/[Emby](https://geek-cookbook.funkypenguin.co.nz/)recipes/emby/) library using the above tools * **[Jackett](https://github.com/Jackett/Jackett)** : Provides an local, caching, API-based interface to torrent trackers, simplifying the way your tools search for torrents. Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly. @@ -32,8 +32,8 @@ Since this recipe is so long, and so many of the tools are optional to the final ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. Access to NZB indexers and Usenet servers 4. DNS entries configured for each of the NZB tools in this recipe that you want to use @@ -59,7 +59,7 @@ Create a user to "own" the above directories, and note the uid and gid of the cr ### Secure public access -What you'll quickly notice about this recipe is that __every__ web interface is protected by an [OAuth proxy](/reference/oauth_proxy/). +What you'll quickly notice about this recipe is that __every__ web interface is protected by an [OAuth proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/). Why? Because these tools are developed by a handful of volunteer developers who are focused on adding features, not necessarily implementing robust security. Most users wouldn't expose these tools directly to the internet, so the tools have rudimentary (if any) access control. 
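As a concrete (and entirely hypothetical) sketch of the preparation above, creating the shared directories and a dedicated owner account might look like this; the paths and username are assumptions, so adjust them to your own layout:

```bash
# Illustrative only: create the data/config directories and a user to own them
sudo mkdir -p /var/data/autopirate/{sabnzbd,nzbget,sonarr,radarr,ombi}
sudo mkdir -p /var/data/config/autopirate

sudo useradd --system --user-group autopirate
sudo chown -R autopirate:autopirate /var/data/autopirate /var/data/config/autopirate

# Note the uid/gid - several of the containers expect these as PUID/PGID env vars
id autopirate
```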
@@ -105,28 +105,22 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. #### Assemble the tools.. Now work your way through the list of tools below, adding whichever tools your want to use, and finishing with the **end** section: -* [SABnzbd](/recipes/autopirate/sabnzbd/) -* [NZBGet](/recipes/autopirate/nzbget/) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [End](/recipes/autopirate/end/) (launch the stack) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget/) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) \ No newline at end of file diff --git a/manuscript/recipes/autopirate/end.md b/manuscript/recipes/autopirate/end.md index aa5f843..d40d645 100644 --- a/manuscript/recipes/autopirate/end.md +++ b/manuscript/recipes/autopirate/end.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's the conclusion to the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. ### Launch Autopirate stack @@ -11,10 +11,4 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted ## Chef's Notes 📓 -1. This is a complex stack. 
Sing out in the comments if you found a flaw or need a hand :) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :) \ No newline at end of file diff --git a/manuscript/recipes/autopirate/headphones.md b/manuscript/recipes/autopirate/headphones.md index 27e768c..5b178a3 100644 --- a/manuscript/recipes/autopirate/headphones.md +++ b/manuscript/recipes/autopirate/headphones.md @@ -1,7 +1,7 @@ -hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 +hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media !!! warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Headphones @@ -11,7 +11,7 @@ hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and ## Inclusion into AutoPirate -To include Headphones in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Headphones in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` headphones: @@ -51,31 +51,25 @@ headphones_proxy: ## Assemble more tools.. 
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) * [Mylar](https://github.com/evilhero/mylar) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) * Headphones (this page) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/heimdall.md b/manuscript/recipes/autopirate/heimdall.md index 2e2184c..7985f79 100644 --- a/manuscript/recipes/autopirate/heimdall.md +++ b/manuscript/recipes/autopirate/heimdall.md @@ -1,5 +1,5 @@ !!! 
warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Heimdall @@ -7,15 +7,15 @@ Heimdall is an elegant solution to organise all your web applications. It’s dedicated to this purpose so you won’t lose your links in a sea of bookmarks. -Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), and friends. +Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md), [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/), and friends. ![Heimdall Screenshot](../../images/heimdall.jpg) ## Inclusion into AutoPirate -To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Heimdall in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: -```` +``` heimdall: image: linuxserver/heimdall:latest env_file: /var/data/config/autopirate/heimdall.env @@ -50,39 +50,33 @@ To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include th -```` +``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ## Assemble more tools.. 
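Whichever tools you pick, launching (and re-launching, after edits) the assembled stack boils down to a single command. A sketch, with the file path being an assumption:

```bash
# Deploy (or update) the whole AutoPirate stack from its compose file
# (the path is illustrative - use wherever you keep autopirate.yml)
docker stack deploy autopirate -c /var/data/config/autopirate/autopirate.yml

# Watch the services come up
docker stack services autopirate
```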
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylarr/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylarr/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) * Heimdall (this page) -* [End](/recipes/autopirate/end/) (launch the stack) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. -2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk! - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk! \ No newline at end of file diff --git a/manuscript/recipes/autopirate/jackett.md b/manuscript/recipes/autopirate/jackett.md index eee66ec..10d9d3d 100644 --- a/manuscript/recipes/autopirate/jackett.md +++ b/manuscript/recipes/autopirate/jackett.md @@ -1,5 +1,5 @@ !!! 
warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Jackett @@ -11,7 +11,7 @@ This allows for getting recent uploads (like RSS) and performing searches. Jacke ## Inclusion into AutoPirate -To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Jackett in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` jackett: @@ -51,31 +51,25 @@ jackett_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylarr/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylarr/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) * Jackett (this page) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. 
- -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/lazylibrarian.md b/manuscript/recipes/autopirate/lazylibrarian.md index d986e92..452e88c 100644 --- a/manuscript/recipes/autopirate/lazylibrarian.md +++ b/manuscript/recipes/autopirate/lazylibrarian.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # LazyLibrarian @@ -15,7 +15,7 @@ ## Inclusion into AutoPirate -To include LazyLibrarian in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include LazyLibrarian in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` lazylibrarian: @@ -63,32 +63,26 @@ calibre-server: ## Assemble more tools.. 
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) * [Mylar](https://github.com/evilhero/mylar) * Lazy Librarian (this page) -* [Headphones](/recipes/autopirate/headphones) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe. -2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](https://geek-cookbook.funkypenguin.co.nz/)recipes/calibre-web) recipe. +2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. 
Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/lidarr.md b/manuscript/recipes/autopirate/lidarr.md index ded7115..4efd007 100644 --- a/manuscript/recipes/autopirate/lidarr.md +++ b/manuscript/recipes/autopirate/lidarr.md @@ -1,19 +1,19 @@ -hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖 +hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media !!! warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Lidarr -[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](/recipes/autopirate/headphones), but is written using the same(ish) codebase as [Radarr](/recipes/autopirate/radarr/) and [Sonarr](/recipes/autopirate/sonarr). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](/recipes/autopirate/sabnzbd/), [NZBGet](/recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_) +[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones), but is written using the same(ish) codebase as [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) and [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/), [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_) ![Lidarr Screenshot](../../images/lidarr.png) ## Inclusion into AutoPirate -To include Lidarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Lidarr in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: -```` +``` lidarr: image: linuxserver/lidarr:latest env_file : /var/data/config/autopirate/lidarr.env @@ -44,40 +44,34 @@ lidarr_proxy: -email-domain=example.com -provider=github -authenticated-emails-file=/authenticated-emails.txt -```` +``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) * [Mylar](https://github.com/evilhero/mylar) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) * Lidarr (this page) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. -2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel! - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel! 
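The `env_file` line above follows the stack's convention of one environment file per tool under `/var/data/config/autopirate/`. As a minimal sketch only (the variables shown are the ones linuxserver.io images commonly read - PUID, PGID and TZ - rather than anything mandated by this recipe), the Lidarr entry and its env file fit together like this:

```
lidarr:
  image: linuxserver/lidarr:latest
  env_file: /var/data/config/autopirate/lidarr.env
  # lidarr.env is a plain KEY=value file, for example:
  #   PUID=1000
  #   PGID=1000
  #   TZ=Pacific/Auckland
```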
\ No newline at end of file diff --git a/manuscript/recipes/autopirate/mylar.md b/manuscript/recipes/autopirate/mylar.md index 443b62f..07af1bd 100644 --- a/manuscript/recipes/autopirate/mylar.md +++ b/manuscript/recipes/autopirate/mylar.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Mylar @@ -9,7 +9,7 @@ ## Inclusion into AutoPirate -To include Mylar in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Mylar in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` mylar: @@ -49,23 +49,23 @@ mylar_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) * Mylar (this page) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 @@ -74,10 +74,4 @@ Continue through the list of tools below, adding whichever tools your want to us 2. 
If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and api key, you will need to set the parameter `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`). - This will provide the link to the downloader necessary to initiate the download. This parameter is not presented in the user interface so the config file (`$MYLAR_HOME/config.ini`) will need to be manually updated. The parameter can be found under the [Interface] section of the file. ([Details](https://github.com/evilhero/mylar/issues/2242)) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? + This will provide the link to the downloader necessary to initiate the download. This parameter is not presented in the user interface so the config file (`$MYLAR_HOME/config.ini`) will need to be manually updated. The parameter can be found under the [Interface] section of the file. ([Details](https://github.com/evilhero/mylar/issues/2242)) \ No newline at end of file diff --git a/manuscript/recipes/autopirate/nzbget.md b/manuscript/recipes/autopirate/nzbget.md index 4f19e42..2b03549 100644 --- a/manuscript/recipes/autopirate/nzbget.md +++ b/manuscript/recipes/autopirate/nzbget.md @@ -1,18 +1,18 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # NZBGet ## Introduction -NZBGet performs the same function as [SABnzbd](/recipes/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_). +NZBGet performs the same function as [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_). ![NZBGet Screenshot](../../images/nzbget.jpg) ## Inclusion into AutoPirate -To include NZBGet in your [AutoPirate](/recipes/autopirate/) stack -(_The only reason you **wouldn't** use NZBGet, would be if you were using [SABnzbd](/recipes/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file: +To include NZBGet in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack +(_The only reason you **wouldn't** use NZBGet, would be if you were using [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file: !!! tip I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` @@ -56,31 +56,25 @@ nzbget_proxy: ## Assemble more tools.. 
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) * NZBGet (this page) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. 
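Because NZBGet and SABnzbd fill the same role, an autopirate.yml normally carries only one of the two downloader definitions. A trimmed sketch of that either/or choice (service bodies abbreviated - the full definitions live on each tool's own page):

```
nzbget:
  image: linuxserver/nzbget:latest
  env_file: /var/data/config/autopirate/nzbget.env
# ...or, if you prefer SABnzbd, drop (or comment out) NZBGet and use this instead:
# sabnzbd:
#   image: linuxserver/sabnzbd:latest
#   env_file: /var/data/config/autopirate/sabnzbd.env
```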
\ No newline at end of file diff --git a/manuscript/recipes/autopirate/nzbhydra.md b/manuscript/recipes/autopirate/nzbhydra.md index b79616c..286c63d 100644 --- a/manuscript/recipes/autopirate/nzbhydra.md +++ b/manuscript/recipes/autopirate/nzbhydra.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # NZBHydra @@ -16,7 +16,7 @@ ## Inclusion into AutoPirate -To include NZBHydra in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include NZBHydra in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` nzbhydra: @@ -55,31 +55,25 @@ nzbhydra_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) * NZBHydra (this page) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. 
Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/nzbhydra2.md b/manuscript/recipes/autopirate/nzbhydra2.md index 00e31db..871fe17 100644 --- a/manuscript/recipes/autopirate/nzbhydra2.md +++ b/manuscript/recipes/autopirate/nzbhydra2.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # NZBHydra 2 @@ -7,22 +7,22 @@ [NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato. !!! note - NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhybra/). It's currently in Beta. It works mostly fine but some functions might not be completely done and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively. + NZBHydra 2 is a complete rewrite of [NZBHydra (1)](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhybra/). It's currently in Beta. It works mostly fine but some functions might not be completely done and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively. ![NZBHydra Screenshot](../../images/nzbhydra2.png) Features include: * Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. 
Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place -* Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/) +* Add results to [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget/) or [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/) * Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them * Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID or if no results were found -* Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc. +* Compatible with [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/), [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/), [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md), [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/), [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/), Watcher, etc. * Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc. * Authentication and multi-user support * Automatic update of NZB download status by querying configured downloaders * RSS support with configurable cache times -* Torrent support (_Although I prefer [Jackett](/recipes/autopirate/jackett/) for this_): +* Torrent support (_Although I prefer [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) for this_): * For GUI searches, allowing you to download torrents to a blackhole folder * A separate Torznab compatible endpoint for API requests, allowing you to merge multiple trackers * Extensive configurability @@ -31,9 +31,9 @@ Features include: ## Inclusion into AutoPirate -To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include NZBHydra2 in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: -```` +``` nzbhydra2: image: linuxserver/hydra2:latest env_file : /var/data/config/autopirate/nzbhydra2.env @@ -63,39 +63,33 @@ nzbhydra2_proxy: -email-domain=example.com -provider=github -authenticated-emails-file=/authenticated-emails.txt -```` +``` !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) * NZBHydra2 (this page) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. -2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_). - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_). 
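Note 2 above is the practical migration step. A hedged illustration of what changes for the downstream tools (5075 as NZBHydra 1's usual port is an assumption on my part; match the hostname to whatever the service is actually keyed as in your autopirate.yml - the definition above uses `nzbhydra2`):

```
nzbhydra2:
  image: linuxserver/hydra2:latest
  # Tools that previously queried NZBHydra (1) at something like
  #   http://nzbhydra:5075
  # are re-pointed at NZBHydra 2, per note 2 above:
  #   http://hydra2:5076
  # (or http://nzbhydra2:5076, if that is the service name your stack file uses)
```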
\ No newline at end of file diff --git a/manuscript/recipes/autopirate/ombi.md b/manuscript/recipes/autopirate/ombi.md index 31730be..a240ee6 100644 --- a/manuscript/recipes/autopirate/ombi.md +++ b/manuscript/recipes/autopirate/ombi.md @@ -1,9 +1,9 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Ombi -[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate](/recipes/autopirate/) stack. Features include: +[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack. Features include: * Lets users request Movies and TV Shows (_whether it being the entire series, an entire season, or even single episodes._) * Easily manage your requests @@ -17,7 +17,7 @@ Automatically updates the status of requests when they are available on Plex/Emb ## Inclusion into AutoPirate -To include Ombi in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Ombi in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` ombi: @@ -56,31 +56,25 @@ ombi_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) * Ombi (this page) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) 
+* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/radarr.md b/manuscript/recipes/autopirate/radarr.md index 39d5e40..e7e0784 100644 --- a/manuscript/recipes/autopirate/radarr.md +++ b/manuscript/recipes/autopirate/radarr.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Radarr @@ -23,11 +23,11 @@ ![Radarr Screenshot](../../images/radarr.png) !!! tip "Sponsored Project" - Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁 + Sonarr is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates ## Inclusion into AutoPirate -To include Radarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Radarr in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` radarr: @@ -67,31 +67,25 @@ radarr_proxy: ## Assemble more tools.. 
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) * Radarr (this page) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. 
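As a concrete example of note 1: inside the stack, Radarr simply references its peers by service name, and Docker Swarm's DNS resolves the name to the right container - no published ports are needed. The ports shown are the upstream defaults and are assumptions here; confirm them against your own service definitions:

```
radarr:
  image: linuxserver/radarr:latest
  # Within the autopirate stack, Radarr's settings point at the other services by name:
  #   Download client (SABnzbd): host "sabnzbd", port 8080
  #   Indexer (NZBHydra):        http://nzbhydra:5075
```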
\ No newline at end of file diff --git a/manuscript/recipes/autopirate/rtorrent.md b/manuscript/recipes/autopirate/rtorrent.md index 6420c1c..f279afa 100644 --- a/manuscript/recipes/autopirate/rtorrent.md +++ b/manuscript/recipes/autopirate/rtorrent.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # RTorrent / ruTorrent @@ -13,7 +13,7 @@ When using a torrent client from behind NAT (_which swarm, by nature, is_), you ## Inclusion into AutoPirate -To include ruTorrent in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include ruTorrent in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` rtorrent: @@ -56,31 +56,25 @@ rtorrent_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) * RTorrent (this page) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. 
I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/sabnzbd.md b/manuscript/recipes/autopirate/sabnzbd.md index 050c075..110845f 100644 --- a/manuscript/recipes/autopirate/sabnzbd.md +++ b/manuscript/recipes/autopirate/sabnzbd.md @@ -1,26 +1,26 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # SABnzbd ## Introduction -SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](/recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files. +SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files. ![SABNZBD Screenshot](../../images/sabnzbd.png) !!! tip "Sponsored Project" - SABnzbd is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. It's not sexy, but it's consistent and reliable, and I enjoy the fruits of its labor near-daily. + SABnzbd is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. It's not sexy, but it's consistent and reliable, and I enjoy the fruits of its labor near-daily. 
## Inclusion into AutoPirate -To include SABnzbd in your [AutoPirate](/recipes/autopirate/) stack -(_The only reason you **wouldn't** use SABnzbd, would be if you were using [NZBGet](/recipes/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file: +To include SABnzbd in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack +(_The only reason you **wouldn't** use SABnzbd, would be if you were using [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` -```` +``` sabnzbd: image: linuxserver/sabnzbd:latest env_file : /var/data/config/autopirate/sabnzbd.env @@ -51,7 +51,7 @@ sabnzbd_proxy: -email-domain=example.com -provider=github -authenticated-emails-file=/authenticated-emails.txt -```` +``` !!! warning "Important Note re hostname validation" @@ -63,31 +63,25 @@ sabnzbd_proxy: ## Assemble more tools.. -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: * SABnzbd (this page) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) -* [Sonarr](/recipes/autopirate/sonarr/) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) +* [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* 
[Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. \ No newline at end of file diff --git a/manuscript/recipes/autopirate/sonarr.md b/manuscript/recipes/autopirate/sonarr.md index a28e82a..10eabcd 100644 --- a/manuscript/recipes/autopirate/sonarr.md +++ b/manuscript/recipes/autopirate/sonarr.md @@ -1,5 +1,5 @@ !!! warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. + This is not a complete recipe - it's a component of the [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. # Sonarr @@ -9,11 +9,11 @@ ![Sonarr Screenshot](../../images/sonarr.png) !!! tip "Sponsored Project" - Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁 + Sonarr is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates ## Inclusion into AutoPirate -To include Sonarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: +To include Sonarr in your [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: ``` sonarr: @@ -53,31 +53,25 @@ sonarr_proxy: ## Assemble more tools.. 
-Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: +Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/)** section: -* [SABnzbd](/recipes/autopirate/sabnzbd.md) -* [NZBGet](/recipes/autopirate/nzbget.md) -* [RTorrent](/recipes/autopirate/rtorrent/) +* [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd.md) +* [NZBGet](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbget.md) +* [RTorrent](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/rtorrent/) * Sonarr (this page) -* [Radarr](/recipes/autopirate/radarr/) -* [Mylar](/recipes/autopirate/mylar/) -* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) -* [Headphones](/recipes/autopirate/headphones/) -* [Lidarr](/recipes/autopirate/lidarr/) -* [NZBHydra](/recipes/autopirate/nzbhydra/) -* [NZBHydra2](/recipes/autopirate/nzbhydra2/) -* [Ombi](/recipes/autopirate/ombi/) -* [Jackett](/recipes/autopirate/jackett/) -* [Heimdall](/recipes/autopirate/heimdall/) -* [End](/recipes/autopirate/end/) (launch the stack) +* [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) +* [Mylar](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/mylar/) +* [Lazy Librarian](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lazylibrarian/) +* [Headphones](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/headphones/) +* [Lidarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/lidarr/) +* [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) +* [NZBHydra2](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra2/) +* [Ombi](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/ombi/) +* [Jackett](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/jackett/) +* [Heimdall](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/heimdall/) +* [End](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/end/) (launch the stack) ## Chef's Notes 📓 -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. 
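As a quick, hypothetical illustration of that name-resolution (the stack name ```autopirate``` and the tool names are assumptions - substitute whatever you actually deployed), you can confirm from inside any container in the stack that a sibling tool answers on its service name:

```
# On the swarm node running the task, grab a container ID from the stack
# (stack name "autopirate" is an assumption - use your own):
docker ps --filter name=autopirate_radarr --format '{{.ID}}'

# Swarm DNS resolves sibling services by their compose service name, so "sabnzbd"
# should resolve from inside any container attached to the stack's network
# (getent ships with most base images - adjust if yours lacks it):
docker exec -it <container-id> getent hosts sabnzbd
```

If that resolves, the tools' own settings can use the same name directly - e.g. Radarr's download client pointing at http://sabnzbd:8080, with the port confirmed against the service definition.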
\ No newline at end of file diff --git a/manuscript/recipes/bitwarden.md b/manuscript/recipes/bitwarden.md index 42f8193..e118098 100644 --- a/manuscript/recipes/bitwarden.md +++ b/manuscript/recipes/bitwarden.md @@ -25,8 +25,8 @@ Bitwarden is a free and open source password management solution for individuals !!! summary "Ingredients" Existing: - 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design + 1. [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) + 2. [X] [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -50,7 +50,7 @@ Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now* Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` diff --git a/manuscript/recipes/bookstack.md b/manuscript/recipes/bookstack.md index fe05804..459aad1 100644 --- a/manuscript/recipes/bookstack.md +++ b/manuscript/recipes/bookstack.md @@ -1,19 +1,17 @@ -hero: Heroic Hero - # BookStack BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information. -A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](/recipes/gollum/), BookStack relies on a database backend (so searching and versioning is easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing. +A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](https://geek-cookbook.funkypenguin.co.nz/)recipes/gollum/), BookStack relies on a database backend (so searching and versioning is easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing. ![BookStack Screenshot](../images/bookstack.png) -I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oauth_proxy), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense. +I like to protect my public-facing web UIs with an [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense. ## Ingredients -1. 
[Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik/) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -29,7 +27,7 @@ mkdir -p /var/data/runtime/bookstack/db ### Prepare environment -Create bookstack.env, and populate with the following variables. Set the [oauth_proxy](/reference/oauth_proxy) variables provided by your OAuth provider (if applicable.) +Create bookstack.env, and populate with the following variables. Set the [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy) variables provided by your OAuth provider (if applicable.) ``` # For oauth-proxy (optional) @@ -55,7 +53,7 @@ DB_PASSWORD=secret Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -129,7 +127,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipes/calibre-web.md b/manuscript/recipes/calibre-web.md index 156fd76..bd20a30 100644 --- a/manuscript/recipes/calibre-web.md +++ b/manuscript/recipes/calibre-web.md @@ -2,9 +2,9 @@ hero: Manage your ebook collection. Like a BOSS. # Calibre-Web -The [AutoPirate](/recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them. +The [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them. 
-[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](/recipes/plex/) (or [Emby](/recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library, screenshot below: +[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](https://geek-cookbook.funkypenguin.co.nz/)recipes/plex/) (or [Emby](https://geek-cookbook.funkypenguin.co.nz/)recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library, screenshot below: ![Calibre-Web Screenshot](../images/calibre-web.png) @@ -23,8 +23,8 @@ Support for editing eBook metadata and deleting eBooks from Calibre library ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -42,7 +42,7 @@ Ensure that your Calibre library is accessible to the swarm (_i.e., exists on sh ### Prepare environment -We'll use an [oauth-proxy](/reference/oauth_proxy/) to protect the UI from public access, so create calibre-web.env, and populate with the following variables: +We'll use an [oauth-proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) to protect the UI from public access, so create calibre-web.env, and populate with the following variables: ``` OAUTH2_PROXY_CLIENT_ID= @@ -110,7 +110,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -125,4 +125,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the i ## Chef's Notes 📓 1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_) -2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web. +2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web. \ No newline at end of file diff --git a/manuscript/recipes/collabora-online.md b/manuscript/recipes/collabora-online.md index 229201c..edbd60e 100644 --- a/manuscript/recipes/collabora-online.md +++ b/manuscript/recipes/collabora-online.md @@ -7,16 +7,16 @@ Collabora Online Development Edition (or "[CODE](https://www.collaboraoffice.com/code/#what_is_code)"), is the lightweight, or "home" edition of the commercially-supported [Collabora Online](https://www.collaboraoffice.com/collabora-online/) platform. 
It -It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app, it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](/recipes/nextcloud/)_) +It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app, it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/)_) ![CODE Screenshot](../images/collabora-online.png) ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP -4. [NextCloud](/recipes/nextcloud/) installed and operational +4. [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/) installed and operational 5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm ## Preparation @@ -56,7 +56,7 @@ Create /var/data/config/collabora/collabora.env, and populate with the following Note the following: 1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container - 2. Set domain to your [NextCloud](/recipes/nextcloud/) domain, and escape all the periods as per the example + 2. Set domain to your [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/) domain, and escape all the periods as per the example 3. Set your server_name to collabora.. Escaping periods is unnecessary 4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥 @@ -157,7 +157,7 @@ Create an empty ```/var/data/collabora/loolwsd.xml``` by running ```touch /var/d Create ```/var/data/config/collabora/collabora.yml``` as follows, changing the traefik frontend_rule as necessary: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` version: "3.0" diff --git a/manuscript/recipes/cryptominer.md b/manuscript/recipes/cryptominer.md deleted file mode 100644 index 47e8f71..0000000 --- a/manuscript/recipes/cryptominer.md +++ /dev/null @@ -1,42 +0,0 @@ -hero: We dig dig digga-dig dig! 
- -# CryptoMiner - -This is a diversion from my usual recipes - recently I've become interested in cryptocurrency, both in mining, and in investing. - -I honestly didn't expect to enjoy the mining process as much as I did. Part of the enjoyment was getting my hands dirty with hardware. - -Since a [mining rig](/recipes/cryptominer/mining-rig/) relies on hardware, we can't really use a docker swarm for this one! - -![CryptoMiner Screenshot](../images/cryptominer.png) - -This recipe isn't for everyone - if you just want to make some money from cryptocurrency, then you're better off learning to [invest](https://www.reddit.com/r/CryptoCurrency/) or [trade](https://www.reddit.com/r/CryptoMarkets/). However, if you want to (_ideally_) make money **and** you like tinkering, playing with hardware, optimising and monitoring, read on! - -## Ingredients - -1. Suitable system guts (_CPU, motherboard, RAM, PSU_) for your [mining rig](/recipes/cryptominer/mining-rig/) -2. [AMD](/recipes/cryptominer/amd-gpu/) / [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs (_yes, plural, since although you **can** start with just one, you'll soon get hooked!_) -3. A friendly operating system ([Ubuntu](https://www.ubuntu.com/)/[Debian](https://www.debian.org/)/[CentOS7](https://www.centos.org/download/)) are known to work -4. Patience and time - -## Preparation - -For readability, I've split this recipe into multiple sub-recipes, which can be found below, or in the navigation links on the right-hand side: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - -## Chef's Notes - -1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (Apparently it _may_ be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/amd-gpu.md b/manuscript/recipes/cryptominer/amd-gpu.md deleted file mode 100644 index e967df0..0000000 --- a/manuscript/recipes/cryptominer/amd-gpu.md +++ /dev/null @@ -1,169 +0,0 @@ -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# AMD GPU - -## Ingredients - -1. [AMD drivers](http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-for-Linux-Release-Notes.aspx) for your GPU -2. [Linux version](https://bitcointalk.org/index.php?topic=1809527.0) of "atiflash" command -3. 
A [VBIOS rom](https://anorak.tech/c/downloads) compatible with your GPU model and memory manufacturer - -## Preparation - -### Install the drivers - -There are links on the AMD driver download page (_linked above_) to drivers for RHEL/CentOS6, RHEL/CentOS7, and Ubuntu 16.04. As I write this, the latest version is **amdgpu-pro-17.50-511655**. - -!!! note - You'll find reference online to the "blockchain" drivers. These were an earlier, [beta release](http://support.amd.com/en-us/kb-articles/Pages/AMDGPU-Pro-Beta-Mining-Driver-for-Linux-Release-Notes.aspx) which have been superseded by version 17.50 and later. You can ignore these. - -Uncompress the drivers package, and run the following: - -```./amdgpu-install --opencl=legacy --headless``` - -If you have a newer (_than my 5-year-old one!_) motherboard/CPU, you can also try the following, for ROCm support (_which might allow you some more software-based overclocking powers_): - -```./amdgpu-install --opencl=legacy,rocm --headless``` - -Reboot upon completion. - -### Flash the BIOS - -Yes, this sounds scary, but it's not as bad as it sounds, if you want better performance from your GPUs, you **have** to flash your GPU BIOS. - -#### Why flash BIOS? - -Here's my noob-level version of why: - -1. GPU-mining performance is all about the **memory speed** of your GPU - you get the best mining from the fastest internal timings. So you want to optimize your GPU to do really fast memory work, which is not how it's designed by default. - -2. The **processor** on your GPU sits almost idle, so you **lower** the power to the processor (_undervolt_) to save some power. - -3. As it turns out, the factory memory timings of the RX5xx series were particularly poor. - -As an aside, here's an illustration re why you'd **want** to flash your BIOS. Below is the mining throughput of 2 AMD RX580s I purchased together. Guess which one had its BIOS flashed? - -``` -ETH: GPU0 30.115 Mh/s, GPU1 22.176 Mh/s -``` - -Here's the power consumption of the two GPUs while doing the above test: - -GPU1 (original ROM) -``` -GFX Clocks and Power: - 1750 MHz (MCLK) - 1411 MHz (SCLK) - 144.107 W (VDDC) - 16.0 W (VDDCI) - 171.161 W (max GPU) - 172.209 W (average GPU) - -GPU Temperature: 67 C -GPU Load: 100 % -``` - -GPU0 (flashed ROM) -``` -GFX Clocks and Power: - 2050 MHz (MCLK) - 1150 MHz (SCLK) - 87.155 W (VDDC) - 16.0 W (VDDCI) - 117.152 W (max GPU) - 116.1 W (average GPU) - -GPU Temperature: 62 C -GPU Load: 100 % -``` - -So, by flashing the BIOS, I gained 8 MH/s (a 36% increase), while reducing power consumption by ~40W! - -#### How to flash AMD GPU BIOS? - -1. Get [atiflash for linux](https://bitcointalk.org/index.php?topic=1809527.0). - -2. Identify which card you want to flash, by running ```./atiflash -i``` - -Example output below: - -``` -[root@kvm ~]# ./atiflash -i - -adapter bn dn dID asic flash romsize test bios p/n -======= == == ==== =============== ============== ======= ==== ================ - 0 01 00 67DF Ellesmere M25P20/c 40000 pass 113-1E3660EU-O55 -[root@kvm ~]# -``` - -3. Save the original, factory ROM, by running ```./atiflash -s ``` - -Example below: -``` -[root@kvm ~]# ./atiflash -s 0 rx580-4gb-299-1E366-101SA.orig.rom -0x40000 bytes saved, checksum = 0x7FBF -``` - -Now find an appropriate ROM to flash onto the card, and run ```atiflash -p - -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes a range of RX580-compatible ROMs, some of which I've tweaked for my own GPUs. 
- - -Example below: -``` -[root@kvm ~]# ./atiflash -f -p 0 Insan1ty\ R9\ 390X\ BIOS\ v1.81/R9\ 290X/MEM\ MOD\ --\ ELPIDA/290X_ELPIDA_MOD_V1.8.rom -Old SSID: E285 -New SSID: 9395 -Old P/N: 113-E285FOC-U005 -New P/N: 113-GRENADA_XT_C671_D5_8GB_HY_W -Old DeviceID: 67B1 -New DeviceID: 67B0 -Old Product Name: C67111 Hawaii PRO OC GDDR5 4GB 64Mx32 300e/150m -New Product Name: C67130 Grenada XT A0 GDDR5 8GB 128Mx32 300e/150m -Old BIOS Version: 015.044.000.011.000000 -New BIOS Version: 015.049.000.000.000000 -Flash type: M25P10/c -Burst size is 256 -20000/20000h bytes programmed -20000/20000h bytes verified - -Restart System To Complete VBIOS Update. -[root@kvm ~]# -``` - -Reboot the system, [hold onto your butts](https://www.youtube.com/watch?v=o0YWRXJsMyM), and wait for your newly-flashed GPU to fire up. - -#### If it goes wrong - -The safest way to do this is to run more than one GPU, and to flash the GPUs one-at-a-time, rebooting after each. That way, even if you make your GPU totally unresponsive, you'll still get access to your system to flash it back to the factory ROM. - -That said, it's very unlikely that a flashed GPU won't let you boot at all though. In the (legion) cases where I overclocked my RX580 too far, I was able choose to boot into rescue mode in CentOS7 (bypassing the framebuffer / drm initialisation), and reflash my card back to its original BIOS. - -#### Mooar tweaking! 🔧 - -If you want to tweak the BIOS yourself, download the [Polaris bios editor](https://github.com/jaschaknack/PolarisBiosEditor) and tweak away! - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your AMD (_this page_) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -3. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -4. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -5. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -6. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -1. My two RX580 cards (_bought alongside each other_) perform slightly differently. GPU0 works with a 2050Mhz memory clock, but GPU1 only works at 2000Mhz. Anything over 2000Mhz causes system instability. YMMV. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/exchange.md b/manuscript/recipes/cryptominer/exchange.md deleted file mode 100644 index 4a5d4ca..0000000 --- a/manuscript/recipes/cryptominer/exchange.md +++ /dev/null @@ -1,55 +0,0 @@ -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# Exchange - -You may be mining a particular coin, and want to hold onto it, in the hopes of long-term growth. In that case, stick it in a [wallet](/recipes/cryptominer/wallet/) and be done with it. - -You may also not care too much about the coin (you're mining for money, right?), in which case you want to "cash out" your coins into something you can spend. 
- -In this case, you'll want to configure your mining pool to send your coin-of-choice to an **exchange**, so that you can turn it into a **different** coin, or extract it into FIAT (_oldschool, cave-man currency_). - -## Preparation - -### Get verified at exchanges - -Most exchanges (Binance is currently a notable exception) require some sort of verification of your ID before they'll let you trade, or withdraw coins as FIAT. -So, you may as well get yourself verified in anticipation (_it can take a while during periods of increased crypto-hype_). - -Here are (_referral_) links to exchanges I've used personally: - -* [Cryptopia](https://www.cryptopia.co.nz/Register?referrer=funkypenguin) : Trades obscure altcoins that other exchanges don't, and can withdraw to USD and NZD -* [Binance](https://www.binance.com/?ref=15312815) : Doesn't require verification for small-time traders/miners -* [Coinbase](https://www.coinbase.com/join/5a4d1ed0ee3de40195a695c8) : Beginner's exchange. Coins mined in Nicehash can be sent to coinbase with zero fees. - -### Send coins to Exchanges - -Now simply configure your mining pool (or your miner) to send your coins to your wallet's deposit address for each coin. Note that every coin has a unique wallet address. - -!!! warning - Don't try to send one coin (i.e., LTC) to a different coin's (i.e. BTC) wallet. You will **loose** your money and be **sad** 😞 - -## Withdraw coins to FIAT (cash) - -Once you have enough coins in your exchange wallet, you can "trade" them into the real-world currency of your choice. For example, if you mined 100 Ella to [Cryptopia](https://www.cryptopia.co.nz/Register?referrer=funkypenguin), you could trade it for NZDT or USDT, and withdraw it to your bank account. - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to exchanges (_This page_) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/minerhotel.md b/manuscript/recipes/cryptominer/minerhotel.md deleted file mode 100644 index a1b76b4..0000000 --- a/manuscript/recipes/cryptominer/minerhotel.md +++ /dev/null @@ -1,105 +0,0 @@ -# Minerhotel - -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -So, you have GPUs. You can mine cryptocurrency. But **what** cryptocurrency should you mine? - -1. You could manually keep track of [whattomine](http://whattomine.com/), and launch/stop miners based on profitability/convenience, as you see fit. -2. You can automate the process of mining the most profitable coin based on your GPUs' capabilities and the current market prices, and do better things with your free time! 
(_[receiving alerts](/recipes/crytominer/monitor/), of course, if anything stops working!_) - -This recipe covers option #2 😁 - -[Miner hotel](http://minerhotel.com/) is a collection of scripts and config files to fully automate your mining across AMD or Nvidia cards. - - -## Ingredients - -* [Latest Minerhotel release](http://minerhotel.com/download.html) for Linux -* Time and patience - -## Preparation - -### Unpack Minerhotel - -Unpack the minerhotel release. You can technically unpack it anywhere, but this guide, and all pre-configured miners, expect an installation at /opt/minerhotel. - -### Prepare miner.config - -Copy /opt/minerhotel/miner.config.example to /opt/minerhotel/miner.config, and start making changes. Here's a rundown of the variables: - -* **WALLET** : Set these WALLET variables to your wallet addresses for all the currencies you want to mine. Your miner will fail to start without the wallet variable, but it won't confirm it's a **valid** wallet. **Now, double-check to confirm the wallet is correct, and you're not just mining coins to /dev/null, or someone else's wallet!** You can either use your [exchange](/recipes/cryptominer/exchange/) wallet address or your own [wallet](/recipes/cryptominer/wallet/). -* **WORKER** : Set this to the name you'll use to define your miner in the various pools you mine. Some pools (_i.e. NiceHash_) auto-create workers based on whatever worked name you specify, whereas others (_Supernova.cc_) will refuse to authenticate you unless you've manually created the worker/password in their UI first. -* **SUPRUSER** : Set this to your supernova.cc login username (**not** your worker name) (_optional, only use this if you want to use supernova.cc_) -* **SUPRPASS** : Set this to the password you've configured within Supernova.cc for your **worker** as defined by the WORKER variable. Note that this require syou to use the **same** worker name and password across all your supernova.cc pools (_optional, only necessary if you want to use supernova.cc_) -* **MPHUSER** : Set this to your miningpoolhub login username (_optional, only necessary if you want to use [miningpoolhub.com](https://miningpoolhub.com/)_) -* **TBFUSER** : Set this to your theblocksfactory login username (_optional, only necessary if you want to use t[heblocksfactory.com](https://theblocksfactory.com/)_) -* **VERTPOOLUSER/VERTPOOLPASS** : Set these to your vertpool user/password (_optional, only necessary if you want to use [vertpool.org](http://vertpool.org/)_) - -### Install services - -1. Run ```/opt/minerhotel/scripts/install-services.sh``` to install the necessary services for systemd -2. Run ```/opt/minerhotel/scripts/fixmods.sh``` to correctly set the filesystem permissions for the various miner executables - -!!! note - fixmods.sh doesn't correctly set permissions on subdirectories, so until this is fixed, you also need to run ```chmod 755 /opt/minerhotel/bin/claymore/ethdcrminer64``` - -### Setup whattomine-linux - -For the whattomine bot to select the most profitable coin to mine for **your** GPUs, you'll need to feed your cookie from https://whattomine.com - -1. Start by installing [this](https://chrome.google.com/webstore/detail/cookie-inspector/jgbbilmfbammlbbhmmgaagdkbkepnijn) addon for Chrome, or [this](https://addons.mozilla.org/en-US/firefox/addon/firecookie/) addon for firefox -2. Then visit http://whattomine.com/ and tweak settings for you GPUs, power costs, etc. -3. 
Grab the cookie per the whattomine [README](http://git.minerhotel.com:3000/minerhotel/minerhotel/src/master/whattomine/README.md), and paste it (_about 2200 characters_) into /opt/minerhotel/whattomine/config.json -4. Ensure that only the coins/miners that you **want** are enabled in config.json - delete the others, or put a dash ("-") after the ones you want to disable. Set the service names as defined in /opt/minerhotel/services/ - -### Test miners - -Before trusting the whattomine service to automatically launch your miners, test each one first by starting them manually, and then checking their status. - -For example, to test the **miner-amd-eth-ethhash-ethermine** miner, run - -1. ```systemctl start miner-amd-eth-ethhash-ethermine.service``` to start the service -2. And then watch the output by running ```journalctl -u miner-amd-eth-ethhash-ethermine -f``` -3. When you're satisfied it's working correctly (_without errors and with a decent hashrate_), stop the miner again by running ```systemctl stop miner-amd-eth-ethhash-ethermine```, and move onto testing the next one. - -## Serving - -### Launch whattomine - -Finally, run ```systemctl start minerhotel-whattomine``` and then ```journalctl -u minerhotel-whattomine -f``` to watch the output. Within a minute, you should see whattomime launching the most profitable miner, as illustrated below: - -``` -Jan 29 13:49:38 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:49:38+1300 whattomine.js Loading whattominebot -Jan 29 13:49:38 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:49:38+1300 whattomine.js Starting whattominebot now. -Jan 29 13:50:45 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:50:45+1300 whattomine.js Mining Ethereum|ETH|Ethash|0.0089|0.00093|100 -Jan 29 13:50:45 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:50:45+1300 whattomine.js Could not find a miner for Ubiq. -Jan 29 13:51:39 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:51:39+1300 whattomine.js Mining Ethereum|ETH|Ethash|0.0089|0.00094|100 -Jan 29 13:51:39 kvm.funkypenguin.co.nz whattomine-linux[2057]: 2018-01-29T13:51:39+1300 whattomine.js Could not find a miner for Ubiq. -``` - -!!! note - The messages about "Could not find miner" can be ignored, they indicate that one of the preferred coins on whattomine does not have a miner defined. - -To make whattomine start automatically in future, run ```systemctl enable minerhotel-whattomine``` - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with Miner Hotel 🏨 (_This page_) -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? 
diff --git a/manuscript/recipes/cryptominer/mining-pool.md b/manuscript/recipes/cryptominer/mining-pool.md deleted file mode 100644 index 219b97b..0000000 --- a/manuscript/recipes/cryptominer/mining-pool.md +++ /dev/null @@ -1,58 +0,0 @@ -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# Mining Pools - -You and your puny GPUs don't have a snowball's chance of mining a block on your own. Your only option is to join a mass mining conglomerate (_a mining pool_), throw in your share of the effort for a share of the reward. - -## Preparation - -### Setup accounts at mining pools - -This'll save you some frustration later... Next time you're watching a movie or doing something mindless, visit http://whattomine.com/, and take note of the 10-15 most profitable coins for your GPU type(s). - -On your [exchanges](/recipes/cryptominer/exchange/), identify the "_deposit address_" for each popular coin, and note them down for the next step. - -!!! note - If you're wanting to mine directly to a wallet for long-term holding, then substitute your wallet public address for this deposit address. - -Now work your way through the following list of pools, creating an account on each one you want as you go. In the case of each pool/coin, setup your "payout address" to match your change address for the coin (above). - -* [Mining Pool Hub](https://miningpoolhub.com/) (Lots of coins) -* [NiceHash](https://nicehash.com) (Ethereum, Decred) -* [suprnova](https://suprnova.cc/) - Lots of coins, but, you generally need a separate login for each pool. You _also_ need to create a worker in each pool with a common username and password, for [Minerhotel](/recipes/crytominer/minerhotel/). -* [nanopool](https://nanopool.org/) (Ethereum, Ethereum Classic, SiaCoin, ZCash, Monero, Pascal and Electroneum) -* [slushpool](https://slushpool.com/home/) (BTC and ZCash) - - -## Serving - -### Avoid fees - -As noted by IronicBadger [here](https://www.linuxserver.io/2018/01/20/how-to-build-a-cryptocurrency-mining-rig/), the name of the game is avoiding fees where possible. Here are a few tips: - -* [Mining Pool Hub](https://miningpoolhub.com/) is a popular pool for multiple coins, and it allows you (_for a fee_) to auto-exchange the coins you mine for coins that you actually _want_. -* [NiceHash](https://nicehash.com) will allow you to send your earned bitcoin (_whatever you mine, they pay you in bitcoin_) to coinbase for free. - - - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to exchanges (_This page_) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? 
diff --git a/manuscript/recipes/cryptominer/mining-rig.md b/manuscript/recipes/cryptominer/mining-rig.md deleted file mode 100644 index 03805f7..0000000 --- a/manuscript/recipes/cryptominer/mining-rig.md +++ /dev/null @@ -1,55 +0,0 @@ -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# Mining Rig - -## Hardware - -You can surely [find](https://www.reddit.com/r/gpumining/) a better tutorial on how to build a mining rig than this one. However, to summarise what I've learned: - -1. You want a beefy power supply, with lots of PCI-e 8pin and 6pin cables. -2. You need 1 x PCI express (_PCI-e_) port per GPU -3. You don't need powerful CPU or much RAM - the GPUs do all the mining work. My current guts (_minus the PSU_) are 5 years old. - -## Do I need a open-air rig? - -Initially, no. You can use any old PC chassis. But as soon as you want more than one GPU, you're going to start to run into cooling problems. - -You don't need anything fancy. Here's a photo of the rig my wife built me: - -![My mining rig, naked](../../images/mining_rig_naked.jpg) - -I recommend this design (_with the board with little holes in it_) - it takes up more space, but I have more room to place extra components (_PSUs, hard drives, etc_), as illustrated below: - -!!! note - You'll note the hard drives in the picture - that's not part of the mining requirements, it's because my rig doubles as my [Plex](/recipes/plex/) server ;) - -![My mining rig, populated](../../images/mining_rig_populated.jpg) - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your mining rig 💻 (This page) -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - - -## Chef's Notes - -1. Pro-tip : You're going to spend some time overclocking. Which is going to make your mining host unstable. - -Yes. It's the ultimate _#firstworldproblem_, but if you have a means to remotely reboot your host, use it! You can thank me later. - -(_I hooked up a remote-controlled outlet to my rig, so that I can power-cycle it without having to crawl under the desk!_) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/monitor.md b/manuscript/recipes/cryptominer/monitor.md deleted file mode 100644 index e474cd6..0000000 --- a/manuscript/recipes/cryptominer/monitor.md +++ /dev/null @@ -1,93 +0,0 @@ -# Monitor - -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -So, you're a miner! But if you're not **actively** mining, are you still a miner? 
This page details how to **measure** your mining activity, and how to raise an alert when a profit-affecting issue affects your miners. - -## Ingredients - -1. [InfluxDB+Grafana](https://www.funkypenguin.co.nz/note/adding-custom-data-to-influxdb-and-grafana/) instance, for visualising data -2. [Icinga](https://www.icinga.com/), [Nagios](https://www.nagios.org/) etc for alarming on GPU/miner status -3. [Asi MPM](https://www.asimpm.com/) (iOS) for monitoring your miner/pool status -4. [Altpocket](https://altpocket.io/?ref=ilVqdeWbAv), [CoinTracking](https://cointracking.info?ref=F560640), etc for managing your crypto-asset portfolio (_referral links_) - -## Preparation - -### Visualising performance - -![Visualise mining performance](../../images/cryptominer_grafana.png) - -Since [Minerhotel](/recipes/crytominer/minerhotel/) switches currency based on what's most profitable in the moment, it's hard to gauge the impact of changes (overclocking, tweaking, mining pools) over time. - -I hacked up a bash script which grabs performance data from the output of the miners, and throws it into an InfluxDB database, which can then be visualized using Grafana. - -Here's an early version of the script (_it's since been updated for clockspeed and power usage too_): - - - -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes up-to-date versions of the InfluxDB /Grafana script mentioned above, as well as pre-setup Grafana graphs, so that patrons can simply "_git pull_" and start monitoring - - -### Alarming on failure - -![Visualise mining performance](../../images/cryptominer_alarm.png) - -GPU mining can fail in subtle ways. On occasion, I've tweaked my GPUs to the point that the miner will start, but one or all GPUs will report a zero hash rate. I wanted to be alerted to such profit-affecting issues, so I wrote a bash script (_intended to be executed by NRPE from Icinga, Nagios, etc_). - -The script tests the output of the currently active miner, and ensures the GPUs have a valid hashrate. - - - -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes up-to-date versions of the Icinga scripts mentioned above, so that patrons can simply "_git pull_" and start monitoring - -### Monitoring pool/miner status - -I've tried several iOS apps for monitoring my performance across various. The most useful app I've found thus far is [Asi MPM](https://www.asimpm.com/). It requires a little upfront effort to configure for all your coins/pools, but thereafter it's a handy way to keep tabs on your new obsession! - -### Track your portfolio - -Now that you've got your coins happily cha-chinging into you [wallets](/recipes/cryptominer/wallet/) (_and potentially various [exchanges](/recipes/cryptominer/exchange/)_), you'll want to monitor the performance of your portfolio over time. - -#### Web Apps - -There's a detailed breakdown of porfolio-management apps [here](https://www.cryptostache.com/2017/11/10/keeping-track-cryptocurrency-portfolio-best-apps-2017/). - -Personally, I use: - -* [Altpocket](https://altpocket.io/?ref=ilVqdeWbAv) (A free web app which can auto-sync with certain exchanges and wallets) -* [CoinTracking](https://cointracking.info?ref=F560640) - The top crypto-portfolio manager, by far. But it's expensive when you get to > 200 trades. 
You get what you pay for ;) - -#### Mobile Apps - -I've found the following iOS apps to be useful in tracking my portfolio (_really more for investing than mining though, since portfolio tracking requires a manual entry for each trade_) - -* [Delta](https://itunes.apple.com/us/app/delta-crypto-ico-portfolio/id1288676542?mt=8) (iOS) - Track your portfolio (losses/gains) and alert you to changes in the coins you watch -* [Bitscreener](https://itunes.apple.com/app/apple-store/id1240849311?mt=8) )(iOS) - Track multiple currencies on a watchlist, and quickly view news/discussion per coin - -!!! note - Some of the links above are referral links. I get some goodies when you use them. - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. Monitor your empire :heartbeat: (_this page_) -7. [Profit](/recipes/cryptominer/profit/)! 💰 - -## Chef's Notes - -1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (_Apparently it **may** be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners_) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/nvidia-gpu.md b/manuscript/recipes/cryptominer/nvidia-gpu.md deleted file mode 100644 index d9af40e..0000000 --- a/manuscript/recipes/cryptominer/nvidia-gpu.md +++ /dev/null @@ -1,164 +0,0 @@ -# NVidia GPU - -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -## Ingredients - -1. [Nvidia drivers](http://www.nvidia.com/Download/driverResults.aspx/104284/en-us) for your GPU -2. Some form of X11 GUI preconfigured on your linux host (yes, it's a PITA, but it's necessary for overclocking) - -## Preparation - -### Install kernel-devel and gcc - -The nVidia drivers will need the kernel development packages for your OS installed, as well as gcc. Run the following (for CentOS - there will be an Ubuntu equivalent): - -```yum install kernel-devel-$(uname -r) gcc``` - -### Remove nouveau - -Your host probably already includes nouveau, free/libre drivers for Nvidia graphics card. 
These won't cut it for mining, so blacklist them to avoid conflict with the dirty, proprietary Nvidia drivers: - -``` -echo 'blacklist nouveau' >> /etc/modprobe.d/blacklist.conf -dracut /boot/initramfs-$(uname -r).img $(uname -r) --force -systemctl disable gdm -reboot -``` - -### Install Nvidia drivers - -Download and uncompress the [Nvidia drivers](http://www.nvidia.com/Download/driverResults.aspx/104284/en-us), and execute the installation as root, with a command something like this: - -```bash NVIDIA-Linux-x86_64-352.30.run``` - -Update your X11 config by running: - -``` -nvidia-xconfig -``` - -### Enable GUID - -``` -systemctl enable gdm -ln -s '/usr/lib/systemd/system/gdm.service' '/etc/systemd/system/display-manager.service' -reboot -``` - -## Overclock - -### Preparation - -!!! warning - Like overclocking itself, this process is still a work in progress. YMMV. - -Of course, you want to squeeze the optimal performance out of your GPU. This is where the X11 environment is required - to adjust GPU clock/memory settings, you need to use the ```nvidia-settings``` command, which (_stupidly_) **requires** an X11 display, even if you're just using the command line. - -The following command: configures X11 for a "fake" screen so that X11 will run, even on a headless machine managed by SSH only, and ensures that the PCI bus ID of every NVidia device is added to the xorg.conf file (to avoid errors about "_(EE) no screens found(EE)_") - -``` -nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0" --enable-all-gpus --separate-x-screens -``` - -!!! note - The script below was taken from https://github.com/Cyclenerd/ethereum_nvidia_miner - -Make a directory for your overclocking script. Mine happens to be /root/overclock/, but use whatever you like. - -Create settings.conf as follows: - -``` -# Known to work with Nvidia 1080ti, but probably not optimal. It's an eternal work-in-progress. -MY_WATT="200" -MY_CLOCK="100" -MY_MEM="400" -MY_FAN="60" -``` - -Then create nvidia-overclock.sh as follows: - -``` -#!/usr/bin/env bash - -# -# nvidia-overclock.sh -# Author: Nils Knieling - https://github.com/Cyclenerd/ethereum_nvidia_miner -# -# Overclocking with nvidia-settings -# - -# Load global settings settings.conf -if ! source ~/overclock/settings.conf; then - echo "FAILURE: Can not load global settings 'settings.conf'" - exit 9 -fi - -export DISPLAY=:0 - -# Graphics card 1 to 6 -for MY_DEVICE in {0..5} -do - # Check if card exists - if nvidia-smi -i $MY_DEVICE >> /dev/null 2>&1; then - nvidia-settings -a "[gpu:$MY_DEVICE]/GPUPowerMizerMode=1" - # Fan speed - nvidia-settings -a "[gpu:$MY_DEVICE]/GPUFanControlState=1" - nvidia-settings -a "[fan:$MY_DEVICE]/GPUTargetFanSpeed=$MY_FAN" - # Graphics clock - nvidia-settings -a "[gpu:$MY_DEVICE]/GPUGraphicsClockOffset[3]=$MY_CLOCK" - # Memory clock - nvidia-settings -a "[gpu:$MY_DEVICE]/GPUMemoryTransferRateOffset[3]=$MY_MEM" - # Set watt/powerlimit. This is also set in miner.sh at autostart. - sudo nvidia-smi -i "$MY_DEVICE" -pl "$MY_WATT" - fi -done - -echo -echo "Done" -echo -``` - -### Start your engine! - -**Once** you've got X11 running correctly, execute ,/nvidia-overclock.sh, and you should see something like the following: - -``` -[root@kvm overclock]# ./nvidia-overclock.sh - Attribute 'GPUPowerMizerMode' (kvm.funkypenguin.co.nz:0[gpu:0]) assigned value 1. - Attribute 'GPUFanControlState' (kvm.funkypenguin.co.nz:0[gpu:0]) assigned value 1. 
- Attribute 'GPUTargetFanSpeed' (kvm.funkypenguin.co.nz:0[fan:0]) assigned value 60. - Attribute 'GPUGraphicsClockOffset' (kvm.funkypenguin.co.nz:0[gpu:0]) assigned value 100. - Attribute 'GPUMemoryTransferRateOffset' (kvm.funkypenguin.co.nz:0[gpu:0]) assigned value 400. - -Power limit for GPU 00000000:04:00.0 was set to 150.00 W from 150.00 W. -All done. - -Done - -[root@kvm overclock]# -``` - -Play with changing your settings.conf file until you break it, and then go back one revision :) - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or Nvidia (_this page_) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/profit.md b/manuscript/recipes/cryptominer/profit.md deleted file mode 100644 index 32f1d6f..0000000 --- a/manuscript/recipes/cryptominer/profit.md +++ /dev/null @@ -1,28 +0,0 @@ -# Profit! - -Well, that's it really. You're a cryptominer. Welcome to the party. - -## Your adventure has only just begun! - -To recap, you did all this: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. Profit! (_This page_) 💰 - - -## What next? - -Get in touch and share your experience - there's a special [discord](https://discord.gg/Y9aUhrj) channel if you're the IM type, else post a comment/thread at the [kitchen](http://discourse.geek-kitchen.funkypenguin.co.nz/) :) - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptominer/wallet.md b/manuscript/recipes/cryptominer/wallet.md deleted file mode 100644 index f07857f..0000000 --- a/manuscript/recipes/cryptominer/wallet.md +++ /dev/null @@ -1,41 +0,0 @@ -!!! warning - This is not a complete recipe - it's a component of the [cryptominer](/recipes/cryptominer/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# Wallet - -You may be mining a particular coin, and want to hold onto it, in the hopes of long-term growth. The safest place to stick that coin, therefore, is an a wallet. 
- -## Preparation - -### Get your wallets - -Your favorite coin probably has a link to various desktop wallets on their website. All you have to do is download the wallet to your desktop, fire it up, and then **backup your public/private key somewhere safe**. - -### Wallets I use - -I mine most of my coins to Exchanges, but I do have the following wallets: - -* [Jaxx](https://itunes.apple.com/nz/app/jaxx-blockchain-wallet/id1084514516?mt=8) on my iPhone for popular coins (BTC, ETH, etc) -* Eleos wallet for [ZClassic](https://zclassic.org/) - - -## Continue your adventure - -Now, continue to the next stage of your grand mining adventure: - -1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻 -2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨 -3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer: -4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨 -5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or wallets (_This page_) 💹 -6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat: -7. [Profit](/recipes/cryptominer/profit/)! 💰 - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? diff --git a/manuscript/recipes/cryptonote-mining-pool.md b/manuscript/recipes/cryptonote-mining-pool.md index 135f6e7..71a91cc 100644 --- a/manuscript/recipes/cryptonote-mining-pool.md +++ b/manuscript/recipes/cryptonote-mining-pool.md @@ -1,16 +1,16 @@ # CryptoNote Mining Pool -[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners. +[Cryptocurrency miners](https://geek-cookbook.funkypenguin.co.nz/)recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners. [CryptoNote](https://cryptonote.org/) is an open-source toolset designed to facilitate the creation of new privacy-focused [cryptocurrencies](https://cryptonote.org/coins) (_CryptoNote = 'Kryptonite'. In a pool. Get it?_) -![CryptoNote Mining Pool Screenshot](/images/cryptonote-mining-pool.png) +![CryptoNote Mining Pool Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/cryptonote-mining-pool.png) The fact that all these currencies share a common ancestry means that a common mining pool platform can be used for miners. 
The following recipes all use variations of [Dvandal's cryptonote-nodejs-pool ](https://github.com/dvandal/cryptonote-nodejs-pool) ## Mining Pool Recipies -* [TurtleCoin](/recipes/turtle-pool/), the no-BS, fun baby cryptocurrency -* [Athena](/recipes/cryptonote-mining-pool/athena/), TurtleCoin's newborn baby sister +* [TurtleCoin](https://geek-cookbook.funkypenguin.co.nz/)recipes/turtle-pool/), the no-BS, fun baby cryptocurrency +* [Athena](https://geek-cookbook.funkypenguin.co.nz/)recipes/cryptonote-mining-pool/athena/), TurtleCoin's newborn baby sister diff --git a/manuscript/recipes/duplicity.md b/manuscript/recipes/duplicity.md index 6614f84..b01aa97 100644 --- a/manuscript/recipes/duplicity.md +++ b/manuscript/recipes/duplicity.md @@ -28,7 +28,7 @@ So what does this mean for our stack? It means we can leverage Duplicity to back ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) 2. Credentials for one of the Duplicity's supported upload destinations ## Preparation @@ -68,7 +68,7 @@ PASSPHRASE= ``` !!! note - See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above. + See the [data layout reference](https://geek-cookbook.funkypenguin.co.nz/)reference/data_layout/) for an explanation of the included/excluded paths above. ### Run a test backup @@ -90,7 +90,7 @@ Repeat after me: "If you don't verify your backup, **it's not a backup**". !!! warning Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data. -Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipie](/recipie/traefik/), since this is likely to exist for every reader_). +Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipie](https://geek-cookbook.funkypenguin.co.nz/)recipie/traefik/), since this is likely to exist for every reader_). ``` docker run --env-file duplicity.env -it --rm \ @@ -148,7 +148,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -163,4 +163,4 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic ## Chef's Notes 📓 1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs. -2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. 
To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env +2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env \ No newline at end of file diff --git a/manuscript/recipes/elkarbackup.md b/manuscript/recipes/elkarbackup.md index ccd40f1..80849e7 100644 --- a/manuscript/recipes/elkarbackup.md +++ b/manuscript/recipes/elkarbackup.md @@ -11,7 +11,7 @@ Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Ba [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) -ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you. +ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity/) would produce for you. ![ElkarBackup Screenshot](../images/elkarbackup.png) @@ -19,8 +19,8 @@ ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -67,20 +67,20 @@ Create ```/var/data/config/elkarbackup/elkarbackup-db-backup.env```, and populat No, me either :shrug: -```` +``` # For database backup (keep 7 days daily backups) MYSQL_PWD= MYSQL_USER=root BACKUP_NUM_KEEP=7 BACKUP_FREQUENCY=1d -```` +``` ### Setup Docker Swarm Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -159,7 +159,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -171,11 +171,11 @@ Launch the ElkarBackup stack by running ```docker stack deploy elkarbackup -c
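Once the stack is launched, it's worth confirming that it actually converged before moving on. A minimal verification sketch (the stack name ```elkarbackup``` comes from the deploy command above; the ```app``` and ```db``` service names are assumptions based on the usual layout of these recipes, so adjust them to match your own compose file):

```
# List the stack's services and their replica counts (expect 1/1 for each)
docker stack services elkarbackup

# If a service is stuck at 0/1, show its tasks and the full error message
docker stack ps elkarbackup --no-trunc

# Follow the application logs while ElkarBackup initialises (service name assumed)
docker service logs -f elkarbackup_app
```

If a replica count stays at 0/1, the ```docker stack ps``` output usually points at the culprit - commonly a missing bind-mount path on the node, or a typo in the attached networks.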

Your LDAP name > Mappers: -![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-3.png) +![KeyCloak Add Realm Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/sso-stack-keycloak-3.png) For each of the following mappers, click the name, and set the "_Read Only_" flag to "_Off_" (_this enables 2-way sync between KeyCloak and OpenLDAP_) @@ -53,16 +53,16 @@ For each of the following mappers, click the name, and set the "_Read Only_" fla * email * first name -![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-4.png) +![KeyCloak Add Realm Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/sso-stack-keycloak-4.png) ## Summary -We've setup a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). +We've setup a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/)recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/). !!! Summary Created: - * [X] KeyCloak realm in read-write federation with [OpenLDAP](/recipes/openldap/) directory + * [X] KeyCloak realm in read-write federation with [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/)recipes/openldap/) directory ## Chef's Notes 📓 \ No newline at end of file diff --git a/manuscript/recipes/keycloak/create-user.md b/manuscript/recipes/keycloak/create-user.md index 603107d..12190b4 100644 --- a/manuscript/recipes/keycloak/create-user.md +++ b/manuscript/recipes/keycloak/create-user.md @@ -1,38 +1,38 @@ # Create KeyCloak Users !!! warning - This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. + This is not a complete recipe - it's an optional component of the [Keycloak recipe](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/), but has been split into its own page to reduce complexity. -Unless you plan to authenticate against an outside provider (*[OpenLDAP](/recipes/keycloak/openldap/), below, for example*), you'll want to create some local users.. +Unless you plan to authenticate against an outside provider (*[OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/openldap/), below, for example*), you'll want to create some local users.. ## Ingredients !!! 
Summary Existing: - * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully + * [X] [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) recipe deployed successfully ### Create User Within the "Master" realm (*no need for more realms yet*), navigate to **Manage** -> **Users**, and then click **Add User** at the top right: -![Navigating to the add user interface in Keycloak](/images/keycloak-add-user-1.png) +![Navigating to the add user interface in Keycloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-user-1.png) Populate your new user's username (it's the only mandatory field) -![Populating a username in the add user interface in Keycloak](/images/keycloak-add-user-2.png) +![Populating a username in the add user interface in Keycloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-user-2.png) ### Set User Credentials Once your user is created, to set their password, click on the "**Credentials**" tab, and procede to reset it. Set the password to non-temporary, unless you like extra work! -![Resetting a user's password in Keycloak](/images/keycloak-add-user-3.png) +![Resetting a user's password in Keycloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-user-3.png) ## Summary -We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). +We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/). !!! Summary Created: - * [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/) + * [X] Username / password to authenticate against [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) diff --git a/manuscript/recipes/keycloak/setup-oidc-provider.md b/manuscript/recipes/keycloak/setup-oidc-provider.md index 188107e..3422370 100644 --- a/manuscript/recipes/keycloak/setup-oidc-provider.md +++ b/manuscript/recipes/keycloak/setup-oidc-provider.md @@ -1,20 +1,20 @@ # Add OIDC Provider to KeyCloak !!! warning - This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. + This is not a complete recipe - it's an optional component of the [Keycloak recipe](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/), but has been split into its own page to reduce complexity. -Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/recipe/traefik-forward-auth/), we'll setup a client in KeyCloak... +Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)recipe/traefik-forward-auth/), we'll setup a client in KeyCloak... ## Ingredients !!! 
Summary Existing: - * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully + * [X] [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) recipe deployed successfully New: - * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/recipe/traefik-forward-auth/) recipe for more information + * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)recipe/traefik-forward-auth/) recipe for more information ## Preparation @@ -22,11 +22,11 @@ Having an authentication provider is not much use until you start authenticating Within the "Master" realm (*no need for more realms yet*), navigate to **Clients**, and then click **Create** at the top right: -![Navigating to the add user interface in Keycloak](/images/keycloak-add-client-1.png) +![Navigating to the add user interface in Keycloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-client-1.png) Enter a name for your client (*remember, we're authenticating **applications** now, not users, so use an application-specific name*): -![Adding a client in KeyCloak](/images/keycloak-add-client-2.png) +![Adding a client in KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-client-2.png) ### Configure Client @@ -35,17 +35,17 @@ Once your client is created, set at **least** the following, and click **Save** * **Access Type** : Confidential * **Valid Redirect URIs** : -![Set KeyCloak client to confidential access type, add redirect URIs](/images/keycloak-add-client-3.png) +![Set KeyCloak client to confidential access type, add redirect URIs](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-client-3.png) ### Retrieve Client Secret Now that you've changed the access type, and clicked **Save**, an additional **Credentials** tab appears at the top of the window. Click on the tab, and capture the KeyCloak-generated secret. This secret, plus your client name, is required to authenticate against KeyCloak via OIDC. -![Capture client secret from KeyCloak](/images/keycloak-add-client-4.png) +![Capture client secret from KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)images/keycloak-add-client-4.png) ## Summary -We've setup an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm, is *https:///realms/master/.well-known/openid-configuration* +We've setup an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm, is *https:///realms/master/.well-known/openid-configuration* !!! Summary Created: diff --git a/manuscript/recipes/kubernetes/kanboard.md b/manuscript/recipes/kubernetes/kanboard.md index 76d24f7..7993da1 100644 --- a/manuscript/recipes/kubernetes/kanboard.md +++ b/manuscript/recipes/kubernetes/kanboard.md @@ -1,11 +1,11 @@ #Kanboard -Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_) +Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). 
(_Who also happens to be the developer of my favorite RSS reader, [Miniflux](https://geek-cookbook.funkypenguin.co.nz/)recipes/miniflux/)_) -![Kanboard Screenshot](/images/kanboard.png) +![Kanboard Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/kanboard.png) !!! tip "Sponsored Project" - Kanboard is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓 + Kanboard is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓 Features include: @@ -22,14 +22,14 @@ Features include: ## Ingredients -1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/) -2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress +1. A [Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) including [Traefik Ingress](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) +2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/), fronting your Traefik ingress ## Preparation ### Prepare traefik for namespace -When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below: +When you deployed [Traefik via the helm chart](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below: ``` @@ -90,7 +90,7 @@ kubectl create -f /var/data/config/kanboard/kanboard-volumeclaim.yaml ``` !!! question "What's that annotation about?" - The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. + The annotation is used by [k8s-snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. ### Create ConfigMap @@ -117,7 +117,7 @@ Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/work Create a deployment to tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Note below that we mount the persistent volume **twice**, to both ```/var/www/app/data``` and ```/var/www/app/plugins```, using the subPath value to differentiate them. This trick avoids us having to provision **two** persistent volumes just for data mounted in 2 separate locations. !!! 
tip
-    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
+    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml```

```
cat < /var/data/kanboard/deployment.yml
@@ -258,14 +258,8 @@ kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata

### Troubleshooting

-To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
+To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard -f```. For further troubleshooting hints, see [Troubleshooting](https://geek-cookbook.funkypenguin.co.nz/)reference/kubernetes/troubleshooting/).

## Chef's Notes

-1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an an additional database pod and service. Contact me if you'd like further details ;)
-
-### Tip your waiter (support me) 👏
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
-
-### Your comments? 💬
+1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an additional database pod and service. Contact me if you'd like further details ;)
\ No newline at end of file
diff --git a/manuscript/recipes/kubernetes/kubernetes-dashboard.md b/manuscript/recipes/kubernetes/kubernetes-dashboard.md
new file mode 100644
index 0000000..4cfc045
--- /dev/null
+++ b/manuscript/recipes/kubernetes/kubernetes-dashboard.md
@@ -0,0 +1,35 @@
+# Kubernetes Dashboard
+
+Yes, Kubernetes is complicated. There are lots of moving parts, and debugging _what's_ gone wrong and _why_, can be challenging.
+
+Fortunately, to assist in day-to-day operation of our cluster, and in the occasional "how-did-that-ever-work" troubleshooting, we have available to us, the mighty **[Kubernetes Dashboard](https://github.com/kubernetes/dashboard)**:
+
+![Kubernetes Dashboard Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/kubernetes-dashboard.png)
+
+Using the dashboard, you can:
+
+* Visualise cluster load and pod distribution
+* Examine Kubernetes objects, such as Deployments, Daemonsets, ConfigMaps, etc
+* View logs
+* Deploy new YAML manifests
+* Lots more!
+
+## Ingredients
+
+1. A [Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/), with
+2. OIDC-enabled authentication
+3. An Ingress Controller ([Traefik Ingress](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) or [NGinx Ingress](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/nginx-ingress/))
+4. A DNS name for your dashboard instance (*dashboard.example.com*, below) pointing to your [load balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/), fronting your ingress controller
+5. A [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/) instance for authentication
+
+## Preparation
+
+
+### Access the Dashboard
+
+At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://dashboard.example.com*)
+
+
+## Chef's Notes
+
+1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an additional database pod and service. Contact me if you'd like further details ;)
\ No newline at end of file
diff --git a/manuscript/recipes/kubernetes/miniflux.md b/manuscript/recipes/kubernetes/miniflux.md
index 38edc31..022829b 100644
--- a/manuscript/recipes/kubernetes/miniflux.md
+++ b/manuscript/recipes/kubernetes/miniflux.md
@@ -1,11 +1,11 @@
#Miniflux

-Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
+Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](https://geek-cookbook.funkypenguin.co.nz/)recipes/kanboard/)_)

-![Miniflux Screenshot](/images/miniflux.png)
+![Miniflux Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/miniflux.png)

!!! tip "Sponsored Project"
-    Miniflux is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to!
+    Miniflux is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to!

I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:

@@ -20,14 +20,14 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev

## Ingredients

-1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
-2. A DNS name for your miniflux instance (*miniflux.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
+1. A [Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) including [Traefik Ingress](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/)
+2. A DNS name for your miniflux instance (*miniflux.example.com*, below) pointing to your [load balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/), fronting your Traefik ingress

## Preparation

### Prepare traefik for namespace

-When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access.
Update ```values.yml``` to include the *miniflux* namespace, as illustrated below: ``` @@ -88,7 +88,7 @@ kubectl create -f /var/data/config/miniflux/db-persistent-volumeclaim.yaml ``` !!! question "What's that annotation about?" - The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. + The annotation is used by [k8s-snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. ### Create secrets @@ -118,7 +118,7 @@ Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/work Deployments tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Create the db deployment by excecuting the following. Note that the deployment refers to the secrets created above. !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` ``` cat < /var/data/miniflux/db-deployment.yml @@ -317,12 +317,4 @@ At this point, you should be able to access your instance on your chosen DNS nam ### Troubleshooting -To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/). - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux -f```. For further troubleshooting hints, see [Troubleshooting](https://geek-cookbook.funkypenguin.co.nz/)reference/kubernetes/troubleshooting/). \ No newline at end of file diff --git a/manuscript/recipes/kubernetes/nextcloud.md b/manuscript/recipes/kubernetes/nextcloud.md index 5424206..fa5ebec 100644 --- a/manuscript/recipes/kubernetes/nextcloud.md +++ b/manuscript/recipes/kubernetes/nextcloud.md @@ -1,9 +1,9 @@ hero: Not all heroes wear capes !!! danger "This recipe is a work in progress" - This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. 
"_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` - So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 + So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues # NAME @@ -15,7 +15,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/digital-ocean/) ## Preparation @@ -57,7 +57,7 @@ MAIL_FROM="Wekan " Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -110,7 +110,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -124,10 +124,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa ## Chef's Notes -1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. \ No newline at end of file diff --git a/manuscript/recipes/kubernetes/phpipam.md b/manuscript/recipes/kubernetes/phpipam.md index cb6af62..fc88827 100644 --- a/manuscript/recipes/kubernetes/phpipam.md +++ b/manuscript/recipes/kubernetes/phpipam.md @@ -8,7 +8,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/digital-ocean/) ## Preparation @@ -103,7 +103,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. 
See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -117,10 +117,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa ## Chef's Notes -1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. \ No newline at end of file diff --git a/manuscript/recipes/kubernetes/privatebin.md b/manuscript/recipes/kubernetes/privatebin.md index 5424206..fa5ebec 100644 --- a/manuscript/recipes/kubernetes/privatebin.md +++ b/manuscript/recipes/kubernetes/privatebin.md @@ -1,9 +1,9 @@ hero: Not all heroes wear capes !!! danger "This recipe is a work in progress" - This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` - So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 + So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues # NAME @@ -15,7 +15,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/digital-ocean/) ## Preparation @@ -57,7 +57,7 @@ MAIL_FROM="Wekan " Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -110,7 +110,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -124,10 +124,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa ## Chef's Notes -1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container. \ No newline at end of file diff --git a/manuscript/recipes/kubernetes/template-k8s.md b/manuscript/recipes/kubernetes/template-k8s.md index 76d24f7..7993da1 100644 --- a/manuscript/recipes/kubernetes/template-k8s.md +++ b/manuscript/recipes/kubernetes/template-k8s.md @@ -1,11 +1,11 @@ #Kanboard -Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_) +Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](https://geek-cookbook.funkypenguin.co.nz/)recipes/miniflux/)_) -![Kanboard Screenshot](/images/kanboard.png) +![Kanboard Screenshot](https://geek-cookbook.funkypenguin.co.nz/)images/kanboard.png) !!! tip "Sponsored Project" - Kanboard is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓 + Kanboard is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓 Features include: @@ -22,14 +22,14 @@ Features include: ## Ingredients -1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/) -2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress +1. A [Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/design/) including [Traefik Ingress](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/) +2. 
A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/loadbalancer/), fronting your Traefik ingress ## Preparation ### Prepare traefik for namespace -When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below: +When you deployed [Traefik via the helm chart](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below: ``` @@ -90,7 +90,7 @@ kubectl create -f /var/data/config/kanboard/kanboard-volumeclaim.yaml ``` !!! question "What's that annotation about?" - The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. + The annotation is used by [k8s-snapshots](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days. ### Create ConfigMap @@ -117,7 +117,7 @@ Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/work Create a deployment to tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Note below that we mount the persistent volume **twice**, to both ```/var/www/app/data``` and ```/var/www/app/plugins```, using the subPath value to differentiate them. This trick avoids us having to provision **two** persistent volumes just for data mounted in 2 separate locations. !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` ``` cat < /var/data/kanboard/deployment.yml @@ -258,14 +258,8 @@ kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata ### Troubleshooting -To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/). +To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard -f```. For further troubleshooting hints, see [Troubleshooting](https://geek-cookbook.funkypenguin.co.nz/)reference/kubernetes/troubleshooting/). ## Chef's Notes -1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an an additional database pod and service. Contact me if you'd like further details ;) - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? 
Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an an additional database pod and service. Contact me if you'd like further details ;) \ No newline at end of file diff --git a/manuscript/recipes/mail.md b/manuscript/recipes/mail.md index 07b0eb8..625b4bc 100644 --- a/manuscript/recipes/mail.md +++ b/manuscript/recipes/mail.md @@ -1,4 +1,4 @@ -hero: Docker-mailserver - A recipe for a self-contained mailserver and friends ✉️ +hero: Docker-mailserver - A recipe for a self-contained mailserver and friends # Mail Server @@ -14,8 +14,8 @@ docker-mailserver doesn't include a webmail client, and one is not strictly need ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. LetsEncrypt authorized email address for domain 4. Access to manage DNS records for domains diff --git a/manuscript/recipes/mattermost.md b/manuscript/recipes/mattermost.md index 2ccd766..0ae034d 100644 --- a/manuscript/recipes/mattermost.md +++ b/manuscript/recipes/mattermost.md @@ -8,8 +8,8 @@ Details ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -49,7 +49,7 @@ BACKUP_FREQUENCY=1d Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -104,7 +104,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. 
This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipes/miniflux.md b/manuscript/recipes/miniflux.md index e9ddc02..61bf42d 100644 --- a/manuscript/recipes/miniflux.md +++ b/manuscript/recipes/miniflux.md @@ -2,12 +2,12 @@ hero: Miniflux - A recipe for a lightweight minimalist RSS reader # Miniflux -Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_) +Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](https://geek-cookbook.funkypenguin.co.nz/)recipes/kanboard/)_) ![Miniflux Screenshot](../images/miniflux.png) !!! tip "Sponsored Project" - Miniflux is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to! + Miniflux is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/)sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to! I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate: @@ -21,8 +21,8 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -138,4 +138,4 @@ Log into your new instance at https://**YOUR-FQDN**, using the credentials you s ## Chef's Notes 📓 -1. Find the bookmarklet under the **Settings -> Integration** page. +1. Find the bookmarklet under the **Settings -> Integration** page. \ No newline at end of file diff --git a/manuscript/recipes/minio.md b/manuscript/recipes/minio.md index 49780fe..94f54fc 100644 --- a/manuscript/recipes/minio.md +++ b/manuscript/recipes/minio.md @@ -17,8 +17,8 @@ Possible use-cases: ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -46,7 +46,7 @@ MINIO_SECRET_KEY= Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -173,6 +173,6 @@ goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode= ## Chef's Notes 📓 1. There are many S3-filesystem-mounting tools available, I just picked Goofys because it's simple. Google is your friend :) -2. Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets -3. Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets +2. Some applications (_like [NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipes/nextcloud/)_) can natively mount S3 buckets +3. Some backup tools (_like [Duplicity](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity/)_) can backup directly to S3 buckets diff --git a/manuscript/recipes/mqtt.md b/manuscript/recipes/mqtt.md index 342647d..39f200f 100644 --- a/manuscript/recipes/mqtt.md +++ b/manuscript/recipes/mqtt.md @@ -1,13 +1,13 @@ hero: Kubernetes. The hero we deserve. !!! danger "This recipe is a work in progress" - This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 + This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` - So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 + So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues # MQTT broker -I use Elias Kotlyar's [excellent custom firmware](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks) for Xiaomi DaFang/XiaoFang cameras, enabling RTSP, MQTT, motion tracking, and other features, integrating directly with [Home Assistant](/recipes/homeassistant/). 
+I use Elias Kotlyar's [excellent custom firmware](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks) for Xiaomi DaFang/XiaoFang cameras, enabling RTSP, MQTT, motion tracking, and other features, integrating directly with [Home Assistant](https://geek-cookbook.funkypenguin.co.nz/)recipes/homeassistant/). There's currently a [mysterious bug](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/issues/638) though, which prevents TCP communication between Home Assistant and the camera, when MQTT services are enabled on the camera and the mqtt broker runs on the same Raspberry Pi as Home Assistant, using [Hass.io](https://www.home-assistant.io/hassio/). @@ -19,7 +19,7 @@ A workaround to this bug is to run an MQTT broker **external** to the raspberry ## Ingredients -1. A [Kubernetes cluster](/kubernetes/digital-ocean/) +1. A [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/)kubernetes/digital-ocean/) ## Preparation @@ -114,7 +114,7 @@ kubectl create secret -n mqtt generic mqtt-credentials \ Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets. !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` ``` cat < /var/data/mqtt/mqtt.yml diff --git a/manuscript/recipes/munin.md b/manuscript/recipes/munin.md index aec1292..6b89492 100644 --- a/manuscript/recipes/munin.md +++ b/manuscript/recipes/munin.md @@ -10,8 +10,8 @@ Munin uses the excellent ​RRDTool (written by Tobi Oetiker) and the framework ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -46,7 +46,7 @@ mkdir -p {log,lib,run,cache} ### Prepare environment -Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the ```MUNIN_USER```, ```MUNIN_PASSWORD```, and ```NODES``` values: +Create /var/data/config/munin/munin.env, and populate with the following variables. 
Use the OAUTH2 variables if you plan to use an [oauth2_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) to protect munin, and set at a **minimum** the ```MUNIN_USER```, ```MUNIN_PASSWORD```, and ```NODES``` values: ``` # Use these if you plan to protect the webUI with an oauth_proxy @@ -74,7 +74,7 @@ SNMP_NODES="router1:10.0.0.254:9999" Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -123,7 +123,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. ## Serving diff --git a/manuscript/recipes/nextcloud.md b/manuscript/recipes/nextcloud.md index 7d560a7..e97b14c 100644 --- a/manuscript/recipes/nextcloud.md +++ b/manuscript/recipes/nextcloud.md @@ -16,15 +16,15 @@ This recipe is based on the official NextCloud docker image, but includes seprat ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation ### Setup data locations -We'll need several directories for [static data](/reference/data_layout/#static-data) to bind-mount into our container, so create them in /var/data/nextcloud (_so that they can be [backed up](/recipes/duplicity/)_) +We'll need several directories for [static data](https://geek-cookbook.funkypenguin.co.nz/)reference/data_layout/#static-data) to bind-mount into our container, so create them in /var/data/nextcloud (_so that they can be [backed up](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity/)_) ``` mkdir /var/data/nextcloud @@ -32,7 +32,7 @@ cd /var/data/nextcloud mkdir -p {html,apps,config,data,database-dump} ``` -Now make **more** directories for [runtime data](/reference/data_layout/#runtime-data) (_so that they can be **not** backed-up_): +Now make **more** directories for [runtime data](https://geek-cookbook.funkypenguin.co.nz/)reference/data_layout/#runtime-data) (_so that they can be **not** backed-up_): ``` mkdir /var/data/runtime/nextcloud @@ -159,7 +159,7 @@ networks: ``` !!! 
note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -188,7 +188,7 @@ Want to use Calendar/Contacts on your iOS device? Want to avoid dictating long, Huzzah! NextCloud supports [service discovery for CalDAV/CardDAV](https://tools.ietf.org/html/rfc6764), allowing you to simply tell your device the primary URL of your server (_**nextcloud.batcave.org**, for example_), and have the device figure out the correct WebDAV path to use. -We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/ha-docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to setup SSL **within** the NextCloud container. +We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to setup SSL **within** the NextCloud container. When using a reverse proxy, your device requests a URL from your proxy (https://nextcloud.batcave.com/.well-known/caldav), and the reverse proxy then passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., http://172.16.12.123/.well-known/caldav) @@ -232,4 +232,4 @@ Note that this .htaccess can be overwritten by NextCloud, and you may have to re ## Chef's Notes 📓 1. Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912). -2. I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies. +2. I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies. \ No newline at end of file diff --git a/manuscript/recipes/openldap.md b/manuscript/recipes/openldap.md index 94f3246..fca03e8 100644 --- a/manuscript/recipes/openldap.md +++ b/manuscript/recipes/openldap.md @@ -5,7 +5,7 @@ [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) -LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_) offer LDAP integration. +LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". 
Many of the recipes featured in the cookbook (_[NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipe/nextcloud/), [Kanboard](https://geek-cookbook.funkypenguin.co.nz/)recipe/kanboard/), [Gitlab](https://geek-cookbook.funkypenguin.co.nz/)recipe/gitlab/), etc_) offer LDAP integration. ## Big deal, who cares? @@ -21,12 +21,12 @@ This recipe combines the raw power of OpenLDAP with the flexibility and features ## What's the takeaway? -What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO. +What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](https://geek-cookbook.funkypenguin.co.nz/)recipe/nextcloud/), [Kanboard](https://geek-cookbook.funkypenguin.co.nz/)recipe/kanboard/), [Gitlab](https://geek-cookbook.funkypenguin.co.nz/)recipe/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO. ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname (_i.e. "lam.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -41,7 +41,7 @@ mkdir /var/data/runtime/openldap/ ``` !!! note "Why 2 directories?" - For rationale, see my [data layout explanation](/reference/data_layout/) + For rationale, see my [data layout explanation](https://geek-cookbook.funkypenguin.co.nz/)reference/data_layout/) ### Prepare environment @@ -60,7 +60,7 @@ OAUTH2_PROXY_COOKIE_SECRET= ``` !!! note - I use an [OAuth proxy](/reference/oauth_proxy/) to protect access to the web UI, when the sensitivity of the protected data (i.e. my authentication store) warrants it, or if I don't necessarily trust the security of the webUI. + I use an [OAuth proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) to protect access to the web UI, when the sensitivity of the protected data (i.e. my authentication store) warrants it, or if I don't necessarily trust the security of the webUI. Create ```authenticated-emails.txt```, and populate with the email addresses (_matched to GitHub user accounts, in my case_) to which you want grant access, using OAuth2. @@ -335,7 +335,7 @@ Create yours profile (_you chose a default profile in config.cfg above, remember Create a docker swarm config file in docker-compose syntax (v3), something like this, at (```/var/data/config/openldap/openldap.yml```) !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` version: '3' @@ -389,7 +389,7 @@ networks: ``` !!! warning - **Normally**, we set unique static subnets for every stack you deploy, and put the non-public facing components (like databases) in an dedicated _internal network. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + **Normally**, we set unique static subnets for every stack you deploy, and put the non-public facing components (like databases) in an dedicated _internal network. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. However, you're likely to want to use OpenLdap with KeyCloak, whose JBOSS startup script assumes a single interface, and will crash in a ball of 🔥 if you try to assign multiple interfaces to the container. @@ -447,4 +447,4 @@ Create your users using the "**New User**" button. ## Chef's Notes 📓 -1. [The KeyCloak](/recipes/keycloak/authenticate-against-openldap/) recipe illustrates how to integrate KeyCloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features. +1. [The KeyCloak](https://geek-cookbook.funkypenguin.co.nz/)recipes/keycloak/authenticate-against-openldap/) recipe illustrates how to integrate KeyCloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features. diff --git a/manuscript/recipes/owntracks.md b/manuscript/recipes/owntracks.md index b9c77b4..68f132c 100644 --- a/manuscript/recipes/owntracks.md +++ b/manuscript/recipes/owntracks.md @@ -7,12 +7,12 @@ Using a smartphone app, OwnTracks allows you to collect and analyse your own location data **without** sharing this data with a cloud provider (_i.e. Apple, Google_). Potential use cases are: * Sharing family locations without relying on Apple Find-My-friends -* Performing automated actions in [HomeAssistant](/recipes/homeassistant/) when you arrive/leave home +* Performing automated actions in [HomeAssistant](https://geek-cookbook.funkypenguin.co.nz/)recipes/homeassistant/) when you arrive/leave home ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -44,7 +44,7 @@ OTR_HOST=owntracks.example.com Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! 
tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -96,7 +96,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipes/phpipam.md b/manuscript/recipes/phpipam.md index a19cb47..10f6be1 100644 --- a/manuscript/recipes/phpipam.md +++ b/manuscript/recipes/phpipam.md @@ -8,18 +8,18 @@ phpIPAM fulfils a non-sexy, but important role - It helps you manage your IP add ## Why should you care about this? -You probably have a home network, with 20-30 IP addresses, for your family devices, your ![IoT devices](/recipe/home-assistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server. +You probably have a home network, with 20-30 IP addresses, for your family devices, your ![IoT devices](https://geek-cookbook.funkypenguin.co.nz/)recipe/home-assistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server. You could simply keep track of all devices with leases in your DHCP server, but what happens if your (_hypothetical?_) Ubiquiti EdgeRouter X crashes and burns due to lack of disk space, and you lose track of all your leases? Well, you have to start from scratch, is what! -And that [HomeAssistant](/recipes/homeassistant/) config, which you so carefully compiled, refers to each device by IP/DNS name, so you'd better make sure you recreate it consistently! +And that [HomeAssistant](https://geek-cookbook.funkypenguin.co.nz/)recipes/homeassistant/) config, which you so carefully compiled, refers to each device by IP/DNS name, so you'd better make sure you recreate it consistently! Enter phpIPAM. A tool designed to help home keeps as well as large organisations keep track of their IP (_and VLAN, VRF, and AS number_) allocations. ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname (_i.e.
"phpipam.your-domain.com"_) you intend to use for phpIPAM, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -75,7 +75,7 @@ BACKUP_FREQUENCY=1d ### Create nginx.conf -I usually protect my stacks using an [oauth proxy](/reference/oauth_proxy/) container in front of the app. This protects me from either accidentally exposing a platform to the world, or having an insecure platform accessed and abused. +I usually protect my stacks using an [oauth proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) container in front of the app. This protects me from either accidentally exposing a platform to the world, or having an insecure platform accessed and abused. In the case of phpIPAM, the oauth_proxy creates an additional complexity, since it passes the "Authorization" HTTP header to the phpIPAM container. phpIPAM then examines the header, determines that the provided username (_my email address associated with my oauth provider_) doesn't match a local user account, and denies me access without the opportunity to retry. @@ -108,7 +108,7 @@ server { Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -193,7 +193,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipes/piwik.md b/manuscript/recipes/piwik.md index 2524ecc..bce634c 100644 --- a/manuscript/recipes/piwik.md +++ b/manuscript/recipes/piwik.md @@ -6,8 +6,8 @@ ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design ## Preparation @@ -83,17 +83,11 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here.
## Serving Launch the Piwik stack by running ```docker stack deploy piwik -c ``` -Log into your new instance at https://**YOUR-FQDN**, and follow the wizard to complete the setup. - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +Log into your new instance at https://**YOUR-FQDN**, and follow the wizard to complete the setup. \ No newline at end of file diff --git a/manuscript/recipes/plex.md b/manuscript/recipes/plex.md index 98aee93..0316a03 100644 --- a/manuscript/recipes/plex.md +++ b/manuscript/recipes/plex.md @@ -1,4 +1,4 @@ -hero: A recipe to manage your Media 🎥 📺 🎵 +hero: A recipe to manage your Media # Plex @@ -8,8 +8,8 @@ hero: A recipe to manage your Media 🎥 📺 🎵 ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. A DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -82,7 +82,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -97,4 +97,4 @@ Log into your new instance at https://**YOUR-FQDN** (You'll need to setup a plex ## Chef's Notes 📓 1. Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via https://plex.tv/web -2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media! +2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media! \ No newline at end of file diff --git a/manuscript/recipes/portainer.md b/manuscript/recipes/portainer.md index 462caf8..344e9f7 100644 --- a/manuscript/recipes/portainer.md +++ b/manuscript/recipes/portainer.md @@ -10,8 +10,8 @@ This is a "lightweight" recipe, because Portainer is so "lightweight". But it ** ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. 
[Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -66,4 +66,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set y ## Chef's Notes 📓 -1. I wanted to use oauth2_proxy to provide an additional layer of security for Portainer, but the proxy seems to break the authentication mechanism, effectively making the stack **so** secure, that it can't be logged into! +1. I wanted to use oauth2_proxy to provide an additional layer of security for Portainer, but the proxy seems to break the authentication mechanism, effectively making the stack **so** secure, that it can't be logged into! \ No newline at end of file diff --git a/manuscript/recipes/privatebin.md b/manuscript/recipes/privatebin.md index 3baa693..c762258 100644 --- a/manuscript/recipes/privatebin.md +++ b/manuscript/recipes/privatebin.md @@ -6,8 +6,8 @@ PrivateBin is a minimalist, open source online pastebin where the server (can) h ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -26,7 +26,7 @@ chmod 777 /var/data/privatebin/ Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` diff --git a/manuscript/recipes/realms.md b/manuscript/recipes/realms.md index 637c98e..c2f0b6c 100644 --- a/manuscript/recipes/realms.md +++ b/manuscript/recipes/realms.md @@ -1,6 +1,6 @@ # Realms -Realms is a git-based wiki (_like [Gollum](/recipes/gollum/), but with basic authentication and registration_) +Realms is a git-based wiki (_like [Gollum](https://geek-cookbook.funkypenguin.co.nz/)recipes/gollum/), but with basic authentication and registration_) ![Realms Screenshot](../images/realms.png) @@ -16,14 +16,14 @@ Features include: !!! 
warning "Project likely abandoned" - In my limited trial, Realms seems _less_ useful than [Gollum](/recipes/gollum/) for my particular use-case (_i.e., you're limited to markdown syntax only_), but other users may enjoy the basic user authentication and registration features, which Gollum lacks. + In my limited trial, Realms seems _less_ useful than [Gollum](https://geek-cookbook.funkypenguin.co.nz/)recipes/gollum/) for my particular use-case (_i.e., you're limited to markdown syntax only_), but other users may enjoy the basic user authentication and registration features, which Gollum lacks. Also of note is that the docker image is 1.17GB in size, and the handful of commits to the [source GitHub repo](https://github.com/scragg0x/realms-wiki/commits/master) in the past year has listed TravisCI build failures. This has many of the hallmarks of an abandoned project, to my mind. ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -36,7 +36,7 @@ Since we'll start with a basic Realms install, let's just create a single direct mkdir /var/data/realms/ ``` -Create realms.env, and populate with the following variables (_if you intend to use an [oauth_proxy](/reference/oauth_proxy) to double-secure your installation, which I recommend_) +Create realms.env, and populate with the following variables (_if you intend to use an [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy) to double-secure your installation, which I recommend_) ``` OAUTH2_PROXY_CLIENT_ID= OAUTH2_PROXY_CLIENT_SECRET= @@ -48,7 +48,7 @@ OAUTH2_PROXY_COOKIE_SECRET= Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -96,7 +96,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. 
diff --git a/manuscript/recipes/swarmprom.md b/manuscript/recipes/swarmprom.md index e4d6c9b..35dffab 100644 --- a/manuscript/recipes/swarmprom.md +++ b/manuscript/recipes/swarmprom.md @@ -21,8 +21,8 @@ I'd encourage you to spend some time reading https://github.com/stefanprodan/swa ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) on **17.09.0 or newer** (_doesn't work with CentOS Atomic, unfortunately_) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) on **17.09.0 or newer** (_doesn't work with CentOS Atomic, unfortunately_) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostnames you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -31,7 +31,7 @@ This is basically a rehash of stefanprodan's [instructions](https://github.com/s ### Setup oauth provider -Grafana includes decent login protections, but from what I can see, Prometheus, AlertManager, and Unsee do no authentication. In order to expose these publicly for your own consumption (my assumption for the rest of this recipe), you'll want to prepare to run [oauth_proxy](/reference/oauth_proxy/) containers in front of each of the 4 web UIs in this recipe. +Grafana includes decent login protections, but from what I can see, Prometheus, AlertManager, and Unsee do no authentication. In order to expose these publicly for your own consumption (my assumption for the rest of this recipe), you'll want to prepare to run [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) containers in front of each of the 4 web UIs in this recipe. ### Setup metrics @@ -99,7 +99,7 @@ Create a docker swarm config file in docker-compose syntax (v3), based on the or ???+ note "This example is 274 lines long. Click here to collapse it for better readability" !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` version: "3.3" @@ -379,7 +379,7 @@ Create a docker swarm config file in docker-compose syntax (v3), based on the or ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. 
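To make the swarmprom suggestion above - an oauth_proxy in front of each unauthenticated UI - more concrete, here is a rough sketch of what one such proxy service could look like in the swarm config. The image name, hostname, and env file path are illustrative assumptions rather than values from the actual recipe; only the upstream port (9090 for Prometheus) and the proxy's default listen port (4180) are standard:

```
  prometheus-proxy:
    image: a5huynh/oauth2_proxy          # placeholder - use whichever oauth2_proxy build you prefer
    env_file: /var/data/config/swarmprom/prometheus-proxy.env   # hypothetical env file holding OAUTH2_PROXY_* values
    networks:
      - internal
      - traefik_public
    command: |
      -cookie-secure=false
      -upstream=http://prometheus:9090
      -redirect-url=https://prometheus.example.com
      -http-address=http://0.0.0.0:4180
    deploy:
      labels:
        - traefik.frontend.rule=Host:prometheus.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
```

The same pattern would be repeated for each of the four UIs, each with its own hostname and upstream port.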
diff --git a/manuscript/recipes/template.md b/manuscript/recipes/template.md index faec3ae..9673906 100644 --- a/manuscript/recipes/template.md +++ b/manuscript/recipes/template.md @@ -1,9 +1,9 @@ hero: Not all heroes wear capes !!! danger "This recipe is a work in progress" - This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` - So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 + So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues # NAME @@ -15,8 +15,8 @@ Details ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation @@ -102,7 +102,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipes/tiny-tiny-rss.md b/manuscript/recipes/tiny-tiny-rss.md index c144280..69a285f 100644 --- a/manuscript/recipes/tiny-tiny-rss.md +++ b/manuscript/recipes/tiny-tiny-rss.md @@ -10,8 +10,8 @@ ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design ## Preparation @@ -115,7 +115,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. 
This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. ## Serving diff --git a/manuscript/recipes/turtle-pool.md b/manuscript/recipes/turtle-pool.md deleted file mode 100644 index 1002648..0000000 --- a/manuscript/recipes/turtle-pool.md +++ /dev/null @@ -1,449 +0,0 @@ -hero: How to setup a TurtleCoin Mining Pool - -# TurtleCoin Mining Pool - -[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners. - -![Turtle Pool Screenshot](../images/turtle-pool.png) - -This recipe illustrates how to build a mining pool for [TurtleCoin](https://turtlecoin.lol), one of many [CryptoNote](https://cryptonote.org/) [currencies](https://cryptonote.org/coins) (_which include [Monero](https://www.coingecko.com/en/coins/monero)_), but the principles can be applied to most mineable coins. - -The end result is a mining pool which looks like this: https://trtl.heigh-ho.funkypenguin.co.nz/ - -!!! question "WTF is a TurtleCoin and why do I want it?"" - - In my opinion - because it's a fun, no-BS project with a [silly origin story](https://turtlecoin.lol/#story), a [friendly, welcoming community](http://chat.turtlecoin.lol/), and you'll learn more about cryptocurrency/blockchain than you expect. - -## Ingredients - -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostnames (_pool and api_) you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP -4. At least 16GB disk space (12GB used, 4GB for future growth) - -## Preparation - -### Create user account - -The TurtleCoin pool elements won't (_and shouldn't_) run as root, but they'll need access to write data to some parts of the filesystem (_like logs, etc_). - -To manage access control, we'll want to create a local user on **each docker node** with the same UID. - -``` -useradd -u 3506 turtle-pool -``` - -!!! question "Why 3506?" - I'm glad you asked. [TurtleCoin hard-forked at block 350K](https://medium.com/@turtlecoin/take-your-baikal-and-shove-it-up-your-asic-b05c96187790) to avoid ASIC miners. The Ninja Turtles' human friend [April O'Neil](http://turtlepedia.wikia.com/wiki/April_O'Neil) works at [Channel 6 News](http://turtlepedia.wikia.com/wiki/Channel_6). 350 + 6 = 3506 😁. Aren't **you** glad you asked? - - -### Setup Redis - - -The pool uses Redis for in-memory and persistent storage. This comes in handy for the Docker Swarm deployment, since while the various pool modules weren't _designed_ to run as microservices, the fact that they all rely on Redis for data storage makes this possible. - -!!! warning "Playing it safe" - - Be aware that by default, Redis stores some data **only** in memory, and writes to the filesystem at default intervals (_can be up to 5 minutes by default_). 
Given we don't **want** to loose 5 minutes of miner's data if we restart Redis (_what happens if we found a block during those 5 minutes but haven't paid any miners yet?_), we want to ensure that Redis runs in "appendonly" mode, which ensures that every change is immediately written to disk. - - We also want to make sure that we retain all Redis logs persistently (_We're dealing with people's cryptocurrency here, it's a good idea to keep persistent logs for debugging/auditing_) - -Create directories to hold Redis data. We use separate directories for future flexibility - One day, we may want to backup the data but not the logs, or move the data to an SSD partition but leave the logs on slower, cheaper disk. - -``` -mkdir -p /var/data/turtle-pool/redis/config -mkdir -p /var/data/turtle-pool/redis/data -mkdir -p /var/data/turtle-pool/redis/logs -chown turtle-pool /var/data/turtle-pool/redis/data -chown turtle-pool /var/data/turtle-pool/redis/logs -``` - -Create **/var/data/turtle-pool/redis/config/redis.conf** using http://download.redis.io/redis-stable/redis.conf as a guide. The following are the values I changed from default on my deployment (_but I'm not a Redis expert!_): - -``` -appendonly yes -appendfilename "appendonly.aof" -loglevel notice -logfile "/logs/redis.log" -protected-mode no -``` - -I also had to **disable** the following line, by commenting it out (_thus ensuring Redis container will respond to the other containers_): - -``` -bind 127.0.0.1 -``` - -### Setup Nginx - -We'll run a simple Nginx container to serve the static front-end of the web UI. - -The simplest way to get the frontend is just to clone the upstream turtle-pool repo, and mount the "/website" subdirectory into Nginx. - -``` -git clone https://github.com/turtlecoin/turtle-pool.git /var/data/turtle-pool/nginx/ -``` - -Edit **/var/data/turtle-pool/nginx/website/config.js**, and change at least the following: - -``` -var api = "https://"; -var poolHost = " -2018-May-01 11:14:59.920932 INFO New wallet added TRTL, creation timestamp 0 -2018-May-01 11:14:59.932367 INFO Container shut down -2018-May-01 11:14:59.932419 INFO Loading container... -2018-May-01 11:14:59.961814 INFO Consumer added, consumer 0x55b0fb5bc070, count 1 -2018-May-01 11:14:59.961996 INFO Starting... -2018-May-01 11:14:59.962173 INFO Container loaded, view public key , wallet count 1, actual balance 0.00, pending balance 0.00 -2018-May-01 11:14:59.962508 INFO New wallet is generated. Address: TRTL -2018-May-01 11:14:59.962581 INFO Saving container... -2018-May-01 11:14:59.962683 INFO Stopping... 
-2018-May-01 11:14:59.962862 INFO Stopped -``` - -Take careful note of your wallet password, public view key, and wallet address (which starts with TRTL) - -Create **/var/data/turtle-pool/wallet/config/wallet.conf**, containing the following: - -``` -bind-address = 0.0.0.0 -container-file = /container/wallet.container -container-password = -rpc-password = -log-file = /dev/stdout -log-level = 3 -daemon-address = daemon -``` - -### Setup TurtleCoin mining pool - -Following the convention we've set above, create directories to hold pool data: - -``` -mkdir -p /var/data/turtle-pool/pool/config -mkdir -p /var/data/turtle-pool/pool/logs -chown -R turtle-pool /var/data/turtle-pool/pool/logs -``` - -Now create **/var/data/turtle-pool/pool/config/config.json**, using https://github.com/turtlecoin/turtle-pool/blob/master/config.json as a guide, and adjusting at least the following: - -Send logs to /logs/, so that they can potentially be stored / backed up separately from the config: - -``` -"logging": { - "files": { - "level": "debug", - "directory": "/logs", - "flushInterval": 5 - }, -``` - -Set the "poolAddress" field to your wallet address -``` -"poolServer": { - "enabled": true, - "clusterForks": "auto", - "poolAddress": "", -``` - -Add the "host" value to the api section, since our API will run on its own container, and choose a password you'll use for the webUI admin page - -``` -"api": { - "enabled": true, - "hashrateWindow": 600, - "updateInterval": 5, - "host": "pool-api", - "port": 8117, - "blocks": 30, - "payments": 30, - "password": "" -``` - -Set the host value for the daemon: - -``` -"daemon": { - "host": "daemon", - "port": 11898 -}, -``` - -Set the host value for the wallet, and set your container password (_you recorded it earlier, remember?_) - -``` -"wallet": { - "host": "wallet", - "port": 8070, - "password": "" -}, -``` - -Set the host value for Redis: - -``` -"redis": { - "host": "redis", - "port": 6379 -}, -``` - -That's it! The above config files mean each element of the pool will be able to communicate with the other elements within the docker swarm, by name. - - - - - -### Setup Docker Swarm - -Create a docker swarm config file in docker-compose syntax (v3), something like this: - -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` - - -``` -version: '3' - -services: - daemon: - image: funkypenguin/turtlecoind - volumes: - - /var/data/runtime/turtle-pool/daemon/1:/var/lib/turtlecoind/ - - /etc/localtime:/etc/localtime:ro - networks: - - internal - - traefik_public - ports: - - 11897:11897 - labels: - - traefik.frontend.rule=Host:explorer.trtl.heigh-ho.funkypenguin.co.nz - - traefik.docker.network=traefik_public - - traefik.port=11898 - - daemon-failsafe1: - image: funkypenguin/turtlecoind - volumes: - - /var/data/runtime/turtle-pool/daemon/failsafe1:/var/lib/turtlecoind/ - - /etc/localtime:/etc/localtime:ro - networks: - - internal - - daemon-failsafe2: - image: funkypenguin/turtlecoind - volumes: - - /var/data/runtime/turtle-pool/daemon/failsafe2:/var/lib/turtlecoind/ - - /etc/localtime:/etc/localtime:ro - networks: - - internal - - pool-pool: - image: funkypenguin/turtle-pool - volumes: - - /var/data/turtle-pool/pool/config:/config:ro - - /var/data/turtle-pool/pool/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - ports: - - 3333:3333 - - 5555:5555 - - 7777:7777 - entrypoint: | - node init.js -module=pool -config=/config/config.json - - pool-api: - image: funkypenguin/turtle-pool - volumes: - - /var/data/turtle-pool/pool/config:/config:ro - - /var/data/turtle-pool/pool/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - - traefik_public - deploy: - labels: - - traefik.frontend.rule=Host:api.trtl.heigh-ho.funkypenguin.co.nz - - traefik.docker.network=traefik_public - - traefik.port=8117 - entrypoint: | - node init.js -module=api -config=/config/config.json - - pool-unlocker: - image: funkypenguin/turtle-pool - volumes: - - /var/data/turtle-pool/pool/config:/config:ro - - /var/data/turtle-pool/pool/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - entrypoint: | - node init.js -module=unlocker -config=/config/config.json - - pool-payments: - image: funkypenguin/turtle-pool - volumes: - - /var/data/turtle-pool/pool/config:/config:ro - - /var/data/turtle-pool/pool/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - entrypoint: | - node init.js -module=payments -config=/config/config.json - - pool-charts: - image: funkypenguin/turtle-pool - volumes: - - /var/data/turtle-pool/pool/config:/config:ro - - /var/data/turtle-pool/pool/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - entrypoint: | - node init.js -module=chartsDataCollector -config=/config/config.json - - wallet: - image: funkypenguin/turtlecoind - volumes: - - /var/data/turtle-pool/wallet/config:/config:ro - - /var/data/turtle-pool/wallet/container:/container - - /var/data/turtle-pool/wallet/logs:/logs - - /etc/localtime:/etc/localtime:ro - networks: - - internal - entrypoint: | - walletd --config /config/wallet.conf | tee /logs/walletd.log - - redis: - volumes: - - /var/data/turtle-pool/redis/config:/config:ro - - /var/data/turtle-pool/redis/data:/data - - /var/data/turtle-pool/redis/logs:/logs - - /etc/localtime:/etc/localtime:ro - image: redis - entrypoint: | - redis-server /config/redis.conf - networks: - - internal - - nginx: - volumes: - - /var/data/turtle-pool/nginx/website:/usr/share/nginx/html:ro - - /etc/localtime:/etc/localtime:ro - image: nginx - networks: - - internal - - traefik_public - deploy: - labels: - - traefik.frontend.rule=Host:trtl.heigh-ho.funkypenguin.co.nz - - traefik.docker.network=traefik_public - - 
traefik.port=80 - -networks: - traefik_public: - external: true - internal: - driver: overlay - ipam: - config: - - subnet: 172.16.21.0/24 -``` - -!!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. - - -## Serving - -### Launch the Turtle! 🐢 - -Launch the Turtle pool stack by running ```docker stack deploy turtle-pool -c ```, and then run ```docker stack ps turtle-pool``` to ensure the stack has come up properly. (_See [troubleshooting](/reference/troubleshooting) if not_) - -The first thing that'll happen is that TurtleCoind will start syncing the blockchain from the bootstrap data. You can watch this happening with ```docker service logs turtle-pool_daemon -f```. While the daemon is syncing, it won't respond to requests, so walletd, the pool, etc. will be non-functional. - -You can watch the various elements of the pool doing their thing, by running ```tail -f /var/data/turtle-pool/pool/logs/*.log``` - -### So how do I mine to it? - -That... is another recipe. Start with the "[CryptoMiner](/recipes/cryptominer/)" uber-recipe for GPU/rig details, grab a copy of xmr-stak (_patched for the forked TurtleCoin_) from https://github.com/turtlecoin/trtl-stak/releases, and follow your nose. Jump into the TurtleCoin discord (_below_) #mining channel for help. - -### What to do if it breaks? - -TurtleCoin is a baby cryptocurrency. There are scaling issues to solve, and large parts of this recipe are under rapid development. So, elements may break/change in time, and this recipe itself is a work-in-progress. - -Jump into the [TurtleCoin Discord server](http://chat.turtlecoin.lol/) to ask questions, contribute, and send/receive some TRTL tips! - -## Chef's Notes 📓 - -1. Because Docker Swarm performs ingress NAT for its load-balanced "routing mesh", the source address of inbound miner traffic is rewritten to a (_common_) docker node IP address. This means it's [not possible to determine the actual source IP address](https://github.com/moby/moby/issues/25526) of a miner. Which, in turn, means that any **one** misconfigured miner could trigger an IP ban, and lock out all other miners for 5 minutes at a time. - -Two possible solutions to this are (1) disable banning, or (2) update the pool banning code to ban based on a combination of IP address and miner wallet address. I'll be working on a change to implement #2 if this becomes a concern. - -2. The traefik labels in the docker-compose are to permit automatic LetsEncrypt SSL-protected proxying of your pool UI and API addresses. - -3. After a [power fault in my datacenter caused daemon DB corruption](https://www.reddit.com/r/TRTL/comments/8jftzt/funky_penguin_nz_mining_pool_down_with_daemon/), I added a second daemon, running in parallel to the first. The failsafe daemon runs once an hour, syncs with the running daemons, and shuts down again, providing a safely halted version of the daemon DB for recovery.
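On Chef's Note 1 above (the routing mesh hiding miners' real IPs): a third option, which the original recipe doesn't mention, is to publish the stratum ports in host mode, bypassing the ingress mesh entirely so the pool sees genuine source addresses - at the cost of pinning those ports to whichever node runs the task. A hedged sketch of what that change could look like for the pool-pool service (long-syntax ports need compose file format 3.2 or newer):

```
  pool-pool:
    image: funkypenguin/turtle-pool
    ports:
      - target: 3333        # stratum port inside the container
        published: 3333     # port exposed on the node running the task
        protocol: tcp
        mode: host          # bypass the routing mesh, preserving client source IPs
```

Whether that trade-off is worth it depends on how much you value per-miner IP banning versus the convenience of the mesh.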
diff --git a/manuscript/recipes/wallabag.md b/manuscript/recipes/wallabag.md index 2b9b5f5..11d1758 100644 --- a/manuscript/recipes/wallabag.md +++ b/manuscript/recipes/wallabag.md @@ -8,21 +8,21 @@ All saved data (_pages, annotations, images, tags, etc_) are stored on your own ![Wallabag Screenshot](../images/wallabag.png) -There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallabagger/gbmgphmejlcoihgedabhgjdkcahacjlj) and [Firefox](https://addons.mozilla.org/firefox/addon/wallabagger/), as well as apps for [iOS](https://appsto.re/fr/YeqYfb.i), [Android](https://play.google.com/store/apps/details?id=fr.gaulupeau.apps.InThePoche), etc. Wallabag will also integrate nicely with my favorite RSS reader, [Miniflux](https://miniflux.net/) (_for which there is an [existing recipe](/recipes/miniflux)_). +There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallabagger/gbmgphmejlcoihgedabhgjdkcahacjlj) and [Firefox](https://addons.mozilla.org/firefox/addon/wallabagger/), as well as apps for [iOS](https://appsto.re/fr/YeqYfb.i), [Android](https://play.google.com/store/apps/details?id=fr.gaulupeau.apps.InThePoche), etc. Wallabag will also integrate nicely with my favorite RSS reader, [Miniflux](https://miniflux.net/) (_for which there is an [existing recipe](https://geek-cookbook.funkypenguin.co.nz/)recipes/miniflux)_). [Here's a video](https://player.vimeo.com/video/167435064) which shows off the UI a bit more. ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation ### Setup data locations -We need a filesystem location to store images that Wallabag downloads from the original sources, to re-display when you read your articles, as well as nightly database dumps (_which you **should [backup](/recipes/duplicity/)**_), so create something like this: +We need a filesystem location to store images that Wallabag downloads from the original sources, to re-display when you read your articles, as well as nightly database dumps (_which you **should [backup](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity/)**_), so create something like this: ``` mkdir -p /var/data/wallabag @@ -175,7 +175,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -199,4 +199,4 @@ Even with all these elements in place, you still need to enable Redis under Inte ## Chef's Notes 📓 1. If you wanted to expose the Wallabag UI directly (_required for the iOS/Android apps_), you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wallabag container. 
You'd also need to add the traefik_public network to the wallabag container. I found the iOS app to be unreliable and clunky, so elected to leave my oauth_proxy enabled, and to simply use the webUI on my mobile devices instead. YMMMV. -2. I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it +2. I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it \ No newline at end of file diff --git a/manuscript/recipes/wekan.md b/manuscript/recipes/wekan.md index 990fad9..26fc0b6 100644 --- a/manuscript/recipes/wekan.md +++ b/manuscript/recipes/wekan.md @@ -9,12 +9,12 @@ Wekan allows to create Boards, on which Cards can be moved around between a numb There's a [video](https://www.youtube.com/watch?v=N3iMLwCNOro) of the developer showing off the app, as well as a f[unctional demo](https://wekan.indie.host/b/t2YaGmyXgNkppcFBq/wekan-fork-roadmap). !!! note - For added privacy, this design secures wekan behind an [oauth2 proxy](/reference/oauth_proxy/), so that in order to gain access to the wekan UI at all, oauth2 authentication (_to GitHub, GitLab, Google, etc_) must have already occurred. + For added privacy, this design secures wekan behind an [oauth2 proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/), so that in order to gain access to the wekan UI at all, oauth2 authentication (_to GitHub, GitLab, Google, etc_) must have already occurred. ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik) configured per design ## Preparation @@ -128,7 +128,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. @@ -142,4 +142,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa ## Chef's Notes 📓 -1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wekan container. You'd also need to add the traefik network to the wekan container. +1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wekan container. You'd also need to add the traefik network to the wekan container. \ No newline at end of file diff --git a/manuscript/recipes/wetty.md b/manuscript/recipes/wetty.md index ba4499e..c4cf98f 100644 --- a/manuscript/recipes/wetty.md +++ b/manuscript/recipes/wetty.md @@ -8,7 +8,7 @@ hero: Terminal in a browser, baby! 💻 ## Why would you need SSH in a browser window? -Need shell access to a node with no external access? 
Deploy Wetty behind an [oauth_proxy](/reference/oauth_proxy/) with a SSL-terminating reverse proxy ([traefik](/ha-docker-swarm/traefik/)), and suddenly you have the means to SSH to your private host from any web browser (_protected by your [oauth_proxy](/reference/oauth_proxy/) of course, and your OAuth provider's 2FA_) +Need shell access to a node with no external access? Deploy Wetty behind an [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) with a SSL-terminating reverse proxy ([traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik/)), and suddenly you have the means to SSH to your private host from any web browser (_protected by your [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) of course, and your OAuth provider's 2FA_) Here are some other possible use cases: @@ -18,15 +18,15 @@ Here are some other possible use cases: ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/)ha-docker-swarm/traefik_public) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP ## Preparation ### Prepare environment -Create wetty.env, and populate with the following variables per the [oauth_proxy](/reference/oauth_proxy/) instructions: +Create wetty.env, and populate with the following variables per the [oauth_proxy](https://geek-cookbook.funkypenguin.co.nz/)reference/oauth_proxy/) instructions: ``` OAUTH2_PROXY_CLIENT_ID= OAUTH2_PROXY_CLIENT_SECRET= @@ -42,7 +42,7 @@ SSHUSER=batman Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ``` @@ -86,7 +86,7 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/)reference/networks/) here. diff --git a/manuscript/recipies/autopirate/sabnzbd.md b/manuscript/recipies/autopirate/sabnzbd.md deleted file mode 100644 index d03e17f..0000000 --- a/manuscript/recipies/autopirate/sabnzbd.md +++ /dev/null @@ -1,81 +0,0 @@ -!!! 
warning - This is not a complete recipe - it's a component of the [AutoPirate](/recipies/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. - -# SABnzbd - -## Introduction - -SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](/recipies/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files. - -![SABNZBD Screenshot](../../images/sabnzbd.png) - -## Inclusion into AutoPirate - -To include SABnzbd in your [AutoPirate](/recipies/autopirate/) stack -(_The only reason you **wouldn't** use SABnzbd, would be if you were using [NZBGet](/recipies/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file: - -``` -sabnzbd: - image: linuxserver/sabnzbd:latest - env_file : /var/data/config/autopirate/sabnzbd.env - volumes: - - /var/data/autopirate/sabnzbd:/config - - /var/data/media:/media - networks: - - internal - -sabnzbd_proxy: - image: zappi/oauth2_proxy - env_file : /var/data/config/autopirate/sabnzbd.env - dns_search: myswarm.example.com - networks: - - internal - - traefik_public - deploy: - labels: - - traefik.frontend.rule=Host:sabnzbd.example.com - - traefik.docker.network=traefik_public - - traefik.port=4180 - volumes: - - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt - command: | - -cookie-secure=false - -upstream=http://sabnzbd:8080 - -redirect-url=https://sabnzbd.example.com - -http-address=http://0.0.0.0:4180 - -email-domain=example.com - -provider=github - -authenticated-emails-file=/authenticated-emails.txt -``` - -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` - - -## Assemble more tools.. - -Continue through the list of tools below, adding whichever tools your want to use, and finishing with the **[end](/recipies/autopirate/end/)** section: - -* SABnzbd (this page) -* [NZBGet](/recipies/autopirate/nzbget.md) -* [RTorrent](/recipies/autopirate/rtorrent/) -* [Sonarr](/recipies/autopirate/sonarr/) -* [Radarr](/recipies/autopirate/radarr/) -* [Mylar](/recipies/autopirate/mylar/) -* [Lazy Librarian](/recipies/autopirate/lazylibrarian/) -* [Headphones](/recipies/autopirate/headphones/) -* [NZBHydra](/recipies/autopirate/nzbhydra/) -* [Ombi](/recipies/autopirate/ombi/) -* [Jackett](/recipies/autopirate/jackett/) -* [End](/recipies/autopirate/end/) (launch the stack) - - -## Chef's Notes 📓 - -1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition. - -### Tip your waiter (donate) - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? 
diff --git a/manuscript/reference/containers.md b/manuscript/reference/containers.md index 8a72ab6..3c8aafa 100644 --- a/manuscript/reference/containers.md +++ b/manuscript/reference/containers.md @@ -39,13 +39,4 @@ Name | Description | Badges [funkypenguin/turtle-pool](https://hub.docker.com/r/funkypenguin/turtle-pool/)
[![Size](https://images.microbadger.com/badges/image/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool//)| turtle-pool |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool/)
[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/turtle-pool.svg)](https://hub.docker.com/r/funkypenguin/turtle-pool/) [funkypenguin/turtlecoin](https://hub.docker.com/r/funkypenguin/turtlecoin/)
[![Size](https://images.microbadger.com/badges/image/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/)| turtlecoin |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/)
[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/turtlecoin.svg)](https://hub.docker.com/r/funkypenguin/turtlecoin/) [funkypenguin/x-cash](https://hub.docker.com/r/funkypenguin/x-cash/)
[![Size](https://images.microbadger.com/badges/image/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/)| X-CASH cryptocurrency daemon/services |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/)
[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/x-cash.svg)](https://hub.docker.com/r/funkypenguin/x-cash/) -[funkypenguin/xmrig-cpu](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)
[![Size](https://images.microbadger.com/badges/image/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)| xmrig-cpu |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)
[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)| - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +[funkypenguin/xmrig-cpu](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)
[![Size](https://images.microbadger.com/badges/image/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)| xmrig-cpu |[![Docker Pulls](https://img.shields.io/docker/pulls/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)
[![Docker Stars](https://img.shields.io/docker/stars/funkypenguin/xmrig-cpu.svg)](https://hub.docker.com/r/funkypenguin/xmrig-cpu/)| \ No newline at end of file diff --git a/manuscript/reference/data_layout.md b/manuscript/reference/data_layout.md index ff77cd1..885faa0 100644 --- a/manuscript/reference/data_layout.md +++ b/manuscript/reference/data_layout.md @@ -1,6 +1,6 @@ # Data layout -The applications deployed in the stack utilize a combination of data-at-rest (_static config, files, etc_) and runtime data (_live database files_). The realtime data can't be [backed up](/recipes/duplicity) with a simple copy-paste, so where we employ databases, we also include containers to perform a regular export of database data to a filesystem location. +The applications deployed in the stack utilize a combination of data-at-rest (_static config, files, etc_) and runtime data (_live database files_). The realtime data can't be [backed up](https://geek-cookbook.funkypenguin.co.nz/)recipes/duplicity) with a simple copy-paste, so where we employ databases, we also include containers to perform a regular export of database data to a filesystem location. So that we can confidently backup all our data, I've setup a data layout as follows: @@ -14,13 +14,4 @@ Realtime data (typically database files or files-in-use) are stored in /var/data ## Static data -Static data goes into /var/data/[recipe name], and includes anything that can be safely backed up while a container is running. This includes database exports of the runtime data above. - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +Static data goes into /var/data/[recipe name], and includes anything that can be safely backed up while a container is running. This includes database exports of the runtime data above. \ No newline at end of file diff --git a/manuscript/reference/git-docker.md b/manuscript/reference/git-docker.md index f0558c5..7e662fa 100644 --- a/manuscript/reference/git-docker.md +++ b/manuscript/reference/git-docker.md @@ -49,13 +49,4 @@ The key's randomart image is: +----[SHA256]-----+ ``` -Now add the contents of /var/data/git-docker/data/.ssh/id_ed25519.pub to your git account, and off you go - just run "git" from your Atomic host as usual, and pretend that you have the client installed! - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +Now add the contents of /var/data/git-docker/data/.ssh/id_ed25519.pub to your git account, and off you go - just run "git" from your Atomic host as usual, and pretend that you have the client installed! 
\ No newline at end of file diff --git a/manuscript/reference/networks.md b/manuscript/reference/networks.md index afe4f96..6719711 100644 --- a/manuscript/reference/networks.md +++ b/manuscript/reference/networks.md @@ -53,14 +53,4 @@ Network | Range [Magento](https://geek-cookbook.funkypenguin.co.nz/recipes/magento/) | 172.16.51.0/24 [Graylog](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.52.0/24 [Harbor](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.53.0/24 -[Harbor-Clair](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.54.0/24 - - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +[Harbor-Clair](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.54.0/24 \ No newline at end of file diff --git a/manuscript/reference/oauth_proxy.md b/manuscript/reference/oauth_proxy.md index 50d1997..fab701b 100644 --- a/manuscript/reference/oauth_proxy.md +++ b/manuscript/reference/oauth_proxy.md @@ -15,7 +15,7 @@ This is the role of the OAuth proxy. When employing the **OAuth proxy** , the proxy sits in the middle of this transaction - traefik sends the web client to the OAuth proxy, the proxy authenticates the user against a 3rd-party source (_GitHub, Google, etc_), and then passes authenticated requests on to the web app in the container. Illustrated below: -![OAuth proxy](/images/oauth_proxy.png) +![OAuth proxy](https://geek-cookbook.funkypenguin.co.nz/)images/oauth_proxy.png) The advantage under this design is additional security. If I'm deploying a web app which I expect only myself to require access to, I'll put the oauth_proxy in front of it. The overhead is negligible, and the additional layer of security is well-worth it. @@ -47,7 +47,7 @@ I created **/var/data/oauth_proxy/authenticated-emails.txt**, and add my own ema ### Configure stack -You'll need to define a service for the oauth_proxy in every stack which you want to protect. Here's an example from the [Wekan](/recipes/wekan/) recipe: +You'll need to define a service for the oauth_proxy in every stack which you want to protect. Here's an example from the [Wekan](https://geek-cookbook.funkypenguin.co.nz/)recipes/wekan/) recipe: ``` proxy: @@ -76,13 +76,4 @@ proxy: Note above how: * Labels are required to tell Traefik to forward the traffic to the proxy, rather than the backend container running the app * An environment file is defined, but.. -* The redirect URL must still be passed to the oauth_proxy in the command argument - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? 
+* The redirect URL must still be passed to the oauth_proxy in the command argument \ No newline at end of file diff --git a/manuscript/reference/openvpn.md b/manuscript/reference/openvpn.md index ff831c5..8094bcd 100644 --- a/manuscript/reference/openvpn.md +++ b/manuscript/reference/openvpn.md @@ -55,13 +55,4 @@ docker run -d --name vpn-client \ ekristen/openvpn-client --config /vpn/my-host-config.ovpn ``` -Now every time my node boots, it establishes a VPN tunnel back to my pfsense host and (_by using custom configuration directives in OpenVPN_) is assigned a static VPN IP. - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +Now every time my node boots, it establishes a VPN tunnel back to my pfsense host and (_by using custom configuration directives in OpenVPN_) is assigned a static VPN IP. \ No newline at end of file diff --git a/manuscript/reference/troubleshooting.md b/manuscript/reference/troubleshooting.md index 6123442..2830da9 100644 --- a/manuscript/reference/troubleshooting.md +++ b/manuscript/reference/troubleshooting.md @@ -23,12 +23,4 @@ For a visual "top-like" display of your container's activity (_as well as a [det To execute, simply run `docker run --rm -ti --name ctop -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest` Example: -![](https://github.com/bcicen/ctop/raw/master/_docs/img/grid.gif) - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! - -### Your comments? +![](https://github.com/bcicen/ctop/raw/master/_docs/img/grid.gif) \ No newline at end of file diff --git a/manuscript/sponsored-projects.md b/manuscript/sponsored-projects.md index 1fd3dbe..b8ff29c 100644 --- a/manuscript/sponsored-projects.md +++ b/manuscript/sponsored-projects.md @@ -6,12 +6,12 @@ I regularly donate to / sponsor the following projects. **Join me** in supportin | Project | Donate via.. 
| ------------- |-------------| -| [Kanboard](/recipes/kanboard/) | [PayPal](https://kanboard.org/#donations) -| [Miniflux](/recipes/miniflux/) | [PayPal](https://miniflux.net/#donations) -| [SABnzbd](/recipes/autopirate/sabnzbd/) | [Paypal / Credit Card / Crypto](https://sabnzbd.org/donate/) -| [Radarr](/recipes/autopirate/radarr/) | [OpenCollective](https://opencollective.com/radarr#budget) -| [Sonarr](/recipes/autopirate/sonarr/) | [BitCoin/CC](https://sonarr.tv/donate) -| [NZBHydra](/recipes/autopirate/nzbhydra/) | [Cryptocurrency](https://github.com/theotherp/nzbhydra2) +| [Kanboard](https://geek-cookbook.funkypenguin.co.nz/)recipes/kanboard/) | [PayPal](https://kanboard.org/#donations) +| [Miniflux](https://geek-cookbook.funkypenguin.co.nz/)recipes/miniflux/) | [PayPal](https://miniflux.net/#donations) +| [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sabnzbd/) | [Paypal / Credit Card / Crypto](https://sabnzbd.org/donate/) +| [Radarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/radarr/) | [OpenCollective](https://opencollective.com/radarr#budget) +| [Sonarr](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/sonarr/) | [BitCoin/CC](https://sonarr.tv/donate) +| [NZBHydra](https://geek-cookbook.funkypenguin.co.nz/)recipes/autopirate/nzbhydra/) | [Cryptocurrency](https://github.com/theotherp/nzbhydra2) | [Calibre](https://calibre-ebook.com/) | [Credit Card](https://calibre-ebook.com/donate) / [Patreon](https://www.patreon.com/kovidgoyal) / [LibrePay](https://liberapay.com/kovidgoyal/donate) | [LinuxServer.io](https://www.linuxserver.io) | [PayPal](https://www.linuxserver.io/donate) | [Pi-hole](https://pi-hole.net/) | [Patreon](https://www.patreon.com/pihole/posts) diff --git a/manuscript/support.md b/manuscript/support.md index 42115d3..49f7f7d 100644 --- a/manuscript/support.md +++ b/manuscript/support.md @@ -1,14 +1,16 @@ -hero: "Excuse me... waiter, there's a bug in this recipe!" - # Support +> "Excuse me... waiter, there's a bug in this recipe!" + +How do you get support for these receipes? There are several options... + ## Discord: Where the cool kids are -All the cool kids are hanging out in the [Discord server](http://chat.funkypenguin.co.nz). +All the cool kids are hanging out in the [Discord server][1]. > "Eh? What's Discord? Get off my lawn, young whippersnappers!!" -Yeah, I know. I also thought Discord was just for the gamer kids, but it turns out it's great for a geeky community. Why? [Let me elucidate ya.](https://www.youtube.com/watch?v=1qHoSWxVqtE).. +Yeah, I know. I also thought Discord was just for the gamer kids, but it turns out it's great for a geeky community. Why? [Let me elucidate ya.][2].. 1. Native markdown for code blocks 2. Drag-drop screenshots @@ -17,24 +19,24 @@ Yeah, I know. I also thought Discord was just for the gamer kids, but it turns o ## Forums: Party like it's 1999 -For community support and engagement, I've setup a [Discourse forum](https://discourse.geek-kitchen.funkypenguin.co.nz/). Using this as the primary means of topical discussions makes it easy to share recipes / experiences with future geeks. +For community support and engagement, I've setup a [Discourse forum][3]. Using this as the primary means of topical discussions makes it easy to share recipes / experiences with future geeks. ## Discuss a recipe Every recipe includes a section at the end for comments. -If you have a comment / question about a specific recipe, navigate to the recipe, scroll to the bottom, and add your comment. 
You'll be sent to the [kitchen](https://discourse.geek-kitchen.funkypenguin.co.nz/) to post the actual comment, but it'll be visible beneath the recipe _and_ at the kitchen. (_To post, you'll need to sign in using OAuth from github, google, etc, or create a new account_) +If you have a comment / question about a specific recipe, navigate to the recipe, scroll to the bottom, and add your comment. You'll be sent to the [kitchen][4] to post the actual comment, but it'll be visible beneath the recipe _and_ at the kitchen. (_To post, you'll need to sign in using OAuth from github, google, etc, or create a new account_) ## Request a recipe -I'd love to hear your ideas for more recipes. To request/suggest a recipe, create a new post in the [kitchen](https://discourse.geek-kitchen.funkypenguin.co.nz/) with the details. +I'd love to hear your ideas for more recipes. To request/suggest a recipe, create a new post in the [kitchen][5] with the details. ## Spit out a bug Found a bug in your soup? Tell the chef by either: 1. Commenting on the recipe (see above), or -2. Submitting an issue against the github [repo](https://github.com/funkypenguin/geek-cookbook/issues) +2. Submitting an issue against the github [repo][6] ## Tip the chef @@ -42,9 +44,9 @@ Found a bug in your soup? Tell the chef by either: I'm also writing the Geek Cookbook as a formal eBook, on Leanpub (https://leanpub.com/geeks-cookbook). Buy it for $0.99 (_which is really just a token gesture of support_) - you can get it for free (_in PDF, mobi, or epub format_), or pay me what you think it's worth! -### Donate / [Patrotize](https://www.patreon.com/funkypenguin) / [Sponsor](https://github.com/sponsors/funkypenguin) me 💰 +### [Sponsor][7] / [Patreonize][8] me -The best way to support this work is to become a [patron](https://www.patreon.com/bePatron?u=6982506) (Patreon) or a [Sponsor](https://github.com/sponsors/funkypenguin) (github) (_for as little as $1/month!_) - You get : +The best way to support this work is to become a [Sponsor]() (_GitHub_) or a [Patron][10] (_Patreon_). For as little as $5/month, you get: * warm fuzzies, * access to the pre-mix repo, @@ -53,14 +55,28 @@ The best way to support this work is to become a [patron](https://www.patreon.co .. and I get some pocket money every month to buy wine, cheese, and cryptocurrency! -Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u=6982506)** to patronize me, or instead thoughtfully and analytically review my Patreon page / history **[here](https://www.patreon.com/funkypenguin)** and make up your own mind. +Impulsively **[click here (NOW quick do it!)][11]** to sponsor me, or instead thoughtfully and analytically review my GitHub profile **[here][12]** and make up your own mind. -### Hire me 🏢 +### Engage me -Need some system design work done? I do freelance consulting - [contact](mailto:davidy@funkypenguin.co.nz) me for details. +Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified][13] consultant, this stuff is my bread and butter! :bread: :fork\_and\_knife: [Contact][14] me and let's talk! ### Discord Come and hang out in Discord.. 
+ +[1]: http://chat.funkypenguin.co.nz +[2]: https://www.youtube.com/watch?v=1qHoSWxVqtE +[3]: https://discourse.geek-kitchen.funkypenguin.co.nz/ +[4]: https://discourse.geek-kitchen.funkypenguin.co.nz/ +[5]: https://discourse.geek-kitchen.funkypenguin.co.nz/ +[6]: https://github.com/funkypenguin/geek-cookbook/issues +[7]: https://github.com/sponsors/funkypenguin +[8]: https://www.patreon.com/funkypenguin +[10]: https://www.patreon.com/bePatron?u=6982506 +[11]: https://github.com/sponsors/funkypenguin +[12]: https://github.com/funkypenguin +[13]: https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574 +[14]: https://www.funkypenguin.co.nz \ No newline at end of file diff --git a/manuscript/test.md b/manuscript/test.md deleted file mode 100644 index 101d53d..0000000 --- a/manuscript/test.md +++ /dev/null @@ -1 +0,0 @@ -Nananana... Batman! diff --git a/manuscript/whoami.md b/manuscript/whoami.md index 2cb826d..20c6d1c 100644 --- a/manuscript/whoami.md +++ b/manuscript/whoami.md @@ -2,9 +2,9 @@ ## Hello world, -I'm [David](https://www.funkypenguin.co.nz/contact/). +I'm [David](https://www.funkypenguin.co.nz/). -I'm a contracting IT consultant, with a broad range of experience and skills. I'm an [AWS Certified Solution Architect (Professional)](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574), a remote worker, I've had a [book published](https://www.funkypenguin.co.nz/book/phplist-2-email-campaign-manager/), and I [blog](https://www.funkypenguin.co.nz/blog/) on topics that interest me. +I'm a contracting IT consultant, with a broad range of experience and skills. I'm an [AWS Certified Solution Architect (Professional)](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574), a remote worker, I've had a [book published](https://www.funkypenguin.co.nz/book/phplist-2-email-campaign-manager/), and I [blog](https://www.funkypenguin.co.nz/) on topics that interest me. ## Why Funky Penguin? @@ -20,13 +20,41 @@ To get management approval to deploy, I wrote a logger (with web UI) for jabber Due to my contributions to [phpList](http://www.phplist.com), I was approached in 2011 by [Packt Publishing](http://www.packtpub.com), to [write a book](https://www.funkypenguin.co.nz/book/phplist-2-email-campaign-manager) about using PHPList. +## Work with me 🤝 + +Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified][aws_cert] consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Get in touch][contact], and let's talk business! + +[plex]: https://www.plex.tv/ +[nextcloud]: https://nextcloud.com/ +[wordpress]: https://wordpress.org/ +[ghost]: https://ghost.io/ +[discord]: http://chat.funkypenguin.co.nz +[patreon]: https://www.patreon.com/bePatron?u=6982506 +[github_sponsor]: https://github.com/sponsors/funkypenguin +[github]: https://github.com/sponsors/funkypenguin +[discourse]: https://discourse.geek-kitchen.funkypenguin.co.nz/ +[twitter]: https://twitter.com/funkypenguin +[contact]: https://www.funkypenguin.co.nz +[aws_cert]: https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574 + +!!! quote "He unblocked me on all the technical hurdles to launching my SaaS in GKE!" + + By the time I had enlisted Funky Penguin's help, I'd architected myself into a bit of a nightmare with Kubernetes. I knew what I wanted to achieve, but I'd made a mess of it. 
Funky Penguin (David) was able to jump right in and offer a vital second-think on everything I'd done, pointing out where things could be simplified and streamlined, and better alternatives. + + He unblocked me on all the technical hurdles to launching my SaaS in GKE! + + With him delivering the container/Kubernetes architecture and helm CI/CD workflow, I was freed up to focus on coding and design, which fast-tracked me to launching on time. And now I have a simple deployment process that is easy for me to execute and maintain as a solo founder. + + I have no hesitation in recommending him for your project, and I'll certainly be calling on him again in the future. + + - John McDowall, Founder, [kiso.io](https://kiso.io) + ## Contact Me Contact me by: * Jumping into our [Discord server](http://chat.funkypenguin.co.nz) * Email ([davidy@funkypenguin.co.nz](mailto:davidy@funkypenguin.co.nz)) -* Private, encrypted email with ProtonMail ([funkypenguin@pm.me](mailto:funkypenguin@pm.me)) * Twitter ([@funkypenguin](https://twitter.com/funkypenguin)) Or by using the form below: