Merge branch 'master' into leanpub-preview
.gitignore
@@ -1,6 +1,9 @@
 # Don't include built site
 site/
 
+# Don't include staging area for publishing
+publish/
+
 # Don't include random notes
 notes/
 
.markdownlint.json (new file)
@@ -0,0 +1,12 @@
+{
+  "MD046": {
+    "style": "fenced"
+  },
+  "MD013": {
+    "code_block_line_length": 200,
+    "line_length": 200
+  },
+  "MD024": {
+    "siblings_only": true
+  }
+}
@@ -1,6 +1,7 @@
 # What is this?
 
-The "**[Geek's Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of guides for establishing your own highly-available docker container cluster (swarm). This swarm enables you to run self-hosted services such as [GitLab](https://gitlab.com/), [Plex](https://www.plex.tv/), [NextCloud](https://nextcloud.com), etc.
+Funky Penguin's "**[Geek's Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either Docker Swarm or Kubernetes. Container orchestration enables you to run self-hosted services such as [GitLab](https://gitlab.com/), [Plex](https://www.plex.tv/), [NextCloud](https://nextcloud.com), etc.
 
+
 ## Who is this for?
 
@@ -35,4 +36,4 @@ See [my Patreon page](https://www.patreon.com/funkypenguin) for details!
 
 ### Hire me 🏢
 
-Need some system design work done? I do freelance consulting - [contact](https://www.funkypenguin.co.nz/contact/) me for details.
+Need some system design work done? I do freelance consulting - [contact](mailto:davidy@funkypenguin.co.nz) me for details.
manuscript/Book.epub (new binary file)
@@ -4,7 +4,7 @@
 * Email : Sign up [here](http://eepurl.com/dfx95n) (double-opt-in) to receive email updates on new and improved recipes!
 * Mastodon: https://mastodon.social/@geekcookbook_changes
-* RSS: https://mastodon.social/@geekcookbook_changes.atom
+* RSS: https://mastodon.social/@geekcookbook_changes.rss
 * The #changelog channel in our [Discord server](http://chat.funkypenguin.co.nz)
 
 ## Recent additions to work-in-progress
@@ -12,13 +12,11 @@
 * Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (_19 Mar 2019_)
 
 ## Recently added recipes
+* Added recipe for making your own [DIY Kubernetes Cluster](/kubernetes/diycluster/) (_14 December 2019_)
+* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_)
+* Added [Bitwarden](/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_)
+* Added [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](/recipes/keycloak/), and other OIDC providers (_10 May 2019_)
 * Added Kubernetes version of [Miniflux](/recipes/kubernetes/miniflux/) recipe, a minimalistic RSS reader supporting the Fever API (_26 Mar 2019_)
-* Added Kubernetes version of [Kanboard](/recipes/kubernetes/kanboard/) recipe, a lightweight, well-supported Kanban tool for visualizing your work (_19 Mar 2019_)
-* Added [Minio](/recipes/minio/), a high performance distributed object storage server, designed for large-scale private cloud infrastructure, but perfect for simple use cases where emulating AWS S3 is useful. (_27 Jan 2019_)
-* Added the beginning of the **Kubernetes** design, including a getting started on using [Digital Ocean,](/kubernetes/digitalocean/) and a WIP recipe for an [MQTT](/recipes/mqtt/) broker (_21 Jan 2019_)
-* [ElkarBackup](/recipes/elkarbackup/), a beautiful GUI-based backup solution built on rsync/rsnapshot (_1 Jan 2019_)
-
 
 ## Recent improvements
 
manuscript/Gemfile (new file)
@@ -0,0 +1,2 @@
+source 'https://rubygems.org'
+gem 'html-proofer'
manuscript/Gemfile.lock (new file)
@@ -0,0 +1,48 @@
+GEM
+  remote: https://rubygems.org/
+  specs:
+    activesupport (5.2.3)
+      concurrent-ruby (~> 1.0, >= 1.0.2)
+      i18n (>= 0.7, < 2)
+      minitest (~> 5.1)
+      tzinfo (~> 1.1)
+    addressable (2.6.0)
+      public_suffix (>= 2.0.2, < 4.0)
+    colorize (0.8.1)
+    concurrent-ruby (1.1.5)
+    ethon (0.12.0)
+      ffi (>= 1.3.0)
+    ffi (1.10.0)
+    html-proofer (3.10.2)
+      activesupport (>= 4.2, < 6.0)
+      addressable (~> 2.3)
+      colorize (~> 0.8)
+      mercenary (~> 0.3.2)
+      nokogiri (~> 1.10.8)
+      parallel (~> 1.3)
+      typhoeus (~> 1.3)
+      yell (~> 2.0)
+    i18n (1.6.0)
+      concurrent-ruby (~> 1.0)
+    mercenary (0.3.6)
+    mini_portile2 (2.4.0)
+    minitest (5.11.3)
+    nokogiri (1.10.5)
+      mini_portile2 (~> 2.4.0)
+    parallel (1.17.0)
+    public_suffix (3.0.3)
+    thread_safe (0.3.6)
+    typhoeus (1.3.1)
+      ethon (>= 0.9.0)
+    tzinfo (1.2.5)
+      thread_safe (~> 0.1)
+    yell (2.1.0)
+
+PLATFORMS
+  ruby
+
+DEPENDENCIES
+  html-proofer
+
+BUNDLED WITH
+   2.0.1
@@ -1,20 +1,19 @@
-# This file determines what documents are loaded into the book, and in what sequence.
 
 index.md
-README.md
+README-UI.md
 CHANGELOG.md
 whoami.md
 
 sections/ha-docker-swarm.md
 ha-docker-swarm/design.md
-ha-docker-swarm/vms.md
+ha-docker-swarm/nodes.md
 ha-docker-swarm/shared-storage-ceph.md
 ha-docker-swarm/shared-storage-gluster.md
 ha-docker-swarm/keepalived.md
 ha-docker-swarm/docker-swarm-mode.md
 ha-docker-swarm/traefik.md
+ha-docker-swarm/traefik-forward-auth.md
+ha-docker-swarm/traefik-forward-auth/keycloak.md
 ha-docker-swarm/registry.md
-ha-docker-swarm/duplicity.md
 
 sections/chefs-favorites-docker.md
 recipes/autopirate.md
@@ -34,6 +33,7 @@ recipes/autopirate/jackett.md
 recipes/autopirate/heimdall.md
 recipes/autopirate/end.md
 
+recipes/duplicity.md
 recipes/elkarbackup.md
 recipes/emby.md
 recipes/homeassistant.md
@@ -51,6 +51,7 @@ recipes/swarmprom.md
 recipes/turtle-pool.md
 
 sections/menu-docker.md
+recipes/bitwarden.md
 recipes/bookstack.md
 recipes/cryptominer.md
 recipes/cryptominer/mining-rig.md
@@ -70,6 +71,9 @@ recipes/gitlab-runner.md
 recipes/gollum.md
 recipes/instapy.md
 recipes/keycloak.md
+recipes/keycloak/create-user.md
+recipes/keycloak/authenticate-against-openldap.md
+recipes/keycloak/setup-oidc-provider.md
 recipes/openldap.md
 recipes/mail.md
 recipes/minio.md
manuscript/extras/javascript/discord.js (new file)
@@ -0,0 +1,12 @@
+<script src="https://cdn.jsdelivr.net/npm/@widgetbot/crate@3" async defer>
+  const button = new Crate({
+    server: '396055506072109067',
+    channel: '456689991326760973',
+    shard: 'https://disweb.deploys.io',
+    color: '#795548',
+    indicator: false,
+    notifications: true
+  })
+
+  button.notify('Need a 🤚? Hot sweaty geeks are waiting to chat to you! Click 👇')
+</script>
manuscript/generate_preview.py (new executable file)
@@ -0,0 +1,8 @@
+#!/usr/bin/python
+
+with open("Book.txt") as f:
+    print ('echo "Starting build of {book}.epub";'
+           "pandoc {files} " +
+           "--table-of-contents --top-level-division=chapter -o {book}.epub;"
+           'echo " {book}.epub created."'
+           ).format(book="Book", files=f.read().replace("\n", " "))
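The script above just assembles and prints a shell command from the chapter list in `Book.txt`. A minimal sketch of the same string assembly, using a hypothetical hard-coded chapter list instead of reading `Book.txt`:

```python
# Sketch of generate_preview.py's string assembly. The chapter list
# here is a hypothetical example, not the real Book.txt contents.
chapters = ["index.md", "CHANGELOG.md", "whoami.md"]

command = (
    'echo "Starting build of {book}.epub";'
    "pandoc {files} "
    "--table-of-contents --top-level-division=chapter -o {book}.epub;"
    'echo " {book}.epub created."'
).format(book="Book", files=" ".join(chapters))

print(command)
```

Piping that output to a shell produces exactly the sort of one-liner captured in `go.sh` below.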
manuscript/go.sh (new executable file)
@@ -0,0 +1 @@
+echo "Starting build of Book.epub";pandoc index.md README-UI.md CHANGELOG.md whoami.md sections/ha-docker-swarm.md ha-docker-swarm/design.md ha-docker-swarm/vms.md ha-docker-swarm/shared-storage-ceph.md ha-docker-swarm/shared-storage-gluster.md ha-docker-swarm/keepalived.md ha-docker-swarm/docker-swarm-mode.md ha-docker-swarm/traefik.md ha-docker-swarm/registry.md sections/chefs-favorites-docker.md recipes/autopirate.md recipes/autopirate/sabnzbd.md recipes/autopirate/nzbget.md recipes/autopirate/rtorrent.md recipes/autopirate/sonarr.md recipes/autopirate/radarr.md recipes/autopirate/mylar.md recipes/autopirate/lazylibrarian.md recipes/autopirate/headphones.md recipes/autopirate/lidarr.md recipes/autopirate/nzbhydra.md recipes/autopirate/nzbhydra2.md recipes/autopirate/ombi.md recipes/autopirate/jackett.md recipes/autopirate/heimdall.md recipes/autopirate/end.md recipes/duplicity.md recipes/elkarbackup.md recipes/emby.md recipes/homeassistant.md recipes/homeassistant/ibeacon.md recipes/huginn.md recipes/kanboard.md recipes/miniflux.md recipes/munin.md recipes/nextcloud.md recipes/owntracks.md recipes/phpipam.md recipes/plex.md recipes/privatebin.md recipes/swarmprom.md recipes/turtle-pool.md sections/menu-docker.md recipes/bookstack.md recipes/cryptominer.md recipes/cryptominer/mining-rig.md recipes/cryptominer/amd-gpu.md recipes/cryptominer/nvidia-gpu.md recipes/cryptominer/mining-pool.md recipes/cryptominer/wallet.md recipes/cryptominer/exchange.md recipes/cryptominer/minerhotel.md recipes/cryptominer/monitor.md recipes/cryptominer/profit.md recipes/calibre-web.md recipes/collabora-online.md recipes/ghost.md recipes/gitlab.md recipes/gitlab-runner.md recipes/gollum.md recipes/instapy.md recipes/keycloak.md recipes/openldap.md recipes/mail.md recipes/minio.md recipes/piwik.md recipes/portainer.md recipes/realms.md recipes/tiny-tiny-rss.md recipes/wallabag.md recipes/wekan.md recipes/wetty.md sections/reference.md reference/oauth_proxy.md reference/data_layout.md reference/networks.md reference/containers.md reference/git-docker.md reference/openvpn.md reference/troubleshooting.md --table-of-contents --top-level-division=chapter -o Book.epub;echo " Book.epub created."
@@ -1,11 +1,11 @@
 # Design
 
-In the design described below, the "private cloud" platform is:
+In the design described below, our "private cloud" platform is:
 
 * **Highly-available** (_can tolerate the failure of a single component_)
 * **Scalable** (_can add resource or capacity as required_)
 * **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
-* **Secure** (_access protected with LetsEncrypt certificates_)
+* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_)
 * **Automated** (_requires minimal care and feeding_)
 
 ## Design Decisions
@@ -15,7 +15,10 @@ In the design described below, the "private cloud" platform is:
 This means that:
 
 * At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
-* GlusterFS is employed for share filesystem, because it too can be made tolerant of a single failure.
+* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.
+
+!!! note
+    An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.
 
 **Where multiple solutions to a requirement exist, preference will be given to the most portable solution.**
 
@@ -26,30 +29,31 @@ This means that:
 
 ## Security
 
-Under this design, the only inbound connections we're permitting to our docker swarm are:
+Under this design, the only inbound connections we're permitting to our docker swarm in a **minimal** configuration (*you may add custom services later, like UniFi Controller*) are:
 
 ### Network Flows
 
-* HTTP (TCP 80) : Redirects to https
-* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy
+* **HTTP (TCP 80)** : Redirects to https
+* **HTTPS (TCP 443)** : Serves individual docker containers via SSL-encrypted reverse proxy
 
 ### Authentication
 
-* Where the proxied application provides a trusted level of authentication, or where the application requires public exposure,
+* Where the hosted application provides a trusted level of authentication (*e.g., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*e.g., [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
+* Where the hosted application provides inadequate (*e.g., [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*e.g., [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required.
 
 
 ## High availability
 
 ### Normal function
 
-Assuming 3 nodes, under normal circumstances the following is illustrated:
+Assuming a 3-node configuration, under normal circumstances the following is illustrated:
 
-* All 3 nodes provide shared storage via GlusterFS, which is provided by a docker container on each node. (i.e., not running in swarm mode)
+* All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
 * All 3 nodes participate in the Docker Swarm as managers.
 * The various containers belonging to the application "stacks" deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
-* Persistent storage for the containers is provide via GlusterFS mount.
-* The **traefik** service (in swarm mode) receives incoming requests (on http and https), and forwards them to individual containers. Traefik knows the containers names because it's able to access the docker socket.
-* All 3 nodes run keepalived, at different priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (no matter which node it's on), and then onto the target backend.
+* Persistent storage for the containers is provided via a cephfs mount.
+* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
+* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then onto the target backend.
 
 
 
@@ -57,9 +61,9 @@
 
 In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:
 
-* The failed node no longer participates in GlusterFS, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
+* The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
 * The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
-* The (possibly new) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
+* The (*possibly new*) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
 * The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
 * The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
 
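The quorum behaviour described in this hunk (two of three surviving managers agreeing to evict the failed one) follows the usual majority rule: a swarm with N managers stays healthy while more than N/2 remain reachable. A minimal sketch of that arithmetic, purely illustrative and not part of the recipe:

```python
# Majority-quorum sketch: a swarm of N managers remains writable
# while strictly more than N/2 of them are still reachable.
def has_quorum(total_managers: int, healthy_managers: int) -> bool:
    return healthy_managers > total_managers // 2

print(has_quorum(3, 2))  # True: two of three managers still form a majority
print(has_quorum(3, 1))  # False: a lone survivor cannot form a majority
print(has_quorum(1, 1))  # True: the single-node case noted earlier
```

This is also why 3 managers (not 2) is the minimum for tolerating one failure: with 2 managers, losing either one drops you below the majority.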
@@ -67,9 +71,9 @@ In the case of a failure (or scheduled maintenance) of one of the nodes, the fol
 
 ### Node restore
 
-When the failed (or upgraded) host is restored to service, the following is illustrated:
+When the failed (*or upgraded*) host is restored to service, the following is illustrated:
 
-* GlusterFS regains full redundancy
+* Ceph regains full redundancy
 * Docker Swarm managers become aware of the recovered node, and will use it for scheduling **new** containers
 * Existing containers which were migrated off the node are not migrated back
 * Keepalived VIP regains full redundancy
@@ -88,10 +92,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast
 [^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.
 
-
-## Chef's Notes
+## Chef's Notes 📓
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
-
-### Your comments?
@@ -4,21 +4,40 @@ For truly highly-available services with Docker containers, we need an orchestra
 
 ## Ingredients
 
-* 3 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum would be:
-    * 1 x vCPU
-    * 1GB RAM
-    * 10GB HDD
-* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
+!!! summary
+    Existing:
+
+    * [X] 3 x nodes (*bare-metal or VMs*), each with:
+        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
+        * At least 2GB RAM
+        * At least 20GB disk space (_but it'll be tight_)
+    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
 
 ## Preparation
 
+### Bash auto-completion
+
+Add some handy bash auto-completion for docker. Without this, you'll get annoyed that you can't autocomplete ```docker stack deploy <blah> -c <blah.yml>``` commands.
+
+```
+cd /etc/bash_completion.d/
+curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
+```
+
+Install some useful bash aliases on each host:
+
+```
+cd ~
+curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
+echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
+```
+
+## Serving
+
 ### Release the swarm!
 
-Now, to launch my swarm:
+Now, to launch a swarm, pick a target node, and run `docker swarm init`:
 
-```docker swarm init```
-
-Yeah, that was it. Now I have a 1-node swarm.
+Yeah, that was it. Seriously. Now we have a 1-node swarm.
 
 
 ```
 [root@ds1 ~]# docker swarm init
@@ -35,7 +54,7 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow
 [root@ds1 ~]#
 ```
 
-Run ```docker node ls``` to confirm that I have a 1-node swarm:
+Run `docker node ls` to confirm that you have a 1-node swarm:
 
 ```
 [root@ds1 ~]# docker node ls
@@ -44,7 +63,7 @@ b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leade
 [root@ds1 ~]#
 ```
 
-Note that when I ran ```docker swarm init``` above, the CLI output gave me a command to run to join further nodes to my swarm. This would join the nodes as __workers__ (as opposed to __managers__). Workers can easily be promoted to managers (and demoted again), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
+Note that when you run ```docker swarm init``` above, the CLI output gives you a command to run to join further nodes to your swarm. This command would join the nodes as __workers__ (*as opposed to __managers__*). Workers can easily be promoted to managers (*and demoted again*), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
 
 On the first swarm node, generate the necessary token to join another manager by running ```docker swarm join-token manager```:
 
@@ -59,7 +78,7 @@ To add a manager to this swarm, run the following command:
 [root@ds1 ~]#
 ```
 
-Run the command provided on your second node to join it to the swarm as a manager. After adding the second node, the output of ```docker node ls``` (on either host) should reflect two nodes:
+Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of ```docker node ls``` (on either host) should reflect all the nodes:
 
 
 ```
@@ -70,19 +89,6 @@ xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reach
 [root@ds2 davidy]#
 ```
 
-Repeat the process to add your third node.
-
-Finally, ```docker node ls``` should reflect that you have 3 reachable manager nodes, one of whom is the "Leader":
-
-```
-[root@ds3 ~]# docker node ls
-ID                          HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
-36b4twca7i3hkb7qr77i0pr9i   ds1.example.com   Ready   Active        Reachable
-l14rfzazbmibh1p9wcoivkv1s * ds3.example.com   Ready   Active        Reachable
-tfsgxmu7q23nuo51wwa4ycpsj   ds2.example.com   Ready   Active        Leader
-[root@ds3 ~]#
-```
-
 ### Setup automated cleanup
 
 Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you'll soon find yourself losing gigabytes of disk space to old, unused images.
@@ -135,8 +141,8 @@ Create /var/data/config/shepherd/shepherd.env as follows:
 ```
 # Don't auto-update Plex or Emby, I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
 BLACKLIST_SERVICES="plex_plex emby_emby"
-# Run every 24 hours. I _really_ don't need new images more frequently than that!
-SLEEP_TIME=1440
+# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
+SLEEP_TIME=86400
 ```
 
 Then create /var/data/config/shepherd/shepherd.yml as follows:
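The corrected `SLEEP_TIME` above is simply the 24-hour interval expressed in seconds (the old value, 1440, was 24 hours in *minutes*). A trivial sanity check:

```python
# Shepherd's SLEEP_TIME is in seconds, so a daily update check is
# 24 hours * 60 minutes * 60 seconds.
HOURS = 24
SLEEP_TIME = HOURS * 60 * 60
print(SLEEP_TIME)  # 86400
```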
@@ -159,27 +165,12 @@ services:
 
 Launch shepherd by running ```docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml```, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container on the latest available image.
 
+### Summary
 
-### Tweaks
+!!! summary
+    Created:
 
-Add some handy bash auto-completion for docker. Without this, you'll get annoyed that you can't autocomplete ```docker stack deploy <blah> -c <blah.yml>``` commands.
+    * [X] [Docker swarm cluster](/ha-docker-swarm/design/)
 
-```
-cd /etc/bash_completion.d/
-curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
-```
-
-Install some useful bash aliases on each host
-```
-cd ~
-curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
-echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
-```
-
-## Chef's Notes
-
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
-
-### Your comments?
+## Chef's Notes 📓
@@ -4,20 +4,21 @@ While having a self-healing, scalable docker swarm is great for availability and

In order to provide seamless external access to clustered resources, regardless of which node they're on, and tolerant of node failure, you need to present a single IP to the world for external access.

Normally this is done using an HA loadbalancer, but since Docker Swarm already provides the load-balancing capabilities (*[routing mesh](https://docs.docker.com/engine/swarm/ingress/)*), all we need for seamless HA is a virtual IP, which will be provided by more than one docker node.

This is accomplished with the use of keepalived on at least two nodes.

## Ingredients

!!! summary "Ingredients"
    Already deployed:

    * [X] At least 2 x swarm nodes
    * [X] low-latency link (i.e., no WAN links)

    New:

    * [ ] At least 3 x IPv4 addresses (one for each node and one for the virtual IP)

## Preparation
@@ -64,13 +65,7 @@ docker run -d --name keepalived --restart=always \

That's it. Each node will talk to the others via unicast (no need to un-firewall multicast addresses), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.
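You can check which node currently holds the VIP by looking for it on each node's interface; the interface name (`eth0`) and VIP (`192.168.31.10`) here are assumptions - substitute your own values:

```shell
# The node holding the VIP is the current keepalived MASTER;
# interface (eth0) and VIP (192.168.31.10) are assumptions - adjust to suit
ip addr show eth0 | grep -q '192.168.31.10' \
  && echo "this node is MASTER" \
  || echo "this node is BACKUP"
```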

## Chef's notes 📓

1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBaaS) component. AWS, GCP and Azure would likely include similar protections.
2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.
79
manuscript/ha-docker-swarm/nodes.md
Normal file
@@ -0,0 +1,79 @@

# Nodes

Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on.

!!! note
    In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation.

## Ingredients

!!! summary "Ingredients"
    New in this recipe:

    * [ ] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

## Preparation

### Permit connectivity

Most modern Linux distributions include firewall rules which only permit minimal required incoming connections (like SSH). We'll want to allow all traffic between our nodes. The steps to achieve this in CentOS/Ubuntu are a little different...

#### CentOS

Add something like this to `/etc/sysconfig/iptables`:

```
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```

And restart iptables with `systemctl restart iptables`.
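Before restarting, it's worth confirming the rule actually landed in the saved ruleset (a typo here can lock you out of the node on restart); a minimal check:

```shell
# Verify the inter-node ACCEPT rule is present in the saved ruleset
# (path assumes CentOS 7, as above)
grep -- '-A INPUT -s 192.168.31.0/24 -j ACCEPT' /etc/sysconfig/iptables \
  && echo "rule present"
```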

#### Ubuntu

Install the (*non-default*) persistent iptables tools by running `apt-get install iptables-persistent`, establishing some default rules (*dpkg will prompt you to save the current ruleset*), and then add something like this to `/etc/iptables/rules.v4`:

```
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```

And refresh your running iptables rules with `iptables-restore < /etc/iptables/rules.v4`.

### Enable hostname resolution

Depending on your hosting environment, you may have DNS automatically set up for your VMs. If not, it's useful to set up static entries in `/etc/hosts` for the nodes. For example, I set up the following:

```
192.168.31.11 ds1 ds1.funkypenguin.co.nz
192.168.31.12 ds2 ds2.funkypenguin.co.nz
192.168.31.13 ds3 ds3.funkypenguin.co.nz
```
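Once the entries are in place, each node should be able to resolve its peers. A quick loop to confirm (hostnames as per the example above):

```shell
# getent consults /etc/hosts as well as DNS, so this validates either setup
for h in ds1 ds2 ds3; do
  getent hosts "$h" >/dev/null || echo "WARNING: $h does not resolve"
done
```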

### Set timezone

Set your local timezone by running:

```
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
```
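You can preview a zone name before linking it, since `date` honours the `TZ` variable; the zone name below (Pacific/Auckland) is just an example - pick yours from `/usr/share/zoneinfo`:

```shell
# Prints the zone abbreviation for the candidate zone, without
# touching /etc/localtime
TZ=Pacific/Auckland date +%Z
```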

## Serving

After completing the above, you should have:

!!! summary "Summary"
    Deployed in this recipe:

    * [X] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

## Chef's Notes 📓
@@ -110,10 +110,4 @@ systemctl restart docker-latest

!!! tip ""
    Note the extra comma required after "false" above

## Chef's notes 📓
@@ -91,7 +91,7 @@ docker run -d --net=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-e OSD_FORCE_ZAP=1 \
-e OSD_DEVICE=/dev/vdd \
-e OSD_TYPE=disk \
--name="ceph-osd" \
@@ -189,15 +189,9 @@ After completing the above, you should have:
[X] Resiliency in the event of the failure of a single node
```

## Chef's Notes 📓

Future enhancements to this recipe include:

1. Rather than pasting a secret key into /etc/fstab (which feels wrong), I'd prefer to be able to set "secretfile" in /etc/fstab (which just points ceph.mount to a file containing the secret), but under the current CentOS Atomic, we're stuck with "secret", per https://bugzilla.redhat.com/show_bug.cgi?id=1030402
2. This recipe was written with Ceph v11 "Jewel". Ceph have subsequently released v12 "Kraken". I've updated the recipe for the addition of "Manager" daemons, but it should be noted that the [only reader so far](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley) to attempt a Ceph install using CentOS Atomic and Ceph v12 had issues with OSDs, which led him to [move to Ubuntu 16.04](https://discourse.geek-kitchen.funkypenguin.co.nz/t/shared-storage-ceph-funky-penguins-geek-cookbook/47/24?u=funkypenguin) instead.
@@ -2,6 +2,9 @@

While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.

!!! warning
    This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef

## Design

### Why GlusterFS?
@@ -156,15 +159,9 @@ After completing the above, you should have:
[X] Resiliency in the event of the failure of a single (gluster) node
```

## Chef's Notes 📓

Future enhancements to this recipe include:

1. Migration of shared storage from GlusterFS to Ceph ([#2](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/2))
2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3))
116
manuscript/ha-docker-swarm/traefik-forward-auth.md
Normal file
@@ -0,0 +1,116 @@

# Traefik Forward Auth

Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let's pause to consider that we may not _want_ some services exposed directly to the internet...

..Wait, why not? Well, Traefik doesn't provide any form of authentication; it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](/recipes/autopirate/radarr/) or [Sonarr](/recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! Even services which _may_ have a layer of authentication **might** not be safe to expose publicly - open source projects are often maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*".

To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](/recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account.

## Ingredients

!!! summary "Ingredients"
    Existing:

    * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)
    * [X] [Traefik](/ha-docker-swarm/traefik/) configured per design

    New:

    * [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](/recipes/keycloak/), Microsoft, etc..)

## Preparation

### Obtain OAuth credentials

!!! note
    This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](/recipes/keycloak/setup-oidc-provider/) recipe for more details!

Log into https://console.developers.google.com/, create a new project, then search for and select "Credentials" in the search bar.

Fill out the "OAuth Consent Screen" tab, and then click "**Create Credentials**" > "**OAuth client ID**". Select "**Web Application**", fill in the name of your app, skip "**Authorized JavaScript origins**", and fill "**Authorized redirect URIs**" with either all the domains you will allow authentication from, appended with the url-path (*e.g. https://radarr.example.com/_oauth, https://sonarr.example.com/_oauth, etc*), or, if you don't like frustration, use an "auth host" URL instead, like "*https://auth.example.com/_oauth*" (*see below for details*).

Store your client ID and secret safely - you'll need them for the next step.
### Prepare environment

Create `/var/data/config/traefik/traefik-forward-auth.env` as follows:

```
CLIENT_ID=<your client id>
CLIENT_SECRET=<your client secret>
OIDC_ISSUER=https://accounts.google.com
SECRET=<a random string, make it up>

# uncomment this to use a single auth host instead of individual redirect_uris (recommended but advanced)
#AUTH_HOST=auth.example.com

COOKIE_DOMAINS=example.com
```
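The `SECRET` value doesn't need to be memorable (it just signs the session cookie), so a randomly generated string is ideal. One way to produce one, assuming `openssl` is installed:

```shell
# 16 random bytes, hex-encoded: a 32-character secret for the env file
openssl rand -hex 16
```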

### Prepare the docker service config

This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe:

```
  traefik-forward-auth:
    image: funkypenguin/traefik-forward-auth
    env_file: /var/data/config/traefik/traefik-forward-auth.env
    networks:
      - traefik_public
    # Uncomment these lines if you're using auth host mode
    #deploy:
    #  labels:
    #    - traefik.port=4181
    #    - traefik.frontend.rule=Host:auth.example.com
    #    - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
    #    - traefik.frontend.auth.forward.trustForwardHeader=true
```

If you're not confident that forward authentication is working, add a simple "whoami" test container, to help debug traefik forward auth, before attempting to add it to a more complex container.

```
  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:whoami.example.com
        - traefik.port=80
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
```

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

## Serving

### Launch

Redeploy traefik with ```docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml```, to launch the traefik-forward-auth container.

### Test

Browse to https://whoami.example.com (*obviously, customized for your domain, and having created a DNS record*), and all going according to plan, you should be redirected to a Google login. Once successfully logged in, you'll be directed to the basic whoami page.
## Summary

What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our choice of OAuth provider, with minimal processing / handling overhead.

!!! summary "Summary"
    Created:

    * [X] Traefik-forward-auth configured to authenticate against an OIDC provider

## Chef's Notes 📓

1. Traefik forward auth replaces the use of [oauth_proxy containers](/reference/oauth_proxy/) found in some of the existing recipes
2. [@thomaseddon's original version](https://github.com/thomseddon/traefik-forward-auth) of traefik-forward-auth only works with Google currently, but I've created a [fork](https://www.github.com/funkypenguin/traefik-forward-auth) of a [fork](https://github.com/noelcatt/traefik-forward-auth), which implements generic OIDC providers.
3. I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon's go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider.
4. No, not GitHub natively, but you can federate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider.
122
manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md
Normal file
@@ -0,0 +1,122 @@

# Using Traefik Forward Auth with KeyCloak

While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain.

## Ingredients

!!! Summary
    Existing:

    * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/)

    New:

    * [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP

## Preparation

### What is AuthHost mode?

Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs, though, explicitly listing them can be a PITA.

[@thomaseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "*auth host*" in your OIDC authentication, so that you can protect an unlimited number of DNS names (*with a common domain suffix*), without having to manually maintain a list.

#### How does it work?

Say you're protecting **radarr.example.com**. When you first browse to **https://radarr.example.com**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (*KeyCloak, in this case*), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **https://auth.example.com/_oauth**, rather than to **https://radarr.example.com/_oauth**.

When you successfully authenticate against the OIDC provider, you are redirected to the "*redirect_uri*" of https://auth.example.com. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (*cookies have a role to play here*). Traefik-forward-auth also knows the URL of your **original** request (*thanks to the X-Forwarded-Whatever header*). Traefik-forward-auth redirects you to your original destination, and everybody is happy.

This clever workaround only works under 2 conditions:

1. Your "auth host" has the same domain name as the hosts you're protecting (*i.e., auth.example.com protecting radarr.example.com*)
2. You explicitly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (*i.e. example.com*)
### Setup environment

Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (*change "master" if you created a different realm*):

```
CLIENT_ID=<your keycloak client name>
CLIENT_SECRET=<your keycloak client secret>
OIDC_ISSUER=https://<your keycloak URL>/auth/realms/master
SECRET=<a random string to secure your cookie>
AUTH_HOST=<the FQDN to use for your auth host>
COOKIE_DOMAIN=<the root FQDN of your domain>
```
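You can verify your `OIDC_ISSUER` value before deploying: a correctly-formed issuer serves its discovery document at a well-known path. This sketch just constructs that URL (the KeyCloak FQDN is a hypothetical placeholder, and realm "master" matches the env file above); you can then fetch it with curl or a browser and expect a JSON document:

```shell
# Hypothetical FQDN - substitute your real KeyCloak host
KEYCLOAK_URL="keycloak.example.com"
echo "https://${KEYCLOAK_URL}/auth/realms/master/.well-known/openid-configuration"
```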

### Prepare the docker service config

This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe:

```
  traefik-forward-auth:
    image: funkypenguin/traefik-forward-auth
    env_file: /var/data/config/traefik/traefik-forward-auth.env
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.port=4181
        - traefik.frontend.rule=Host:auth.example.com
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.trustForwardHeader=true
```

If you're not confident that forward authentication is working, add a simple "whoami" test container, to help debug traefik forward auth, before attempting to add it to a more complex container.

```
  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:whoami.example.com
        - traefik.port=80
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
```

!!! tip
    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

## Serving

### Launch

Redeploy traefik with ```docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml```, to launch the traefik-forward-auth container.

### Test

Browse to https://whoami.example.com (*obviously, customized for your domain, and having created a DNS record*), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page.
### Protect services

To protect any other service, ensure the service itself is exposed by Traefik (*if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself*). Add the following 3 labels:

```
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
```

And re-deploy your services :)

## Summary

What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead.

!!! summary "Summary"
    Created:

    * [X] Traefik-forward-auth configured to authenticate against KeyCloak

## Chef's Notes 📓

1. KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)
@@ -11,15 +11,24 @@ There are some gaps to this approach though:

To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by [Traefik](https://traefik.io/).

## Ingredients

!!! summary "You'll need"
    Existing:

    * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)

    New:

    * [ ] Access to update your DNS records for manual/automated [LetsEncrypt](https://letsencrypt.org/docs/challenge-types/) DNS-01 validation, or ingress HTTP/HTTPS for HTTP-01 validation

## Preparation

### Prepare the host

The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on a SELinux-enabled CentOS 7 host, we need to add a custom SELinux policy.

!!! tip
    The following is only necessary if you're using SELinux!
@@ -38,9 +47,9 @@ make && semodule -i dockersock.pp

### Prepare traefik.toml

While it's possible to configure traefik via docker command arguments, I prefer to create a config file (`traefik.toml`). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.

Create `/var/data/traefik/traefik.toml` as follows:

```
checkNewVersion = true
@@ -55,8 +64,10 @@ acmeLogging = true
onDemand = true
OnHostRule = true

# Request wildcard certificates per https://docs.traefik.io/configuration/acme/#wildcard-domains
[[acme.domains]]
main = "*.example.com"
sans = ["example.com"]

# Redirect all HTTP to HTTPS (why wouldn't you?)
[entryPoints]
@@ -74,19 +85,48 @@ watch = true
[docker]
endpoint = "tcp://127.0.0.1:2375"
domain = "example.com"
watch = true
swarmmode = true
```
|
||||||
|
|
||||||
|
|
||||||
### Prepare the docker service config
|
### Prepare the docker service config
|
||||||
|
|
||||||
|
!!! tip
|
||||||
|
"We'll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redepoly the traefik stack without having to bring every other stack first!" - voice of experience
|
||||||
|
|
||||||
|
Create `/var/data/traefik/traefik.yml` as follows:
|
||||||
|
|
||||||
|
```
|
||||||
|
version: "3.2"
|
||||||
|
|
||||||
|
# What is this?
|
||||||
|
#
|
||||||
|
# This stack exists solely to deploy the traefik_public overlay network, so that
|
||||||
|
# other stacks (including traefik-app) can attach to it
|
||||||
|
|
||||||
|
services:
|
||||||
|
scratch:
|
||||||
|
image: scratch
|
||||||
|
deploy:
|
||||||
|
replicas: 0
|
||||||
|
networks:
|
||||||
|
- public
|
||||||
|
|
||||||
|
networks:
|
||||||
|
public:
|
||||||
|
driver: overlay
|
||||||
|
attachable: true
|
||||||
|
ipam:
|
||||||
|
config:
|
||||||
|
- subnet: 172.16.200.0/24
|
||||||
|
```
|
||||||
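For clarity, the scratch stack above is just a vehicle for creating the attachable overlay network; the rough manual equivalent would be the following (a sketch only - prefer the stack approach, since it keeps the network defined as code alongside your other stacks):

```shell
# Manual equivalent of what the scratch stack achieves (run on a swarm manager).
# Illustrative only - the stack-based approach above is preferred.
docker network create \
  --driver=overlay \
  --attachable \
  --subnet=172.16.200.0/24 \
  traefik_public
```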
|
|
||||||
!!! tip
|
!!! tip
|
||||||
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
|
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
|
||||||
|
|
||||||
|
|
||||||
Create /var/data/config/traefik/docker-compose.yml as follows:
|
Create `/var/data/config/traefik/traefik-app.yml` as follows:
|
||||||
|
|
||||||
```
|
```
|
||||||
version: "3"
|
version: "3"
|
||||||
@@ -94,7 +134,11 @@ version: "3"
|
|||||||
services:
|
services:
|
||||||
traefik:
|
traefik:
|
||||||
image: traefik
|
image: traefik
|
||||||
command: --web --docker --docker.swarmmode --docker.watch --docker.domain=funkypenguin.co.nz --logLevel=DEBUG
|
command: --web --docker --docker.swarmmode --docker.watch --docker.domain=example.com --logLevel=DEBUG
|
||||||
|
# Note below that we use host mode to avoid source nat being applied to our ingress HTTP/HTTPS sessions
|
||||||
|
# Without host mode, all inbound sessions would have the source IP of the swarm nodes, rather than the
|
||||||
|
# original source IP, which would impact logging. If you don't care about this, you can expose ports the
|
||||||
|
# "minimal" way instead
|
||||||
ports:
|
ports:
|
||||||
- target: 80
|
- target: 80
|
||||||
published: 80
|
published: 80
|
||||||
@@ -109,13 +153,16 @@ services:
|
|||||||
protocol: tcp
|
protocol: tcp
|
||||||
volumes:
|
volumes:
|
||||||
- /var/run/docker.sock:/var/run/docker.sock:ro
|
- /var/run/docker.sock:/var/run/docker.sock:ro
|
||||||
- /var/data/traefik/traefik.toml:/traefik.toml:ro
|
- /var/data/config/traefik:/etc/traefik
|
||||||
|
- /var/data/traefik/traefik.log:/traefik.log
|
||||||
- /var/data/traefik/acme.json:/acme.json
|
- /var/data/traefik/acme.json:/acme.json
|
||||||
labels:
|
|
||||||
- "traefik.enable=false"
|
|
||||||
networks:
|
networks:
|
||||||
- public
|
- traefik_public
|
||||||
|
# Global mode makes an instance of traefik listen on _every_ node, so that regardless of which
|
||||||
|
# node the request arrives on, it'll be forwarded to the correct backend service.
|
||||||
deploy:
|
deploy:
|
||||||
|
labels:
|
||||||
|
- "traefik.enable=false"
|
||||||
mode: global
|
mode: global
|
||||||
placement:
|
placement:
|
||||||
constraints: [node.role == manager]
|
constraints: [node.role == manager]
|
||||||
@@ -123,47 +170,70 @@ services:
|
|||||||
condition: on-failure
|
condition: on-failure
|
||||||
|
|
||||||
networks:
|
networks:
|
||||||
public:
|
traefik_public:
|
||||||
driver: overlay
|
external: true
|
||||||
ipam:
|
|
||||||
driver: default
|
|
||||||
config:
|
|
||||||
- subnet: 10.1.0.0/24
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Docker won't start an image with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:
|
Docker won't start a service with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:
|
||||||
|
|
||||||
```
|
```
|
||||||
touch /var/data/traefik/acme.json
|
touch /var/data/traefik/acme.json /var/data/traefik/traefik.log
|
||||||
chmod 600 /var/data/traefik/acme.json
|
chmod 600 /var/data/traefik/acme.json
|
||||||
```
|
```
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
Pay attention above. You **must** set `acme.json`'s permissions to owner-readable-only, else the container will fail to start with an [ID-10T](https://en.wikipedia.org/wiki/User_error#ID-10-T_error) error!
|
||||||
|
|
||||||
Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._)
|
Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._)
|
||||||
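To double-check the mode before deploying, you can query it with `stat`. The sketch below uses a scratch copy so it's self-contained; on your swarm node you'd point `stat` at `/var/data/traefik/acme.json` directly (assumes GNU coreutils):

```shell
# Recreate the file in a temp dir purely to demonstrate the expected mode;
# on the real host, run: stat -c '%a' /var/data/traefik/acme.json
cd "$(mktemp -d)"
touch acme.json
chmod 600 acme.json
stat -c '%a' acme.json   # prints: 600
```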
|
|
||||||
### Launch
|
|
||||||
|
|
||||||
Deploy traefik with ```docker stack deploy traefik -c /var/data/traefik/docker-compose.yml```
|
|
||||||
|
|
||||||
Confirm traefik is running with ```docker stack ps traefik```
|
|
||||||
|
|
||||||
## Serving
|
## Serving
|
||||||
|
|
||||||
You now have:
|
### Launch
|
||||||
|
|
||||||
1. Frontend proxy which will dynamically configure itself for new backend containers
|
First, launch the traefik stack, which will do nothing other than create an overlay network by running `docker stack deploy traefik -c /var/data/traefik/traefik.yml`
|
||||||
2. Automatic SSL support for all proxied resources
|
|
||||||
|
```
|
||||||
|
[root@kvm ~]# docker stack deploy traefik -c traefik.yml
|
||||||
|
Creating network traefik_public
|
||||||
|
Creating service traefik_scratch
|
||||||
|
[root@kvm ~]#
|
||||||
|
```
|
||||||
|
|
||||||
|
Now deploy the traefik application itself (*which will attach to the overlay network*) by running `docker stack deploy traefik-app -c /var/data/traefik/traefik-app.yml`:
|
||||||
|
|
||||||
|
```
|
||||||
|
[root@kvm ~]# docker stack deploy traefik-app -c traefik-app.yml
|
||||||
|
Creating service traefik-app_app
|
||||||
|
[root@kvm ~]#
|
||||||
|
```
|
||||||
|
|
||||||
|
Confirm traefik is running with `docker stack ps traefik-app`:
|
||||||
|
|
||||||
|
```
|
||||||
|
[root@kvm ~]# docker stack ps traefik-app
|
||||||
|
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
|
||||||
|
74uipz4sgasm traefik-app_app.t4vcm8siwc9s1xj4c2o4orhtx traefik:alpine kvm.funkypenguin.co.nz Running Running 33 seconds ago *:443->443/tcp,*:80->80/tcp
|
||||||
|
[root@kvm ~]#
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Traefik Dashboard
|
||||||
|
|
||||||
|
You should now be able to access your traefik instance on http://<node IP\>:8080 - It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :)
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Summary
|
||||||
|
|
||||||
|
!!! summary
|
||||||
|
We've achieved:
|
||||||
|
|
||||||
|
* [X] An overlay network to permit traefik to access all future stacks we deploy
|
||||||
|
* [X] Frontend proxy which will dynamically configure itself for new backend containers
|
||||||
|
* [X] Automatic SSL support for all proxied resources
|
||||||
|
|
||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes 📓
|
||||||
|
|
||||||
Additional features I'd like to see in this recipe are:
|
1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!
|
||||||
|
|
||||||
1. Include documentation of oauth2_proxy container for protecting individual backends
|
|
||||||
2. Traefik webUI is available via HTTPS, protected with oauth_proxy
|
|
||||||
3. Pending a feature in docker-swarm to avoid NAT on routing-mesh-delivered traffic, update the design
|
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
|
||||||
|
|
||||||
### Your comments?
|
|
||||||
|
|||||||
@@ -1,93 +0,0 @@
|
|||||||
# Virtual Machines
|
|
||||||
|
|
||||||
Let's start building our cloud with virtual machines. You could use bare-metal machines as well, the configuration would be the same. Given that most readers (myself included) will be using virtual infrastructure, from now on I'll be referring strictly to VMs.
|
|
||||||
|
|
||||||
I chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the VM layer because:
|
|
||||||
|
|
||||||
1. I want less responsibility for maintaining the system, including ensuring regular software updates and reboots. Atomic's idempotent nature means the OS is largely read-only, and updates/rollbacks are "atomic" (haha) procedures, which can be easily rolled back if required.
|
|
||||||
2. For someone used to administrating servers individually, Atomic is a PITA. You have to employ [tricky](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) [tricks](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) to get it to install in a non-cloud environment. It's not designed for tweaking or customizing beyond what cloud-config is capable of. For my purposes, this is good, because it forces me to change my thinking - to consider every daemon as a container, and every config as code, to be checked in and version-controlled. Atomic forces this thinking on you.
|
|
||||||
3. I want the design to be as "portable" as possible. While I run it on VPSs now, I may want to migrate it to a "cloud" provider in the future, and I'll want the most portable, reproducible design.
|
|
||||||
|
|
||||||
|
|
||||||
## Ingredients
|
|
||||||
|
|
||||||
!!! summary "Ingredients"
|
|
||||||
3 x Virtual Machines, each with:
|
|
||||||
|
|
||||||
* [ ] CentOS/Fedora Atomic
|
|
||||||
* [ ] At least 1GB RAM
|
|
||||||
* [ ] At least 20GB disk space (_but it'll be tight_)
|
|
||||||
* [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
|
|
||||||
|
|
||||||
|
|
||||||
## Preparation
|
|
||||||
|
|
||||||
### Install Virtual machines
|
|
||||||
|
|
||||||
1. Install / launch virtual machines.
|
|
||||||
2. The default username on CentOS atomic is "centos", and you'll have needed to supply your SSH key during the build process.
|
|
||||||
|
|
||||||
!!! tip
|
|
||||||
If you're not using a platform with cloud-init support (i.e., you're building a VM manually, not provisioning it through a cloud provider), you'll need to refer to [trick #1](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) and [trick #2](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) for a means to override the automated setup, apply a manual password to the CentOS account, and enable SSH password logins.
|
|
||||||
|
|
||||||
|
|
||||||
### Prefer docker-latest
|
|
||||||
|
|
||||||
Run the following on each node to replace the default docker 1.12 with docker 1.13 (_which we need for swarm mode_):
|
|
||||||
```
|
|
||||||
systemctl disable docker --now
|
|
||||||
systemctl enable docker-latest --now
|
|
||||||
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
### Upgrade Atomic
|
|
||||||
|
|
||||||
Finally, apply any Atomic host updates, and reboot, by running: ```atomic host upgrade && systemctl reboot```.
|
|
||||||
|
|
||||||
|
|
||||||
### Permit connectivity between VMs
|
|
||||||
|
|
||||||
By default, Atomic only permits incoming SSH. We'll want to allow all traffic between our nodes, so add something like this to /etc/sysconfig/iptables:
|
|
||||||
|
|
||||||
```
|
|
||||||
# Allow all inter-node communication
|
|
||||||
-A INPUT -s 192.168.31.0/24 -j ACCEPT
|
|
||||||
```
|
|
||||||
|
|
||||||
And restart iptables with ```systemctl restart iptables```
|
|
||||||
|
|
||||||
### Enable host resolution
|
|
||||||
|
|
||||||
Depending on your hosting environment, you may have DNS automatically setup for your VMs. If not, it's useful to set up static entries in /etc/hosts for the nodes. For example, I setup the following:
|
|
||||||
|
|
||||||
```
|
|
||||||
192.168.31.11 ds1 ds1.funkypenguin.co.nz
|
|
||||||
192.168.31.12 ds2 ds2.funkypenguin.co.nz
|
|
||||||
192.168.31.13 ds3 ds3.funkypenguin.co.nz
|
|
||||||
```
|
|
||||||
|
|
||||||
### Set timezone
|
|
||||||
|
|
||||||
Set your local timezone, by running:
|
|
||||||
|
|
||||||
```
|
|
||||||
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
|
|
||||||
```
|
|
||||||
|
|
||||||
## Serving
|
|
||||||
|
|
||||||
After completing the above, you should have:
|
|
||||||
|
|
||||||
```
|
|
||||||
[X] 3 x fresh atomic instances, at the latest releases,
|
|
||||||
running Docker v1.13 (docker-latest)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Chef's Notes
|
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
|
||||||
|
|
||||||
### Your comments?
|
|
||||||
0
manuscript/htpasswd
Normal file
BIN
manuscript/images/bitwarden.png
Normal file
|
After Width: | Height: | Size: 76 KiB |
BIN
manuscript/images/diycluster-k3s-profile-setup-node2.png
Normal file
|
After Width: | Height: | Size: 155 KiB |
BIN
manuscript/images/diycluster-k3s-profile-setup.png
Normal file
|
After Width: | Height: | Size: 160 KiB |
BIN
manuscript/images/keycloak-add-client-1.png
Normal file
|
After Width: | Height: | Size: 57 KiB |
BIN
manuscript/images/keycloak-add-client-2.png
Normal file
|
After Width: | Height: | Size: 54 KiB |
BIN
manuscript/images/keycloak-add-client-3.png
Normal file
|
After Width: | Height: | Size: 114 KiB |
BIN
manuscript/images/keycloak-add-client-4.png
Normal file
|
After Width: | Height: | Size: 66 KiB |
BIN
manuscript/images/keycloak-add-user-1.png
Normal file
|
After Width: | Height: | Size: 70 KiB |
BIN
manuscript/images/keycloak-add-user-2.png
Normal file
|
After Width: | Height: | Size: 80 KiB |
BIN
manuscript/images/keycloak-add-user-3.png
Normal file
|
After Width: | Height: | Size: 69 KiB |
BIN
manuscript/images/traefik-post-launch.png
Normal file
|
After Width: | Height: | Size: 73 KiB |
BIN
manuscript/images/traefik.png
Normal file
|
After Width: | Height: | Size: 354 KiB |
@@ -1,6 +1,15 @@
|
|||||||
# What is this?
|
# What is this?
|
||||||
|
|
||||||
The "**[Geek's Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of guides for establishing your own highly-available docker container cluster (swarm). This swarm enables you to run self-hosted services such as [GitLab](/recipes/gitlab/), [Plex](/recipes/plex/), [NextCloud](/recipes/nextcloud/), etc. Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/).
|
Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/start/).
|
||||||
|
|
||||||
|
Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex](/recipes/plex/), [NextCloud](/recipes/nextcloud/), and includes elements such as:
|
||||||
|
|
||||||
|
* [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*)
|
||||||
|
* [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
|
||||||
|
* [Automated backup](/recipes/elkarbackup/) of configuration and data
|
||||||
|
* [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting
|
||||||
|
|
||||||
|
Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz).
|
||||||
|
|
||||||
## Who is this for?
|
## Who is this for?
|
||||||
|
|
||||||
@@ -10,11 +19,11 @@ You've probably played with self-hosting some mainstream apps yourself, like [Pl
|
|||||||
|
|
||||||
## Why should I read this?
|
## Why should I read this?
|
||||||
|
|
||||||
So if you're familiar enough with the tools, and you've done self-hosting before, why would you read this book?
|
So if you're familiar enough with the concepts above, and you've done self-hosting before, why would you read any further?
|
||||||
|
|
||||||
1. You want to upskill. You want to do container orchestration, LetsEncrypt certificates, git collaboration.
|
1. You want to upskill. You want to work with container orchestration, Prometheus and Grafana, Kubernetes
|
||||||
2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't.
|
2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't.
|
||||||
3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "_quickly ssh into the host and restart the webserver_" doesn't cut it when your wife wants to know why her phone won't sync!
|
3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "*quickly ssh into the basement server and restart plex*" doesn't cut it when you finally convince your wife to sit down with you to watch sci-fi.
|
||||||
|
|
||||||
## What have you done for me lately? (CHANGELOG)
|
## What have you done for me lately? (CHANGELOG)
|
||||||
|
|
||||||
@@ -22,9 +31,7 @@ Check out recent change at [CHANGELOG](/CHANGELOG/)
|
|||||||
|
|
||||||
## What do you want from me?
|
## What do you want from me?
|
||||||
|
|
||||||
I want your money.
|
I want your [patronage](https://www.patreon.com/bePatron?u=6982506), either in the financial sense, or as a member of our [friendly geek community](http://chat.funkypenguin.co.nz) (*or both!*)
|
||||||
|
|
||||||
No, seriously (_but yes, I do want your money - see below_), If the above applies to you, then you're like me. I want everything I wrote above, so I ended up learning all this as I went along. I enjoy it, and I'm good at it. So I created this website, partly to make sure I documented my own setup properly.
|
|
||||||
|
|
||||||
### Get in touch
|
### Get in touch
|
||||||
|
|
||||||
@@ -39,7 +46,7 @@ No, seriously (_but yes, I do want your money - see below_), If the above applie
|
|||||||
|
|
||||||
### Buy my book
|
### Buy my book
|
||||||
|
|
||||||
I'm also writing the Geek Cookbook as a formal eBook, on Leanpub (https://leanpub.com/geeks-cookbook). Buy it for $0.99 (_which is really just a token gesture of support_) - you can get it for free (_in PDF, mobi, or epub format_), or pay me what you think it's worth!
|
I'm also publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Buy it for as little as $5 (_which is really just a token gesture of support, since all the content is available online anyway!_) or pay what you think it's worth!
|
||||||
|
|
||||||
### Donate / [Support me ](https://www.patreon.com/funkypenguin)
|
### Donate / [Support me ](https://www.patreon.com/funkypenguin)
|
||||||
|
|
||||||
@@ -54,17 +61,7 @@ The best way to support this work is to become a [Patreon patron](https://www.pa
|
|||||||
|
|
||||||
Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u=6982506)** to patronize me, or instead thoughtfully and analytically review my Patreon page / history **[here](https://www.patreon.com/funkypenguin)** and make up your own mind.
|
Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u=6982506)** to patronize me, or instead thoughtfully and analytically review my Patreon page / history **[here](https://www.patreon.com/funkypenguin)** and make up your own mind.
|
||||||
|
|
||||||
I also gratefully accept donations of most fine socialist/anarchist/hobbyist cryptocurrencies, including the list below (_let me know if I've left out the coin-of-the-week, and I'll happily add it_):
|
|
||||||
|
|
||||||
| -ist-currency | Address
|
|
||||||
| ------------- |-------------|
|
|
||||||
| Bitcoin | 1GBJfmqARmL66gQzUy9HtNWdmAEv74nfXj
|
|
||||||
| Ethereum | 0x19e60ec49e1f053cfdfc193560ecfb3caed928f1
|
|
||||||
| Litecoin | LYLEF7xTpeVbjjoZD3jGLVgvKSKTYDKbK8
|
|
||||||
| :turtle: TurtleCoin | TRTLv2qCKYChMbU5sNkc85hzq2VcGpQidaowbnV2N6LAYrFNebMLepKKPrdif75x5hAizwfc1pX4gi5VsR9WQbjQgYcJm21zec4
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
### Hire me
|
### Hire me
|
||||||
|
|
||||||
Need some system design work done? I do freelance consulting - [contact](https://www.funkypenguin.co.nz/contact/) me for details.
|
Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574) consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Contact](https://www.funkypenguin.co.nz/contact/) me and let's talk!
|
||||||
|
|||||||
@@ -85,7 +85,7 @@ Still with me? Good. Move on to creating your own external load balancer..
|
|||||||
|
|
||||||
1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!
|
1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -131,7 +131,7 @@ Still with me? Good. Move on to creating your cluster!
|
|||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
311
manuscript/kubernetes/diycluster.md
Normal file
@@ -0,0 +1,311 @@
|
|||||||
|
# DIY Kubernetes
|
||||||
|
|
||||||
|
If you are looking for a little more of a challenge, or just don't have the money to fork out for managed Kubernetes, you're in luck.
|
||||||
|
Kubernetes can be deployed in many ways. By far the simplest method is `minikube`, but there are other methods, like `k3s`, or using `drp` to deploy a cluster.
|
||||||
|
After all, DIY is in our DNA.
|
||||||
|
|
||||||
|
## Ingredients
|
||||||
|
|
||||||
|
1. Basic knowledge of Kubernetes terminology (*it'll come in handy*) - [Start](/kubernetes/start)
|
||||||
|
2. Some Linux machines (Depends on what recipe you follow)
|
||||||
|
|
||||||
|
## Minikube
|
||||||
|
|
||||||
|
First, what is minikube?
|
||||||
|
Minikube is a method of running Kubernetes on your local machine.
|
||||||
|
It is mainly targeted at developers looking to test if their application will work with Kubernetes without deploying it to a production cluster. For this reason,
|
||||||
|
I do not recommend minikube for your cluster, as it isn't designed for production use, and only provides a single-node cluster.
|
||||||
|
|
||||||
|
If you want to use minikube, there is a guide below, but again, I recommend using something more production-ready, like `k3s` or `drp`.
|
||||||
|
|
||||||
|
### Ingredients
|
||||||
|
|
||||||
|
1. A Fresh Linux Machine
|
||||||
|
2. Some basic Linux knowledge (or can just copy-paste)
|
||||||
|
|
||||||
|
!!! note
|
||||||
|
Make sure you are running a SystemD based distro like Ubuntu.
|
||||||
|
    Although minikube will run on macOS and Windows, those platforms add complexity: they require a Linux-based image running in a VM, which minikube will manage, but which still complicates the installation. And
|
||||||
|
even then, who uses Windows or macOS in production anyways? 🙂
|
||||||
|
    If you are serious about running on Windows/macOS, check the official minikube guides [here](https://minikube.sigs.k8s.io/docs/start/)
|
||||||
|
|
||||||
|
### Installation
|
||||||
|
|
||||||
|
After booting yourself up a fresh Linux machine and getting to a console,
|
||||||
|
you can now install minikube.
|
||||||
|
|
||||||
|
Download and install the minikube binary:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
|
||||||
|
sudo install minikube-linux-amd64 /usr/local/bin/minikube
|
||||||
|
```
|
||||||
|
|
||||||
|
Now we can boot up our cluster
|
||||||
|
|
||||||
|
```sh
|
||||||
|
sudo minikube start --vm-driver=none
|
||||||
|
#Start our minikube instance, and make it use the machine to host the cluster, instead of a VM
|
||||||
|
sudo minikube config set vm-driver none #Set our default vm driver to none
|
||||||
|
```
|
||||||
|
|
||||||
|
You are now set up with minikube!
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
MiniKube is not a production-grade method of deploying Kubernetes
|
||||||
|
|
||||||
|
## K3S
|
||||||
|
|
||||||
|
What is k3s?
|
||||||
|
K3s is a production-ready method of deploying Kubernetes on many machines,
|
||||||
|
where a full Kubernetes deployment is not required - AKA, your cluster (unless you're a big SaaS company, in which case, can I get a job?).
|
||||||
|
|
||||||
|
### Ingredients
|
||||||
|
|
||||||
|
1. A handful of Linux machines (3 or more, virtualized or not)
|
||||||
|
2. Some Linux knowledge.
|
||||||
|
3. Patience.
|
||||||
|
|
||||||
|
### Setting your Linux Machines up
|
||||||
|
|
||||||
|
Firstly, my flavour of choice for deployment is Ubuntu Server,
|
||||||
|
although it is not as enterprise-friendly as RHEL (That's Red Hat Enterprise Linux for my less geeky readers) or CentOS (The free version of RHEL).
|
||||||
|
Ubuntu ticks all the boxes for k3s to run on and allows you to follow lots of other guides on managing and maintaining your Ubuntu server.
|
||||||
|
|
||||||
|
First, download yourself a copy of Ubuntu Server from [here](https://ubuntu.com/download/server) (*whatever is latest*).
|
||||||
|
Then spin up as many systems as you need, using the following guide:
|
||||||
|
|
||||||
|
!!! note
|
||||||
|
I am running a 3 node cluster, with nodes running on Ubuntu 19.04, all virtualized with VMWare ESXi
|
||||||
|
    Your setup doesn't need to be as complex as mine - you can use 3 old Dell OptiPlexes if you really want 🙂
|
||||||
|
|
||||||
|
1. Insert your installation medium into the machine, and boot it.
|
||||||
|
2. Select your language
|
||||||
|
3. Select your keyboard layout
|
||||||
|
4. Select `Install Ubuntu`
|
||||||
|
5. Check and modify your network settings if required, make sure to write down your IPs
|
||||||
|
6. Select Done on Proxy, unless you use a proxy
|
||||||
|
7. Select Done on Mirror, as it has picked the best mirror for you unless you have a local mirror you want to use (in that case you are uber-geek)
|
||||||
|
8. Select `Use An Entire Disk` for Filesystem, and basically hit enter for the rest of the disk setup,
|
||||||
|
just make sure to read the prompts and understand what you are doing
|
||||||
|
9. Now that you are up to setting up the profile, this is where things change.
|
||||||
|
You are going to want to set up the same account on all the machines, but change the server name just a tad every time.
|
||||||
|

|
||||||
|

|
||||||
|
10. Now install OpenSSH on the server, if you wish to import your existing SSH key from GitHub or Launchpad,
|
||||||
|
you can do that now and save yourself a step later.
|
||||||
|
11. Skip over Featured Server snaps by clicking `Done`
|
||||||
|
12. Wait for your server to install everything and drop you to a Linux prompt
|
||||||
|
|
||||||
|
13. Repeat for all your nodes
|
||||||
|
|
||||||
|
### Pre-installation of k3s
|
||||||
|
|
||||||
|
For the rest of this guide, you will need some sort of Linux/macOS based terminal.
|
||||||
|
On Windows you can do this with Windows Subsystem for Linux (WSL) see [here for information on WSL.](https://aka.ms/wslinstall)
|
||||||
|
|
||||||
|
The rest of this guide will all be from your local terminal.
|
||||||
|
|
||||||
|
If you already have an SSH key generated or added an existing one, skip this step.
|
||||||
|
From your PC, run `ssh-keygen` to generate a public and private key pair
|
||||||
|
(You can use this instead of typing your password in every time you want to connect via ssh)
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ ssh-keygen
|
||||||
|
Generating public/private rsa key pair.
|
||||||
|
Enter file in which to save the key (/home/thomas/.ssh/id_rsa): [enter]
|
||||||
|
Enter passphrase (empty for no passphrase): [password]
|
||||||
|
Enter same passphrase again: [password]
|
||||||
|
Your identification has been saved in /home/thomas/.ssh/id_rsa.
|
||||||
|
Your public key has been saved in /home/thomas/.ssh/id_rsa.pub.
|
||||||
|
The key fingerprint is:
|
||||||
|
...
|
||||||
|
The key's randomart image is:
|
||||||
|
...
|
||||||
|
```
|
||||||
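If you'd rather script this step, `ssh-keygen` can also run non-interactively. A sketch (the temp-dir target is just for illustration - normally you'd let it write to `~/.ssh`):

```shell
# Generate an ed25519 keypair non-interactively into a scratch directory.
# -N '' sets an empty passphrase; drop that flag to be prompted for one.
d="$(mktemp -d)"
ssh-keygen -t ed25519 -f "$d/id_ed25519" -N '' -q
ls "$d"   # id_ed25519  id_ed25519.pub
```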
|
|
||||||
|
If you have already imported a key from GitHub or Launchpad, skip this step.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ ssh-copy-id [username]@[hostname]
|
||||||
|
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/thomas/.ssh/id_rsa.pub"
|
||||||
|
The authenticity of host 'thomas-k3s-node1 (theipaddress)' can't be established.
|
||||||
|
ECDSA key fingerprint is SHA256:...
|
||||||
|
Are you sure you want to continue connecting (yes/no)? yes
|
||||||
|
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
|
||||||
|
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
|
||||||
|
thomas@thomas-k3s-node1's password: [insert your password now]
|
||||||
|
|
||||||
|
Number of key(s) added: 1
|
||||||
|
```
|
||||||
|
|
||||||
|
You will want to do this once for every machine, replacing the hostname with each node's hostname in turn.
|
||||||
|
|
||||||
|
!!! note
|
||||||
|
    If your hostnames aren't resolving correctly, try adding them to your `/etc/hosts` file
|
||||||
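A quick way to check whether a node's name resolves (whether from `/etc/hosts` or DNS) is `getent`, which consults the same sources ssh does. A sketch, using my example node name:

```shell
# Prints the address if the name resolves; otherwise suggests a fix.
getent hosts thomas-k3s-node1 || echo "not resolving - add an /etc/hosts entry"
```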
|
|
||||||
|
### Installation
|
||||||
|
|
||||||
|
If you have access to the premix repository, you can download the ansible playbook and follow the steps contained there; if not, sit back and prepare to do it manually.
|
||||||
|
|
||||||
|
!!! tip
|
||||||
|
    Becoming a patron will allow you to get the ansible playbook to set up k3s on your own hosts. For as little as $5/month, you can get access to the ansible playbooks for this recipe, and more!
|
||||||
|
See [funkypenguin's Patreon](https://www.patreon.com/funkypenguin) for more!
|
||||||
|
<!---
|
||||||
|
(Just someone needs to remind me (HexF) to write such playbook)
|
||||||
|
-->
|
||||||
|
|
||||||
|
Select one node to become your master; in my case, `thomas-k3s-node1`.
Now SSH into this node, and run the following:
```sh
localpc$ ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]

thomas-k3s-node1$ curl -sfL https://get.k3s.io | sh -
[sudo] password for thomas: [password entered in setup]
[INFO] Finding latest release
[INFO] Using v1.0.0 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
```
Before we log out of the master, we need its node token.
Make sure to note this token down
(please don't write it on paper; use something like `notepad` or `vim`, it's ~100 characters long).
```sh
thomas-k3s-node1$ sudo cat /var/lib/rancher/k3s/server/node-token
K1097e226f95f56d90a4bab7151...
```
Make sure all nodes can reach each other by hostname, whether you add them to `/etc/hosts` or to your DNS server.

Now that your master node is set up, you can add worker nodes.

SSH into each of the other nodes, and run the following, making sure to replace the values with ones that suit your installation:
```sh
localpc$ ssh thomas@thomas-k3s-node2
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]

thomas-k3s-node2$ curl -sfL https://get.k3s.io | K3S_URL=https://thomas-k3s-node1:6443 K3S_TOKEN=K1097e226f95f56d90a4bab7151... sh -
```
Now test your installation!

SSH into your master node:
```sh
ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]

thomas-k3s-node1$ sudo kubectl get nodes

NAME               STATUS   ROLES    AGE     VERSION
thomas-k3s-node1   Ready    master   15m3s   v1.16.3-k3s.2
thomas-k3s-node2   Ready    <none>   6m58s   v1.16.3-k3s.2
thomas-k3s-node3   Ready    <none>   6m12s   v1.16.3-k3s.2
```
If all your nodes show `Ready`, well done! Your k3s cluster is now running! If not, try getting help in our Discord.
### Post-Installation

Now you can get yourself a kubeconfig for your cluster.
SSH into your master node, and run the following:
```sh
localpc$ ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]

thomas-k3s-node1$ sudo kubectl config view --flatten
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBD...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: thisishowtolosecontrolofyourk3s
    username: admin
```
Make sure to change `clusters.cluster.server` to use the master node's hostname instead of `127.0.0.1`; in my case, making it `https://thomas-k3s-node1:6443`.
!!! warning
    This kubeconfig file can grant full access to your Kubernetes installation, so I recommend you protect this file just as well as you protect your passwords.
You will probably want to save this kubeconfig to a file on your local machine, say `my-k3s-cluster.yml` or `where-8-hours-of-my-life-went.yml`.
Now test it out!
```sh
localpc$ kubectl --kubeconfig=my-k3s-cluster.yml get nodes
NAME               STATUS   ROLES    AGE    VERSION
thomas-k3s-node1   Ready    master   495m   v1.16.3-k3s.2
thomas-k3s-node2   Ready    <none>   488m   v1.16.3-k3s.2
thomas-k3s-node3   Ready    <none>   487m   v1.16.3-k3s.2
```
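Rather than passing `--kubeconfig` on every invocation, you can export the `KUBECONFIG` environment variable for your shell session (path illustrative):

```sh
# Subsequent kubectl commands in this shell will use this kubeconfig
export KUBECONFIG=$HOME/my-k3s-cluster.yml
```

After this, a plain `kubectl get nodes` behaves the same as the explicit `--kubeconfig` version above.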
<!--
To the reader concerned about my health, no I did not actually spend 8 hours writing this guide. Instead I spent most of it helping you guys on the discord (👍) and other stuff
-->
That is all! You have yourself a Kubernetes cluster for you and your dog to enjoy.
## DRP

DRP, or Digital Rebar Provision, is a tool designed to automatically provision your cluster: installing an operating system for you, and doing all the configuration we did by hand in the k3s setup above.

This section is a WIP; in the meantime, try using the k3s guide above 🙂
## Where to from here?

Now that you have wasted half a lifetime installing your very own cluster, you can add more to it. Like a load balancer!
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Set up a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Set up inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically back up your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## About your Chef

This article, believe it or not, was not diced up by your regular chef (funkypenguin).
Instead, today's article was diced up by HexF, a fellow kiwi (hence all the kiwi references) who enjoys his sysadmin time.
Feel free to talk to today's chef in the Discord, or follow him via the links below:

[Twitter](https://hexf.me/api/social/twitter/geekcookbook) • [Website](https://hexf.me/api/social/website/geekcookbook) • [Github](https://hexf.me/api/social/github/geekcookbook)

<!--
The links above are just redirect links in case anything ever changes, and they have analytics too
-->
|
||||||
@@ -61,7 +61,7 @@ Still with me? Good. Move on to understanding Helm charts...
|
|||||||
|
|
||||||
1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples.
|
1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples.
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -333,7 +333,7 @@ Still with me? Good. Move on to setting up an ingress SSL terminating proxy with
|
|||||||
|
|
||||||
1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome 😉
|
1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome 😉
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -180,7 +180,7 @@ Still with me? Good. Move on to understanding Helm charts...
|
|||||||
1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).
|
1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).
|
||||||
|
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -69,7 +69,7 @@ Still with me? Good. Move on to reviewing the design elements
|
|||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -213,7 +213,7 @@ I'll be adding more Kubernetes versions of existing recipes soon. Check out the
|
|||||||
|
|
||||||
1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!
|
1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -111,8 +111,8 @@ networks:
|
|||||||
|
|
||||||
Now work your way through the list of tools below, adding whichever tools your want to use, and finishing with the **end** section:
|
Now work your way through the list of tools below, adding whichever tools your want to use, and finishing with the **end** section:
|
||||||
|
|
||||||
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
|
* [SABnzbd](/recipes/autopirate/sabnzbd/)
|
||||||
* [NZBGet](/recipes/autopirate/nzbget.md)
|
* [NZBGet](/recipes/autopirate/nzbget/)
|
||||||
* [RTorrent](/recipes/autopirate/rtorrent/)
|
* [RTorrent](/recipes/autopirate/rtorrent/)
|
||||||
* [Sonarr](/recipes/autopirate/sonarr/)
|
* [Sonarr](/recipes/autopirate/sonarr/)
|
||||||
* [Radarr](/recipes/autopirate/radarr/)
|
* [Radarr](/recipes/autopirate/radarr/)
|
||||||
@@ -125,7 +125,7 @@ Now work your way through the list of tools below, adding whichever tools your w
|
|||||||
* [Jackett](/recipes/autopirate/jackett/)
|
* [Jackett](/recipes/autopirate/jackett/)
|
||||||
* [End](/recipes/autopirate/end/) (launch the stack)
|
* [End](/recipes/autopirate/end/) (launch the stack)
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -13,7 +13,7 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted
|
|||||||
|
|
||||||
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)
|
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -74,7 +74,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -81,7 +81,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
|
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -74,7 +74,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -87,7 +87,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
|
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
|
||||||
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -76,7 +76,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!
|
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -72,7 +72,11 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
2. If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and api key, you will need to set the parameter `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`).
|
||||||
|
|
||||||
|
This will provide the link to the downloader necessary to initiate the download. This parameter is not presented in the user interface so the config file (`$MYLAR_HOME/config.ini`) will need to be manually updated. The parameter can be found under the [Interface] section of the file. ([Details](https://github.com/evilhero/mylar/issues/2242))
|
||||||
|
|
||||||
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -78,7 +78,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -94,7 +94,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_).
|
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_).
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us
|
|||||||
|
|
||||||
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -90,7 +90,7 @@ Continue through the list of tools below, adding whichever tools your want to us

 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us

 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -86,7 +86,7 @@ Continue through the list of tools below, adding whichever tools your want to us

 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

-### Tip your waiter (donate) 👏
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏

@@ -76,7 +76,7 @@ Continue through the list of tools below, adding whichever tools your want to us

 1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

101 manuscript/recipes/bitwarden.md (new file)
@@ -0,0 +1,101 @@
# Bitwarden

Heard about the [latest password breach](https://www.databreaches.net) (*since lunch*)? Been to [HaveIBeenPwned](https://haveibeenpwned.com) yet (*today*)? [Passwords are broken](https://www.theguardian.com/technology/2008/nov/13/internet-passwords), and as the number of sites for which you need to store credentials grows exponentially, so does the risk of reusing a common password.

"*Duh, use a password manager*", you say. Sure, but be aware that [even password managers have security flaws](https://www.securityevaluators.com/casestudies/password-manager-hacking/).

**OK, look smartass..** no software is perfect, and there will always be a risk of your credentials being exposed in ways you didn't intend. You can at least **minimize** the impact of such exposure by using a password manager to store unique credentials per-site. While [1Password](https://1password.com) is king of the commercial password managers, [Bitwarden](https://bitwarden.com) is king of the open-source, self-hosted password managers.

Enter Bitwarden..

Bitwarden is a free and open source password management solution for individuals, teams, and business organizations. While Bitwarden does offer a paid / hosted version, the free version comes with the following (*better than any other free password manager!*):

* Access & install all Bitwarden apps
* Sync all of your devices, no limits!
* Store unlimited items in your vault
* Logins, secure notes, credit cards, & identities
* Two-step authentication (2FA)
* Secure password generator
* Self-host on your own server (optional)

## Ingredients

!!! summary "Ingredients"

    Existing:

    1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
    2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
    3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP

## Preparation

### Setup data locations

We'll need to create a directory to bind-mount into our container, so create `/var/data/bitwarden`:

```
mkdir /var/data/bitwarden
```

### Setup environment

Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now**.

!!! question

    What, why an empty env file? Well, the container supports lots of customizations via environment variables, for things like toggling self-registration, 2FA, etc. These are too complex to go into for this recipe, so readers are recommended to review the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki), and customize their installation to suit.
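For illustration, here's the *sort* of thing you might later add to `bitwarden.env`. The variable names are documented in the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki); the values below are placeholders, not recommendations:

```
# Disable public self-registration once you've created your own account
SIGNUPS_ALLOWED=false
# The URL clients will use to reach your instance (placeholder hostname)
DOMAIN=https://bitwarden.example.com
```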
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

!!! tip

    I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍

```
version: "3"

services:
  bitwarden:
    image: bitwardenrs/server
    env_file: /var/data/config/bitwarden/bitwarden.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/bitwarden:/data/:rw
    deploy:
      labels:
        - traefik.enable=true
        - traefik.web.frontend.rule=Host:bitwarden.example.com
        - traefik.web.port=80
        - traefik.hub.frontend.rule=Host:bitwarden.example.com;Path:/notifications/hub
        - traefik.hub.port=3012
        - traefik.docker.network=traefik_public
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

!!! note

    Note the clever use of two Traefik frontends to expose the notifications hub on port 3012. Thanks @gkoerk!
## Serving

### Launch Bitwarden stack

Launch the Bitwarden stack by running ```docker stack deploy bitwarden -c <path-to-docker-compose.yml>```

Browse to your new instance at https://**YOUR-FQDN**, and create a new user account and master password (*just click the **Create Account** button without filling in your email address or master password*)
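If the stack doesn't come up as expected, the usual docker swarm debugging commands apply (assuming you deployed the stack as `bitwarden`, per the command above):

```
docker stack services bitwarden         # confirm the replica count is 1/1
docker service logs bitwarden_bitwarden # watch the container logs
```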
### Get the apps / extensions

Once you've created your account, jump over to https://bitwarden.com/#download and download the apps for your mobile and browser, and start adding your logins!

## Chef's Notes 📓

1. You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!*), but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/bitwardenrs/server). All of the elements are contained within a single container, and SQLite is used for the database backend.
2. As mentioned above, readers should refer to the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs/wiki) for details on customizing the behaviour of Bitwarden.
3. The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz) - thanks, Gerry!
@@ -13,7 +13,7 @@ I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oaut

 ## Ingredients

 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
-2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
+2. [Traefik](/ha-docker-swarm/traefik/) configured per design
 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP

 ## Preparation

@@ -141,12 +141,6 @@ Launch the BookStack stack by running ```docker stack deploy bookstack -c <path

 Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_proxy, and then login with username 'admin@admin.com' and password 'password'.

-## Chef's Notes
+## Chef's Notes 📓

 1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container.

-### Tip your waiter (donate) 👏
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
-
-### Your comments? 💬

@@ -122,13 +122,7 @@ Launch the Calibre-Web stack by running ```docker stack deploy calibre-web -c <p

 Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the initial GUI configuraition. Set the first field (_Location of Calibre database_) to "_/books/_", and when complete, login using defaults username of "**admin**" with password "**admin123**".

-## Chef's Notes
+## Chef's Notes 📓

 1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
 2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.

-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
-
-### Your comments?

@@ -300,12 +300,7 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen

 [](https://www.observe.global/)

-## Chef's Notes
+## Chef's Notes 📓

 1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

-### Tip your waiter (donate) 👏
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
-
-### Your comments? 💬

@@ -23,15 +23,6 @@ This recipe isn't for everyone - if you just want to make some money from crypto

 For readability, I've split this recipe into multiple sub-recipes, which can be found below, or in the navigation links on the right-hand side:

-<<<<<<< HEAD:manuscript/recipies/cryptominer.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:

@@ -39,13 +30,12 @@ For readability, I've split this recipe into multiple sub-recipes, which can be

 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer.md

 ## Chef's Notes

 1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (Apparently it _may_ be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners)

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -149,15 +149,6 @@ If you want to tweak the BIOS yourself, download the [Polaris bios editor](https

 Now, continue to the next stage of your grand mining adventure:

-<<<<<<< HEAD:manuscript/recipies/cryptominer/amd-gpu.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your AMD (_this page_) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-3. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-4. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-5. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-6. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your AMD (_this page_) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:

@@ -165,14 +156,13 @@ Now, continue to the next stage of your grand mining adventure:

 4. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 5. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 6. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/amd-gpu.md

 ## Chef's Notes

 1. My two RX580 cards (_bought alongside each other_) perform slightly differently. GPU0 works with a 2050Mhz memory clock, but GPU1 only works at 2000Mhz. Anything over 2000Mhz causes system instability. YMMV.

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -37,15 +37,6 @@ Once you have enough coins in your exchange wallet, you can "trade" them into th

 Now, continue to the next stage of your grand mining adventure:

-<<<<<<< HEAD:manuscript/recipies/cryptominer/exchange.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to exchanges (_This page_) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:

@@ -53,12 +44,11 @@ Now, continue to the next stage of your grand mining adventure:

 5. Send your coins to exchanges (_This page_) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/exchange.md

 ## Chef's Notes

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -91,20 +91,14 @@ Now, continue to the next stage of your grand mining adventure:

 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
 4. Setup your miners with Miner Hotel 🏨 (_This page_)
-<<<<<<< HEAD:manuscript/recipies/cryptominer/minerhotel.md
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/minerhotel.md

 ## Chef's Notes

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -40,15 +40,6 @@ As noted by IronicBadger [here](https://www.linuxserver.io/2018/01/20/how-to-bui

 Now, continue to the next stage of your grand mining adventure:

-<<<<<<< HEAD:manuscript/recipies/cryptominer/mining-pool.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to exchanges (_This page_) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:

@@ -56,12 +47,11 @@ Now, continue to the next stage of your grand mining adventure:

 5. Send your coins to exchanges (_This page_) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/mining-pool.md

 ## Chef's Notes

-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏

 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!

@@ -31,21 +31,12 @@ I recommend this design (_with the board with little holes in it_) - it takes up

 Now, continue to the next stage of your grand mining adventure:

 1. Build your mining rig 💻 (This page)
-<<<<<<< HEAD:manuscript/recipies/cryptominer/mining-rig.md
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
 4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/mining-rig.md

@@ -57,7 +48,7 @@ Yes. It's the ultimate _#firstworldproblem_, but if you have a means to remotely
|
|||||||
|
|
||||||
(_I hooked up a remote-controlled outlet to my rig, so that I can power-cycle it without having to crawl under the desk!_)
|
(_I hooked up a remote-controlled outlet to my rig, so that I can power-cycle it without having to crawl under the desk!_)
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
@@ -80,17 +80,13 @@ Now, continue to the next stage of your grand mining adventure:
 4. Setup your miners with [Miner Hotel](/recipes/cryptominer/minerhotel/) 🏨
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. Monitor your empire :heartbeat: (_this page_)
-<<<<<<< HEAD:manuscript/recipies/cryptominer/monitor.md
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/monitor.md
 
 ## Chef's Notes
 
 1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (_Apparently it **may** be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners_)
 
-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏
 
 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
@@ -146,15 +146,6 @@ Play with changing your settings.conf file until you break it, and then go back
 
 Now, continue to the next stage of your grand mining adventure:
 
-<<<<<<< HEAD:manuscript/recipies/cryptominer/nvidia-gpu.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or Nvidia (_this page_) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or Nvidia (_this page_) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
@@ -162,12 +153,11 @@ Now, continue to the next stage of your grand mining adventure:
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/nvidia-gpu.md
 
 ## Chef's Notes
 
-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏
 
 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
@@ -6,15 +6,6 @@ Well, that's it really. You're a cryptominer. Welcome to the party.
 
 To recap, you did all this:
 
-<<<<<<< HEAD:manuscript/recipies/cryptominer/profit.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or [wallets](/recipies/cryptominer/wallet/) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. Profit! (_This page_)
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
@@ -22,7 +13,6 @@ To recap, you did all this:
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or [wallets](/recipes/cryptominer/wallet/) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. Profit! (_This page_) 💰
->>>>>>> master:manuscript/recipes/cryptominer/profit.md
 
 ## What next?
@@ -31,7 +21,7 @@ Get in touch and share your experience - there's a special [discord](https://dis
 
 ## Chef's Notes
 
-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏
 
 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
@@ -23,15 +23,6 @@ I mine most of my coins to Exchanges, but I do have the following wallets:
 
 Now, continue to the next stage of your grand mining adventure:
 
-<<<<<<< HEAD:manuscript/recipies/cryptominer/wallet.md
-1. Build your [mining rig](/recipies/cryptominer/mining-rig/) 💻
-2. Setup your [AMD](/recipies/cryptominer/amd-gpu/) or [Nvidia](/recipies/cryptominer/nvidia-gpu/) GPUs 🎨
-3. Sign up for [mining pools](/recipies/cryptominer/mining-pool/) :swimmer:
-4. Setup your miners with [Miner Hotel](/recipies/cryptominer/minerhotel/) 🏨
-5. Send your coins to [exchanges](/recipies/cryptominer/exchange/) or wallets (_This page_) 💹
-6. [Monitor](/recipies/cryptominer/monitor/) your empire :heartbeat:
-7. [Profit](/recipies/cryptominer/profit/)!
-=======
 1. Build your [mining rig](/recipes/cryptominer/mining-rig/) 💻
 2. Setup your [AMD](/recipes/cryptominer/amd-gpu/) or [Nvidia](/recipes/cryptominer/nvidia-gpu/) GPUs 🎨
 3. Sign up for [mining pools](/recipes/cryptominer/mining-pool/) :swimmer:
@@ -39,12 +30,11 @@ Now, continue to the next stage of your grand mining adventure:
 5. Send your coins to [exchanges](/recipes/cryptominer/exchange/) or wallets (_This page_) 💹
 6. [Monitor](/recipes/cryptominer/monitor/) your empire :heartbeat:
 7. [Profit](/recipes/cryptominer/profit/)! 💰
->>>>>>> master:manuscript/recipes/cryptominer/wallet.md
 
 ## Chef's Notes
 
-### Tip your waiter (donate)
+### Tip your waiter (support me) 👏
 
 Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
@@ -160,13 +160,7 @@ Launch Duplicity stack by running ```docker stack deploy duplicity -c <path -to-
 
 Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
 2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env
-
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
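The SMTP note above can be sketched as a duplicity.env fragment. This is an assumption-laden illustration, not part of the recipe: the host, port, and addresses below are placeholders you would substitute, and it only works if your relay accepts unauthenticated mail, since the container supports no SMTP credentials.

```
# Hypothetical duplicity.env fragment (placeholder values throughout)
SMTP_HOST=smtp.example.com        # your unauthenticated SMTP relay
SMTP_PORT=25                      # relay port
EMAIL_FROM=duplicity@example.com  # sender address for job reports
EMAIL_TO=admin@example.com        # recipient for success/failure mail
```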
@@ -243,13 +243,7 @@ This takes you to a list of backup names and file paths. You can choose to downl
 
 [](https://www.observe.global/)
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You'd also need to add the traefik_public network to the app service.
 2. The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!
-
-### Tip your waiter (donate) 👏
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏
-
-### Your comments? 💬
@@ -83,14 +83,8 @@ Launch the stack by running ```docker stack deploy emby -c <path -to-docker-comp
 
 Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-based setup to complete deploying your Emby.
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
 2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
 3. We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
-
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -39,7 +39,7 @@ services:
     - /etc/localtime:/etc/localtime:ro
     - /var/data/ghost/:/var/lib/ghost/content
     networks:
-    - traefik
+    - traefik_public
     deploy:
       labels:
       - traefik.frontend.rule=Host:ghost.example.com
@@ -47,7 +47,7 @@ services:
       - traefik.port=2368
 
 networks:
-  traefik:
+  traefik_public:
     external: true
 ```
 
@@ -60,7 +60,7 @@ Launch the Ghost stack by running ```docker stack deploy ghost -c <path -to-dock
 
 Create your first administrative account at https://**YOUR-FQDN**/admin/
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. If I wasn't committed to a [static-site-generated blog](https://www.funkypenguin.co.nz/blog/), Ghost is the platform I'd use for my blog.
 2. A default install using the SQLite database takes 548K of space:
@@ -69,9 +69,3 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/
 548K /var/data/ghost/
 [root@ds1 ghost]#
 ```
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
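Applied together, the two renames in the Ghost hunks above leave the compose file shaped roughly like this minimal sketch. It is an illustration reassembled from the diff context, not the full recipe: the service name and image tag are assumptions, and only the lines visible in the hunks are reproduced.

```yaml
version: '3'

services:
  ghost:                                # assumed service name
    image: ghost:latest                 # illustrative image tag
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/ghost/:/var/lib/ghost/content
    networks:
      - traefik_public                  # renamed from "traefik"
    deploy:
      labels:
        - traefik.frontend.rule=Host:ghost.example.com
        - traefik.port=2368

networks:
  traefik_public:                       # renamed from "traefik"
    external: true
```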
@@ -33,17 +33,17 @@ Create a docker swarm config file in docker-compose syntax (v3), something like
 version: '3'
 
 services:
-  1:
+  thing1:
     image: gitlab/gitlab-runner
     volumes:
-    - /var/data/gitlab-runner/1:/var/data/gitlab/runners/1
+    - /var/data/gitlab/runners/1:/etc/gitlab-runner
     networks:
     - internal
 
-  2:
+  thing2:
     image: gitlab/gitlab-runner
     volumes:
-    - /var/data/gitlab-runner/:/var/data/gitlab/runners/2
+    - /var/data/gitlab/runners/2:/etc/gitlab-runner
     networks:
     - internal
 
@@ -58,7 +58,7 @@ networks:
 
 ### Configure runners
 
-From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just "docker exec" into each runner container and execute ```gitlab-container register``` to interactively generate config.toml.
+From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just "docker exec" into each runner container and execute ```gitlab-runner register``` to interactively generate config.toml.
 
 Sample runner config.toml:
 
@@ -89,14 +89,7 @@ Launch the mail server stack by running ```docker stack deploy gitlab-runner -c
 
 Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. You'll note that I setup 2 runners. One is locked to a single project (this cookbook build), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
 2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (and GitLab starts so slowly!), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.
-
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
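The registration step described above ends with a config.toml landing in each runner's bind-mounted folder (now /etc/gitlab-runner inside the container). The recipe's own sample config.toml isn't shown in this hunk, so the sketch below is a hypothetical minimal one: the runner name mirrors the service name from the diff, while the URL, token, and default job image are placeholders you'd obtain from your own GitLab UI.

```toml
concurrent = 1

[[runners]]
  name = "thing1"                         # mirrors the service name above
  url = "https://gitlab.example.com/"     # placeholder: your GitLab URL
  token = "REGISTRATION-TOKEN-HERE"       # placeholder: token from GitLab UI
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"               # illustrative default job image
```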
@@ -129,15 +129,8 @@ Launch the mail server stack by running ```docker stack deploy gitlab -c <path -
 Log into your new instance at https://[your FQDN], with user "root" and the password you specified in gitlab.env.
 
-## Chef's Notes
+## Chef's Notes 📓
 
 A few comments on decisions taken in this design:
 
 1. I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -125,12 +125,6 @@ Launch the Gollum stack by running ```docker stack deploy gollum -c <path-to-doc
 
 Authenticate against your OAuth provider, and then start editing your wiki!
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -128,12 +128,6 @@ Launch the Home Assistant stack by running ```docker stack deploy homeassistant
 
 Log into your new instance at https://**YOUR-FQDN**, using the password you created in configuration.yml as "frontend - api_key". Then setup a bunch of sensors, and log into https://grafana.**YOUR-FQDN** and create some beautiful graphs :)
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy), but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -22,3 +22,5 @@ Generate your own UUID, or get a random one at https://www.uuidgenerator.net/
 Plug in your iBeacon, launch LightBlue Explorer, and find your iBeacon. The first time you attempt to interrogate it, you'll be prompted to pair. Although it's not recorded anywhere in the documentation (_grr!_), the pairing code is **123456**
 
 Having paired, you'll be able to see the vital statistics of your iBeacon.
+
+## Chef's Notes 📓
@@ -142,12 +142,6 @@ Launch the Huginn stack by running ```docker stack deploy huginn -c <path -to-do
 
 Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sign Up" button, and (optionally) enter your invitation code in order to create your account.
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for services like Twitter integration, I left it out.
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -125,12 +125,6 @@ After swarm deploys, you won't see much, but you can monitor what InstaPy is doi
 
 You can **also** watch the bot at work by VNCing to your docker swarm, password "secret". You'll see the Selenium browser window cycling away, interacting with all your real/fake friends on Instagram :)
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!
 
-### Tip your waiter (donate)
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you!
-
-### Your comments?
@@ -180,12 +180,6 @@ QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx | 28c13ec68f33 | Sees 2 other pee
 ```
 
-## Chef's Notes
+## Chef's Notes 📓
 
 1. I'm still trying to work out how to _mount_ the ipfs data in my filesystem in a usable way. Which is why this is still a WIP :)
 
-### Tip your waiter (donate) 👏
-
-Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏
-
-### Your comments? 💬
|||||||
@@ -116,13 +116,7 @@ Launch the Kanboard stack by running ```docker stack deploy kanboard -c <path -t
|
|||||||
|
|
||||||
Log into your new instance at https://**YOUR-FQDN**. Default credentials are admin/admin, which you can change (_under 'profile'_) before adding more users.
|
Log into your new instance at https://**YOUR-FQDN**. Default credentials are admin/admin, which you can change (_under 'profile'_) before adding more users.
|
||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes 📓
|
||||||
|
|
||||||
1. The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
|
1. The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
|
||||||
2. Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).
|
2. Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
|
||||||
|
|
||||||
### Your comments?
|
|
||||||
|
|||||||
@@ -1,9 +1,9 @@
|
|||||||
# KeyCloak
|
# KeyCloak
|
||||||
|
|
||||||
[KeyCloak](https://www.keycloak.org/) is "an open source identity and access management solution." Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML.
|
[KeyCloak](https://www.keycloak.org/) is "*an open source identity and access management solution*". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/nzbget/) with an extra layer of authentication.
|
||||||
|
|
||||||
!!! important
|
!!! important
|
||||||
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
|
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
|
||||||
|
|
||||||
[](https://www.observe.global/)
|
[](https://www.observe.global/)
|
||||||
|
|
||||||
@@ -11,9 +11,12 @@
|
|||||||
|
|
||||||
## Ingredients
|
## Ingredients
|
||||||
|
|
||||||
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
|
!!! Summary
|
||||||
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
|
Existing:
|
||||||
3. DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
|
|
||||||
|
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/)
|
||||||
|
* [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
|
||||||
|
* [X] DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
|
||||||
|
|
||||||
## Preparation
|
## Preparation
|
||||||
|
|
||||||
@@ -22,13 +25,13 @@
|
|||||||
We'll need several directories to bind-mount into our container for both runtime and backup data, so create them as follows
|
We'll need several directories to bind-mount into our container for both runtime and backup data, so create them as follows
|
||||||
|
|
||||||
```
|
```
|
||||||
mkdir /var/data/runtime/keycloak/database
|
mkdir -p /var/data/runtime/keycloak/database
|
||||||
mkdir /var/data/keycloak/database-dump
|
mkdir -p /var/data/keycloak/database-dump
|
||||||
```
|
```
|
||||||
|
|
||||||
### Prepare environment
|
### Prepare environment
|
||||||
|
|
||||||
Create ```/var/data/keycloak/keycloak.env```, and populate with the following variables, customized for your own domain structure.
|
Create `/var/data/keycloak/keycloak.env`, and populate with the following variables, customized for your own domain structure.
|
||||||
|
|
||||||
```
|
```
|
||||||
# Technically, this could be auto-detected, but we prefer to be prescriptive
|
# Technically, this could be auto-detected, but we prefer to be prescriptive
|
||||||
@@ -51,7 +54,7 @@ POSTGRES_USER=keycloak
|
|||||||
POSTGRES_PASSWORD=myuberpassword
|
POSTGRES_PASSWORD=myuberpassword
|
||||||
```
|
```
|
||||||
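For reference, a fuller `keycloak.env` might look something like the sketch below. The variable names are based on the documented environment variables of the `jboss/keycloak` and `postgres` images - treat every value as a placeholder, and substitute your own secrets:

```
# Database connection details for the KeyCloak container
DB_VENDOR=postgres
DB_ADDR=keycloak-db
DB_DATABASE=keycloak
DB_USER=keycloak
DB_PASSWORD=myuberpassword

# Initial admin user for the KeyCloak web UI
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=changeme

# Needed when KeyCloak sits behind a reverse proxy such as Traefik
PROXY_ADDRESS_FORWARDING=true

# Consumed by the postgres container itself
POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=myuberpassword
```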
|
|
||||||
Create /var/data/keycloak/keycloak-backup.env, and populate with the following, so that your database can be backed up to the filesystem, daily:
|
Create `/var/data/keycloak/keycloak-backup.env`, and populate with the following, so that your database can be backed up to the filesystem, daily:
|
||||||
|
|
||||||
```
|
```
|
||||||
PGHOST=keycloak-db
|
PGHOST=keycloak-db
|
||||||
@@ -77,7 +80,8 @@ services:
|
|||||||
volumes:
|
volumes:
|
||||||
- /etc/localtime:/etc/localtime:ro
|
- /etc/localtime:/etc/localtime:ro
|
||||||
networks:
|
networks:
|
||||||
- traefik_public
|
- traefik_public
|
||||||
|
- internal
|
||||||
deploy:
|
deploy:
|
||||||
labels:
|
labels:
|
||||||
- traefik.frontend.rule=Host:keycloak.batcave.com
|
- traefik.frontend.rule=Host:keycloak.batcave.com
|
||||||
@@ -91,7 +95,7 @@ services:
|
|||||||
- /var/data/runtime/keycloak/database:/var/lib/postgresql/data
|
- /var/data/runtime/keycloak/database:/var/lib/postgresql/data
|
||||||
- /etc/localtime:/etc/localtime:ro
|
- /etc/localtime:/etc/localtime:ro
|
||||||
networks:
|
networks:
|
||||||
- traefik_public
|
- internal
|
||||||
|
|
||||||
keycloak-db-backup:
|
keycloak-db-backup:
|
||||||
image: postgres:10.1
|
image: postgres:10.1
|
||||||
@@ -110,76 +114,34 @@ services:
|
|||||||
done
|
done
|
||||||
EOF'
|
EOF'
|
||||||
networks:
|
networks:
|
||||||
- traefik_public
|
- internal
|
||||||
|
|
||||||
networks:
|
networks:
|
||||||
traefik_public:
|
traefik_public:
|
||||||
external: true
|
external: true
|
||||||
|
internal:
|
||||||
|
driver: overlay
|
||||||
|
ipam:
|
||||||
|
config:
|
||||||
|
- subnet: 172.16.49.0/24
|
||||||
```
|
```
|
||||||
|
|
||||||
!!! warning
|
!!! note
|
||||||
**Normally**, we set unique static subnets for every stack you deploy, and put the non-public facing components (like databases) in an dedicated <stack\>_internal network. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
|
Set up unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
|
||||||
|
|
||||||
However, KeyCloak's JBOSS startup script assumes a single interface, and will crash in a ball of 🔥 if you try to assign multiple interfaces to the container. This means that we can't use a "keycloak_internal" network for our supporting containers. This is why unlike our other recipes, all the supporting services are prefixed with "keycloak-".
|
|
||||||
|
|
||||||
|
|
||||||
## Serving
|
## Serving
|
||||||
|
|
||||||
### Launch KeyCloak stack
|
### Launch KeyCloak stack
|
||||||
|
|
||||||
Launch the OpenLDAP stack by running ```docker stack deploy keycloak -c <path -to-docker-compose.yml>```
|
Launch the KeyCloak stack by running ```docker stack deploy keycloak -c <path-to-docker-compose.yml>```
|
||||||
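Assuming you've saved the compose file as `keycloak.yml` (the path below is illustrative), deployment and a quick sanity-check look like this:

```
docker stack deploy keycloak -c /var/data/config/keycloak/keycloak.yml

# Confirm every service reports 1/1 replicas
docker stack services keycloak

# Tail the logs of a service (<stack>_<service> naming), e.g. the database:
docker service logs -f keycloak_keycloak-db
```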
|
|
||||||
Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in keycloak.env.
|
Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in `keycloak.env`.
|
||||||
|
|
||||||
### Integrating into OpenLDAP
|
|
||||||
|
|
||||||
KeyCloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_).
|
|
||||||
|
|
||||||
You'll need to have completed the [OpenLDAP](/recipes/openldap/) recipe
|
|
||||||
|
|
||||||
You start in the "Master" realm - but mouseover the realm name, to a dropdown box allowing you add an new realm:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
Enter a name for your new realm, and click "_Create_":
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
Once in the desired realm, click on **User Federation**, and click **Add Provider**. On the next page ("_Required Settings_"), set the following:
|
|
||||||
|
|
||||||
* **Edit Mode** : Writeable
|
|
||||||
* **Vendor** : Other
|
|
||||||
* **Connection URL** : ldap://openldap
|
|
||||||
* **Users DN** : ou=People,<your base DN\>
|
|
||||||
* **Authentication Type** : simple
|
|
||||||
* **Bind DN** : cn=admin,<your base DN\>
|
|
||||||
* **Bind Credential** : <your chosen admin password\>
|
|
||||||
|
|
||||||
Save your changes, and then navigate back to "User Federation" > Your LDAP name > Mappers:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
For each of the following mappers, click the name, and set the "_Read Only_" flag to "_Off_" (_this enables 2-way sync between KeyCloak and OpenLDAP_)
|
|
||||||
|
|
||||||
* last name
|
|
||||||
* username
|
|
||||||
* email
|
|
||||||
* first name
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
!!! important
|
!!! important
|
||||||
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
|
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
|
||||||
|
|
||||||
[](https://www.observe.global/)
|
[](https://www.observe.global/)
|
||||||
|
|
||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes
|
||||||
|
|
||||||
1. I wanted to be able to add multiple networks to KeyCloak (_i.e., a dedicated overlay network for LDAP authentication_), but the entrypoint used by the container produces an error when more than one network is configured. This could theoretically be corrected in future, with a PR, but the [GitHub repo](https://github.com/jboss-dockerfiles/keycloak) has no issues enabled, so I wasn't sure where to start.
|
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
|
||||||
|
|
||||||
### Your comments? 💬
|
|
||||||
|
|||||||
68
manuscript/recipes/keycloak/authenticate-against-openldap.md
Normal file
@@ -0,0 +1,68 @@
|
|||||||
|
# Authenticate KeyCloak against OpenLDAP
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
This is not a complete recipe - it's an **optional** component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
|
||||||
|
|
||||||
|
KeyCloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_). Note that OpenLDAP integration is **not necessary** if you want to use KeyCloak with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) - all you need for that is [local users](/recipes/keycloak/create-user/), and an [OIDC client](/recipes/keycloak/setup-oidc-provider/).
|
||||||
|
|
||||||
|
## Ingredients
|
||||||
|
|
||||||
|
!!! Summary
|
||||||
|
Existing:
|
||||||
|
|
||||||
|
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
|
||||||
|
|
||||||
|
New:
|
||||||
|
|
||||||
|
* [ ] An [OpenLDAP server](/recipes/openldap/) (*assuming you want to authenticate against it*)
|
||||||
|
|
||||||
|
## Preparation
|
||||||
|
|
||||||
|
You'll need to have completed the [OpenLDAP](/recipes/openldap/) recipe.
|
||||||
|
|
||||||
|
You start in the "Master" realm - mouseover the realm name to reveal a dropdown box, allowing you to add a new realm:
|
||||||
|
|
||||||
|
### Create Realm
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Enter a name for your new realm, and click "_Create_":
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Setup User Federation
|
||||||
|
|
||||||
|
Once in the desired realm, click on **User Federation**, and click **Add Provider**. On the next page ("_Required Settings_"), set the following:
|
||||||
|
|
||||||
|
* **Edit Mode** : Writeable
|
||||||
|
* **Vendor** : Other
|
||||||
|
* **Connection URL** : ldap://openldap
|
||||||
|
* **Users DN** : ou=People,<your base DN\>
|
||||||
|
* **Authentication Type** : simple
|
||||||
|
* **Bind DN** : cn=admin,<your base DN\>
|
||||||
|
* **Bind Credential** : <your chosen admin password\>
|
||||||
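Before saving, it's worth sanity-checking these values with `ldapsearch`, run from any container attached to the OpenLDAP overlay network (the base DN `dc=batcave,dc=com` is a placeholder - use your own):

```
# A successful bind and search here means KeyCloak's federation
# settings (Connection URL, Bind DN, Bind Credential, Users DN) are sane
ldapsearch -H ldap://openldap -x \
    -D "cn=admin,dc=batcave,dc=com" -w "your-admin-password" \
    -b "ou=People,dc=batcave,dc=com"
```

If the bind DN or credential is wrong, you'll get a clear error here, rather than a cryptic failure inside KeyCloak.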
|
|
||||||
|
Save your changes, and then navigate back to "User Federation" > Your LDAP name > Mappers:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
For each of the following mappers, click the name, and set the "_Read Only_" flag to "_Off_" (_this enables 2-way sync between KeyCloak and OpenLDAP_)
|
||||||
|
|
||||||
|
* last name
|
||||||
|
* username
|
||||||
|
* email
|
||||||
|
* first name
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
We've set up a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||||
|
|
||||||
|
!!! Summary
|
||||||
|
Created:
|
||||||
|
|
||||||
|
* [X] KeyCloak realm in read-write federation with [OpenLDAP](/recipes/openldap/) directory
|
||||||
|
|
||||||
|
## Chef's Notes 📓
|
||||||
38
manuscript/recipes/keycloak/create-user.md
Normal file
@@ -0,0 +1,38 @@
|
|||||||
|
# Create KeyCloak Users
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
|
||||||
|
|
||||||
|
Unless you plan to authenticate against an outside provider (*[OpenLDAP](/recipes/keycloak/authenticate-against-openldap/), for example*), you'll want to create some local users...
|
||||||
|
|
||||||
|
## Ingredients
|
||||||
|
|
||||||
|
!!! Summary
|
||||||
|
Existing:
|
||||||
|
|
||||||
|
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
|
||||||
|
|
||||||
|
### Create User
|
||||||
|
|
||||||
|
Within the "Master" realm (*no need for more realms yet*), navigate to **Manage** -> **Users**, and then click **Add User** at the top right:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Populate your new user's username (it's the only mandatory field):
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Set User Credentials
|
||||||
|
|
||||||
|
Once your user is created, to set their password, click on the "**Credentials**" tab, and proceed to reset it. Set the password to non-temporary, unless you like extra work!
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
We've set up users in KeyCloak, which we can now use to authenticate when KeyCloak is used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||||
|
|
||||||
|
!!! Summary
|
||||||
|
Created:
|
||||||
|
|
||||||
|
* [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/)
|
||||||
55
manuscript/recipes/keycloak/setup-oidc-provider.md
Normal file
@@ -0,0 +1,55 @@
|
|||||||
|
# Add OIDC Provider to KeyCloak
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
|
||||||
|
|
||||||
|
Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), we'll set up a client in KeyCloak...
|
||||||
|
|
||||||
|
## Ingredients
|
||||||
|
|
||||||
|
!!! Summary
|
||||||
|
Existing:
|
||||||
|
|
||||||
|
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
|
||||||
|
|
||||||
|
New:
|
||||||
|
|
||||||
|
* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information
|
||||||
|
|
||||||
|
## Preparation
|
||||||
|
|
||||||
|
### Create Client
|
||||||
|
|
||||||
|
Within the "Master" realm (*no need for more realms yet*), navigate to **Clients**, and then click **Create** at the top right:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Enter a name for your client (*remember, we're authenticating **applications** now, not users, so use an application-specific name*):
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Configure Client
|
||||||
|
|
||||||
|
Once your client is created, set at **least** the following, and click **Save**
|
||||||
|
|
||||||
|
* **Access Type** : Confidential
|
||||||
|
* **Valid Redirect URIs** : <The URIs you want to protect\>
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Retrieve Client Secret
|
||||||
|
|
||||||
|
Now that you've changed the access type, and clicked **Save**, an additional **Credentials** tab appears at the top of the window. Click on the tab, and capture the KeyCloak-generated secret. This secret, plus your client name, is required to authenticate against KeyCloak via OIDC.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
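As a quick smoke-test of your new client ID and secret, you can request a token directly from KeyCloak's token endpoint. This sketch assumes "*Service Accounts Enabled*" is turned on for the client, and uses a placeholder hostname and credentials:

```
# Exchange client credentials for an access token
curl -s -X POST \
    -d "grant_type=client_credentials" \
    -d "client_id=my-client" \
    -d "client_secret=paste-the-generated-secret-here" \
    https://keycloak.example.com/realms/master/protocol/openid-connect/token
```

A JSON response containing an `access_token` confirms the client ID and secret are valid.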
|
## Summary
|
||||||
|
|
||||||
|
We've set up an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC discovery URL provided by KeyCloak for the master realm is *https://<your-keycloak-url\>/realms/master/.well-known/openid-configuration*
|
||||||
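A quick way to confirm the OIDC provider is live is to fetch that discovery document and eyeball the endpoints it advertises (the hostname is a placeholder, and `jq` is optional pretty-printing):

```
# Should return JSON including authorization_endpoint and token_endpoint
curl -s https://keycloak.example.com/realms/master/.well-known/openid-configuration | jq .
```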
|
|
||||||
|
!!! Summary
|
||||||
|
Created:
|
||||||
|
|
||||||
|
* [X] Client ID and Client Secret used to authenticate against KeyCloak with OpenID Connect
|
||||||
|
|
||||||
|
## Chef's Notes 📓
|
||||||
@@ -264,7 +264,7 @@ To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod
|
|||||||
|
|
||||||
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database such as MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
|
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database such as MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -321,7 +321,7 @@ To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod
|
|||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -126,7 +126,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
|||||||
|
|
||||||
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -119,7 +119,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
|||||||
|
|
||||||
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
||||||
|
|
||||||
### Tip your waiter (donate)
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
||||||
|
|
||||||
|
|||||||
@@ -126,7 +126,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
|||||||
|
|
||||||
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -264,7 +264,7 @@ To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod
|
|||||||
|
|
||||||
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database such as MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
|
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database such as MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
### Tip your waiter (support me) 👏
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||||
|
|
||||||
|
|||||||
@@ -178,14 +178,8 @@ SSL_TYPE=letsencrypt
|
|||||||
|
|
||||||
Launch the mail server stack by running ```docker stack deploy docker-mailserver -c <path-to-docker-mailserver.yml>```
|
Launch the mail server stack by running ```docker stack deploy docker-mailserver -c <path-to-docker-mailserver.yml>```
|
||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes 📓
|
||||||
|
|
||||||
1. One of the elements of this design which I didn't appreciate at first is that since the config is entirely file-based, **setup.sh** can be run on any container host, provided it has the shared data mounted. This means that even though docker-mailserver was not designed with docker swarm in mind, it works perfectly with swarm. I.e., from any node, regardless of where the container is actually running, you're able to add/delete email addresses, view logs, etc.
|
1. One of the elements of this design which I didn't appreciate at first is that since the config is entirely file-based, **setup.sh** can be run on any container host, provided it has the shared data mounted. This means that even though docker-mailserver was not designed with docker swarm in mind, it works perfectly with swarm. I.e., from any node, regardless of where the container is actually running, you're able to add/delete email addresses, view logs, etc.
|
||||||
|
|
||||||
2. If you're using sieve with Rainloop, take note of the [workaround](https://discourse.geek-kitchen.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley)
|
2. If you're using sieve with Rainloop, take note of the [workaround](https://discourse.geek-kitchen.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley)
|
||||||
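As an illustration of note 1 above, managing mailboxes from any swarm node with the shared data mounted looks something like this (subcommand syntax per the docker-mailserver documentation; the address is made up):

```
# Create a mailbox (you'll be prompted for a password)
./setup.sh email add batman@batcave.com

# Confirm the mailbox exists
./setup.sh email list
```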
|
|
||||||
### Tip your waiter (donate)
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you!
|
|
||||||
|
|
||||||
### Your comments?
|
|
||||||
|
|||||||
@@ -116,12 +116,6 @@ Launch the MatterMost stack by running ```docker stack deploy mattermost -c <pat
|
|||||||
|
|
||||||
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
|
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
|
||||||
|
|
||||||
## Chef's Notes
|
## Chef's Notes 📓
|
||||||
|
|
||||||
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
|
||||||
|
|
||||||
### Tip your waiter (donate) 👏
|
|
||||||
|
|
||||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
|
||||||
|
|
||||||
### Your comments? 💬
|
|
||||||
|
|||||||