Experiment with PDF generation
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
docs/.gitpod.yml (new file)
@@ -0,0 +1,6 @@

# image: squidfunk/mkdocs-material
tasks:
  - init: pip install -r requirements.txt
ports:
  - port: 8000
    onOpen: open-preview
docs/community/code-of-conduct.md (new file)
@@ -0,0 +1,141 @@

---
title: Community Code of Conduct
description: We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
---

# Code of Conduct

Inspired by the leadership of other [great open source projects](https://www.contributor-covenant.org/adopters/), we've adopted the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/) (*below*).

Details regarding the implementation of the enforcement guidelines outlined below can be found in the pages detailing each of our community environments:

* [Discord](/community/discord/)
* Discourse (coming soon)
* GitHub (coming soon)

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at abuse@funkypenguin.co.nz.

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at <https://www.contributor-covenant.org/faq>. Translations are available at <https://www.contributor-covenant.org/translations>.
docs/community/contribute.md (new file)
@@ -0,0 +1,70 @@

---
title: How to contribute to Geek Cookbook
description: Loving the geeky recipes, and looking for a way to give back / get involved? It's not all coding - here are some ideas about the various ways you can be involved!
---

# Contribute

## Spread the word ❤️

Got nothing to contribute, but want to give back to the community? Here are some ideas:

1. Star :star: the [repo](https://github.com/geek-cookbook/geek-cookbook/)
2. Tweet :bird: the [meat](https://ctt.ac/Vl6mc)!

## Contributing moneyz 💰

Sponsor [your chef](https://github.com/sponsors/funkypenguin) :heart:, or [join us](/#sponsored-projects) in supporting the open-source projects we enjoy!

## Contributing bugfixorz 🐛

Found a typo / error in a recipe? Each recipe includes a link to make the fix directly on GitHub:

{ loading=lazy }

Click the link to edit the recipe in Markdown format, and save to create a pull request!

Here's a [113-second video](https://static.funkypenguin.co.nz/how-to-contribute-to-geek-cookbook-quick-pr.mp4) illustrating the process!

## Contributing recipes 🎁

Want to contribute an entirely new recipe? Awesome!

For the best experience, start by [creating an issue](https://github.com/geek-cookbook/geek-cookbook/issues/) in the repo (*check whether an existing issue for this recipe exists too!*). Populating the issue template will flesh out the requirements for the recipe, and having the new recipe pre-approved will avoid wasted effort if the recipe _doesn't_ meet the requirements for addition, for some reason (*i.e., if it's been superseded by an existing recipe*).

Once your issue has been reviewed and approved, start working on a PR using either GitHub Codespaces or local dev (below). As soon as you're ready to share your work, create a WIP PR, so that a preview URL will be generated. Iterate on your PR, marking it as ready for review when it's ... ready :grin:

### 🏆 GitPod

GitPod (free up to 50h/month) is by far the smoothest and most slick way to edit the cookbook. Click [here](https://gitpod.io/#https://github.com/geek-cookbook/geek-cookbook) to launch a browser-based editing session! 🥷

### 🥈 GitHub Codespaces

[GitHub Codespaces](https://github.com/features/codespaces) (_no longer free now that it's out of beta_) provides a browser-based VSCode interface, pre-configured for your development environment. For no-hassle contributions to the cookbook with realtime previews, visit the [repo](https://github.com/geek-cookbook/geek-cookbook), and when clicking the download button (*where you'd usually get the URL to clone a repo*), click on "**Open with Codespaces**" instead:

{ loading=lazy }

You'll shortly be dropped into the VSCode interface, with mkdocs/material pre-installed and running. Any changes you make are auto-saved (*there's no "Save" button*), and available in the port-forwarded preview within seconds:

{ loading=lazy }

Once happy with your changes, drive VSCode as normal to create a branch, commit, push, and create a pull request. You can also abandon the browser window at any time, and return later to pick up where you left off (*even on a different device!*)

### 🥉 Editing locally

The process is basically:

1. [Fork the repo](https://help.github.com/en/github/getting-started-with-github/fork-a-repo)
2. Clone your forked repo locally
3. Make a new branch for your recipe (*not strictly necessary, but it helps to differentiate multiple in-flight recipes*)
4. Create your new recipe as a markdown file within the existing structure of the [manuscript folder](https://github.com/geek-cookbook/geek-cookbook/tree/master/manuscript)
5. Add your recipe to the navigation by editing [mkdocs.yml](https://github.com/geek-cookbook/geek-cookbook/blob/master/mkdocs.yml#L32)
6. Test locally by running `./scripts/serve.sh` in the repo folder (*this launches a preview in Docker*), and navigating to <http://localhost:8123>
7. Rinse and repeat until you're ready to submit a PR
8. Create a pull request via the GitHub UI
9. The pull request will trigger the creation of a preview environment, as illustrated below. Use the deploy preview to confirm that your recipe is as tasty as possible!
{ loading=lazy }
## Contributing skillz 💪

Got mad skillz, but neither the time nor inclination for recipe-cooking? [Scan the GitHub contributions page](https://github.com/geek-cookbook/geek-cookbook/contribute), [Discussions](https://github.com/geek-cookbook/geek-cookbook/discussions), or jump into [Discord](/community/discord/) or [Discourse](/community/discourse/), and help your fellow geeks with their questions, or just hang out and bump up our member count!
docs/community/discord.md (new file)
@@ -0,0 +1,103 @@

---
title: Geek out with Funky Penguin's Discord Server
description: The most realtime and exciting way to engage with our geeky community is in our Discord server!
icon: material/discord
---

# Discord

The most realtime and exciting way to engage with our geeky community is in our [Discord server](http://chat.funkypenguin.co.nz)

<!-- markdownlint-disable MD033 -->
<iframe src="https://e.widgetbot.io/channels/396055506072109067/456689991326760973" height="600" width="800"></iframe>

!!! question "Eh? What's a Discord? Get off my lawn, young whippersnappers!!"
    Yeah, I know. I also thought Discord was just for the gamer kids, but it turns out it's great for a geeky community. Why? [Let me elucidate ya.](https://www.youtube.com/watch?v=1qHoSWxVqtE)..

    1. Native markdown for code blocks
    2. Drag-drop screenshots
    3. Costs nothing, no ads
    4. Mobile notifications are reliable, individual channels can be muted, etc.

## How do I join the Discord server?

1. Create [an account](https://discordapp.com)
2. [Join the geek party](http://chat.funkypenguin.co.nz)!

## Code of Conduct

With the goal of creating a safe and inclusive community, we've adopted the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/), as described [here](/community/code-of-conduct/).

### Reporting abuse

To report a violation of our code of conduct in our Discord server, type `!report <thing to report>` in any channel.

Your report message will immediately be deleted from the channel, and an alert raised to moderators, who will address the issue as detailed in the [enforcement guidelines](/community/code-of-conduct/#enforcement-guidelines).

## Channels

### 📔 Information

| Channel Name       | Channel Use                                                |
|--------------------|------------------------------------------------------------|
| #announcements     | Used for important announcements                           |
| #changelog         | Used for major changes to the cookbook (to be deprecated)  |
| #cookbook-updates  | Updates on all pushes to the master branch of the cookbook |
| #premix-updates    | Updates on all pushes to the master branch of the premix   |
| #discourse-updates | Updates to Discourse topics                                |

### 💬 Discussion

| Channel Name     | Channel Use                                                                                |
|------------------|--------------------------------------------------------------------------------------------|
| #introductions   | New? Pop in here and say hi :)                                                             |
| #general         | General chat - anything goes                                                               |
| #cookbook        | Discussions specifically around the cookbook and recipes                                   |
| #kubernetes      | Discussions about Kubernetes                                                               |
| #docker-swarm    | Discussions about Docker Swarm                                                             |
| #today-i-learned | Post tips/tricks you've stumbled across                                                    |
| #jobs            | For seeking / advertising jobs, bounties, projects, etc                                    |
| #advertisements  | In here you can advertise your stream, services or websites, at a limit of 2 posts per day |
| #dev             | Used for collaboration around current development                                          |

### Suggestions

| Channel Name | Channel Use                         |
|--------------|-------------------------------------|
| #in-flight   | A list of all suggestions in-flight |
| #completed   | A list of completed suggestions     |

### Music

| Channel Name     | Channel Use                       |
|------------------|-----------------------------------|
| #music           | DJs go here to control music      |
| #listen-to-music | Jump in here to rock out to music |

## How to get help

If you need assistance at any time, there are a few commands that you can run in order to get help.

`!help` Shows help content.

`!faq` Shows frequently asked questions.

## Spread the love (inviting others)

Invite your co-geeks to Discord by:

1. Sending them a link to <http://chat.funkypenguin.co.nz>, or
2. Right-clicking on the Discord server name and clicking "Invite People"

## Formatting your message

Discord supports minimal message formatting using [markdown](https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline-).

!!! tip "Editing your most recent message"
    You can edit your most-recent message by pressing the up arrow, making your edits, and then pressing `Enter` to save!

## How do I suggest something?

1. Find the #completed channel (*under the **Suggestions** category*), and confirm that your suggestion hasn't already been voted on.
2. Find the #in-flight channel (*also under **Suggestions***), and confirm that your suggestion isn't already in-flight (*but not completed yet*)
3. In any channel, type `!suggest [your suggestion goes here]`. A post will be created in #in-flight for other users to vote on your suggestion. Suggestions change color as more users vote on them.
4. When your suggestion is completed (*or a decision has been made*), you'll receive a DM from carl-bot
docs/community/discourse.md (new file)
@@ -0,0 +1,8 @@

---
title: Let's discourse together about geeky subjects
description: Funky Penguin's Discourse Forums serve our geeky community, and consolidate comments and discussion from either the Geek Cookbook or the blog.
---

# Discourse

If you're not into the new-fangled microblogging of Mastodon, or the realtime chatting of Discord, you can still party with us like it's 2001, using our Discourse forums (*this is also how all the recipe comments work*).
docs/community/github.md (new file)
@@ -0,0 +1,3 @@

# GitHub

You've found an intentionally un-linked page! This page is under construction, and will be up shortly. In the meantime, head to <https://github.com/geek-cookbook/geek-cookbook>!
docs/community/index.md (new file)
@@ -0,0 +1,17 @@

---
title: Funky Penguin's Geeky Communities
description: Engage with your fellow geeks, wherever they may be!
---

# Geek Community

Looking for friends / compatriots?

Find details about our communities below:

* [Discord](/community/discord/) - Realtime chat, multiple channels
* [Reddit](/community/reddit/) - Geek out old-skool
* [Mastodon](/community/mastodon/) - Federated, open-source microblogging platform
* [Discourse](https://forum.funkypenguin.co.nz) - Forums - asynchronous communication
* [GitHub](https://github.com/funkypenguin/) - Issues and PRs
* [Facebook](https://www.facebook.com/funkypenguinnz/) - Social networking for old-timers!
docs/community/mastodon.md (new file)
@@ -0,0 +1,47 @@

---
title: Join our geeky, Docker/Kubernetes-flavored Mastodon instance
description: Looking for your geeky niche in the "fediverse"? Join our Mastodon instance!
icon: material/mastodon
---

# Toot me up, buttercup!

Mastodon is a self-hosted / open-source microblogging platform (*heavily inspired by Twitter*), which supports federation, rather than centralization. Like email, any user on any Mastodon instance can follow, "toot" (*not tweet!*), and reply to any user on any *other* instance.

Our community Mastodon server is sooo [FKNY](https://so.fnky.nz/web/directory), but if you're already using Mastodon on another server (*or your [own instance][mastodon]*), you can seamlessly interact with us from there too, thanks to the magic of federation!

!!! question "This is dumb, there's nobody here"

    * Give it time. The first time you get a federated reply from someone on another instance, it just "clicks" (*at least, it did for me*)
    * Follow some folks (I'm [funkypenguin@so.fnky.nz](https://so.fnky.nz/@funkypenguin))
    * Install a [mobile client](https://joinmastodon.org/apps)

## How do I Mastodon?

1. If you're a [sponsor][github_sponsor], check the special `#mastodon` channel in [Discord][discord]. You'll find a super-sekrit invite URL which will get you set up *instantly*
2. If you're *not* a sponsor, go to [FNKY](https://so.fnky.nz), and request an invite (*invites must be approved to prevent abuse*) - mention your [Discord][discord] username in the "Why do you want to join?" question.
3. Start tootin'!

## So who do I follow?

That... is tricky. There's no big-tech algorithm to suggest friends based on previously-collected browsing / tracking data; you'll have to build your "social graph" from the ground up, one "brick" at a time. Start by following [me](https://so.fnky.nz/@funkypenguin). Here are some more helpful links:

* [A self-curated list of accounts, sorted by theme](https://communitywiki.org/trunk)
* [Reddit thread #1](https://www.reddit.com/r/Mastodon/comments/enr4ud/who_to_follow_on_mastodon/)
* [Reddit thread #2](https://www.reddit.com/r/Mastodon/comments/p6vpvq/wanted_positive_mastodon_accounts_to_follow/)
* [Reddit thread #3](https://www.reddit.com/r/Mastodon/comments/s0ly2r/new_user_how_do_i_find_people_to_follow/)
* [Reddit thread #4](https://www.reddit.com/r/Mastodon/comments/ublg4q/is_it_possible_to_follow_accounts_from_different/)
* [Masto.host's FAQ on finding people to follow](https://masto.host/finding-people-to-follow-on-mastodon/)

## Code of Conduct

With the goal of creating a safe and inclusive community, we've adopted the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/), as described [here](/community/code-of-conduct/). This code of conduct applies to the Mastodon server.

### Reporting abuse

To report a violation of our code of conduct in our Mastodon server, use Mastodon's "report" function, as illustrated below:

{ loading=lazy }

Moderators will be alerted to your report, and will address the issue as detailed in the [enforcement guidelines](/community/code-of-conduct/#enforcement-guidelines).

--8<-- "common-links.md"
docs/community/reddit.md (new file)
@@ -0,0 +1,26 @@

---
title: Funky Penguin's Subreddit
description: If you're a redditor, jump on over to our subreddit at https://www.reddit.com/r/funkypenguin to engage / share the latest!
icon: material/reddit
---

# Reddit

If you're a redditor, jump on over to our subreddit ([r/funkypenguin](https://www.reddit.com/r/funkypenguin/)) to engage / share the latest!

## How do I join the subreddit?

1. If you're not already a member, [create](https://www.reddit.com/register/) a Reddit account
2. [Subscribe](https://www.reddit.com/r/funkypenguin/) to r/funkypenguin

## Code of Conduct

With the goal of creating a safe and inclusive community, we've adopted the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/), as described [here](/community/code-of-conduct/).

### Reporting abuse

To report a violation of our code of conduct in our subreddit, use the "Report" button as illustrated below:

{ loading=lazy }

The reported message will be highlighted to moderators, who will address the issue as detailed in the [enforcement guidelines](/community/code-of-conduct/#enforcement-guidelines).
docs/docker-swarm/authelia.md (new file)
@@ -0,0 +1,280 @@

---
title: Using Authelia to secure services in Docker
description: Authelia is an open-source authentication and authorization server providing 2-factor authentication and single sign-on (SSO) for your applications via a web portal.
---

# Authelia in Docker Swarm

[Authelia](https://github.com/authelia/authelia) is an open-source authentication and authorization server providing 2-factor authentication and single sign-on (SSO) for your applications via a web portal. Like [Traefik Forward Auth][tfa], Authelia acts as a companion of reverse proxies like Nginx, [Traefik](/docker-swarm/traefik/), or HAProxy, letting them know whether queries should pass through. Unauthenticated users are redirected to the Authelia sign-in portal instead. Authelia is a popular alternative to a heavyweight such as [KeyCloak][keycloak].

{ loading=lazy }

Features include:

* Multiple two-factor methods, such as:
    * [Physical Security Key](https://www.authelia.com/docs/features/2fa/security-key) (Yubikey)
    * OTP using Google Authenticator
    * Mobile notifications
* Lockout of users after too many failed login attempts
* Highly customizable access control, using rules to match criteria such as subdomain, username, the groups the user is in, and network
* Authelia [Community](https://discord.authelia.com/) support
* The full list of features can be viewed [here](https://www.authelia.com/docs/features/)

## Authelia requirements

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    * [X] [Traefik](/docker-swarm/traefik/) configured per design

    New:

    * [ ] DNS entry for your auth host (*"authelia.yourdomain.com" is a good choice*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

### Setup data locations

First, we create a directory to hold the data which Authelia will serve:

```bash
mkdir /var/data/config/authelia
```

### Create Authelia config file

Authelia configurations are defined in `/var/data/config/authelia/configuration.yml`. Some are required and some are optional. The following is a variation of the default example config file. Optional configuration settings can be viewed in [Authelia's documentation](https://www.authelia.com/docs/configuration/)

!!! warning
    Your variables may vary significantly from what's illustrated below, and it's best to read up and understand exactly what each option does.

```yaml title="/var/data/config/authelia/configuration.yml"
###############################################################
#                   Authelia configuration                    #
###############################################################

server:
  host: 0.0.0.0
  port: 9091

log:
  level: warn

# This secret can also be set using the env variable AUTHELIA_JWT_SECRET_FILE
# I used this site to generate the secret: https://www.grc.com/passwords.htm
jwt_secret: SECRET_GOES_HERE

# https://docs.authelia.com/configuration/miscellaneous.html#default-redirection-url
default_redirection_url: https://authelia.example.com

totp:
  issuer: authelia.example.com
  period: 30
  skew: 1

authentication_backend:
  file:
    path: /config/users_database.yml
    # customize passwords based on https://docs.authelia.com/configuration/authentication/file.html
    password:
      algorithm: argon2id
      iterations: 1
      salt_length: 16
      parallelism: 8
      memory: 1024 # blocks this much of the RAM. Tune this.

# https://docs.authelia.com/configuration/access-control.html
access_control:
  default_policy: one_factor
  rules:
    - domain: "bitwarden.example.com"
      policy: two_factor

    - domain: "whoami-authelia-2fa.example.com"
      policy: two_factor

    - domain: "*.example.com" # (1)!
      policy: one_factor

session:
  name: authelia_session
  # This secret can also be set using the env variable AUTHELIA_SESSION_SECRET_FILE
  # Used a different secret, but the same site as jwt_secret above.
  secret: SECRET_GOES_HERE
  expiration: 3600 # 1 hour
  inactivity: 300 # 5 minutes
  domain: example.com # Should match whatever your root protected domain is

regulation:
  max_retries: 3
  find_time: 120
  ban_time: 300

storage:
  encryption_key: SECRET_GOES_HERE_20_CHARACTERS_OR_LONGER
  local:
    path: /config/db.sqlite3

notifier:
  # smtp:
  #   username: SMTP_USERNAME
  #   # This secret can also be set using the env variable AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
  #   # password: # use docker secret file instead AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
  #   host: SMTP_HOST
  #   port: 587 #465
  #   sender: batman@example.com # customize for your setup

  # For testing purposes, notifications can be sent to a file. Be sure to map the volume in docker-compose.
  filesystem:
    filename: /config/notification.txt
```

1. The wildcard rule must go last, since the first rule to match the request wins

### Create Authelia user accounts

Create `/var/data/config/authelia/users_database.yml`; this is where we create user accounts and assign them to groups:

```yaml title="/var/data/config/authelia/users_database.yml"
# To create a hashed password you can run the following command:
# docker run authelia/authelia:latest authelia hash-password YOUR_PASSWORD
users:
  batman: # each new user should be defined in a dictionary like this
    displayname: "Batman"
    # replace this with your hashed password. This one, for the purposes of testing, is "password"
    password: "$argon2id$v=19$m=65536,t=3,p=4$cW1adlh3UjhIRE9zSmZyZw$xA4S2X8BjE7LVb4NndJCZnoyHgON5w3FopO4vw5AQxE"
    email: batman@example.com
    groups:
      - admins
      - dev
```

To create a hashed password, you can run the following command:
`docker run authelia/authelia:latest authelia hash-password YOUR_PASSWORD`

### Authelia Docker Swarm config

Create a docker swarm config file in docker-compose syntax (v3), something like this example:
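Note that the `whoami` routers in this stack attach an `authelia` middleware (*via `traefik.http.routers.<name>.middlewares=authelia`*). If that middleware isn't already defined elsewhere in your Traefik setup, you'd declare it with forwardAuth labels along these lines (*a sketch based on the Traefik v1 forward-auth labels used in this recipe; the hostname `authelia.example.com` is illustrative*):

```yaml
# Hypothetical Traefik v2 labels defining the "authelia" forwardAuth middleware,
# e.g. added alongside the authelia service's other deploy labels:
- "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://authelia.example.com/"
- "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
- "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"
```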
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml title="/var/data/config/authelia/authelia.yml"
version: "3.2"

services:
  authelia:
    image: authelia/authelia
    volumes:
      - /var/data/config/authelia:/config
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:authelia.example.com
        - traefik.port=80
        - 'traefik.frontend.auth.forward.address=http://authelia:9091/api/verify?rd=https://authelia.example.com/'
        - 'traefik.frontend.auth.forward.trustForwardHeader=true'
        - 'traefik.frontend.auth.forward.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'

        # traefikv2
        - "traefik.http.routers.authelia.rule=Host(`authelia.example.com`)"
        - "traefik.http.routers.authelia.entrypoints=https"
        - "traefik.http.services.authelia.loadbalancer.server.port=9091"

  whoami-1fa: # (1)!
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik
        - "traefik.enable=true"
        - "traefik.docker.network=traefik_public"

        # traefikv1
        - "traefik.frontend.rule=Host:whoami-authelia-1fa.example.com"
        - traefik.port=80
        - 'traefik.frontend.auth.forward.address=http://authelia:9091/api/verify?rd=https://authelia.example.com/'
        - 'traefik.frontend.auth.forward.trustForwardHeader=true'
        - 'traefik.frontend.auth.forward.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'

        # traefikv2
        - "traefik.http.routers.whoami-authelia-1fa.rule=Host(`whoami-authelia-1fa.example.com`)"
        - "traefik.http.routers.whoami-authelia-1fa.entrypoints=https"
        - "traefik.http.routers.whoami-authelia-1fa.middlewares=authelia"
        - "traefik.http.services.whoami-authelia-1fa.loadbalancer.server.port=80"

  whoami-2fa: # (2)!
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik
        - "traefik.enable=true"
        - "traefik.docker.network=traefik_public"

        # traefikv1
        - "traefik.frontend.rule=Host:whoami-authelia-2fa.example.com"
        - traefik.port=80
        - 'traefik.frontend.auth.forward.address=http://authelia:9091/api/verify?rd=https://authelia.example.com/'
        - 'traefik.frontend.auth.forward.trustForwardHeader=true'
        - 'traefik.frontend.auth.forward.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'

        # traefikv2
        - "traefik.http.routers.whoami-authelia-2fa.rule=Host(`whoami-authelia-2fa.example.com`)"
        - "traefik.http.routers.whoami-authelia-2fa.entrypoints=https"
        - "traefik.http.routers.whoami-authelia-2fa.middlewares=authelia"
        - "traefik.http.services.whoami-authelia-2fa.loadbalancer.server.port=80"

networks:
  traefik_public:
    external: true
```

1. Optionally used to test 1FA authentication
2. Optionally used to test 2FA authentication
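Note that the `middlewares=authelia` labels in the traefikv2 sections reference an `authelia` forwardAuth middleware, which Traefik v2 expects to be defined somewhere. If your Traefik setup doesn't already define it, a hedged sketch of labels which would do so (*attached, for example, to the authelia service itself; the middleware name and address are assumptions based on the traefikv1 labels above*):

```yaml
# Sketch only: defines the "authelia" forwardAuth middleware for Traefik v2
- "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://authelia.example.com/"
- "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
- "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"
```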
!!! question "Why not just use Traefik Forward Auth?"
    While [Traefik Forward Auth][tfa] is a very lightweight, minimal authentication layer which provides OIDC-based authentication, Authelia provides more features, such as multiple methods of authentication (*Hardware, OTP, Email*), advanced rules, and push notifications.

## Run Authelia

Launch the Authelia stack by running ```docker stack deploy authelia -c <path-to-docker-compose.yml>```
### Test Authelia

To test that the service works, try logging into Authelia itself first, as a user whose password you've set up in `/var/data/config/authelia/users_database.yml`.

You'll notice that upon successful login, you're requested to set up 2FA. If (*like me!*) you didn't configure an SMTP server, you can still set up 2FA (*TOTP or webauthn*), and the setup link email instructions can be found in `/var/data/config/authelia/notification.txt`.

Now you're ready to test 1FA and 2FA auth, against the two "whoami" services defined in the docker-compose file.

Try to access each in turn, and confirm that you're _not_ prompted for 2FA on whoami-authelia-1fa, but you _are_ prompted for 2FA on whoami-authelia-2fa! :thumbsup:
## Summary

What have we achieved? By adding a simple label to any service, we can secure that service behind Authelia, with minimal processing/handling overhead, and benefit from the multi-layered 1FA/2FA features provided by Authelia.

!!! summary "Summary"
    Created:

    * [X] Authelia configured and available to provide a layer of authentication to other services deployed in the stack

### Authelia vs Keycloak

[KeyCloak][keycloak] is the "big daddy" of self-hosted authentication platforms - it has a beautiful GUI, and a very advanced and mature featureset. Like Authelia, KeyCloak can [use an LDAP server](/recipes/keycloak/authenticate-against-openldap/) as a backend, but _unlike_ Authelia, KeyCloak allows for 2-way sync with that LDAP backend, meaning KeyCloak can be used to _create_ and _update_ the LDAP entries (*Authelia only performs one-way LDAP lookups - you'll need another tool to actually administer your LDAP database*).

[^1]: The initial inclusion of Authelia was due to the efforts of @bencey in Discord (Thanks Ben!)

--8<-- "recipe-footer.md"
97
docs/docker-swarm/design.md
Normal file
@@ -0,0 +1,97 @@
---
title: Design a secure, scalable Docker Swarm
description: Presenting a Docker Swarm design to create your own container-hosting platform, which is highly-available, scalable, portable, secure and automated! 💪
---

# Highly Available Docker Swarm Design

In the design described below, our "private cloud" platform is:

* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resources or capacity as required_)
* **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
* **Secure** (_access protected with [LetsEncrypt certificates](/docker-swarm/traefik/) and optional [OIDC with 2FA](/docker-swarm/traefik-forward-auth/)_)
* **Automated** (_requires minimal care and feeding_)
## Design Decisions

**Where possible, services will be highly available.**

This means that:

* At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
* [Ceph](/docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.

!!! note
    An exception to the 3-node decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.
**Where multiple solutions to a requirement exist, preference will be given to the most portable solution.**

This means that:

* Services are defined using docker-compose v3 YAML syntax
* Services are portable, meaning a particular stack could be shut down and moved to a new provider with minimal effort.
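For illustration, such a portable stack definition might look like this (*a hypothetical minimal sketch - service and image names are placeholders*):

```yaml
version: "3"

services:
  app:
    image: example/app:latest # placeholder image
    networks:
      - internal
    volumes:
      # persistent data lives under /var/data (shared storage),
      # so the stack can move hosts or providers
      - /var/data/app:/data

networks:
  internal:
    driver: overlay
```

Since nothing here is provider-specific, the same file deploys unchanged on any swarm with `docker stack deploy app -c app.yml`.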
## Security

Under this design, the only inbound connections we're permitting to our docker swarm in a **minimal** configuration (*you may add custom services later, like UniFi Controller*) are:

### Network Flows

* **HTTP (TCP 80)** : Redirects to https
* **HTTPS (TCP 443)** : Serves individual docker containers via SSL-encrypted reverse proxy

### Authentication

* Where the hosted application provides a trusted level of authentication (*e.g., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*e.g. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
* Where the hosted application provides inadequate (*e.g. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*e.g. [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required.
## High availability

### Normal function

Assuming a 3-node configuration, under normal circumstances the following applies:

* All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
* All 3 nodes participate in the Docker Swarm as managers.
* The various containers belonging to the application "stacks" deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
* Persistent storage for the containers is provided via a CephFS mount.
* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then onto the target backend.
### Node failure

In the case of a failure (*or scheduled maintenance*) of one of the nodes, the following occurs:

* The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
* The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
* The (*possibly new*) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
### Node restore

When the failed (*or upgraded*) host is restored to service, the following happens:

* Ceph regains full redundancy
* Docker Swarm managers become aware of the recovered node, and will use it for scheduling **new** containers
* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy
### Total cluster failure

A day after writing this, my environment suffered a fault whereby all 3 VMs were unexpectedly and simultaneously powered off.

Upon restore, docker failed to start on one of the VMs due to a local disk space issue[^1]. However, the other two VMs started, established the swarm, mounted their shared storage, and started up all the containers (*services*) which were managed by the swarm.

In summary, although I suffered an **unplanned power outage to all of my infrastructure**, followed by a **failure of a third of my hosts**... ==all my platforms are 100% available[^1] with **absolutely no manual intervention**==.

[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

--8<-- "recipe-footer.md"
183
docs/docker-swarm/docker-swarm-mode.md
Normal file
@@ -0,0 +1,183 @@
---
title: Enable Docker Swarm mode
description: For truly highly-available services with Docker containers, Docker Swarm is the simplest way to achieve redundancy, such that a single docker host could be turned off, and none of our services will be interrupted.
---

# Docker Swarm Mode

For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (*as defined as of Docker 1.13*) is the simplest way to achieve redundancy, such that a single docker host could be turned off, and none of our services will be interrupted.
## Ingredients

!!! summary
    Existing:

    * [X] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Preparation

### Bash auto-completion

Add some handy bash auto-completion for docker. Without this, you'll get annoyed that you can't autocomplete ```docker stack deploy <blah> -c <blah.yml>``` commands.

```bash
cd /etc/bash_completion.d/
curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
```

Install some useful bash aliases on each host:

```bash
cd ~
curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
```
## Serving

### Release the swarm!

Now, to launch a swarm. Pick a target node, and run `docker swarm init`.

Yeah, that was it. Seriously. Now we have a 1-node swarm.

```bash
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
    202.170.164.47:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@ds1 ~]#
```

Run `docker node ls` to confirm that you have a 1-node swarm:
```bash
[root@ds1 ~]# docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 *   ds1.funkypenguin.co.nz   Ready    Active         Leader
[root@ds1 ~]#
```

Note that when you ran `docker swarm init` above, the CLI output gave you a command to run to join further nodes to the swarm. This command would join the nodes as __workers__ (*as opposed to __managers__*). Workers can easily be promoted to managers (*and demoted again*), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
On the first swarm node, generate the necessary token to join another manager by running ```docker swarm join-token manager```:

```bash
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
    202.170.164.47:2377

[root@ds1 ~]#
```

Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of ```docker node ls``` (*on either host*) should reflect all the nodes:
```bash
[root@ds2 davidy]# docker node ls
ID                            HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8     ds1.funkypenguin.co.nz   Ready    Active         Leader
xmw49jt5a1j87a6ihul76gbgy *   ds2.funkypenguin.co.nz   Ready    Active         Reachable
[root@ds2 davidy]#
```
### Setup automated cleanup

Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you'll soon find yourself losing gigabytes of disk space to old, unused images.

To address this, we'll run the "[meltwater/docker-cleanup](https://github.com/meltwater/docker-cleanup)" container on all of our nodes. The container will clean up unused images after 30 minutes.

First, create `docker-cleanup.env` (_mine is under `/var/data/config/docker-cleanup`_), and exclude container images we **know** we want to keep:

```bash
KEEP_IMAGES=traefik,keepalived,docker-mailserver
DEBUG=1
```

Then create a docker-compose.yml as per the following example:
```yaml
version: "3"

services:
  docker-cleanup:
    image: meltwater/docker-cleanup:latest
    env_file: /var/data/config/docker-cleanup/docker-cleanup.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker:/var/lib/docker
    networks:
      - internal
    deploy:
      mode: global

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.0.0/24
```

--8<-- "reference-networks.md"

Launch the cleanup stack by running ```docker stack deploy docker-cleanup -c <path-to-docker-compose.yml>```
### Setup automatic updates

If your swarm runs for a long time, you might find yourself running older container images, after newer versions have been released. If you're the sort of geek who wants to live on the edge, configure [shepherd](https://github.com/djmaze/shepherd) to auto-update your container images regularly.

Create `/var/data/config/shepherd/shepherd.env` as per the following example:

```bash
# Don't auto-update Plex or Emby (or Jellyfin), I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
BLACKLIST_SERVICES="plex_plex emby_emby jellyfin_jellyfin"
# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
SLEEP_TIME=86400
```

Then create `/var/data/config/shepherd/shepherd.yml` as per the following example:
```yaml
version: "3"

services:
  shepherd-app:
    image: mazzolino/shepherd
    env_file: /var/data/config/shepherd/shepherd.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      - "traefik.enable=false"
    deploy:
      placement:
        constraints: [node.role == manager]
```

Launch shepherd by running ```docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml```, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container on the latest available image.
## Summary

--8<-- "5-min-install.md"

What have we achieved?

!!! summary "Summary"
    Created:

    * [X] [Docker swarm cluster](/docker-swarm/design/)

--8<-- "recipe-footer.md"
29
docs/docker-swarm/index.md
Normal file
@@ -0,0 +1,29 @@
---
title: Why use Docker Swarm?
description: Using Docker Swarm to build your own container-hosting platform which is highly-available, scalable, portable, secure and automated! 💪
---

# Why Docker Swarm?

Pop quiz, hotshot... There's a server with containers on it. Once you run enough containers, you start to lose track of compose files / data. If the host fails, all your services are unavailable. What do you do? **WHAT DO YOU DO**?[^1]

<iframe width="560" height="315" src="https://www.youtube.com/embed/Ug2hLQv6WeY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

You too, action-geek, can save the day, by...

1. Enabling [Docker Swarm mode](/docker-swarm/docker-swarm-mode/) (*even just on one node*)[^2]
2. Storing your swarm configuration and application data in an [orderly and consistent structure](/reference/data_layout/)
3. Exposing all your services consistently using [Traefik](/docker-swarm/traefik/) with optional [additional per-service authentication][tfa]

Then you can really level-up your geek-fu, by:

4. Making your Docker Swarm highly available with [keepalived](/docker-swarm/keepalived/)
5. Setting up [shared storage](/docker-swarm/shared-storage-ceph/) to eliminate SPOFs
6. [Backing up](/recipes/duplicity/) your stuff automatically

Ready to enter the matrix? Jump in on one of the links above, or start reading the [design](/docker-swarm/design/).

--8<-- "recipe-footer.md"

[^1]: This was an [iconic movie](https://www.imdb.com/title/tt0111257/). It even won 2 Oscars! (*but not for the acting*)
[^2]: There are significant advantages to using Docker Swarm, even on just a single node.
91
docs/docker-swarm/keepalived.md
Normal file
@@ -0,0 +1,91 @@
---
title: Make docker swarm HA with keepalived
description: While having a self-healing, scalable docker swarm is great for availability and scalability, none of that is worth a sausage if nobody can connect to your cluster!
---

# Keepalived

While having a self-healing, scalable docker swarm is great for availability and scalability, none of that is worth a sausage if nobody can connect to your cluster!

In order to provide seamless external access to clustered resources, regardless of which node they're on and tolerant of node failure, you need to present a single IP to the world for external access.

Normally this is done using an HA loadbalancer, but since Docker Swarm already provides the load-balancing capabilities (*[routing mesh](https://docs.docker.com/engine/swarm/ingress/)*), all we need for seamless HA is a virtual IP, which will be provided by more than one docker node.

This is accomplished with the use of keepalived on at least two nodes.
## Ingredients

!!! summary "Ingredients"
    Already deployed:

    * [X] At least 2 x swarm nodes
    * [X] Low-latency link (*i.e., no WAN links*)

    New:

    * [ ] At least 3 x IPv4 addresses (*one for each node, and one for the virtual IP*)[^1]
## Preparation

### Enable IPVS module

On all nodes which will participate in keepalived, we need the "ip_vs" kernel module, in order to permit services to bind to non-local interface addresses.

Set this up once-off for both the primary and secondary nodes, by running:

```bash
echo "modprobe ip_vs" >> /etc/modules
modprobe ip_vs
```
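Note that appending to `/etc/modules` is a Debian/Ubuntu convention; on systemd-based distros which lack that file (*e.g. CentOS*), an equivalent sketch using `systemd-modules-load` (*the `.conf` filename is arbitrary*):

```
# /etc/modules-load.d/ip_vs.conf - loaded at boot by systemd-modules-load
ip_vs
```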
### Setup nodes

Assuming your IPs are as per the following example:

- 192.168.4.1 : Primary
- 192.168.4.2 : Secondary
- 192.168.4.3 : Virtual

Run the following on the primary:

```bash
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
  -e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
  -e KEEPALIVED_PRIORITY=200 \
  osixia/keepalived:2.0.20
```

And on the secondary[^2]:

```bash
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
  -e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
  -e KEEPALIVED_PRIORITY=100 \
  osixia/keepalived:2.0.20
```
## Serving

That's it. Each node will talk to the other via unicast (*no need to un-firewall multicast addresses*), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.

## Summary

What have we achieved?

!!! summary "Summary"
    Created:

    * [X] A Virtual IP to which all cluster traffic can be forwarded externally, making it "*Highly Available*"

--8<-- "5-min-install.md"

[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.

--8<-- "recipe-footer.md"
80
docs/docker-swarm/nodes.md
Normal file
@@ -0,0 +1,80 @@
---
title: Setup nodes for docker-swarm
description: Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on.
---

# Nodes

Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on.

!!! note
    In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation.
## Ingredients

!!! summary "Ingredients"
    New in this recipe:

    * [ ] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Preparation

### Permit connectivity

Most modern Linux distributions include firewall rules which only permit minimal required incoming connections (*like SSH*). We'll want to allow all traffic between our nodes. The steps to achieve this in CentOS/Ubuntu are a little different...

#### CentOS

Add something like this to `/etc/sysconfig/iptables`:

```bash
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```

And restart iptables with ```systemctl restart iptables```
#### Ubuntu

Install the (*non-default*) persistent iptables tools, by running `apt-get install iptables-persistent`, establishing some default rules (*dpkg will prompt you to save the current ruleset*), and then add something like this to `/etc/iptables/rules.v4`:

```bash
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```

And refresh your running iptables rules with `iptables-restore < /etc/iptables/rules.v4`
### Enable hostname resolution

Depending on your hosting environment, you may have DNS automatically setup for your VMs. If not, it's useful to set up static entries in `/etc/hosts` for the nodes. For example, I setup the following:

- 192.168.31.11 ds1 ds1.funkypenguin.co.nz
- 192.168.31.12 ds2 ds2.funkypenguin.co.nz
- 192.168.31.13 ds3 ds3.funkypenguin.co.nz

### Set timezone

Set your local timezone, by running:

```bash
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
```
## Serving

After completing the above, you should have:

!!! summary "Summary"
    Deployed in this recipe:

    * [X] 3 x nodes (*bare-metal or VMs*), each with:
        * A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
        * At least 2GB RAM
        * At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

--8<-- "recipe-footer.md"
113
docs/docker-swarm/registry.md
Normal file
@@ -0,0 +1,113 @@
---
title: Setup pull through Docker registry / cache
description: You may not _want_ your cluster to be pulling multiple copies of images from public registries, especially if rate-limits (hello, Docker Hub!) are a concern. Here's how you setup your own "pull through cache" registry.
---

# Create Docker "pull through" registry cache

Although we now have shared storage for our persistent container data, our docker nodes don't share any other docker data, such as container images. This results in an inefficiency - every node which participates in the swarm will, at some point, need the docker image for every container deployed in the swarm.

When dealing with large containers (*looking at you, GitLab!*), this can result in several gigabytes of wasted bandwidth per-node, and long delays when restarting containers on an alternate node. (_It also wastes disk space on each node, but we'll get to that in the next section._)

The solution is to run an official Docker registry container as a ["pull-through" cache, or "registry mirror"](https://docs.docker.com/registry/recipes/mirror/). By using our persistent storage for the registry cache, we can ensure we have a single copy of all the containers we've pulled at least once. After the first pull, any subsequent pulls from our nodes will use the cached version from our registry mirror. As a result, services are available more quickly when restarting container nodes, and we can be more aggressive about cleaning up unused containers on our nodes (*more on this later*).

The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize **your mirror FQDN** below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS.
## Requirements

!!! summary "Ingredients"

    * [ ] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    * [ ] [Traefik](/docker-swarm/traefik/) configured per design
    * [ ] DNS entry for the hostname you intend to use, pointed to your [keepalived](/docker-swarm/keepalived/) IP
## Configuration
|
||||
|
||||
Create `/var/data/config/registry/registry.yml` as per the following docker-compose example:
|
||||
|
||||
```yaml
version: "3"

services:
  registry-mirror:
    image: registry:2
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:<your mirror FQDN>
        - traefik.docker.network=traefik_public
        - traefik.port=5000
    ports:
      - 5000:5000
    volumes:
      - /var/data/registry/registry-mirror-data:/var/lib/registry
      - /var/data/registry/registry-mirror-config.yml:/etc/docker/registry/config.yml

networks:
  traefik_public:
    external: true
```
!!! note "Unencrypted registry"

    We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via [Traefik][traefik], leveraging Traefik to manage the LetsEncrypt certificates required.

Create the configuration for the actual registry in `/var/data/registry/registry-mirror-config.yml` as per the following example:
```yaml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io
```
## Running

### Launch Docker registry stack

Launch the registry stack by running `docker stack deploy registry -c <path-to-docker-compose.yml>`

### Enable Docker registry mirror

To tell docker to use the registry mirror, edit `/etc/docker-latest/daemon.json` [^1] on each node, and change from:
```json
{
  "log-driver": "journald",
  "signature-verification": false
}
```

To:
```json
{
  "log-driver": "journald",
  "signature-verification": false,
  "registry-mirrors": ["https://<your registry mirror FQDN>"]
}
```
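Rather than hand-editing JSON on every node (*and hunting for that missing comma*), you could script the change. Here's a hedged Python sketch (*the file path and mirror URL are placeholders - substitute your own*) which idempotently merges the `registry-mirrors` key into an existing `daemon.json`, preserving whatever else is in the file:

```python
import json
import pathlib

def add_mirror(path, mirror):
    """Idempotently add `mirror` to the registry-mirrors list in a daemon.json."""
    p = pathlib.Path(path)
    # Start from the existing config if the file exists, else an empty one
    config = json.loads(p.read_text()) if p.exists() else {}
    mirrors = config.setdefault("registry-mirrors", [])
    if mirror not in mirrors:          # safe to re-run on every node
        mirrors.append(mirror)
    p.write_text(json.dumps(config, indent=2) + "\n")
    return config

# Placeholder path and FQDN, for illustration only
result = add_mirror("daemon.json", "https://mirror.example.com")
print(result["registry-mirrors"])  # ['https://mirror.example.com']
```

Existing keys such as `log-driver` are left untouched, and re-running the script doesn't duplicate the mirror entry.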
Then restart docker itself, by running `systemctl restart docker`. You can confirm the mirror is active by checking the "Registry Mirrors" entry in the output of `docker info`.

[^1]: Note the extra comma required after "false" above

--8<-- "recipe-footer.md"
230
docs/docker-swarm/shared-storage-ceph.md
Normal file
@@ -0,0 +1,230 @@
---
title: Ceph cluster in Docker Swarm
description: Ceph provides persistent storage to your Docker Swarm cluster, supporting either rbd images for host volume mounts, or even fancy cephfs docker volumes.
---
# Shared Storage (Ceph)
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.

{ loading=lazy }

## Ingredients

!!! summary "Ingredients"
    3 x Virtual Machines (configured earlier), each with:

    * [X] Support for "modern" versions of Python and LVM
    * [X] At least 1GB RAM
    * [X] At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
    * [X] A second disk dedicated to the Ceph OSD
    * [X] Each node should have the IP of every other participating node hard-coded in /etc/hosts (*including its own IP*)
## Preparation

!!! tip "No more [foolish games](https://www.youtube.com/watch?v=UNoouLa7uxA)"
    Earlier iterations of this recipe (*based on [Ceph Jewel](https://docs.ceph.com/docs/master/releases/jewel/)*) required significant manual effort to install Ceph in a Docker environment. In the 2+ years since Jewel was released, significant improvements have been made to the ceph "deploy-in-docker" process, including the [introduction of the cephadm tool](https://ceph.io/ceph-management/introducing-cephadm/). Cephadm is the tool which now does all the heavy lifting, below, for the current version of ceph, codenamed "[Octopus](https://www.youtube.com/watch?v=Gi58pN8W3hY)".

### Pick a master node

One of your nodes will become the cephadm "master" node. Although all nodes will participate in the Ceph cluster, the master node is the one on which we'll bootstrap ceph. It's also the node which will run the Ceph dashboard, and on which future upgrades will be processed. It doesn't matter _which_ node you pick, and the cluster itself will continue to operate in the event of a loss of the master node (*although you won't see the dashboard*).
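Why does losing the master node not take the cluster down, and why does this recipe use three nodes? Ceph's monitors (*like swarm managers*) need a strict majority to maintain quorum. Here's a quick sketch of the arithmetic:

```python
def tolerated_mon_failures(monitors: int) -> int:
    # Quorum requires a strict majority, so an n-monitor cluster keeps
    # working with up to floor((n - 1) / 2) monitors down.
    return (monitors - 1) // 2

for n in (1, 2, 3, 5):
    print(n, tolerated_mon_failures(n))
# With 3 monitors, the cluster tolerates the loss of any 1 node,
# including the master.
```

Note that 2 nodes are no better than 1 for fault-tolerance, which is why 3 is the practical minimum.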
### Install cephadm on master node

Run the following on the ==master== node:
```bash
MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
mkdir -p /etc/ceph
./cephadm bootstrap --mon-ip $MYIP
```
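The `MYIP` one-liner above leans on GNU grep's Perl-regex `\K` operator to keep only the text after `src `. If that idiom is unfamiliar, here's the same extraction sketched in Python against a sample `ip route get` output line (*the addresses are illustrative*):

```python
import re

# A typical `ip route get 1.1.1.1` output line (values are examples)
sample = "1.1.1.1 via 192.168.38.1 dev eth0 src 192.168.38.101 uid 0"

# Equivalent of `grep -oP 'src \K\S+'`: anchor on "src ", keep the
# following run of non-whitespace (the host's outbound source IP)
match = re.search(r"src (\S+)", sample)
print(match.group(1))  # 192.168.38.101
```

This is simply the IP the host would use to reach the internet, which is a convenient choice for the monitor IP.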
The process takes about 30 seconds, after which you'll have an MVC (*Minimum Viable Cluster*)[^1], encompassing a single monitor and mgr instance on your chosen node. Here's the complete output from a fresh install:

[^1]: Minimum Viable Cluster acronym copyright, trademark, and whatever else, to Funky Penguin for 1,000,000 years.

??? "Example output from a fresh cephadm bootstrap"
    ```
    root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
    root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm

    root@raphael:~# chmod +x cephadm
    root@raphael:~# mkdir -p /etc/ceph
    root@raphael:~# ./cephadm bootstrap --mon-ip $MYIP
    INFO:cephadm:Verifying podman|docker is present...
    INFO:cephadm:Verifying lvm2 is present...
    INFO:cephadm:Verifying time synchronization is in place...
    INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
    INFO:cephadm:Repeating the final host check...
    INFO:cephadm:podman|docker (/usr/bin/docker) is present
    INFO:cephadm:systemctl is present
    INFO:cephadm:lvcreate is present
    INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
    INFO:cephadm:Host looks OK
    INFO:root:Cluster fsid: bf3eff78-9e27-11ea-b40a-525400380101
    INFO:cephadm:Verifying IP 192.168.38.101 port 3300 ...
    INFO:cephadm:Verifying IP 192.168.38.101 port 6789 ...
    INFO:cephadm:Mon IP 192.168.38.101 is in CIDR network 192.168.38.0/24
    INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
    INFO:cephadm:Extracting ceph user uid/gid from container image...
    INFO:cephadm:Creating initial keys...
    INFO:cephadm:Creating initial monmap...
    INFO:cephadm:Creating mon...
    INFO:cephadm:Waiting for mon to start...
    INFO:cephadm:Waiting for mon...
    INFO:cephadm:mon is available
    INFO:cephadm:Assimilating anything we can from ceph.conf...
    INFO:cephadm:Generating new minimal ceph.conf...
    INFO:cephadm:Restarting the monitor...
    INFO:cephadm:Setting mon public_network...
    INFO:cephadm:Creating mgr...
    INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
    INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
    INFO:cephadm:Waiting for mgr to start...
    INFO:cephadm:Waiting for mgr...
    INFO:cephadm:mgr not available, waiting (1/10)...
    INFO:cephadm:mgr not available, waiting (2/10)...
    INFO:cephadm:mgr not available, waiting (3/10)...
    INFO:cephadm:mgr is available
    INFO:cephadm:Enabling cephadm module...
    INFO:cephadm:Waiting for the mgr to restart...
    INFO:cephadm:Waiting for Mgr epoch 5...
    INFO:cephadm:Mgr epoch 5 is available
    INFO:cephadm:Setting orchestrator backend to cephadm...
    INFO:cephadm:Generating ssh key...
    INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
    INFO:cephadm:Adding key to root@localhost's authorized_keys...
    INFO:cephadm:Adding host raphael...
    INFO:cephadm:Deploying mon service with default placement...
    INFO:cephadm:Deploying mgr service with default placement...
    INFO:cephadm:Deploying crash service with default placement...
    INFO:cephadm:Enabling mgr prometheus module...
    INFO:cephadm:Deploying prometheus service with default placement...
    INFO:cephadm:Deploying grafana service with default placement...
    INFO:cephadm:Deploying node-exporter service with default placement...
    INFO:cephadm:Deploying alertmanager service with default placement...
    INFO:cephadm:Enabling the dashboard module...
    INFO:cephadm:Waiting for the mgr to restart...
    INFO:cephadm:Waiting for Mgr epoch 13...
    INFO:cephadm:Mgr epoch 13 is available
    INFO:cephadm:Generating a dashboard self-signed certificate...
    INFO:cephadm:Creating initial admin user...
    INFO:cephadm:Fetching dashboard port number...
    INFO:cephadm:Ceph Dashboard is now available at:

        URL: https://raphael:8443/
        User: admin
        Password: mid28k0yg5

    INFO:cephadm:You can access the Ceph CLI with:

        sudo ./cephadm shell --fsid bf3eff78-9e27-11ea-b40a-525400380101 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

    INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

    For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

    INFO:cephadm:Bootstrap complete.
    root@raphael:~#
    ```
### Prepare other nodes

It's now necessary to transfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they'll be able to mount the cephfs when we're done:

| Path on master                        | Path on non-master                                         |
|---------------------------------------|------------------------------------------------------------|
| `/etc/ceph/ceph.conf`                 | `/etc/ceph/ceph.conf`                                      |
| `/etc/ceph/ceph.client.admin.keyring` | `/etc/ceph/ceph.client.admin.keyring`                      |
| `/etc/ceph/ceph.pub`                  | `/root/.ssh/authorized_keys` (append to anything existing) |

Back on the ==master== node, run `ceph orch host add <node-name>` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls`
!!! question "Should we be concerned about giving cephadm root access over SSH?"
    Not really. Docker is inherently insecure at the host-level anyway (*think what would happen if you launched a global-mode stack with a malicious container image which mounted `/root/.ssh`*), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to [Kubernetes](/kubernetes/) :)

### Add OSDs

Now for the biggest improvement since the days of ceph-deploy and manually-prepared disks: on the ==master== node, run `ceph orch apply osd --all-available-devices`. This will identify any unloved (*unpartitioned, unmounted*) disks attached to each participating node, and configure these disks as OSDs.
### Setup CephFS

On the ==master== node, create a cephfs volume in your cluster, by running `ceph fs volume create data`. Ceph will handle the necessary orchestration itself, creating the necessary pool, mds daemon, etc.

You can watch the progress by running `ceph fs ls` (*to see that the fs is configured*), and `ceph -s` to wait for `HEALTH_OK`

### Mount CephFS volume
On ==every== node, create a mountpoint for the data by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the ceph cluster:

```bash
mkdir /var/data

MYNODES="<node1>,<node2>,<node3>" # Your own node names here, comma-delimited
echo -e "
# Mount cephfs volume \n
$MYNODES:/ /var/data ceph name=admin,noatime,_netdev 0 0" >> /etc/fstab
mount -a
```
??? note "Additional steps on Debian Buster"
    The above configuration worked on Ubuntu 18.04 **without** requiring a secret to be defined in `/etc/fstab`. Other users have [reported different results](https://forum.funkypenguin.co.nz/t/shared-storage-ceph-funky-penguins-geek-cookbook/47/108) on Debian Buster, however, so consider trying this variation if you encounter error 22:

    ```
    apt-get install ceph-common
    CEPHKEY=`sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring`
    echo -e "
    # Mount cephfs volume \n
    raphael,donatello,leonardo:/ /var/data ceph name=admin,secret=$CEPHKEY,noatime,_netdev 0 0" >> /etc/fstab
    mount -a
    ```
## Serving

### Sprinkle with tools

Although it's possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it's more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools:

```bash
curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add -
cephadm add-repo --release octopus
cephadm install ceph-common
```
### Drool over dashboard

Ceph now includes a comprehensive dashboard, provided by the mgr daemon. The dashboard will be accessible at `https://[IP of your ceph master node]:8443`, but you'll need to run `ceph dashboard ac-user-create <username> <password> administrator` first, to create an administrator account:

```bash
root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator
{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFfo/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}
root@raphael:~#
```
## Summary

What have we achieved?

!!! summary "Summary"
    Created:

    * [X] Persistent storage available to every node
    * [X] Resiliency in the event of the failure of a single node
    * [X] Beautiful dashboard

--8<-- "5-min-install.md"

Here's a screencast of the playbook in action. I sped up the boring parts; it actually takes ==5 min== (*you can tell by the timestamps on the prompt*):



[patreon]: <https://www.patreon.com/bePatron?u=6982506>
[github_sponsor]: <https://github.com/sponsors/funkypenguin>

--8<-- "recipe-footer.md"
175
docs/docker-swarm/shared-storage-gluster.md
Normal file
@@ -0,0 +1,175 @@
---
title: GlusterFS vs Ceph (the winner)
description: Here's why Ceph was the obvious winner in the ceph vs glusterfs comparison for our docker-swarm cluster.
---
# Shared Storage (GlusterFS)
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.

!!! warning
    This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/docker-swarm/shared-storage-ceph/) instead. - 2019 Chef

## Design

### Why GlusterFS?

This GlusterFS recipe was my original design for shared storage, but I [found it to be flawed](/docker-swarm/shared-storage-ceph/#why-not-glusterfs), and I replaced it with a [design which employs Ceph instead](/docker-swarm/shared-storage-ceph/#why-ceph). This recipe is an alternate to the Ceph design, if you happen to prefer GlusterFS.
## Ingredients

!!! summary "Ingredients"
    3 x Virtual Machines (configured earlier), each with:

    * [X] CentOS/Fedora Atomic
    * [X] At least 1GB RAM
    * [X] At least 20GB disk space (_but it'll be tight_)
    * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
    * [ ] A second disk, or adequate space on the primary disk for a dedicated data partition
## Preparation

### Create Gluster "bricks"

To build our Gluster volume, we need 2 of our 3 VMs to each provide one "brick". The bricks will be used to create the replicated volume. Assuming a replica count of 2 (_i.e., 2 copies of the data are kept in gluster_), our total number of bricks must be divisible by our replica count. (_I.e., you can't have 3 bricks if you want 2 replicas. You can have 4 though - we have to have a minimum of 3 swarm manager nodes for fault-tolerance, but only 2 of those nodes need to run as gluster servers._)
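The brick arithmetic above can be sketched in a few lines of Python (*a hedged illustration, not part of the recipe itself*):

```python
def valid_brick_count(bricks: int, replica: int) -> bool:
    # Gluster requires the total brick count to be divisible by the
    # replica count, so that every replica set is complete.
    return bricks % replica == 0

print(valid_brick_count(3, 2))  # False - 3 bricks can't form clean pairs
print(valid_brick_count(4, 2))  # True - two replica sets of two bricks
print(valid_brick_count(2, 2))  # True - the layout used in this recipe
```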
On each host, run a variation of the following to create your bricks, adjusted for the path to your disk.

!!! note "The example below assumes /dev/vdb is dedicated to the gluster volume"
```bash
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number
echo   # First sector (Accept default: 1)
echo   # Last sector (Accept default: varies)
echo w # Write changes
) | sudo fdisk /dev/vdb

mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /var/no-direct-write-here/brick1
echo '' >> /etc/fstab
echo '# Mount /dev/vdb1 so that it can be used as a glusterfs volume' >> /etc/fstab
echo '/dev/vdb1 /var/no-direct-write-here/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
```
!!! warning "Don't provision all your LVM space"
    Atomic uses LVM to store docker data, and **automatically grows** Docker's volumes as required. If you commit all your free LVM space to your brick, you'll quickly find (as I did) that docker will start to fail with error messages about insufficient space. If you're going to slice off a portion of your LVM space in /dev/atomicos, make sure you leave enough space for Docker storage, where "enough" depends on how much you plan to pull images, make volumes, etc. I ate through 20GB very quickly doing development, so I ended up provisioning 50GB for atomic alone, with a separate volume for the brick.

### Create glusterfs container

Atomic doesn't include the Gluster server components. This means we'll have to run glusterd from within a container, with privileged access to the host. Although convoluted, I've come to prefer this design since it once again makes the OS "disposable", moving all the config into containers and code.

Run the following on each host:
```bash
docker run \
  -h glusterfs-server \
  -v /etc/glusterfs:/etc/glusterfs:z \
  -v /var/lib/glusterd:/var/lib/glusterd:z \
  -v /var/log/glusterfs:/var/log/glusterfs:z \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  -v /var/no-direct-write-here/brick1:/var/no-direct-write-here/brick1 \
  -d --privileged=true --net=host \
  --restart=always \
  --name="glusterfs-server" \
  gluster/gluster-centos
```
### Create trusted pool

On a single node (*doesn't matter which*), run `docker exec -it glusterfs-server bash` to launch a shell inside the container.

From the node, run `gluster peer probe <other host>`.

Example output:

```bash
[root@glusterfs-server /]# gluster peer probe ds1
peer probe: success.
[root@glusterfs-server /]#
```

Run `gluster peer status` on both nodes to confirm that they're properly connected to each other:
Example output:

```bash
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1

Hostname: ds3
Uuid: 3e115ba9-6a4f-48dd-87d7-e843170ff499
State: Peer in Cluster (Connected)
[root@glusterfs-server /]#
```
### Create gluster volume

Now we create a *replicated volume* out of our individual "bricks".

Create the gluster volume by running:

```bash
gluster volume create gv0 replica 2 \
  server1:/var/no-direct-write-here/brick1 \
  server2:/var/no-direct-write-here/brick1
```
Example output:

```bash
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/no-direct-write-here/brick1/gv0 ds3:/var/no-direct-write-here/brick1/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
```

Start the volume by running `gluster volume start gv0`:

```bash
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
```
The volume is only present on the host you're shelled into, though. To add the other hosts to the volume, run `gluster peer probe <servername>`. Don't probe a host from itself.

From one other host, run `docker exec -it glusterfs-server bash` to shell into the gluster-server container, and run `gluster peer probe <original server name>` to update the name of the host which started the volume.
### Mount gluster volume

On the host (i.e., outside of the container - type `exit` if you're still shelled in), create a mountpoint for the data by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume:

```bash
mkdir /var/data
MYHOST=`hostname -s`
echo '' >> /etc/fstab
echo '# Mount glusterfs volume' >> /etc/fstab
echo "$MYHOST:/gv0 /var/data glusterfs defaults,_netdev,context=\"system_u:object_r:svirt_sandbox_file_t:s0\" 0 0" >> /etc/fstab
mount -a
```
For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount:

```bash
echo -e "\n\n# Give GlusterFS 10s to start before \
mounting\nsleep 10s && mount -a" >> /etc/rc.local
systemctl enable rc-local.service
```

For non-gluster nodes, you'll need to replace $MYHOST above with the name of one of the gluster hosts (*I haven't worked out how to make this fully HA yet*)
## Serving

After completing the above, you should have:

* [X] Persistent storage available to every node
* [X] Resiliency in the event of the failure of a single (gluster) node

[^1]: Future enhancements to this recipe include:
    1. Migration of shared storage from GlusterFS to Ceph
    2. Correcting the fact that volumes don't automount on boot

--8<-- "recipe-footer.md"
206
docs/docker-swarm/traefik-forward-auth/dex-static.md
Normal file
@@ -0,0 +1,206 @@
---
title: SSO with traefik forward auth and Dex
description: Traefik forward auth needs an authentication backend, but if you don't want to use a cloud provider, you can set up your own simple OIDC backend, using Dex.
---
# Traefik Forward Auth for SSO with Dex (Static)

[Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) is incredibly useful to secure services with an additional layer of authentication, provided by an OIDC-compatible provider. The simplest possible provider is a self-hosted instance of [CoreOS's Dex](https://github.com/dexidp/dex), configured with a static username and password. This recipe will "get you started" with Traefik Forward Auth, providing a basic authentication layer. In time, you might want to migrate to a "public" provider, like [Google][tfa-google], or GitHub, or to a [Keycloak][keycloak] installation.

--8<-- "recipe-tfa-ingredients.md"
## Preparation

### Setup dex config

Create `/var/data/config/dex/config.yml` something like the following (*this is a bare-bones, [minimal example](https://github.com/dexidp/dex/blob/master/config.dev.yaml)*). At the very least, you want to replace all occurrences of `example.com` with your own domain name. (*If you change nothing else, your ID is `foo`, your secret is `bar`, your username is `admin@yourdomain`, and your password is `password`*):
```yaml
# The base path of dex and the external name of the OpenID Connect service.
#
# This is the canonical URL that all clients MUST use to refer to dex. If a
# path is provided, dex's HTTP service will listen at a non-root URL.
issuer: https://dex.example.com

storage:
  type: sqlite3
  config:
    file: var/sqlite/dex.db

web:
  http: 0.0.0.0:5556

oauth2:
  skipApprovalScreen: true

staticClients:
  - id: foo
    redirectURIs:
      - 'https://auth.example.com/_oauth'
    name: 'example.com'
    secret: bar

enablePasswordDB: true

staticPasswords:
  - email: "admin@example.com"
    # bcrypt hash of the string "password"
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
```
### Prepare Traefik Forward Auth environment

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.env` per the following example configuration:
```bash
DEFAULT_PROVIDER=oidc
PROVIDERS_OIDC_CLIENT_ID=foo # This is the staticClients.id value in config.yml above
PROVIDERS_OIDC_CLIENT_SECRET=bar # This is the staticClients.secret value in config.yml above
PROVIDERS_OIDC_ISSUER_URL=https://dex.example.com # This is the issuer value in config.yml above, and it has to be reachable via a browser
SECRET=imtoosexyformyshorts # Make this up. It's not configured anywhere else
AUTH_HOST=auth.example.com # This should match the value of the traefik hosts labels in Traefik Forward Auth
COOKIE_DOMAIN=example.com # This should match your base domain
```
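Several values must line up between dex's `config.yml` and this env file, and a mismatch is the most common reason the login flow fails. A small Python sketch (*using the example values from this recipe*) makes the pairings explicit:

```python
# Values copied from the example config.yml and env file in this recipe
dex_client = {
    "id": "foo",
    "secret": "bar",
    "redirectURIs": ["https://auth.example.com/_oauth"],
}
tfa_env = {
    "PROVIDERS_OIDC_CLIENT_ID": "foo",
    "PROVIDERS_OIDC_CLIENT_SECRET": "bar",
    "PROVIDERS_OIDC_ISSUER_URL": "https://dex.example.com",
    "AUTH_HOST": "auth.example.com",
}

# The client ID and secret must match dex's staticClients entry...
assert tfa_env["PROVIDERS_OIDC_CLIENT_ID"] == dex_client["id"]
assert tfa_env["PROVIDERS_OIDC_CLIENT_SECRET"] == dex_client["secret"]
# ...and AUTH_HOST must correspond to a registered redirect URI
assert f'https://{tfa_env["AUTH_HOST"]}/_oauth' in dex_client["redirectURIs"]
print("config.yml and traefik-forward-auth.env agree")
```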
### Setup Docker Stack for Dex

Now create a docker swarm config file in docker-compose syntax (v3), per the following example:
```yaml
version: '3'

services:
  dex:
    image: dexidp/dex
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/config/dex/config.yml:/config.yml:ro
    networks:
      - traefik_public
    command: ['serve','/config.yml']
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:dex.example.com
        - traefik.port=5556

        # and for traefikv2:
        - "traefik.http.routers.dex.rule=Host(`dex.example.com`)"
        - "traefik.http.routers.dex.entrypoints=https"
        - "traefik.http.services.dex.loadbalancer.server.port=5556"

networks:
  traefik_public:
    external: true
```
--8<-- "premix-cta.md"

### Setup Docker Stack for Traefik Forward Auth

Now create a docker swarm config file for traefik-forward-auth, in docker-compose syntax (v3), per the following example:
```yaml
version: "3.2"

services:
  traefik-forward-auth:
    image: thomseddon/traefik-forward-auth:2.2.0
    env_file: /var/data/config/traefik-forward-auth/traefik-forward-auth.env
    volumes:
      - /var/data/config/traefik-forward-auth/config.ini:/config.ini:ro
    networks:
      - traefik_public
    deploy:
      labels:
        # traefikv1
        - "traefik.port=4181"
        - "traefik.frontend.rule=Host:auth.example.com"
        - "traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181"
        - "traefik.frontend.auth.forward.trustForwardHeader=true"

        # traefikv2
        - "traefik.docker.network=traefik_public"
        - "traefik.http.routers.auth.rule=Host(`auth.example.com`)"
        - "traefik.http.routers.auth.entrypoints=https"
        - "traefik.http.routers.auth.tls=true"
        - "traefik.http.routers.auth.tls.domains[0].main=example.com"
        - "traefik.http.routers.auth.tls.domains[0].sans=*.example.com"
        - "traefik.http.routers.auth.tls.certresolver=main"
        - "traefik.http.routers.auth.service=auth@docker"
        - "traefik.http.services.auth.loadbalancer.server.port=4181"
        - "traefik.http.middlewares.forward-auth.forwardauth.address=http://traefik-forward-auth:4181"
        - "traefik.http.middlewares.forward-auth.forwardauth.trustForwardHeader=true"
        - "traefik.http.middlewares.forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User"
        - "traefik.http.routers.auth.middlewares=forward-auth"

  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik
        - "traefik.enable=true"
        - "traefik.docker.network=traefik_public"

        # traefikv1
        - "traefik.frontend.rule=Host:whoami.example.com"
        - "traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181"
        - "traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User"
        - "traefik.frontend.auth.forward.trustForwardHeader=true"

        # traefikv2
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.routers.whoami.entrypoints=https"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"
        - "traefik.http.routers.whoami.middlewares=forward-auth"

networks:
  traefik_public:
    external: true
```
## Serving

### Launch

Deploy dex with `docker stack deploy dex -c /var/data/dex/dex.yml`, and then deploy Traefik Forward Auth with `docker stack deploy traefik-forward-auth -c /var/data/traefik-forward-auth/traefik-forward-auth.yml`

Once you deploy traefik-forward-auth with the above, it **should** use dex as an OIDC provider, authenticating you against the `staticPasswords` username and hashed password described in `config.yml` above.
### Test

Browse to <https://whoami.example.com> (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you'll be redirected to a CoreOS Dex login. Once successfully logged in, you'll be directed to the basic whoami page :thumbsup:

### Protect services

To protect any other service, ensure the service itself is exposed by Traefik, and add the following label (*using the service's own router name - `radarr`, in this example*):

```yaml
- "traefik.http.routers.radarr.middlewares=forward-auth"
```

And re-deploy your services :)
|
||||
|
||||
## Summary
|
||||
|
||||
What have we achieved? By adding an additional label to any service, we can secure any service behind our (static) OIDC provider, with minimal processing / handling overhead.
|
||||
|
||||
!!! summary "Summary"
|
||||
Created:
|
||||
|
||||
* [X] Traefik-forward-auth configured to authenticate against Dex (static)
|
||||
|
||||
[^1]: You can remove the `whoami` container once you know Traefik Forward Auth is working properly
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
134
docs/docker-swarm/traefik-forward-auth/google.md
Normal file
@@ -0,0 +1,134 @@
---
title: SSO with Traefik Forward Auth and Google OAuth2
description: Using Traefik Forward Auth, you can selectively apply SSO to your Docker services, using Google Oauth2 / OIDC as your authentication backend!
---

# Traefik Forward Auth using Google Oauth2 for SSO

[Traefik Forward Auth][tfa] is incredibly useful to secure services with an additional layer of authentication, provided by an OIDC-compatible provider. The simplest possible provider is a self-hosted instance of [Dex][tfa-dex-static], configured with a static username and password. This is not much use if you want to provide "normies" access to your services though - a better solution would be to validate their credentials against an existing trusted public source.

This recipe will illustrate how to point Traefik Forward Auth to Google, confirming that the requestor has a valid Google account (*and that said account is permitted to access your services!*).

--8<-- "recipe-tfa-ingredients.md"

## Preparation

### Obtain OAuth credentials

#### TL;DR

Log into <https://console.developers.google.com/>, create a new project, then search for and select "**Credentials**" in the search bar.

Fill out the "OAuth Consent Screen" tab, and then click "**Create Credentials**" > "**OAuth client ID**". Select "**Web Application**", fill in the name of your app, skip "**Authorized JavaScript origins**" and fill "**Authorized redirect URIs**" with either all the domains you will allow authentication from, appended with the url-path (*e.g. <https://radarr.example.com/_oauth>, <https://sonarr.example.com/_oauth>, etc*), or if you don't like frustration, use an "auth host" URL instead, like "*<https://auth.example.com/_oauth>*" (*see below for details*).

#### Monkey see, monkey do 🙈

Here's a [screencast I recorded](https://static.funkypenguin.co.nz/2021/screencast_2021-01-29_22-29-33.gif) of the OIDC credentials setup in Google Developer Console.

!!! tip
    Store your client ID and secret safely - you'll need them for the next step.

### Prepare environment

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.env` as per the following example:

```bash
PROVIDERS_GOOGLE_CLIENT_ID=<your client id>
PROVIDERS_GOOGLE_CLIENT_SECRET=<your client secret>
SECRET=<a random string, make it up>
# comment out AUTH_HOST if you'd rather use individual redirect_uris (slightly less complicated but more work)
AUTH_HOST=auth.example.com
COOKIE_DOMAINS=example.com
WHITELIST=you@yourdomain.com, me@mydomain.com
```
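The `SECRET` value is just an opaque random string used to sign the session cookie. One convenient way to generate it (*an optional shortcut - any sufficiently random string will do*) is with `openssl`:

```bash
# Generate a 32-character random hex string, suitable for the SECRET value
SECRET=$(openssl rand -hex 16)
echo "SECRET=${SECRET}"
```

Copy the output line straight into `traefik-forward-auth.env`.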

### Prepare the docker service config

Create `/var/data/config/traefik-forward-auth/traefik-forward-auth.yml` as per the following example:

```yaml
  traefik-forward-auth:
    image: thomseddon/traefik-forward-auth:2.1.0
    env_file: /var/data/config/traefik-forward-auth/traefik-forward-auth.env
    networks:
      - traefik_public
    deploy:
      labels: # you only need these if you're using an auth host
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - "traefik.port=4181"
        - "traefik.frontend.rule=Host:auth.example.com"
        - "traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181"
        - "traefik.frontend.auth.forward.trustForwardHeader=true"

        # traefikv2
        - "traefik.docker.network=traefik_public"
        - "traefik.http.routers.auth.rule=Host(`auth.example.com`)"
        - "traefik.http.routers.auth.entrypoints=https"
        - "traefik.http.routers.auth.tls=true"
        - "traefik.http.routers.auth.tls.domains[0].main=example.com"
        - "traefik.http.routers.auth.tls.domains[0].sans=*.example.com"
        - "traefik.http.routers.auth.tls.certresolver=main"
        - "traefik.http.routers.auth.service=auth@docker"
        - "traefik.http.services.auth.loadbalancer.server.port=4181"
        - "traefik.http.middlewares.forward-auth.forwardauth.address=http://traefik-forward-auth:4181"
        - "traefik.http.middlewares.forward-auth.forwardauth.trustForwardHeader=true"
        - "traefik.http.middlewares.forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User"
        - "traefik.http.routers.auth.middlewares=forward-auth"
```

If you're not confident that forward authentication is working, add a simple "whoami" test container to the above .yml to help debug Traefik Forward Auth, before attempting to protect a more complex container:

```yaml
  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:whoami.example.com
        - "traefik.http.services.whoami.loadbalancer.server.port=80"
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true

        # traefikv2
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.routers.whoami.entrypoints=https"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"
        - "traefik.http.routers.whoami.middlewares=forward-auth" # this line enforces traefik-forward-auth
```

--8<-- "premix-cta.md"

## Serving

### Launch

Deploy traefik-forward-auth with `docker stack deploy traefik-forward-auth -c /var/data/traefik-forward-auth/traefik-forward-auth.yml`.

### Test

Browse to <https://whoami.example.com> (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you should be redirected to a Google login. Once successfully logged in, you'll be directed to the basic whoami page.

## Summary

What have we achieved? By adding three simple labels to any service, we can secure it behind our choice of OAuth provider, with minimal processing / handling overhead.

!!! summary "Summary"
    Created:

    * [X] Traefik-forward-auth configured to authenticate against an OIDC provider

[^1]: Be sure to populate `WHITELIST` in `traefik-forward-auth.env`, else you'll happily be granting **any** authenticated Google account access to your services!

--8<-- "recipe-footer.md"
57
docs/docker-swarm/traefik-forward-auth/index.md
Normal file
@@ -0,0 +1,57 @@
---
title: Add SSO to Traefik with Forward Auth
description: Traefik Forward Auth protects services running in Docker with an additional layer of authentication, and can be integrated into Keycloak, Google, GitHub, etc using OIDC.
---

# Traefik Forward Auth

Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let's pause to consider that we may not *want* some services exposed directly to the internet...

...Wait, why not? Well, Traefik doesn't provide any form of authentication; it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr][radarr] or [Sonarr][sonarr] come to mind*), then anybody would be able to use it! Even services which *may* have a layer of authentication **might** not be safe to expose publicly - open source projects are often maintained by enthusiasts who happily add extra features, but only pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*".

Some of the platforms we use on our swarm may have strong, proven security to prevent abuse, employing techniques such as rate-limiting (*to defeat brute-force attacks*), or even supporting 2-factor authentication (*tiny-tiny-rss and Wallabag support this*).

Other platforms may provide **no authentication** (*Traefik's own web UI, for example*), or minimal, un-proven UI authentication which may have been added as an afterthought.

Still other platforms may hold such sensitive data (*e.g., NextCloud*) that we'll feel more secure by putting an additional authentication layer in front of them.

This is the role of Traefik Forward Auth.

## How does it work?

**Normally**, Traefik proxies web requests directly to individual web apps running in containers. The user talks directly to the webapp, and the webapp is responsible for ensuring appropriate authentication.

When employing Traefik Forward Auth as "[middleware](https://doc.traefik.io/traefik/middlewares/forwardauth/)", the forward-auth process sits in the middle of this transaction - Traefik receives the incoming request, then "checks in" with the auth server to determine whether or not further authentication is required. If the user is authenticated, the auth server returns a 200 response code, and Traefik is authorized to forward the request to the backend. If not, Traefik passes the auth server's response back to the user - this process will usually direct the user to an authentication provider (*[Google][tfa-google], [Keycloak][tfa-keycloak], and [Dex][tfa-dex-static] are common examples*), so that they can perform a login.

Illustrated below:

{ loading=lazy }

The advantage of this design is additional security. If I'm deploying a web app which I expect only an authenticated user to require access to (*unlike something intended to be accessed publicly, like [Linx][linx]*), I'll pass the request through Traefik Forward Auth. The overhead is negligible, and the additional layer of security is well worth it.

## AuthHost mode

Under normal OAuth2 / OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs though, explicitly listing them can be a PITA.

[@thomseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "_auth host_" in your OIDC authentication, so that you can protect an unlimited number of DNS names (_with a common domain suffix_), without having to manually maintain a list.

### How does it work?

Say, for example, you're protecting **radarr.example.com**. When you first browse to **<https://radarr.example.com>**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login, but instructs the OIDC provider to redirect a successfully authenticated session **back** to **<https://auth.example.com/_oauth>**, rather than to **<https://radarr.example.com/_oauth>**.

When you successfully authenticate against the OIDC provider, you are redirected to the "_redirect_uri_" of <https://auth.example.com>. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (*cookies have a role to play here*). Traefik-forward-auth also knows the URL of your **original** request (*thanks to the X-Forwarded-Host header*). Traefik-forward-auth redirects you to your original destination, and everybody is happy.

This clever workaround only works under 2 conditions:

1. Your "auth host" has the same domain name as the hosts you're protecting (*i.e., auth.example.com protecting radarr.example.com*)
2. You explicitly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (*i.e. example.com*)
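Concretely, the two conditions above map onto the `AUTH_HOST` and `COOKIE_DOMAINS` environment variables shown in the [Google][tfa-google] recipe's env file. Here's a sketch for the example.com domain used above (*substitute your own domain*):

```bash
# Condition 1: the auth host shares a domain suffix with the protected hosts
AUTH_HOST=auth.example.com

# Condition 2: the cookie is scoped to the whole parent domain
COOKIE_DOMAINS=example.com
```

With these set, every `*.example.com` host protected by the forward-auth middleware shares the same authentication cookie.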

## Authentication Providers

Traefik Forward Auth needs to authenticate an incoming user against a provider. A provider can be something as simple as a self-hosted [Dex][tfa-dex-static] instance with a single static username/password, or as complex as a [Keycloak][keycloak] instance backed by [OpenLDAP][openldap]. Here are some options, in increasing order of complexity...

* [Authenticate Traefik Forward Auth against a self-hosted Dex instance with static usernames and passwords][tfa-dex-static]
* [Authenticate Traefik Forward Auth against a whitelist of Google accounts][tfa-google]
* [Authenticate Traefik Forward Auth against a self-hosted Keycloak instance][tfa-keycloak] with an optional [OpenLDAP backend][openldap]

--8<-- "recipe-footer.md"

[^1]: AuthHost mode is specifically handy for Google authentication, since Google doesn't permit wildcard redirect_uris, like [Keycloak][keycloak] does.
104
docs/docker-swarm/traefik-forward-auth/keycloak.md
Normal file
@@ -0,0 +1,104 @@
---
title: SSO with Traefik Forward Auth and Keycloak
description: Traefik forward auth can selectively SSO your Docker services against an authentication backend using OIDC, and Keycloak is a perfect, self-hosted match.
---

# Traefik Forward Auth with Keycloak for SSO

While the [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own Keycloak instance to secure **any** URLs within your DNS domain.

!!! tip "Keycloak with Traefik"
    Did you land here from a search, looking for information about using Keycloak with Traefik? All this and more is covered in the [Keycloak][keycloak] recipe!

--8<-- "recipe-tfa-ingredients.md"

## Preparation

### Setup environment

Create `/var/data/config/traefik/traefik-forward-auth.env` as per the following example (_change "master" if you created a different realm_):

```bash
CLIENT_ID=<your keycloak client name>
CLIENT_SECRET=<your keycloak client secret>
OIDC_ISSUER=https://<your keycloak URL>/auth/realms/master
SECRET=<a random string to secure your cookie>
AUTH_HOST=<the FQDN to use for your auth host>
COOKIE_DOMAIN=<the root FQDN of your domain>
```

### Prepare the docker service config

This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/docker-swarm/traefik/) recipe:

```yaml
  traefik-forward-auth:
    image: funkypenguin/traefik-forward-auth
    env_file: /var/data/config/traefik/traefik-forward-auth.env
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.port=4181
        - traefik.frontend.rule=Host:auth.example.com
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.trustForwardHeader=true
```

If you're not confident that forward authentication is working, add a simple "whoami" test container to help debug Traefik Forward Auth, before attempting to protect a more complex container:

```yaml
  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:whoami.example.com
        - traefik.port=80
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
```

--8<-- "premix-cta.md"

## Serving

### Launch

Redeploy traefik with `docker stack deploy traefik-app -c /var/data/traefik/traefik-app.yml`, to launch the traefik-forward-auth container.

### Test

Browse to <https://whoami.example.com> (_obviously, customized for your domain and having created a DNS record_), and all going according to plan, you'll be redirected to a Keycloak login. Once successfully logged in, you'll be directed to the basic whoami page.

### Protect services

To protect any other service, ensure the service itself is exposed by Traefik (_if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself_). Add the following 3 labels:

```yaml
    - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
    - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
    - traefik.frontend.auth.forward.trustForwardHeader=true
```

And re-deploy your services :)

## Summary

What have we achieved? By adding three simple labels to any service, we can secure it behind our Keycloak OIDC provider, with minimal processing / handling overhead.

!!! summary "Summary"
    Created:

    * [X] Traefik-forward-auth configured to authenticate against Keycloak

### Keycloak vs Authelia

[Keycloak][keycloak] is the "big daddy" of self-hosted authentication platforms - it has a beautiful GUI, and a very advanced and mature featureset. Like Authelia, Keycloak can [use an LDAP server](/recipes/keycloak/authenticate-against-openldap/) as a backend, but _unlike_ Authelia, Keycloak allows for 2-way sync with that LDAP backend, meaning Keycloak can be used to _create_ and _update_ the LDAP entries (*Authelia's is just a one-way LDAP lookup - you'll need another tool to actually administer your LDAP database*).

[^1]: Keycloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)

--8<-- "recipe-footer.md"
253
docs/docker-swarm/traefik.md
Normal file
@@ -0,0 +1,253 @@
---
title: Traefik exposes Docker services with LetsEncrypt certificates
description: Using Traefik, we can provide secure ingress into our Docker Swarm cluster, which opens up opportunities to provide SSO to multiple services in docker swarm via OIDC / SSO, using traefik-forward-auth.
---

# Traefik

The platforms we plan to run on our cloud are generally web-based, each listening on its own unique TCP port. When a container in a swarm exposes a port, connecting to **any** swarm member on that port will result in your request being forwarded to the appropriate host running the container. (_Docker calls this the swarm "[routing mesh](https://docs.docker.com/engine/swarm/ingress/)"_)

So we get a rudimentary load balancer built into swarm. We could stop there, just exposing a series of ports on our hosts, and making them HA using keepalived.

There are some gaps to this approach though:

- No consideration is given to HTTPS. Implementation would have to be done manually, per-container.
- No mechanism is provided for authentication beyond whatever the container itself provides. We may not **want** to expose every interface on every container to the world, especially if we are playing with tools or containers whose quality and origin are unknown.

To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by [Traefik](https://traefik.io/).

{ loading=lazy }

!!! tip
    In 2021, this recipe was updated for Traefik v2. There's really no reason to be using Traefik v1 anymore ;)

## Ingredients

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    * [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

    New:

    * [ ] Traefik configured per design
    * [ ] Access to update your DNS records for manual/automated [LetsEncrypt](https://letsencrypt.org/docs/challenge-types/) DNS-01 validation, or ingress HTTP/HTTPS for HTTP-01 validation

## Preparation

### Prepare traefik.toml

While it's possible to configure traefik via docker command arguments, I prefer to create a config file (`traefik.toml`). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.

Create `/var/data/config/traefikv2/traefik.toml` (*this is the directory we'll later mount into the container at `/etc/traefik`*) as per the following example:
```toml
[global]
  checkNewVersion = true

# Enable the Dashboard
[api]
  dashboard = true

# Write out Traefik logs
[log]
  level = "INFO"
  filePath = "/traefik.log"

[entryPoints.http]
  address = ":80"
  # Redirect to HTTPS (why wouldn't you?)
  [entryPoints.http.http.redirections.entryPoint]
    to = "https"
    scheme = "https"

[entryPoints.https]
  address = ":443"
  [entryPoints.https.http.tls]
    certResolver = "main"

# Let's Encrypt
[certificatesResolvers.main.acme]
  email = "batman@example.com"
  storage = "acme.json"
  # uncomment to use staging CA for testing
  # caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
  [certificatesResolvers.main.acme.dnsChallenge]
    provider = "route53"
  # Uncomment to use HTTP validation, like a caveman!
  # [certificatesResolvers.main.acme.httpChallenge]
  #   entryPoint = "http"

# Docker Traefik provider
[providers.docker]
  endpoint = "unix:///var/run/docker.sock"
  swarmMode = true
  watch = true
```

### Prepare the docker service config

!!! tip
    "We'll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redeploy the traefik stack without having to bring down every other stack first!" - voice of hard-won experience

Create `/var/data/config/traefik/traefik.yml` as per the following example:

```yaml
version: "3.2"

# What is this?
#
# This stack exists solely to deploy the traefik_public overlay network, so that
# other stacks (including traefik-app) can attach to it

services:
  scratch:
    image: scratch
    deploy:
      replicas: 0
    networks:
      - public

networks:
  public:
    driver: overlay
    attachable: true
    ipam:
      config:
        - subnet: 172.16.200.0/24
```

--8<-- "premix-cta.md"

Create `/var/data/config/traefikv2/traefikv2.env` with the environment variables required by the provider you chose in the LetsEncrypt DNS challenge section of `traefik.toml`. Full configuration options can be found in the [Traefik documentation](https://doc.traefik.io/traefik/https/acme/#providers). Route53 and Cloudflare examples are below.

```bash
# Route53 example
AWS_ACCESS_KEY_ID=<your-aws-key>
AWS_SECRET_ACCESS_KEY=<your-aws-secret>

# Cloudflare example
# CLOUDFLARE_EMAIL=<your-cloudflare-email>
# CLOUDFLARE_API_KEY=<your-cloudflare-api-key>
```

Create `/var/data/config/traefikv2/traefikv2.yml` as per the following example:

```yaml
version: "3.2"

services:
  app:
    image: traefik:v2.4
    env_file: /var/data/config/traefikv2/traefikv2.env
    # Note below that we use host mode to avoid source NAT being applied to our ingress HTTP/HTTPS sessions.
    # Without host mode, all inbound sessions would have the source IP of the swarm nodes, rather than the
    # original source IP, which would impact logging. If you don't care about this, you can expose ports the
    # "minimal" way instead.
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
      - target: 8080
        published: 8080
        protocol: tcp
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/data/config/traefikv2:/etc/traefik
      - /var/data/traefikv2/traefik.log:/traefik.log
      - /var/data/traefikv2/acme.json:/acme.json
    networks:
      - traefik_public
    # Global mode makes an instance of traefik listen on _every_ node, so that regardless of which
    # node the request arrives on, it'll be forwarded to the correct backend service.
    deploy:
      mode: global
      labels:
        - "traefik.docker.network=traefik_public"
        - "traefik.http.routers.api.rule=Host(`traefik.example.com`)"
        - "traefik.http.routers.api.entrypoints=https"
        - "traefik.http.routers.api.tls.domains[0].main=example.com"
        - "traefik.http.routers.api.tls.domains[0].sans=*.example.com"
        - "traefik.http.routers.api.tls=true"
        - "traefik.http.routers.api.tls.certresolver=main"
        - "traefik.http.routers.api.service=api@internal"
        - "traefik.http.services.dummy.loadbalancer.server.port=9999"

        # uncomment this to enable forward authentication on the traefik api/dashboard
        #- "traefik.http.routers.api.middlewares=forward-auth"
      placement:
        constraints: [node.role == manager]

networks:
  traefik_public:
    external: true
```

Docker won't start a service with a bind-mount to a non-existent file, so prepare an empty `acme.json` and `traefik.log` (_with the appropriate permissions_) by running:

```bash
touch /var/data/traefikv2/acme.json
touch /var/data/traefikv2/traefik.log
chmod 600 /var/data/traefikv2/acme.json
chmod 600 /var/data/traefikv2/traefik.log
```

!!! warning
    Pay attention above. You **must** set `acme.json`'s permissions to owner-readable-only, else the container will fail to start with an [ID-10T](https://en.wikipedia.org/wiki/User_error#ID-10-T_error) error!

Traefik will populate `acme.json` itself when it runs, but it needs to exist before the container will start (_chicken, meet egg_).

Likewise with the log file.
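If you'd like to see the permissions requirement in action without touching the real files, here's a quick sanity check using a throwaway scratch file (*this assumes GNU `stat`, as found on most Linux hosts*):

```bash
# Apply the same mode required for acme.json to a scratch file, and confirm it
f=$(mktemp)
chmod 600 "$f"
perms=$(stat -c '%a' "$f")
echo "scratch file mode: $perms"   # expect 600: owner read/write only
rm -f "$f"
```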

## Serving

### Launch

First, launch the traefik stack, which will do nothing other than create an overlay network, by running `docker stack deploy traefik -c /var/data/config/traefik/traefik.yml`:

```bash
[root@kvm ~]# docker stack deploy traefik -c /var/data/config/traefik/traefik.yml
Creating network traefik_public
Creating service traefik_scratch
[root@kvm ~]#
```

Now deploy the traefik application itself (*which will attach to the overlay network*) by running `docker stack deploy traefikv2 -c /var/data/config/traefikv2/traefikv2.yml`:

```bash
[root@kvm ~]# docker stack deploy traefikv2 -c /var/data/config/traefikv2/traefikv2.yml
Creating service traefikv2_traefikv2
[root@kvm ~]#
```

Confirm traefik is running with `docker stack ps traefikv2`:

```bash
root@raphael:~# docker stack ps traefikv2
ID             NAME                                      IMAGE          NODE        DESIRED STATE   CURRENT STATE           ERROR   PORTS
lmvqcfhap08o   traefikv2_app.dz178s1aahv16bapzqcnzc03p   traefik:v2.4   donatello   Running         Running 2 minutes ago           *:443->443/tcp,*:80->80/tcp
root@raphael:~#
```

### Check Traefik Dashboard

You should now be able to access[^1] your traefik instance on `https://traefik.<your domain>` (*if your LetsEncrypt certificate is working*), or `http://<node IP>:8080` (*if it's not*). It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :grin:

{ loading=lazy }

### Summary

!!! summary
    We've achieved:

    * [X] An overlay network to permit traefik to access all future stacks we deploy
    * [X] Frontend proxy which will dynamically configure itself for new backend containers
    * [X] Automatic SSL support for all proxied resources

[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/docker-swarm/traefik-forward-auth/)!

--8<-- "recipe-footer.md"
9
docs/extras/css/icons.css
Normal file
@@ -0,0 +1,9 @@
.kubernetes {
    color: #3970e4;
}
.docker {
    color: #0db7ed;
}
.discord {
    color: #7289da;
}
11
docs/extras/javascript/feedback.js
Normal file
@@ -0,0 +1,11 @@
var feedback = document.forms.feedback
feedback.addEventListener("submit", function(ev) {
  ev.preventDefault()

  /* Retrieve page and feedback value */
  var page = document.location.pathname
  var data = ev.submitter.getAttribute("data-md-value")

  /* Send feedback value */
  console.log(page, data)
})
1
docs/extras/javascript/plausible.js
Normal file
@@ -0,0 +1 @@
<script>window.plausible = window.plausible || function() { (window.plausible.q = window.plausible.q || []).push(arguments) }</script>
10
docs/extras/javascript/rightmessage.js
Normal file
@@ -0,0 +1,10 @@
<!-- RightMessage -->
<script type="text/javascript">
(function(p, a, n, d, o, b, c) {
o = n.createElement('script'); o.type = 'text/javascript'; o.async = true; o.src = 'https://tb.rightmessage.com/'+p+'.js';
b = n.getElementsByTagName('script')[0]; d = function(h, u, i) { var c = n.createElement('style'); c.id = 'rmcloak'+i;
c.type = 'text/css'; c.appendChild(n.createTextNode('.rmcloak'+h+'{visibility:hidden}.rmcloak'+u+'{display:none}'));
b.parentNode.insertBefore(c, b); return c; }; c = d('', '-hidden', ''); d('-stay-invisible', '-stay-hidden', '-stay');
setTimeout(o.onerror = function() { c.parentNode && c.parentNode.removeChild(c); }, a); b.parentNode.insertBefore(o, b);
})('1802694484', 20000, document);
</script>
6
docs/extras/javascript/tablesort.js
Normal file
@@ -0,0 +1,6 @@
document$.subscribe(function() {
  var tables = document.querySelectorAll("article table:not([class])")
  tables.forEach(function(table) {
    new Tablesort(table)
  })
})
27
docs/extras/javascript/widgetbot.js
Normal file
@@ -0,0 +1,27 @@
// Display for 5 seconds + custom avatar
// crate.notify({
//   content: 'Need a 🤚? Hot, sweaty geeks are waiting to chat to you! Click 👇',
//   timeout: 5000,
//   avatar: 'https://avatars2.githubusercontent.com/u/1524686?s=400&v=4'
// })

// This file should _not_ be routinely included; it's here to make tweaking of the widgetbot settings
// faster, since making changes doesn't require restarting mkdocs serve
<script src="https://cdn.jsdelivr.net/npm/@widgetbot/crate@3"></script>

<script>
const devbutton = new Crate({
  server: '396055506072109067',
  channel: '456689991326760973', // Cookbook channel
  color: '#000',
  notifications: true,
  indicator: true,
  timeout: 5000,
  glyph: 'https://avatars2.githubusercontent.com/u/1524686?s=400&v=4'
})

devbutton.notify('Hello __world__\n```js\n// This is Sync!\n```')
</script>
BIN
docs/images/archivebox.png
Normal file
After Width: | Height: | Size: 160 KiB |
BIN
docs/images/athena-mining-pool.png
Normal file
After Width: | Height: | Size: 297 KiB |
BIN
docs/images/authelia.png
Normal file
After Width: | Height: | Size: 38 KiB |
BIN
docs/images/authelia_login.png
Normal file
After Width: | Height: | Size: 17 KiB |
BIN
docs/images/autopirate.png
Normal file
After Width: | Height: | Size: 314 KiB |
BIN
docs/images/banner.png
Normal file
After Width: | Height: | Size: 1.2 MiB |
BIN
docs/images/bitwarden.png
Normal file
After Width: | Height: | Size: 76 KiB |
BIN
docs/images/bookstack.png
Normal file
After Width: | Height: | Size: 208 KiB |
BIN
docs/images/buymeacoffee-cover-page.png
Normal file
After Width: | Height: | Size: 785 KiB |
BIN
docs/images/calibre-web.png
Normal file
After Width: | Height: | Size: 171 KiB |
BIN
docs/images/ceph.png
Normal file
After Width: | Height: | Size: 151 KiB |
452
docs/images/cert-manager.svg
Normal file
@@ -0,0 +1,452 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
overflow="hidden"
viewBox="0 0 1088 624"
version="1.1"
id="svg391"
sodipodi:docname="high-level-overview-test.svg"
inkscape:version="1.1 (c68e22c387, 2021-05-23)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview393"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="1.3141026"
inkscape:cx="682.97559"
inkscape:cy="311.99999"
inkscape:window-width="1920"
inkscape:window-height="1028"
inkscape:window-x="-6"
inkscape:window-y="-6"
inkscape:window-maximized="1"
inkscape:current-layer="g389" />
<defs
id="defs241">
<clipPath
id="clip0">
<path
d="M0 0 1088 0 1088 624 0 624Z"
fill-rule="evenodd"
clip-rule="evenodd"
id="path220" />
</clipPath>
</defs>
<g
clip-path="url(#clip0)"
id="g389">
<path
style="color:#000000;fill:#ffffff;-inkscape-stroke:none"
d="M 0,0 H 1088 V 624 H 0 Z"
id="rect243" />
<g
id="rect245">
<path
style="color:#000000;fill:#ffd966;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 147.5,111.5 h 211 v 56 h -211 z"
id="path645" />
<path
style="color:#000000;fill:#000000;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 147.16602,111.16602 V 111.5 167.83398 h 211.66796 v -56.66796 z m 0.66796,0.66796 h 210.33204 v 55.33204 H 147.83398 Z"
id="path647" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(186.804,146)"
id="text251">letsencrypt<tspan
x="90"
y="0"
id="tspan247">-</tspan><tspan
x="96.166702"
y="0"
id="tspan249">staging</tspan></text>
<g
id="rect257">
<path
style="color:#000000;fill:#ffd966;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 267.5,26.500099 h 211 v 58 h -211 z"
id="path637" />
<path
style="color:#000000;fill:#000000;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 267.16602,26.166016 V 26.5 84.833984 H 478.83398 V 26.166016 Z m 0.66796,0.667968 H 478.16602 V 84.166016 H 267.83398 Z"
id="path639" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(318.799,63)"
id="text263">letsencrypt<tspan
x="90"
y="0"
id="tspan259">-</tspan><tspan
x="96.166702"
y="0"
id="tspan261">prod</tspan></text>
<g
id="rect269">
<path
style="color:#000000;fill:#ffd966;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 867.5,111.5 h 211 v 57 h -211 z"
id="path629" />
<path
style="color:#000000;fill:#000000;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 867.16602,111.16602 V 111.5 168.83398 H 1078.834 v -57.66796 z m 0.66796,0.66796 H 1078.166 v 56.33204 H 867.83398 Z"
id="path631" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(910.161,146)"
id="text283">venafi<tspan
x="49.666599"
y="0"
id="tspan271">-</tspan><tspan
x="55.833302"
y="0"
id="tspan273">as</tspan><tspan
x="75.5"
y="0"
id="tspan275">-</tspan><tspan
x="81.666603"
y="0"
id="tspan277">a</tspan><tspan
x="92"
y="0"
id="tspan279">-</tspan><tspan
x="98.166603"
y="0"
id="tspan281">service</tspan></text>
<g
id="rect289">
<path
style="color:#000000;fill:#ffd966;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 505.5,66.500099 h 211 V 122.5001 h -211 z"
id="path621" />
<path
style="color:#000000;fill:#000000;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 505.16602,66.166016 V 66.5 122.83398 H 716.83398 V 66.166016 Z m 0.66796,0.667968 H 716.16602 V 122.16602 H 505.83398 Z"
id="path623" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(564.181,101)"
id="text295">hashicorp<tspan
x="80.666603"
y="0"
id="tspan291">-</tspan><tspan
x="86.833298"
y="0"
id="tspan293">vault</tspan></text>
<g
id="rect301">
<path
style="color:#000000;fill:#ffd966;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 744.5,26.500099 h 211 v 57 h -211 z"
id="path613" />
<path
style="color:#000000;fill:#000000;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 744.16602,26.166016 V 26.5 83.833984 H 955.83398 V 26.166016 Z m 0.66796,0.667968 H 955.16602 V 83.166016 H 744.83398 Z"
id="path615" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(822.831,61)"
id="text307">venafi<tspan
x="49.666599"
y="0"
id="tspan303">-</tspan><tspan
x="55.833302"
y="0"
id="tspan305">tpp</tspan></text>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="24px"
transform="translate(21.6114,97)"
id="text313">Issuers</text>
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="M 0.333333,-2.69038e-7 0.333433,123.41 H -0.333234 L -0.333333,2.69038e-7 Z M 4.0001,122.077 l -3.999995013,8 -4.000004987,-8 z"
id="path315"
transform="matrix(1,0,0,-1,611.5,252.577)" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="M 0.192828,-0.271898 233.304,165.048 232.918,165.592 -0.192828,0.271898 Z M 234.338,161.287 l 4.211,7.89 -8.839,-1.365 z"
id="path317"
transform="matrix(1,0,0,-1,611.5,252.677)" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="M 0.0756516,-0.324635 355.464,82.4935 355.312,83.1428 -0.0756516,0.324635 Z M 354.997,78.6199 l 6.883,5.7112 -8.699,2.08 z"
id="path319"
transform="matrix(1,0,0,-1,611.5,252.831)" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="M 610.856,252.95 377.745,87.6287 378.131,87.0849 611.242,252.406 Z M 376.712,91.3907 372.5,83.5001 l 8.84,1.3651 z"
id="path321" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="m 611.615,252.724 -351.707,-83.363 0.154,-0.648 351.707,83.363 z M 260.362,173.237 253.5,167.5 l 8.707,-2.047 z"
id="path323" />
<g
id="path325">
<path
style="color:#000000;fill:#6aa84f;fill-rule:evenodd;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 505.5,262 c 0,-5.247 4.253,-9.5 9.5,-9.5 h 192 c 5.247,0 9.5,4.253 9.5,9.5 v 38 c 0,5.247 -4.253,9.5 -9.5,9.5 H 515 c -5.247,0 -9.5,-4.253 -9.5,-9.5 z"
id="path575" />
<path
style="color:#000000;fill:#000000;fill-rule:evenodd;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 515,252.16602 c -5.42712,0 -9.83398,4.40686 -9.83398,9.83398 v 38 c 0,5.42712 4.40686,9.83398 9.83398,9.83398 h 192 c 5.42712,0 9.83398,-4.40686 9.83398,-9.83398 v -38 c 0,-5.42712 -4.40686,-9.83398 -9.83398,-9.83398 z m 0,0.66796 h 192 c 5.06687,0 9.16602,4.09915 9.16602,9.16602 v 38 c 0,5.06687 -4.09915,9.16602 -9.16602,9.16602 H 515 c -5.06687,0 -9.16602,-4.09915 -9.16602,-9.16602 v -38 c 0,-5.06687 4.09915,-9.16602 9.16602,-9.16602 z"
id="path577" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="21px"
transform="translate(547.233,289)"
id="text337">cert<tspan
x="35.666599"
y="0"
id="tspan327">-</tspan><tspan
x="42.833302"
y="0"
id="tspan329">manager</tspan><tspan
font-size="24px"
x="-344.64801"
y="153"
id="tspan331">Certificates</tspan><tspan
font-size="24px"
x="-345.30099"
y="266"
id="tspan333">Kubernetes</tspan><tspan
font-size="24px"
x="-324.634"
y="295"
id="tspan335">Secrets</tspan></text>
<path
style="color:#000000;fill:#f9cb9c;fill-rule:evenodd;-inkscape-stroke:none"
d="m 570.5,532.063 c 0,2.519 -2.043,4.562 -4.562,4.562 v -4.562 c 0,1.259 -1.022,2.281 -2.282,2.281 -1.26,0 -2.281,-1.022 -2.281,-2.281 v 4.562 H 398.063 c -2.52,0 -4.563,2.043 -4.563,4.563 v 54.75 c 0,2.519 2.043,4.562 4.563,4.562 2.519,0 4.562,-2.043 4.562,-4.562 v -4.563 h 163.313 c 2.519,0 4.562,-2.043 4.562,-4.562 z M 398.063,545.75 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-1.26 -1.021,-2.282 -2.281,-2.282 -1.26,0 -2.281,1.022 -2.281,2.282 z"
id="path339" />
<path
style="color:#000000;fill:#c8a37d;fill-rule:evenodd;-inkscape-stroke:none"
d="m 398.063,545.75 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-1.26 -1.021,-2.282 -2.281,-2.282 -1.26,0 -2.281,1.022 -2.281,2.282 z m 167.875,-9.125 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-2.52 -2.043,-4.563 -4.562,-4.563 -2.52,0 -4.563,2.043 -4.563,4.563 0,1.26 1.021,2.281 2.281,2.281 1.26,0 2.282,-1.021 2.282,-2.281 z"
id="path341" />
<path
style="color:#000000;fill:#000000;fill-rule:evenodd;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 565.9375,527.16602 c -2.69923,0 -4.89648,2.19632 -4.89648,4.89648 v 4.22852 H 398.0625 c -2.70016,0 -4.89648,2.19632 -4.89648,4.89648 v 54.75 c 0,2.69923 2.19632,4.89648 4.89648,4.89648 2.69923,0 4.89648,-2.19725 4.89648,-4.89648 v -4.22852 H 565.9375 c 2.69923,0 4.89648,-2.19725 4.89648,-4.89648 v -54.75 c 0,-2.70016 -2.19632,-4.89648 -4.89648,-4.89648 z m 0,0.66796 c 2.33984,0 4.22852,1.88868 4.22852,4.22852 0,2.22036 -1.72083,3.98395 -3.89454,4.16211 v -4.16211 h -0.66601 c 0,1.07861 -0.86947,1.94727 -1.94922,1.94727 -1.07975,0 -1.94727,-0.86848 -1.94727,-1.94727 0,-2.33984 1.88975,-4.22852 4.22852,-4.22852 z m -0.33203,5.83204 v 2.625 h -3.89649 v -2.6211 c 0.48006,0.58593 1.13354,1.00781 1.94727,1.00781 0.8153,0 1.46892,-0.42386 1.94922,-1.01171 z m 4.56055,0.71875 v 52.42773 c 0,2.33877 -1.88975,4.22852 -4.22852,4.22852 H 402.95898 V 541.1875 c 0,-1.44007 -1.17498,-2.61523 -2.61523,-2.61523 -1.44025,0 -2.61328,1.17516 -2.61328,2.61523 v 4.16211 c -2.17556,-0.17723 -3.89649,-1.94108 -3.89649,-4.16211 0,-2.33984 1.88868,-4.22852 4.22852,-4.22852 h 163.3125 0.33398 4.22852 c 1.8433,0 3.39365,-1.06506 4.22852,-2.57421 z m -169.82227,4.85546 c 1.07975,0 1.94727,0.86734 1.94727,1.94727 0,2.22036 -1.72083,3.98395 -3.89454,4.16211 v -4.16211 c 0,-1.07993 0.86752,-1.94727 1.94727,-1.94727 z m -6.50977,4.26954 c 0.83496,1.50915 2.38458,2.57421 4.22852,2.57421 1.8433,0 3.39365,-1.06506 4.22852,-2.57421 v 47.53125 0.33398 4.5625 c 0,2.33877 -1.88975,4.22852 -4.22852,4.22852 -2.33984,0 -4.22852,-1.88975 -4.22852,-4.22852 z"
id="path343" />
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(423.871,570)"
id="text345">signed keypair</text>
<g
id="path347">
<path
style="color:#000000;fill:#6d9eeb;fill-rule:evenodd;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 383.834,388.5 H 595.5 v 76.666 c 0,8.469 -6.865,15.334 -15.334,15.334 H 368.5 v -76.666 c 0,-8.469 6.865,-15.334 15.334,-15.334 z"
id="path549" />
<path
style="color:#000000;fill:#000000;fill-rule:evenodd;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 383.83398,388.16602 c -8.64912,0 -15.66796,7.01884 -15.66796,15.66796 v 77 h 212 c 8.64912,0 15.66796,-7.01884 15.66796,-15.66796 v -77 z m 0,0.66796 h 211.33204 v 76.33204 c 0,8.28885 -6.71115,15 -15,15 H 368.83398 v -76.33204 c 0,-8.28885 6.71115,-15 15,-15 z"
id="path551" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="21px"
transform="translate(425.756,433)"
id="text357">foo.bar.com<tspan
font-style="italic"
font-size="16px"
x="-5.4200101"
y="20"
id="tspan349">Issuer:</tspan><tspan
font-style="italic"
font-size="16px"
x="47.080002"
y="20"
id="tspan351">venafi</tspan><tspan
font-style="italic"
font-size="16px"
x="89.580002"
y="20"
id="tspan353">-</tspan><tspan
font-style="italic"
font-size="16px"
x="94.9133"
y="20"
id="tspan355">tpp</tspan></text>
<g
id="path359">
<path
style="color:#000000;fill:#6d9eeb;fill-rule:evenodd;stroke-width:0.666667;stroke-miterlimit:8;-inkscape-stroke:none"
d="M 642.834,388.5 H 853.5 v 76.666 c 0,8.469 -6.865,15.334 -15.334,15.334 H 627.5 v -76.666 c 0,-8.469 6.865,-15.334 15.334,-15.334 z"
id="path541" />
<path
style="color:#000000;fill:#000000;fill-rule:evenodd;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 642.83398,388.16602 c -8.64912,0 -15.66796,7.01884 -15.66796,15.66796 v 77 h 211 c 8.64912,0 15.66796,-7.01884 15.66796,-15.66796 v -77 z m 0,0.66796 h 210.33204 v 76.33204 c 0,8.28885 -6.71115,15 -15,15 H 627.83398 v -76.33204 c 0,-8.28885 6.71115,-15 15,-15 z"
id="path543" />
</g>
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="21px"
transform="translate(676.661,420)"
id="text371">example.com<tspan
x="-25.4067"
y="25"
id="tspan361">www.example.com</tspan><tspan
font-style="italic"
font-size="16px"
x="-20"
y="46"
id="tspan363">Issuer:</tspan><tspan
font-style="italic"
font-size="16px"
x="32.5"
y="46"
id="tspan365">letsencrypt</tspan><tspan
font-style="italic"
font-size="16px"
x="109.667"
y="46"
id="tspan367">-</tspan><tspan
font-style="italic"
font-size="16px"
x="115"
y="46"
id="tspan369">prod</tspan></text>
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="M 0.174994,-0.283704 123.648,75.8769 123.298,76.4443 -0.174994,0.283704 Z M 124.439,72.0563 l 4.709,7.6043 -8.909,-0.7954 z"
id="path373"
transform="matrix(-1,-8.74228e-8,-8.74228e-8,1,611.648,309.5)" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="m 611.674,309.216 123.66,75.582 -0.348,0.569 -123.66,-75.583 z m 124.435,71.758 4.74,7.585 -8.912,-0.759 z"
id="path375" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="m 482.833,480.5 v 48.939 h -0.666 V 480.5 Z m 3.667,47.606 -4,8 -4,-8 z"
id="path377" />
<path
style="color:#000000;fill:#000000;-inkscape-stroke:none"
d="m 740.833,480.5 v 48.939 h -0.666 V 480.5 Z m 3.667,47.606 -4,8 -4,-8 z"
id="path379" />
<path
style="color:#000000;fill:#f9cb9c;fill-rule:evenodd;-inkscape-stroke:none"
d="m 829.5,532.063 c 0,2.519 -2.043,4.562 -4.562,4.562 v -4.562 c 0,1.259 -1.022,2.281 -2.282,2.281 -1.26,0 -2.281,-1.022 -2.281,-2.281 v 4.562 H 657.063 c -2.52,0 -4.563,2.043 -4.563,4.563 v 54.75 c 0,2.519 2.043,4.562 4.563,4.562 2.519,0 4.562,-2.043 4.562,-4.562 v -4.563 h 163.313 c 2.519,0 4.562,-2.043 4.562,-4.562 z M 657.063,545.75 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-1.26 -1.021,-2.282 -2.281,-2.282 -1.26,0 -2.281,1.022 -2.281,2.282 z"
id="path381" />
<path
style="color:#000000;fill:#c8a37d;fill-rule:evenodd;-inkscape-stroke:none"
d="m 657.063,545.75 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-1.26 -1.021,-2.282 -2.281,-2.282 -1.26,0 -2.281,1.022 -2.281,2.282 z m 167.875,-9.125 c 2.519,0 4.562,-2.043 4.562,-4.562 0,-2.52 -2.043,-4.563 -4.562,-4.563 -2.52,0 -4.563,2.043 -4.563,4.563 0,1.26 1.021,2.281 2.281,2.281 1.26,0 2.282,-1.021 2.282,-2.281 z"
id="path383" />
<path
style="color:#000000;fill:#000000;fill-rule:evenodd;stroke-miterlimit:8;-inkscape-stroke:none"
d="m 824.9375,527.16602 c -2.69923,0 -4.89648,2.19632 -4.89648,4.89648 v 4.22852 H 657.0625 c -2.70016,0 -4.89648,2.19632 -4.89648,4.89648 v 54.75 c 0,2.69923 2.19632,4.89648 4.89648,4.89648 2.69923,0 4.89648,-2.19725 4.89648,-4.89648 v -4.22852 H 824.9375 c 2.69923,0 4.89648,-2.19725 4.89648,-4.89648 v -54.75 c 0,-2.70016 -2.19632,-4.89648 -4.89648,-4.89648 z m 0,0.66796 c 2.33984,0 4.22852,1.88868 4.22852,4.22852 0,2.22036 -1.72083,3.98395 -3.89454,4.16211 v -4.16211 h -0.66601 c 0,1.07861 -0.86947,1.94727 -1.94922,1.94727 -1.07975,0 -1.94727,-0.86848 -1.94727,-1.94727 0,-2.33984 1.88975,-4.22852 4.22852,-4.22852 z m -0.33203,5.83204 v 2.625 h -3.89649 v -2.6211 c 0.48006,0.58593 1.13354,1.00781 1.94727,1.00781 0.8153,0 1.46892,-0.42386 1.94922,-1.01171 z m 4.56055,0.71875 v 52.42773 c 0,2.33877 -1.88975,4.22852 -4.22852,4.22852 H 661.95898 V 541.1875 c 0,-1.44007 -1.17498,-2.61523 -2.61523,-2.61523 -1.44025,0 -2.61328,1.17516 -2.61328,2.61523 v 4.16211 c -2.17556,-0.17723 -3.89649,-1.94108 -3.89649,-4.16211 0,-2.33984 1.88868,-4.22852 4.22852,-4.22852 h 163.3125 0.33398 4.22852 c 1.8433,0 3.39365,-1.06506 4.22852,-2.57421 z m -169.82227,4.85546 c 1.07975,0 1.94727,0.86734 1.94727,1.94727 0,2.22036 -1.72083,3.98395 -3.89454,4.16211 v -4.16211 c 0,-1.07993 0.86752,-1.94727 1.94727,-1.94727 z m -6.50977,4.26954 c 0.83496,1.50915 2.38458,2.57421 4.22852,2.57421 1.8433,0 3.39365,-1.06506 4.22852,-2.57421 v 47.53125 0.33398 4.5625 c 0,2.33877 -1.88975,4.22852 -4.22852,4.22852 -2.33984,0 -4.22852,-1.88975 -4.22852,-4.22852 z"
id="path385" />
<text
font-family="Arial, Arial_MSFontService, sans-serif"
font-weight="400"
font-size="19px"
transform="translate(682.367,570)"
id="text387">signed keypair</text>
<g
id="g1991"
transform="matrix(0.8280038,0,0,0.84377522,155.50129,124.76081)">
<path
d="m 22.7,17.21 h -3.83 v -1.975 c 0,-1.572 -1.3,-2.86 -2.86,-2.86 -1.56,0 -2.86,1.3 -2.86,2.86 V 17.21 H 9.33 v -1.975 c 0,-3.708 3.023,-6.7 6.7,-6.7 3.708,0 6.7,3.023 6.7,6.7 z"
fill="#ffa400"
id="path1976" />
<path
d="M 24.282,17.21 H 7.758 a 1.27,1.27 0 0 0 -1.29,1.29 v 12.2 a 1.27,1.27 0 0 0 1.29,1.3 h 16.524 a 1.27,1.27 0 0 0 1.29,-1.29 V 18.5 c -0.04,-0.725 -0.605,-1.3 -1.3,-1.3 z m -7.456,8.02 v 1.652 c 0,0.443 -0.363,0.846 -0.846,0.846 -0.443,0 -0.846,-0.363 -0.846,-0.846 V 25.23 c -0.524,-0.282 -0.846,-0.846 -0.846,-1.49 0,-0.927 0.766,-1.693 1.693,-1.693 0.927,0 1.693,0.766 1.693,1.693 0.04,0.645 -0.322,1.21 -0.846,1.49 z"
fill="#003a70"
id="path1978" />
<path
d="m 6.066,15.395 h -4 A 1.17,1.17 0 0 1 0.897,14.226 1.17,1.17 0 0 1 2.066,13.057 h 4 a 1.17,1.17 0 0 1 1.169,1.169 1.17,1.17 0 0 1 -1.169,1.169 z M 8.886,9.108 A 1.03,1.03 0 0 1 8.161,8.826 L 5.017,6.246 C 4.533,5.843 4.453,5.118 4.857,4.594 5.26,4.11 5.985,4.03 6.509,4.434 l 3.144,2.58 c 0.484,0.403 0.564,1.128 0.16,1.652 C 9.531,8.948 9.208,9.109 8.886,9.109 Z M 16.02,6.368 A 1.17,1.17 0 0 1 14.851,5.199 V 1.17 A 1.17,1.17 0 0 1 16.02,0 1.17,1.17 0 0 1 17.189,1.169 V 5.2 A 1.17,1.17 0 0 1 16.02,6.369 Z m 7.093,2.74 c -0.322,0 -0.685,-0.16 -0.887,-0.443 -0.403,-0.484 -0.322,-1.25 0.16,-1.652 l 3.144,-2.58 c 0.484,-0.403 1.25,-0.322 1.652,0.16 0.402,0.482 0.322,1.25 -0.16,1.652 l -3.144,2.58 a 1.13,1.13 0 0 1 -0.766,0.282 z m 6.81,6.287 h -4.03 a 1.17,1.17 0 0 1 -1.169,-1.169 1.17,1.17 0 0 1 1.169,-1.169 h 4.03 a 1.17,1.17 0 0 1 1.169,1.169 1.17,1.17 0 0 1 -1.169,1.169 z"
fill="#ffa400"
id="path1980" />
</g>
<g
id="g2012"
transform="matrix(0.23808024,0,0,0.23808024,784.86702,38.349964)">
<circle
class="st0"
cx="72"
cy="72"
r="63"
id="circle2008" />

<path
class="st1"
d="m 80.9,43 6.5,6.1 -11.9,28.3 C 74,81 72.8,84.4 72,87.8 71.3,84.4 70.1,81 68.6,77.4 L 54.1,43 H 43.2 L 72,110.1 100.8,43 Z"
id="path2010" />

</g>
<g
id="g2012-0"
transform="matrix(0.23808024,0,0,0.23808024,872.72161,122.87773)">
<circle
class="st0"
cx="72"
cy="72"
r="63"
id="circle2008-5" />
<path
class="st1"
d="m 80.9,43 6.5,6.1 -11.9,28.3 C 74,81 72.8,84.4 72,87.8 71.3,84.4 70.1,81 68.6,77.4 L 54.1,43 H 43.2 L 72,110.1 100.8,43 Z"
id="path2010-5" />
</g>
<g
id="Logo"
transform="matrix(0.2339657,0,0,0.22979822,531.02362,81.927566)">
<polygon
points="16.73,35.35 44.54,19.3 44.54,0 0,25.69 0,25.71 0,87.41 16.73,97.07 "
id="polygon2041" />
<polygon
points="62.32,0 62.32,49.15 44.54,49.15 44.54,30.81 27.8,40.47 27.8,103.44 44.54,113.12 44.54,64.11 62.32,64.11 62.32,82.33 79.05,72.67 79.05,9.66 "
id="polygon2043" />
<polygon
points="90.12,77.79 62.32,93.84 62.32,113.14 106.86,87.45 106.86,87.43 106.86,25.73 90.12,16.07 "
id="polygon2045" />
</g>
<g
id="g1991-5"
transform="matrix(0.8280038,0,0,0.84377522,287.34185,40.528799)">
<path
d="m 22.7,17.21 h -3.83 v -1.975 c 0,-1.572 -1.3,-2.86 -2.86,-2.86 -1.56,0 -2.86,1.3 -2.86,2.86 V 17.21 H 9.33 v -1.975 c 0,-3.708 3.023,-6.7 6.7,-6.7 3.708,0 6.7,3.023 6.7,6.7 z"
fill="#ffa400"
id="path1976-4" />
<path
d="M 24.282,17.21 H 7.758 a 1.27,1.27 0 0 0 -1.29,1.29 v 12.2 a 1.27,1.27 0 0 0 1.29,1.3 h 16.524 a 1.27,1.27 0 0 0 1.29,-1.29 V 18.5 c -0.04,-0.725 -0.605,-1.3 -1.3,-1.3 z m -7.456,8.02 v 1.652 c 0,0.443 -0.363,0.846 -0.846,0.846 -0.443,0 -0.846,-0.363 -0.846,-0.846 V 25.23 c -0.524,-0.282 -0.846,-0.846 -0.846,-1.49 0,-0.927 0.766,-1.693 1.693,-1.693 0.927,0 1.693,0.766 1.693,1.693 0.04,0.645 -0.322,1.21 -0.846,1.49 z"
fill="#003a70"
id="path1978-1" />
<path
d="m 6.066,15.395 h -4 A 1.17,1.17 0 0 1 0.897,14.226 1.17,1.17 0 0 1 2.066,13.057 h 4 a 1.17,1.17 0 0 1 1.169,1.169 1.17,1.17 0 0 1 -1.169,1.169 z M 8.886,9.108 A 1.03,1.03 0 0 1 8.161,8.826 L 5.017,6.246 C 4.533,5.843 4.453,5.118 4.857,4.594 5.26,4.11 5.985,4.03 6.509,4.434 l 3.144,2.58 c 0.484,0.403 0.564,1.128 0.16,1.652 C 9.531,8.948 9.208,9.109 8.886,9.109 Z M 16.02,6.368 A 1.17,1.17 0 0 1 14.851,5.199 V 1.17 A 1.17,1.17 0 0 1 16.02,0 1.17,1.17 0 0 1 17.189,1.169 V 5.2 A 1.17,1.17 0 0 1 16.02,6.369 Z m 7.093,2.74 c -0.322,0 -0.685,-0.16 -0.887,-0.443 -0.403,-0.484 -0.322,-1.25 0.16,-1.652 l 3.144,-2.58 c 0.484,-0.403 1.25,-0.322 1.652,0.16 0.402,0.482 0.322,1.25 -0.16,1.652 l -3.144,2.58 a 1.13,1.13 0 0 1 -0.766,0.282 z m 6.81,6.287 h -4.03 a 1.17,1.17 0 0 1 -1.169,-1.169 1.17,1.17 0 0 1 1.169,-1.169 h 4.03 a 1.17,1.17 0 0 1 1.169,1.169 1.17,1.17 0 0 1 -1.169,1.169 z"
fill="#ffa400"
id="path1980-3" />
</g>
</g>
<style
type="text/css"
id="style2006">
.st0{fill-rule:evenodd;clip-rule:evenodd;fill:#FCB116;}
.st1{fill:#262626;}
</style>
</svg>
After Width: | Height: | Size: 23 KiB |
BIN
docs/images/collabora-online-in-nextcloud.png
Normal file
After Width: | Height: | Size: 95 KiB |
BIN
docs/images/collabora-online.png
Normal file
After Width: | Height: | Size: 116 KiB |
BIN
docs/images/collabora-traffic-flow.png
Normal file
After Width: | Height: | Size: 168 KiB |
BIN
docs/images/cyberchef.png
Normal file
After Width: | Height: | Size: 122 KiB |
BIN
docs/images/diycluster-k3s-profile-setup-node2.png
Normal file
After Width: | Height: | Size: 155 KiB |
BIN
docs/images/diycluster-k3s-profile-setup.png
Normal file
After Width: | Height: | Size: 160 KiB |
BIN
docs/images/docker-swarm-ha-function.png
Normal file
After Width: | Height: | Size: 314 KiB |
BIN
docs/images/docker-swarm-node-failure.png
Normal file
After Width: | Height: | Size: 333 KiB |
BIN
docs/images/docker-swarm-node-restore.png
Normal file
After Width: | Height: | Size: 310 KiB |
BIN
docs/images/duplicati.jpg
Normal file
After Width: | Height: | Size: 56 KiB |
BIN
docs/images/duplicity.png
Normal file
After Width: | Height: | Size: 248 KiB |
BIN
docs/images/elkarbackup-setup-1.png
Normal file
After Width: | Height: | Size: 963 KiB |
BIN
docs/images/elkarbackup-setup-2.png
Normal file
After Width: | Height: | Size: 93 KiB |
BIN
docs/images/elkarbackup-setup-3.png
Normal file
After Width: | Height: | Size: 62 KiB |
BIN
docs/images/elkarbackup.png
Normal file
After Width: | Height: | Size: 135 KiB |
BIN
docs/images/emby.png
Normal file
After Width: | Height: | Size: 1.3 MiB |
BIN
docs/images/external-dns.png
Normal file
After Width: | Height: | Size: 251 KiB |
BIN
docs/images/favicon.ico
Normal file
After Width: | Height: | Size: 1.1 KiB |
BIN
docs/images/flux_github_token.png
Normal file
After Width: | Height: | Size: 121 KiB |
BIN
docs/images/funkwhale.jpg
Normal file
After Width: | Height: | Size: 74 KiB |
BIN
docs/images/ghost.png
Normal file
After Width: | Height: | Size: 282 KiB |
BIN
docs/images/gollum.png
Normal file
After Width: | Height: | Size: 176 KiB |
BIN
docs/images/headphones.png
Normal file
After Width: | Height: | Size: 194 KiB |
BIN
docs/images/heimdall.jpg
Normal file
After Width: | Height: | Size: 330 KiB |
BIN
docs/images/homeassistant.png
Normal file
After Width: | Height: | Size: 226 KiB |
BIN
docs/images/huginn.png
Normal file
After Width: | Height: | Size: 92 KiB |
BIN
docs/images/immich.jpg
Normal file
After Width: | Height: | Size: 206 KiB |
BIN
docs/images/ingress.jpg
Normal file
After Width: | Height: | Size: 92 KiB |
BIN
docs/images/instapy.png
Normal file
After Width: | Height: | Size: 815 KiB |
BIN
docs/images/ipfs.png
Normal file
After Width: | Height: | Size: 175 KiB |
BIN
docs/images/jackett.png
Normal file
After Width: | Height: | Size: 315 KiB |
BIN
docs/images/jellyfin.png
Normal file
After Width: | Height: | Size: 775 KiB |
BIN
docs/images/kanboard.png
Normal file
After Width: | Height: | Size: 42 KiB |
BIN
docs/images/kavita.png
Normal file
After Width: | Height: | Size: 299 KiB |
BIN
docs/images/keepalived.png
Normal file
After Width: | Height: | Size: 47 KiB |
BIN
docs/images/keycloak-add-client-1.png
Normal file
After Width: | Height: | Size: 57 KiB |
BIN
docs/images/keycloak-add-client-2.png
Normal file
After Width: | Height: | Size: 54 KiB |
BIN
docs/images/keycloak-add-client-3.png
Normal file
After Width: | Height: | Size: 114 KiB |
BIN
docs/images/keycloak-add-client-4.png
Normal file
After Width: | Height: | Size: 66 KiB |
BIN
docs/images/keycloak-add-user-1.png
Normal file
After Width: | Height: | Size: 70 KiB |
BIN
docs/images/keycloak-add-user-2.png
Normal file
After Width: | Height: | Size: 80 KiB |
BIN
docs/images/keycloak-add-user-3.png
Normal file
After Width: | Height: | Size: 69 KiB |
BIN
docs/images/keycloak.png
Normal file
After Width: | Height: | Size: 32 KiB |
BIN
docs/images/komga.png
Normal file
After Width: | Height: | Size: 1.9 MiB |
BIN
docs/images/kubernetes-cluster-design.png
Normal file
After Width: | Height: | Size: 343 KiB |
BIN
docs/images/kubernetes-dashboard.png
Normal file
After Width: | Height: | Size: 502 KiB |
BIN
docs/images/kubernetes-helm.png
Normal file
After Width: | Height: | Size: 59 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-1.png
Normal file
After Width: | Height: | Size: 159 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-2.png
Normal file
After Width: | Height: | Size: 555 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-3.png
Normal file
After Width: | Height: | Size: 156 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-4.png
Normal file
After Width: | Height: | Size: 300 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-5.png
Normal file
After Width: | Height: | Size: 198 KiB |
BIN
docs/images/kubernetes-on-digitalocean-screenshot-6.png
Normal file
After Width: | Height: | Size: 189 KiB |
BIN
docs/images/kubernetes-on-digitalocean.jpg
Normal file
After Width: | Height: | Size: 80 KiB |
BIN
docs/images/kubernetes-snapshots.png
Normal file
After Width: | Height: | Size: 246 KiB |
BIN
docs/images/lazylibrarian.png
Normal file
After Width: | Height: | Size: 61 KiB |
BIN
docs/images/lidarr.png
Normal file
After Width: | Height: | Size: 2.4 MiB |
BIN
docs/images/linx.png
Normal file
After Width: | Height: | Size: 100 KiB |
BIN
docs/images/mastodon-report-user.png
Normal file
After Width: | Height: | Size: 272 KiB |