Spiffy new versions of components, some housekeeping
# Introduction

[Tiny Tiny RSS][ttrss] is a self-hosted, AJAX-based RSS reader, which rose to popularity as a replacement for Google Reader. It supports advanced features, such as:

* Plugins and theming in a drop-in fashion
* Filtering (discard all articles with title matching "trump")
* Sharing articles via a unique public URL/feed

Tiny Tiny RSS requires a database and a webserver - this recipe provides both using docker, exposed to the world via LetsEncrypt.

# Ingredients

**Required**

1. Webserver (nginx container)
2. Database (postgresql container)
3. TTRSS (ttrss container)
4. Nginx reverse proxy with LetsEncrypt

**Optional**

1. Email server (if you want to email articles from TTRSS)

# Preparation

**Setup filesystem location**

I set up a directory for the ttrss data, at /data/ttrss.

I created docker-compose.yml, as follows:

````
rproxy:
  image: nginx:1.13-alpine
  ports:
    - "34804:80"
  environment:
    - DOMAIN_NAME=ttrss.funkypenguin.co.nz
    - VIRTUAL_HOST=ttrss.funkypenguin.co.nz
    - LETSENCRYPT_HOST=ttrss.funkypenguin.co.nz
    - LETSENCRYPT_EMAIL=davidy@funkypenguin.co.nz
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  volumes_from:
    - ttrss
  links:
    - ttrss:ttrss

ttrss:
  image: tkaefer/docker-ttrss
  restart: always
  links:
    - postgres:database
  environment:
    - DB_USER=ttrss
    - DB_PASS=uVL53xfmJxW
    - SELF_URL_PATH=https://ttrss.funkypenguin.co.nz
  volumes:
    - ./plugins.local:/var/www/plugins.local
    - ./themes.local:/var/www/themes.local
    - ./reader:/var/www/reader

postgres:
  image: postgres:latest
  volumes:
    - /srv/ssd-data/ttrss/db:/var/lib/postgresql/data
  restart: always
  environment:
    - POSTGRES_USER=ttrss
    - POSTGRES_PASSWORD=uVL53xfmJxW

gmailsmtp:
  image: softinnov/gmailsmtp
  restart: always
  environment:
    - user=davidy@funkypenguin.co.nz
    - pass=eqknehqflfbufzbh
    - DOMAIN_NAME=gmailsmtp.funkypenguin.co.nz
````

Run ````docker-compose up```` in the same directory, and watch the output. The PostgreSQL container will create the "ttrss" database, and ttrss will start using it.
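
Before moving on, it's worth a quick sanity check that the containers came up and that the database was actually created. This is only a rough sketch - the service names are taken from the docker-compose.yml above:

````
# All services should show as "Up"
docker-compose ps

# Confirm postgres finished initialising, and that the "ttrss" database exists
docker-compose logs postgres | grep "database system is ready"
docker-compose exec postgres psql -U ttrss -c '\l'
````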

# Login to UI

Log into https://\<your VIRTUAL_HOST\>. The default user is "admin", and the default password is "password".

# Optional - Enable af_psql_trgm plugin for similar post detection

One of the native plugins enables the detection of "similar" articles. This requires the pg_trgm extension to be enabled in your database.

From the working directory, use ````docker exec```` to get a shell within your postgres container, switch to the postgres user, and run ````psql````:
````
[root@kvm nginx]# docker exec -it ttrss_postgres_1 /bin/sh
# su - postgres
No directory, logging in with HOME=/
$ psql
psql (9.6.3)
Type "help" for help.
````

Add the pg_trgm extension to your ttrss database:
````
postgres=# \c ttrss
You are now connected to database "ttrss" as user "postgres".
ttrss=# CREATE EXTENSION pg_trgm;
CREATE EXTENSION
ttrss=# \q
````
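
If you'd rather not open an interactive session, the same thing can be done as a one-liner. A sketch only, assuming the container name shown above (````ttrss_postgres_1````) matches yours:

````
# Create the extension non-interactively (idempotent thanks to IF NOT EXISTS)
docker exec -it ttrss_postgres_1 su - postgres -c "psql -d ttrss -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'"
````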

[ttrss]: https://tt-rss.org/
# Athena Mining Pool
|
||||
|
||||
[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners.
|
||||
|
||||

|
||||
|
||||
This recipe illustrates how to build a mining pool for [Athena](https://getathena.org), a newborn [CryptoNote](https://cryptonote.org/) [currency](https://cryptonote.org/coins) recently forked from [TurtleCoin](https://turtlecoin.lol)
|
||||
|
||||
The end result is a mining pool which looks like this: https://athx.heigh-ho.funkypenguin.co.nz/
|
||||
|
||||
!!! question "Isn't this just a copy/paste of your [TurtleCoin Pool Recipe](/recipes/turtle-pool/)?"
|
||||
|
||||
Why yes. Yes it is :) But it's adjusted for Athena, which uses different containers and wallet binary names, and it's running the improved [cryptonote-nodejs-pool software](https://github.com/dvandal/cryptonote-nodejs-pool), which is common to all the [cryptonote-mining-pool](/recipes/criptonote-mining-pool/) recipes!
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
|
||||
2. [Traefik](/ha-docker-swarm/traefik) configured per design
|
||||
3. DNS entry for the hostnames (_pool and api_) you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
|
||||
4. At least 8GB disk space (2.5GB used, 6.5GB for future growth)
|
||||
|
||||
## Preparation
|
||||
|
||||
### Create user account
|
||||
|
||||
The Athena pool elements won't (_and shouldn't_) run as root, but they'll need access to write data to some parts of the filesystem (_like logs, etc_).
|
||||
|
||||
To manage access control, we'll want to create a local user on **each docker node** with the same UID.
|
||||
|
||||
```
|
||||
useradd -u 3508 athena-pool
|
||||
```
|
||||
|
||||
### Setup Redis
|
||||
|
||||
|
||||
The pool uses Redis for in-memory and persistent storage.
|
||||
|
||||
!!! warning "Playing it safe"
|
||||
|
||||
Be aware that by default, Redis stores some data **only** in memory, and writes it to the filesystem at intervals (_which can be up to 5 minutes apart, by default_). Given we don't **want** to lose 5 minutes of miners' data if we restart Redis (_what happens if we found a block during those 5 minutes but haven't paid any miners yet?_), we want to ensure that Redis runs in "appendonly" mode, which ensures that every change is immediately written to disk.

We also want to make sure that we retain all Redis logs persistently (_we're dealing with people's cryptocurrency here, so it's a good idea to keep persistent logs for debugging/auditing_).

Create directories to hold Redis data. We use separate directories for future flexibility - one day, we may want to back up the data but not the logs, or move the data to an SSD partition but leave the logs on slower, cheaper disk.
|
||||
|
||||
```
|
||||
mkdir -p /var/data/athena-pool/redis/config
|
||||
mkdir -p /var/data/athena-pool/redis/data
|
||||
mkdir -p /var/data/athena-pool/redis/logs
|
||||
chown athena-pool /var/data/athena-pool/redis/data
|
||||
chown athena-pool /var/data/athena-pool/redis/logs
|
||||
```
|
||||
|
||||
Create **/var/data/athena-pool/redis/config/redis.conf** using http://download.redis.io/redis-stable/redis.conf as a guide. The following are the values I changed from default on my deployment (_but I'm not a Redis expert!_):
|
||||
|
||||
```
appendonly yes
appendfilename "appendonly.aof"
loglevel notice
logfile "/logs/redis.log"
protected-mode no
```

I also had to **disable** the following line by commenting it out (_thus ensuring the Redis container will respond to the other containers_):

```
bind 127.0.0.1
```
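
Once the stack is up (_we won't deploy it until the Docker Swarm section below_), it's easy to confirm the appendonly setting actually took effect. A rough sketch - the container name filter is an assumption, so check ```docker ps``` for the real name on the node running Redis:

```
# Expect the reply "appendonly" / "yes" from the running Redis instance
docker exec -it $(docker ps -q -f name=athena-pool_redis) redis-cli config get appendonly
```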
|
||||
|
||||
### Setup Nginx
|
||||
|
||||
We'll run a simple Nginx container to serve the static front-end of the web UI.
|
||||
|
||||
The simplest way to get the frontend is just to clone the upstream athena-pool repo, and mount the "/website" subdirectory into Nginx.
|
||||
|
||||
```
|
||||
git clone https://github.com/funkypenguin/cryptonote-nodejs-pool.git /var/data/athena-pool/nginx
|
||||
```
|
||||
|
||||
Edit **/var/data/athena-pool/nginx/website/config.js**, and change at least the following:
|
||||
|
||||
```
|
||||
var api = "https://<CHOOSE A FQDN TO USE FOR YOUR API>";
|
||||
var blockchainExplorer = "http://explorer.athx.org/?hash={id}#blockchain_block";
|
||||
var transactionExplorer = "http://explorer.athx.org/?hash={id}#blockchain_transaction";
|
||||
```
|
||||
|
||||
And optionally, set the following:
|
||||
```
|
||||
var telegram = "https://t.me/YourPool";
|
||||
var discord = "https://chat.funkypenguin.co.nz";
|
||||
```
|
||||
|
||||
### Setup Athena daemon
|
||||
|
||||
The first thing we'll need to participate in the great and powerful Athena network is a **node**. The node is responsible for communicating with the rest of the nodes in the blockchain, allowing our miners to receive new blocks to try to find.
|
||||
|
||||
Create a directory to hold the blockchain data:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/runtime/athena-pool/daemon/
|
||||
```
|
||||
|
||||
### Setup Athena wallet
|
||||
|
||||
Our pool needs a wallet to be able to (a) receive rewards for blocks discovered, and (b) pay out our miners for their share of the reward.
|
||||
|
||||
!!! note
|
||||
Under Athena, "walletd" was renamed to "services". I don't know why, but I've updated this recipe to reflect this.
|
||||
|
||||
Create directories to hold wallet data:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/athena-pool/services/config
|
||||
mkdir -p /var/data/athena-pool/services/services
|
||||
mkdir -p /var/data/athena-pool/services/logs
|
||||
chown -R athena-pool /var/data/athena-pool/services/services
chown -R athena-pool /var/data/athena-pool/services/logs
|
||||
```
|
||||
|
||||
Now create the initial wallet. You'll want to secure your wallet password, so the command below will **prompt** you for the key (no output from the prompt), and insert it into an environment variable. This means that the key won't be stored in plaintext in your bash history!
|
||||
|
||||
```
|
||||
read PASS
|
||||
```
|
||||
|
||||
After having entered your password (you can confirm by running ```env | grep PASS```), run the following to run the wallet daemon _once_, with the instruction to create a new wallet container:
|
||||
|
||||
```
|
||||
docker run \
|
||||
-v /var/data/athena-pool/services/services:/container \
|
||||
--rm -ti --entrypoint /usr/local/bin/services funkypenguin/athena \
|
||||
--container-file /container/wallet.container \
|
||||
--container-password $PASS \
|
||||
--generate-container
|
||||
```
|
||||
|
||||
You'll get a lot of output. The following are relevant lines from a successful run with the extra stuff stripped out:
|
||||
|
||||
```
|
||||
2018-May-01 11:14:57.662664 INFO walled v0.6.4 ()
|
||||
2018-May-01 11:14:59.868087 INFO Generating new wallet
|
||||
2018-May-01 11:14:59.919178 INFO Container initialized with view secret key, public view key <your view public key will be here>
|
||||
2018-May-01 11:14:59.920932 INFO New wallet added athena<your wallet's public address>, creation timestamp 0
|
||||
2018-May-01 11:14:59.932367 INFO Container shut down
|
||||
2018-May-01 11:14:59.932419 INFO Loading container...
|
||||
2018-May-01 11:14:59.961814 INFO Consumer added, consumer 0x55b0fb5bc070, count 1
|
||||
2018-May-01 11:14:59.961996 INFO Starting...
|
||||
2018-May-01 11:14:59.962173 INFO Container loaded, view public key <your view public key will be here>, wallet count 1, actual balance 0.00, pending balance 0.00
|
||||
2018-May-01 11:14:59.962508 INFO New wallet is generated. Address: TRTL<your wallet's public address>
|
||||
2018-May-01 11:14:59.962581 INFO Saving container...
|
||||
2018-May-01 11:14:59.962683 INFO Stopping...
|
||||
2018-May-01 11:14:59.962862 INFO Stopped
|
||||
```
|
||||
|
||||
Take careful note of your wallet password, public view key, and wallet address
|
||||
|
||||
Create **/var/data/athena-pool/services/config/services.conf**, containing the following:
|
||||
|
||||
```
|
||||
bind-address = 0.0.0.0
|
||||
container-file = /services/wallet.container
|
||||
container-password = <ENTER YOUR CONTAINER PASSWORD HERE>
|
||||
rpc-password = <CHOOSE A PASSWORD TO ALLOW POOL TO TALK TO WALLET>
|
||||
log-file = /dev/stdout
|
||||
log-level = 3
|
||||
daemon-address = daemon
|
||||
```
|
||||
|
||||
Set permissions appropriately:
|
||||
|
||||
```
|
||||
chown athena-pool /var/data/athena-pool/services/ -R
|
||||
```
|
||||
|
||||
|
||||
### Setup Athena mining pool
|
||||
|
||||
Following the convention we've set above, create directories to hold pool data:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/athena-pool/pool/config
|
||||
mkdir -p /var/data/athena-pool/pool/logs
|
||||
chown -R athena-pool /var/data/athena-pool/pool/logs
|
||||
```
|
||||
|
||||
Now create **/var/data/athena-pool/pool/config/config.json**, copying https://github.com/funkypenguin/cryptonote-nodejs-pool/blob/master/config_examples/monero.json, and modifying according to https://github.com/athena-network/athena-pool/blob/master/config.json, adjusting at least the following:
|
||||
|
||||
Send logs to /logs/, so that they can potentially be stored / backed up separately from the config:
|
||||
|
||||
```
|
||||
"logging": {
|
||||
"files": {
|
||||
"level": "debug",
|
||||
"directory": "/logs",
|
||||
"flushInterval": 5
|
||||
},
|
||||
```
|
||||
|
||||
Set the "poolAddress" field to your wallet address
|
||||
```
|
||||
"poolServer": {
|
||||
"enabled": true,
|
||||
"clusterForks": "auto",
|
||||
"poolAddress": "<SET THIS TO YOUR WALLET ADDRESS GENERATED ABOVE>",
|
||||
```
|
||||
|
||||
Add the "host" value to the api section, since our API will run on its own container, and choose a password you'll use for the webUI admin page
|
||||
|
||||
```
|
||||
"api": {
|
||||
"enabled": true,
|
||||
"hashrateWindow": 600,
|
||||
"updateInterval": 5,
|
||||
"port": 8117,
|
||||
"blocks": 30,
|
||||
"payments": 30,
|
||||
"password": "<PASSWORD FOR ADMIN PAGE ACCESS>"
|
||||
```
|
||||
|
||||
Set the host value for the daemon:
|
||||
|
||||
```
|
||||
"daemon": {
|
||||
"host": "daemon",
|
||||
"port": 11898
|
||||
},
|
||||
```
|
||||
|
||||
Set the host value for the wallet, and set your container password (_you recorded it earlier, remember?_)
|
||||
|
||||
```
|
||||
"wallet": {
|
||||
"host": "services",
|
||||
"port": 8079,
|
||||
"password": "<SET ME TO YOUR WALLET RPC PASSWORD>"
|
||||
},
|
||||
```
|
||||
|
||||
Set the host value for Redis:
|
||||
|
||||
```
|
||||
"redis": {
|
||||
"host": "redis",
|
||||
"port": 6379
|
||||
},
|
||||
```
|
||||
|
||||
That's it! The above config files mean each element of the pool will be able to communicate with the other elements within the docker swarm, by name.
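
Once the stack is deployed (_next section_), you can quickly prove that this name-based wiring works. A rough sketch only - the container name filter assumes the stack name used later (```athena-pool```), and assumes ```getent``` exists in the pool image:

```
# Resolve the "redis" service name from inside the running pool container
docker exec -it $(docker ps -q -f name=athena-pool_pool) getent hosts redis
```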
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
!!! tip
|
||||
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
|
||||
|
||||
|
||||
```
version: '3'

services:
  daemon:
    image: funkypenguin/athena
    volumes:
      - /var/data/runtime/athena-pool/daemon/:/root/.athena
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  services:
    image: funkypenguin/athena
    volumes:
      - /var/data/athena-pool/services/config:/config:ro
      - /var/data/athena-pool/services/services:/services
      - /var/data/athena-pool/services/logs:/logs
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
    entrypoint: |
      services --config /config/services.conf | tee /logs/athena-services.log

  pool:
    image: funkypenguin/cryptonote-nodejs-pool
    volumes:
      - /var/data/athena-pool/pool/config:/config:ro
      - /var/data/athena-pool/pool/logs:/logs
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
      - traefik_public
    ports:
      - 3335:3335
      - 5557:5557
      - 7779:7779
    entrypoint: |
      node init.js -config=/config/config.json
    deploy:
      labels:
        - traefik.frontend.rule=Host:api.athx.heigh-ho.funkypenguin.co.nz
        - traefik.docker.network=traefik_public
        - traefik.port=8117

  redis:
    volumes:
      - /var/data/athena-pool/redis/config:/config:ro
      - /var/data/athena-pool/redis/data:/data
      - /var/data/athena-pool/redis/logs:/logs
      - /etc/localtime:/etc/localtime:ro
    image: redis
    entrypoint: |
      redis-server /config/redis.conf
    networks:
      - internal

  nginx:
    volumes:
      - /var/data/athena-pool/nginx/website:/usr/share/nginx/html:ro
      - /etc/localtime:/etc/localtime:ro
    image: nginx
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:athx.heigh-ho.funkypenguin.co.nz
        - traefik.docker.network=traefik_public
        - traefik.port=80

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.26.0/24
```
|
||||
|
||||
!!! note
|
||||
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Athena is a go!
|
||||
|
||||
Launch the Athena pool stack by running ```docker stack deploy athena-pool -c <path-to-docker-compose.yml>```, and then run ```docker stack ps athena-pool``` to ensure the stack has come up properly. (_See [troubleshooting](/reference/troubleshooting) if not_)
|
||||
|
||||
The first thing that'll happen is that Athena will start syncing the blockchain. You can watch this happening with ```docker service logs athena-pool_daemon -f```. While the daemon is syncing, it won't respond to requests, so services, the pool, etc will be non-functional.
|
||||
|
||||
You can watch the various elements of the pool doing their thing, by running ```tail -f /var/data/athena-pool/pool/logs/*.log```
|
||||
|
||||
### So how do I mine to it?
|
||||
|
||||
That... is another recipe. Start with the "[CryptoMiner](/recipes/cryptominer/)" uber-recipe for GPU/rig details, grab a copy of [xmr-stak](https://github.com/fireice-uk/xmr-stak), use the same parameters as TurtleCoin, and follow your nose. Jump into the Athena discord (_below_) #mining channel for help.
|
||||
|
||||
### What to do if it breaks?
|
||||
|
||||
Athena is a newborn cryptocurrency, and the [destiny of the coin is not yet clear](https://github.com/athena-network/athx-research/issues/1).
|
||||
|
||||
Jump into the [Athena Discord server](http://chat.athx.org) to ask questions, contribute.
|
||||
|
||||
## Chef's Notes
|
||||
|
||||
1. Because Docker Swarm performs ingress NAT for its load-balanced "routing mesh", the source address of inbound miner traffic is rewritten to a (_common_) docker node IP address. This means it's [not possible to determine the actual source IP address](https://github.com/moby/moby/issues/25526) of a miner. Which, in turn, means that any **one** misconfigured miner could trigger an IP ban, and lock out all other miners for 5 minutes at a time.
|
||||
|
||||
Two possible solutions to this are (1) disabling banning (_see the sketch below these notes_), or (2) updating the pool banning code to ban based on a combination of IP address and miner wallet address. I'll be working on a change to implement option (2) if this becomes a concern.
|
||||
|
||||
2. The traefik labels in the docker-compose are to permit automatic LetsEncrypt SSL-protected proxying of your pool UI and API addresses.
|
||||
|
||||
3. Astute readers will note that although I set permissions to the "athena-pool" user above, those permissions are not **actually** enforced in the .yml file (yet)
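
If you do want to simply disable banning in the meantime, it's a one-line change to the pool config. The following is only a hedged sketch - it assumes your ```config.json``` still carries the upstream ```banning``` block nested under ```poolServer``` (as in the dvandal/cryptonote-nodejs-pool example configs), and that ```jq``` is installed on the node:

```
# Assumption: banning settings live under "poolServer" as in the upstream example config.
# Flip the flag in place, then redeploy/restart the pool service so it picks up the change.
jq '.poolServer.banning.enabled = false' /var/data/athena-pool/pool/config/config.json > /tmp/config.json \
  && mv /tmp/config.json /var/data/athena-pool/pool/config/config.json
```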
|
||||
|
||||
### Tip your waiter (donate) 👏
|
||||
|
||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||
|
||||
Also, you could send me some ATHX ❤️ to _athena2i8SmWUGQffz6sXEdvCDawkDQvz2gdf9iBnepU999j3fUzuschJiKrow2GCTEsd5cWnk3sz2iz59WSr6NVdpvDXPbX6qj4g_
|
||||
|
||||
### Your comments? 💬
# Masari Mining Pool
|
||||
|
||||
[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners.
|
||||
|
||||

|
||||
|
||||
This recipe illustrates how to build a mining pool for [Masari](https://getmasari.org), one of many [CryptoNote](https://cryptonote.org/) [currencies](https://cryptonote.org/coins) (_which include [Monero](https://www.coingecko.com/en/coins/monero)_), but the principles can be applied to most mineable coins.
|
||||
|
||||
The end result is a mining pool which looks like this: https://msr.heigh-ho.funkypenguin.co.nz/
|
||||
|
||||
!!! question "Isn't this just a copy/paste of your [TurtleCoin Pool Recipe](/recipes/turtle-pool/)?"
|
||||
|
||||
Why yes. Yes it is :) But it's adjusted for Masari, which uses different containers and wallet binary names, and it's running the improved [cryptonote-nodejs-pool software](https://github.com/dvandal/cryptonote-nodejs-pool), which is common to all the [cryptonote-mining-pool](/recipes/criptonote-mining-pool/) recipes!
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
|
||||
2. [Traefik](/ha-docker-swarm/traefik) configured per design
|
||||
3. DNS entry for the hostnames (_pool and api_) you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
|
||||
4. At least 16GB disk space (12GB used, 4GB for future growth)
|
||||
|
||||
## Preparation
|
||||
|
||||
### Create user account
|
||||
|
||||
The Masari pool elements won't (_and shouldn't_) run as root, but they'll need access to write data to some parts of the filesystem (_like logs, etc_).
|
||||
|
||||
To manage access control, we'll want to create a local user on **each docker node** with the same UID.
|
||||
|
||||
```
|
||||
useradd -u 3507 masari-pool
|
||||
```
|
||||
|
||||
### Setup Redis
|
||||
|
||||
|
||||
The pool uses Redis for in-memory and persistent storage. This comes in handy for the Docker Swarm deployment, since while the various pool modules weren't _designed_ to run as microservices, the fact that they all rely on Redis for data storage makes this possible.
|
||||
|
||||
!!! warning "Playing it safe"
|
||||
|
||||
Be aware that by default, Redis stores some data **only** in memory, and writes it to the filesystem at intervals (_which can be up to 5 minutes apart, by default_). Given we don't **want** to lose 5 minutes of miners' data if we restart Redis (_what happens if we found a block during those 5 minutes but haven't paid any miners yet?_), we want to ensure that Redis runs in "appendonly" mode, which ensures that every change is immediately written to disk.

We also want to make sure that we retain all Redis logs persistently (_we're dealing with people's cryptocurrency here, so it's a good idea to keep persistent logs for debugging/auditing_).

Create directories to hold Redis data. We use separate directories for future flexibility - one day, we may want to back up the data but not the logs, or move the data to an SSD partition but leave the logs on slower, cheaper disk.
|
||||
|
||||
```
|
||||
mkdir -p /var/data/masari-pool/redis/config
|
||||
mkdir -p /var/data/masari-pool/redis/data
|
||||
mkdir -p /var/data/masari-pool/redis/logs
|
||||
chown masari-pool /var/data/masari-pool/redis/data
|
||||
chown masari-pool /var/data/masari-pool/redis/logs
|
||||
```
|
||||
|
||||
Create **/var/data/masari-pool/redis/config/redis.conf** using http://download.redis.io/redis-stable/redis.conf as a guide. The following are the values I changed from default on my deployment (_but I'm not a Redis expert!_):
|
||||
|
||||
```
|
||||
appendonly yes
|
||||
appendfilename "appendonly.aof"
|
||||
loglevel notice
|
||||
logfile "/logs/redis.log"
|
||||
protected-mode no
|
||||
```
|
||||
|
||||
I also had to **disable** the following line by commenting it out (_thus ensuring the Redis container will respond to the other containers_):
|
||||
|
||||
```
|
||||
bind 127.0.0.1
|
||||
```
|
||||
|
||||
### Setup Nginx
|
||||
|
||||
We'll run a simple Nginx container to serve the static front-end of the web UI.
|
||||
|
||||
The simplest way to get the frontend is just to clone the upstream masari-pool repo, and mount the "/website" subdirectory into Nginx.
|
||||
|
||||
```
|
||||
git clone https://github.com/turtlecoin/turtle-pool.git /var/data/masari-pool/nginx/
|
||||
```
|
||||
|
||||
Edit **/var/data/masari-pool/nginx/website/config.js**, and change at least the following:
|
||||
|
||||
```
|
||||
var api = "https://<CHOOSE A FQDN TO USE FOR YOUR API>";
|
||||
var poolHost = "<SET TO THE PUBLIC DNS NAME FOR YOUR POOL SERVER";
|
||||
```
|
||||
|
||||
### Setup Masari daemon
|
||||
|
||||
The first thing we'll need to participate in the great and powerful Masari network is a **node**. The node is responsible for communicating with the rest of the nodes in the blockchain, allowing our miners to receive new blocks to try to find.
|
||||
|
||||
Create directories to hold the blockchain data for all 3 daemons:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/runtime/masari-pool/daemon/{1,failsafe1,failsafe2}
|
||||
```
|
||||
|
||||
### Setup Masari wallet
|
||||
|
||||
Our pool needs a wallet to be able to (a) receive rewards for blocks discovered, and (b) pay out our miners for their share of the reward.
|
||||
|
||||
Create directories to hold wallet data
|
||||
|
||||
```
|
||||
mkdir -p /var/data/masari-pool/wallet/config
|
||||
mkdir -p /var/data/masari-pool/wallet/container
|
||||
mkdir -p /var/data/masari-pool/wallet/logs
|
||||
chown -R masari-pool /var/data/masari-pool/wallet/container
|
||||
chown -R masari-pool /var/data/masari-pool/wallet/logs
|
||||
```
|
||||
|
||||
Now create the initial wallet. You'll want to secure your wallet password, so the command below will **prompt** you for the key (no output from the prompt), and insert it into an environment variable. This means that the key won't be stored in plaintext in your bash history!
|
||||
|
||||
```
|
||||
read PASS
|
||||
```
|
||||
|
||||
After having entered your password (you can confirm by running ```env | grep PASS```), run the following to run the wallet daemon _once_, with the instruction to create a new wallet container:
|
||||
|
||||
```
|
||||
docker run \
|
||||
-v /var/data/masari-pool/wallet/wallet:/wallet \
|
||||
--rm -ti --entrypoint /usr/local/bin/masari-wallet-cli funkypenguin/masari \
|
||||
--password $PASS \
|
||||
--generate-new-wallet /wallet/wallet \
|
||||
--mnemonic-language English \
|
||||
--command do-nothing-and-exit
|
||||
```
|
||||
|
||||
You'll get a lot of output. The following are relevant lines from a successful run with the extra stuff stripped out:
|
||||
|
||||
```
|
||||
[root@ds3 ~]# docker run \
|
||||
> -v /var/data/masari-pool/wallet/wallet:/wallet \
|
||||
> --rm -ti --entrypoint /usr/local/bin/masari-wallet-cli funkypenguin/masari \
|
||||
> --password $PASS \
|
||||
> --generate-new-wallet /wallet/wallet \
|
||||
> --mnemonic-language English \
|
||||
> --command do-nothing-and-exit
|
||||
This is the command line masari wallet. It needs to connect to a masari daemon to work correctly.
|
||||
|
||||
Masari 'Classy Caiman' (v0.2.4.1-release)
|
||||
Logging to /usr/local/bin/masari-wallet-cli.log
|
||||
Generated new wallet: 5oUpENoBjxvjEq9Rq18cQHVhvBNGwJi9vfr36Uf5cVjx5JZNUWdHDPuFxt5sVBzuMsJcsNNEmQqvnV6UfiGuBgg4HogmEcZ
|
||||
View key: d7b9d73856ba7b8e43010e3e8201d17e3aee90d2a3792439c179553229e9780f
|
||||
**********************************************************************
|
||||
Your wallet has been generated!
|
||||
To start synchronizing with the daemon, use the "refresh" command.
|
||||
Use the "help" command to see the list of available commands.
|
||||
Use "help <command>" to see a command's documentation.
|
||||
Always use the "exit" command when closing masari-wallet-cli to save
|
||||
your current session's state. Otherwise, you might need to synchronize
|
||||
your wallet again (your wallet keys are NOT at risk in any case).
|
||||
|
||||
|
||||
NOTE: the following 25 words can be used to recover access to your wallet. Write them down and store them somewhere safe and secure. Please do not store them in your email or on file storage services outside of your immediate control.
|
||||
|
||||
ecstatic boil cycling bowling jeopardy fawns loudly baby
|
||||
nuance token withdrawn nifty ramped taken donuts irritate
|
||||
sack tedious fishing kangaroo toffee video lyrics mohawk kangaroo
|
||||
**********************************************************************
|
||||
Error: Unknown command: do-nothing-and-exit
|
||||
[root@ds3 ~]#
|
||||
```
|
||||
|
||||
Take careful note of your wallet password, public view key, and wallet address
|
||||
|
||||
Create **/var/data/masari-pool/wallet/config/wallet.conf**, containing the following:
|
||||
|
||||
```
|
||||
wallet-file = /wallet/wallet
|
||||
password = <ENTER YOUR CONTAINER PASSWORD HERE>
|
||||
rpc-password = <CHOOSE A PASSWORD TO ALLOW POOL TO TALK TO WALLET>
|
||||
log-file = /dev/stdout
|
||||
log-level = 1
|
||||
daemon-host = daemon
|
||||
rpc-bind-port = 38082
|
||||
```
|
||||
|
||||
Set permissions appropriately:
|
||||
|
||||
```
|
||||
chown masari-pool /var/data/masari-pool/wallet/ -R
|
||||
```
|
||||
|
||||
|
||||
### Setup Masari mining pool
|
||||
|
||||
Following the convention we've set above, create directories to hold pool data:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/masari-pool/pool/config
|
||||
mkdir -p /var/data/masari-pool/pool/logs
|
||||
chown -R masari-pool /var/data/masari-pool/pool/logs
|
||||
```
|
||||
|
||||
Now create **/var/data/masari-pool/pool/config/config.json**, using https://github.com/masaricoin/masari-pool/blob/master/config.json as a guide, and adjusting at least the following:
|
||||
|
||||
Send logs to /logs/, so that they can potentially be stored / backed up separately from the config:
|
||||
|
||||
```
|
||||
"logging": {
|
||||
"files": {
|
||||
"level": "debug",
|
||||
"directory": "/logs",
|
||||
"flushInterval": 5
|
||||
},
|
||||
```
|
||||
|
||||
Set the "poolAddress" field to your wallet address
|
||||
```
|
||||
"poolServer": {
|
||||
"enabled": true,
|
||||
"clusterForks": "auto",
|
||||
"poolAddress": "<SET THIS TO YOUR WALLET ADDRESS GENERATED ABOVE>",
|
||||
```
|
||||
|
||||
Add the "host" value to the api section, since our API will run on its own container, and choose a password you'll use for the webUI admin page
|
||||
|
||||
```
|
||||
"api": {
|
||||
"enabled": true,
|
||||
"hashrateWindow": 600,
|
||||
"updateInterval": 5,
|
||||
"host": "pool-api",
|
||||
"port": 8117,
|
||||
"blocks": 30,
|
||||
"payments": 30,
|
||||
"password": "<PASSWORD FOR ADMIN PAGE ACCESS>"
|
||||
```
|
||||
|
||||
Set the host value for the daemon:
|
||||
|
||||
```
|
||||
"daemon": {
|
||||
"host": "daemon",
|
||||
"port": 38081
|
||||
},
|
||||
```
|
||||
|
||||
Set the host value for the wallet, and set your container password (_you recorded it earlier, remember?_)
|
||||
|
||||
```
|
||||
"wallet": {
|
||||
"host": "wallet",
|
||||
"port": 38082,
|
||||
"password": "<SET ME TO YOUR WALLET RPC PASSWORD>"
|
||||
},
|
||||
```
|
||||
|
||||
Set the host value for Redis:
|
||||
|
||||
```
|
||||
"redis": {
|
||||
"host": "redis",
|
||||
"port": 6379
|
||||
},
|
||||
```
|
||||
|
||||
That's it! The above config files mean each element of the pool will be able to communicate with the other elements within the docker swarm, by name.
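
As with the other pool recipes, you can verify this name-based wiring once the stack is up (_see the Swarm section below_). A rough sketch - the container name filter assumes the ```masari-pool``` stack name used later:

```
# From the redis container, ping Redis via its service name - expect "PONG"
docker exec -it $(docker ps -q -f name=masari-pool_redis) redis-cli -h redis ping
```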
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
!!! tip
|
||||
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
|
||||
|
||||
|
||||
```
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
daemon:
|
||||
image: funkypenguin/masaricoind
|
||||
volumes:
|
||||
- /var/data/runtime/masari-pool/daemon/1:/var/lib/masaricoind/
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
ports:
|
||||
- 11897:11897
|
||||
labels:
|
||||
- traefik.frontend.rule=Host:explorer.trtl.heigh-ho.funkypenguin.co.nz
|
||||
- traefik.docker.network=traefik_public
|
||||
- traefik.port=11898
|
||||
|
||||
daemon-failsafe1:
|
||||
image: funkypenguin/masaricoind
|
||||
volumes:
|
||||
- /var/data/runtime/masari-pool/daemon/failsafe1:/var/lib/masaricoind/
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
|
||||
daemon-failsafe2:
|
||||
image: funkypenguin/masaricoind
|
||||
volumes:
|
||||
- /var/data/runtime/masari-pool/daemon/failsafe2:/var/lib/masaricoind/
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
|
||||
pool-pool:
|
||||
image: funkypenguin/turtle-pool
|
||||
volumes:
|
||||
- /var/data/masari-pool/pool/config:/config:ro
|
||||
- /var/data/masari-pool/pool/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
ports:
|
||||
- 3333:3333
|
||||
- 5555:5555
|
||||
- 7777:7777
|
||||
entrypoint: |
|
||||
node init.js -module=pool -config=/config/config.json
|
||||
|
||||
pool-api:
|
||||
image: funkypenguin/turtle-pool
|
||||
volumes:
|
||||
- /var/data/masari-pool/pool/config:/config:ro
|
||||
- /var/data/masari-pool/pool/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
- traefik.frontend.rule=Host:api.trtl.heigh-ho.funkypenguin.co.nz
|
||||
- traefik.docker.network=traefik_public
|
||||
- traefik.port=8117
|
||||
entrypoint: |
|
||||
node init.js -module=api -config=/config/config.json
|
||||
|
||||
pool-unlocker:
|
||||
image: funkypenguin/turtle-pool
|
||||
volumes:
|
||||
- /var/data/masari-pool/pool/config:/config:ro
|
||||
- /var/data/masari-pool/pool/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
entrypoint: |
|
||||
node init.js -module=unlocker -config=/config/config.json
|
||||
|
||||
pool-payments:
|
||||
image: funkypenguin/turtle-pool
|
||||
volumes:
|
||||
- /var/data/masari-pool/pool/config:/config:ro
|
||||
- /var/data/masari-pool/pool/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
entrypoint: |
|
||||
node init.js -module=payments -config=/config/config.json
|
||||
|
||||
pool-charts:
|
||||
image: funkypenguin/turtle-pool
|
||||
volumes:
|
||||
- /var/data/masari-pool/pool/config:/config:ro
|
||||
- /var/data/masari-pool/pool/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
entrypoint: |
|
||||
node init.js -module=chartsDataCollector -config=/config/config.json
|
||||
|
||||
wallet:
|
||||
image: funkypenguin/masari
|
||||
volumes:
|
||||
- /var/data/masari-pool/wallet/config:/config:ro
|
||||
- /var/data/masari-pool/wallet/container:/container
|
||||
- /var/data/masari-pool/wallet/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
networks:
|
||||
- internal
|
||||
entrypoint: |
|
||||
walletd --config /config/wallet.conf | tee /logs/walletd.log
|
||||
|
||||
redis:
|
||||
volumes:
|
||||
- /var/data/masari-pool/redis/config:/config:ro
|
||||
- /var/data/masari-pool/redis/data:/data
|
||||
- /var/data/masari-pool/redis/logs:/logs
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
image: redis
|
||||
entrypoint: |
|
||||
redis-server /config/redis.conf
|
||||
networks:
|
||||
- internal
|
||||
|
||||
nginx:
|
||||
volumes:
|
||||
- /var/data/masari-pool/nginx/website:/usr/share/nginx/html:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
image: nginx
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
- traefik.frontend.rule=Host:trtl.heigh-ho.funkypenguin.co.nz
|
||||
- traefik.docker.network=traefik_public
|
||||
- traefik.port=80
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.21.0/24
|
||||
```
|
||||
|
||||
!!! note
|
||||
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch the Masari pool!

Launch the Masari pool stack by running ```docker stack deploy masari-pool -c <path-to-docker-compose.yml>```, and then run ```docker stack ps masari-pool``` to ensure the stack has come up properly. (_See [troubleshooting](/reference/troubleshooting) if not_)
|
||||
|
||||
The first thing that'll happen is that Masarid will start syncing the blockchain from the bootstrap data. You can watch this happening with ```docker service logs masari-pool_daemon -f```. While the daemon is syncing, it won't respond to requests, so walletd, the pool, etc will be non-functional.
|
||||
|
||||
You can watch the various elements of the pool doing their thing, by running ```tail -f /var/data/masari-pool/pool/logs/*.log```
|
||||
|
||||
### So how do I mine to it?
|
||||
|
||||
That... is another recipe. Start with the "[CryptoMiner](/recipes/cryptominer/)" uber-recipe for GPU/rig details, grab a copy of xmr-stak (_patched for the forked Masari_) from https://github.com/masaricoin/trtl-stak/releases, and follow your nose. Jump into the Masari discord (_below_) #mining channel for help.
|
||||
|
||||
### What to do if it breaks?
|
||||
|
||||
Masari is a baby cryptocurrency. There are scaling issues to solve, and many components of this recipe are under rapid development. So, elements may break/change in time, and this recipe itself is a work-in-progress.
|
||||
|
||||
Jump into the [Masari Discord server](http://chat.masaricoin.lol/) to ask questions, contribute, and send/receive some MSR tips!
|
||||
|
||||
## Chef's Notes
|
||||
|
||||
1. Because Docker Swarm performs ingress NAT for its load-balanced "routing mesh", the source address of inbound miner traffic is rewritten to a (_common_) docker node IP address. This means it's [not possible to determine the actual source IP address](https://github.com/moby/moby/issues/25526) of a miner. Which, in turn, means that any **one** misconfigured miner could trigger an IP ban, and lock out all other miners for 5 minutes at a time.
|
||||
|
||||
Two possible solutions to this are (1) disable banning, or (2) update the pool banning code to ban based on a combination of IP address and miner wallet address. I'll be working on a change to implement #2 if this becomes a concern.
|
||||
|
||||
2. The traefik labels in the docker-compose are to permit automatic LetsEncrypt SSL-protected proxying of your pool UI and API addresses.
|
||||
|
||||
3. After a [power fault in my datacenter caused daemon DB corruption](https://www.reddit.com/r/TRTL/comments/8jftzt/funky_penguin_nz_mining_pool_down_with_daemon/), I added a second daemon, running in parallel to the first. The failsafe daemon runs once an hour, syncs with the running daemons, and shuts down again, providing a safely halted version of the daemon DB for recovery.
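
The recipe doesn't show how that hourly cycle is driven, so here is a purely hypothetical sketch of one way to do it - a cron job on a swarm manager that scales the failsafe service up, waits for it to sync, and scales it back down. The service name, timing, and mechanism are all assumptions, not necessarily how the author actually runs it:

```
# Hypothetical hourly failsafe cycle (root crontab on a swarm manager node)
# m h dom mon dow  command
0 * * * * docker service scale masari-pool_daemon-failsafe1=1 && sleep 900 && docker service scale masari-pool_daemon-failsafe1=0
```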
|
||||
|
||||
### Tip your waiter (donate) 👏
|
||||
|
||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||
|
||||
Also, you could send me some :masari: ❤️ to _TRTLv2qCKYChMbU5sNkc85hzq2VcGpQidaowbnV2N6LAYrFNebMLepKKPrdif75x5hAizwfc1pX4gi5VsR9WQbjQgYcJm21zec4_
|
||||
|
||||
### Your comments? 💬
|
||||
@@ -2,7 +2,7 @@
|
||||
|
||||
Intro
|
||||
|
||||

|
||||

|
||||
|
||||
Details
|
||||
|
||||
|
||||
@@ -4,7 +4,7 @@ hero: Miniflux - A recipe for a lightweight minimalist RSS reader
|
||||
|
||||
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
|
||||
|
||||

|
||||

|
||||
|
||||
!!! tip "Sponsored Project"
|
||||
Miniflux is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to!
|
||||
hero: SSO for all your stack elements 🎁
|
||||
|
||||
# SSO Stack
|
||||
|
||||
Most of the recipes in the cookbook are stand-alone - you can deploy and use them in isolation. I was approached recently by an anonymous sponsor, who needed a stack which would allow the combination of several collaborative tools, in a manner which permits "single sign-on" (SSO). I.e., the goal of the design was that a user would be provisioned _once_, and thereafter have transparent access to multiple separate applications.
|
||||
|
||||
The SSO Stack "uber-recipe" is the result of this design.
|
||||
|
||||

|
||||
|
||||
This recipe presents a method to combine multiple tools into a single swarm deployment, and make them available securely.
|
||||
|
||||
## Menu
|
||||
|
||||
Tools included in the SSO stack are:
|
||||
|
||||
* **[OpenLDAP](https://www.openldap.org/)** : Provides the authentication backend
* **[LDAP Account Manager](https://www.ldap-account-manager.org)** (LAM) : A web UI to manage LDAP accounts
* **[KeyCloak](https://www.keycloak.org/)** : An open source identity and access management solution, providing SSO and 2FA capabilities backed by authentication providers (like OpenLDAP)
* **[docker-mailserver](https://github.com/tomav/docker-mailserver)** : A fullstack, simple mail platform including SMTP, IMAPS, and spam filtering components
* **[RainLoop](https://www.rainloop.net/)** : A fast, modern webmail client
* **[GitLab](https://gitlab.org)** : A powerful collaborative git-based development platform
* **[NextCloud](https://www.nextcloud.org)** : A file share and communication platform
|
||||
|
||||
This is a complex recipe, and should be deployed in a sequential manner (_i.e. you need OpenLDAP with LDAP Account Manager, to enable KeyCloak, in order to get SSO available for NextCloud, etc.._)
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entries configured for each of the tools in this recipe that you want to use
|
||||
|
||||
## Preparation
|
||||
|
||||
Now work your way through the list of tools below, adding whichever tools you want to use, and finishing with the **end** section:
|
||||
|
||||
* [OpenLDAP](/recipes/sso-stack/openldap.md)
|
||||
|
||||
### Tip your waiter (donate) 👏
|
||||
|
||||
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏
|
||||
|
||||
### Your comments? 💬

```
docker run -ti --rm \
  -v "$(pwd)"/letsencrypt:/etc/letsencrypt \
  -v "$(pwd)"/cloudflare.ini:/cloudflare.ini \
  certbot/dns-cloudflare \
  certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials=/cloudflare.ini \
  -d mail.observe.global
```

```
|
||||
root@cloud:/var/data/docker-mailserver# docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt -v "$(pwd)"/cloudflare.ini:/cloudflare.ini certbot/dns-cloudflare certonly --dns-cloudflare --dns-cloudflare-credentials=/cloudflare.ini -d mail.observe.global
|
||||
Saving debug log to /var/log/letsencrypt/letsencrypt.log
|
||||
Plugins selected: Authenticator dns-cloudflare, Installer None
|
||||
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
|
||||
cancel): cam@0sum.club
|
||||
|
||||
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
Please read the Terms of Service at
|
||||
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
|
||||
agree in order to register with the ACME server at
|
||||
https://acme-v02.api.letsencrypt.org/directory
|
||||
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
(A)gree/(C)ancel: A
|
||||
|
||||
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
Would you be willing to share your email address with the Electronic Frontier
|
||||
Foundation, a founding partner of the Let's Encrypt project and the non-profit
|
||||
organization that develops Certbot? We'd like to send you email about our work
|
||||
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
|
||||
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
(Y)es/(N)o: N
|
||||
Obtaining a new certificate
|
||||
Performing the following challenges:
|
||||
dns-01 challenge for mail.observe.global
|
||||
Unsafe permissions on credentials configuration file: /cloudflare.ini
|
||||
Waiting 10 seconds for DNS changes to propagate
|
||||
Waiting for verification...
|
||||
Cleaning up challenges
|
||||
|
||||
IMPORTANT NOTES:
|
||||
- Congratulations! Your certificate and chain have been saved at:
|
||||
/etc/letsencrypt/live/mail.observe.global/fullchain.pem
|
||||
Your key file has been saved at:
|
||||
/etc/letsencrypt/live/mail.observe.global/privkey.pem
|
||||
Your cert will expire on 2019-01-30. To obtain a new or tweaked
|
||||
version of this certificate in the future, simply run certbot
|
||||
again. To non-interactively renew *all* of your certificates, run
|
||||
"certbot renew"
|
||||
- Your account credentials have been saved in your Certbot
|
||||
configuration directory at /etc/letsencrypt. You should make a
|
||||
secure backup of this folder now. This configuration directory will
|
||||
also contain certificates and private keys obtained by Certbot so
|
||||
making regular backups of this folder is ideal.
|
||||
- If you like Certbot, please consider supporting our work by:
|
||||
|
||||
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
|
||||
Donating to EFF: https://eff.org/donate-le
|
||||
|
||||
root@cloud:/var/data/docker-mailserver#
|
||||
```
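
The session above only covers the initial issuance. As the output notes, ```certbot renew``` handles renewals non-interactively, so a hedged sketch of a periodic renewal using the same image and mounts (paths assumed from the session above) might look like:

```
# Hypothetical weekly renewal via cron, reusing the issuance container's mounts
0 3 * * 1 docker run --rm -v /var/data/docker-mailserver/letsencrypt:/etc/letsencrypt -v /var/data/docker-mailserver/cloudflare.ini:/cloudflare.ini certbot/dns-cloudflare renew
```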
https://edenmal.moe/post/2018/GitLab-Keycloak-SAML-2-0-OmniAuth-Provider/
|
||||
|
||||
OAUTH_SAML_ASSERTION_CONSUMER_SERVICE_URL
|
||||
OAUTH_SAML_IDP_CERT_FINGERPRINT
|
||||
OAUTH_SAML_IDP_SSO_TARGET_URL
|
||||
OAUTH_SAML_ISSUER
|
||||
OAUTH_SAML_NAME_IDENTIFIER_FORMAT
|
||||
|
||||
|
||||
|
||||
|
||||
gitlab_rails['omniauth_enabled'] = true
|
||||
gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
|
||||
gitlab_rails['omniauth_block_auto_created_users'] = false
|
||||
gitlab_rails['omniauth_auto_link_saml_user'] = true
|
||||
gitlab_rails['omniauth_providers'] = [
|
||||
{
|
||||
name: 'saml',
|
||||
label: 'SAML',
|
||||
args: {
|
||||
|
||||
|
||||
|
||||
|
||||
attribute_statements: { username: ['username'] }
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
OAUTH_BLOCK_AUTO_CREATED_USERS=false
|
||||
OAUTH_AUTO_SIGN_IN_WITH_PROVIDER=saml
|
||||
OAUTH_ALLOW_SSO=saml
|
||||
OAUTH_SAML_ASSERTION_CONSUMER_SERVICE_URL=https://gitlab.observe.global/users/auth/saml/callback
|
||||
OAUTH_SAML_IDP_CERT_FINGERPRINT=41f1c588c928291c5dc30d11161d685231509ab8
|
||||
OAUTH_SAML_IDP_SSO_TARGET_URL=https://keycloak.observe.global/auth/realms/observe/protocol/sam
|
||||
OAUTH_SAML_ISSUER=https://gitlab.observe.global
|
||||
OAUTH_SAML_NAME_IDENTIFIER_FORMAT=urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
|
||||
DISBALED_OAUTH_SAML_ATTRIBUTE_STATEMENTS_EMAIL=mail
|
||||
DISBALEDOAUTH_SAML_ATTRIBUTE_STATEMENTS_NAME=cnam
|
||||
DISBALEDOAUTH_SAML_ATTRIBUTE_STATEMENTS_FIRST_NAME=cname
|
||||
DISBALEDOAUTH_SAML_ATTRIBUTE_STATEMENTS_LAST_NAME=sn
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"clients": [
|
||||
{
|
||||
"clientId": "https://gitlab.observe.global",
|
||||
"rootUrl": "https://gitlab.observe.global",
|
||||
"enabled": true,
|
||||
"redirectUris": [
|
||||
"https://gitlab.observe.global/*"
|
||||
],
|
||||
"protocol": "saml",
|
||||
"attributes": {
|
||||
"saml.assertion.signature": "false",
|
||||
"saml.force.post.binding": "true",
|
||||
"saml.multivalued.roles": "false",
|
||||
"saml.encrypt": "false",
|
||||
"saml.server.signature": "true",
|
||||
"saml.server.signature.keyinfo.ext": "false",
|
||||
"saml.signature.algorithm": "RSA_SHA256",
|
||||
"saml_force_name_id_format": "false",
|
||||
"saml.client.signature": "false",
|
||||
"saml.authnstatement": "true",
|
||||
"saml_name_id_format": "username",
|
||||
"saml.onetimeuse.condition": "false",
|
||||
"saml_signature_canonicalization_method": "http://www.w3.org/2001/10/xml-exc-c14n#"
|
||||
},
|
||||
"protocolMappers": [
|
||||
{
|
||||
"name": "email",
|
||||
"protocol": "saml",
|
||||
"protocolMapper": "saml-user-property-mapper",
|
||||
"consentRequired": false,
|
||||
"config": {
|
||||
"user.attribute": "email",
|
||||
"attribute.name": "email"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "first_name",
|
||||
"protocol": "saml",
|
||||
"protocolMapper": "saml-user-property-mapper",
|
||||
"consentRequired": false,
|
||||
"config": {
|
||||
"user.attribute": "firstName",
|
||||
"attribute.name": "first_name"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "last_name",
|
||||
"protocol": "saml",
|
||||
"protocolMapper": "saml-user-property-mapper",
|
||||
"consentRequired": false,
|
||||
"config": {
|
||||
"user.attribute": "lastName",
|
||||
"attribute.name": "last_name"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "username",
|
||||
"protocol": "saml",
|
||||
"protocolMapper": "saml-user-property-mapper",
|
||||
"consentRequired": false,
|
||||
"config": {
|
||||
"user.attribute": "username",
|
||||
"attribute.name": "username"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIICnTCCAYUCBgFmyRcGiTANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDDAdvYnNlcnZlMB4XDTE4MTAzMTA3NDUyMVoXDTI4MTAzMTA3NDcwMVowEjEQMA4GA1UEAwwHb2JzZXJ2ZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAI/quQQfWBuQgvxpkcqzkPmXmiO/XE9KmSLcoIOJuDMXXmev9WFtXYbKfozjrZgC4P0uPQLAXJU+2hO7U5fkaG2IuORCK/fKp+cD3GXVO38mxpGFdk3k2eTLUOFfVAUXpPT9dYPSs3EpiB/llslErBBG7bkfwHr06xjU2sMqo/pRbKDLvrqAaBMuHlgOHhAVWWxyzQuUF0kxHxsAbpOnzpiMMOZxhKKZiNEpozIOESplIKFsEYiS4w60z5ROmYEBVqMKP5rsEop9XgS+JNaqfjDbW/NOgT13bvxiA6HxEB0UGt2tWxVsnxpzp3v5rxnXzo1wZQO5KQYWNxJT9Wb9sy8CAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAIyrT5q/TC3CVF7YOAIrPCq+TEANnoaTUuq2BCG6ory+cnnI9T/qoW9c2GVYSrmdQraY8H3o4J+Trjz2OCmuk4Xdp326Lz7hGPuF6i2p9Dmbu696WDlwZHMm+Dn6lMegGb1WKJGAIB9JBss5lHqGbrAxUav9pWukKBZaNsFVnycOMGLQJuROf3jh/MNd7tcIhxAXQxWf//ZfYH7JfeK973L27oGyK+CrxGwsHIsuwSrkJVAPvWPADiPQFqExK/bC1DPGQO4YAV5rCJPTIwXL5I6l5Al2hw1FcAMND2bTA3MgYzg1aQvAJGO7+wQNQYOPkuhR1Hhb1JWGYj1YOdPnG+g==
|
||||
-----END CERTIFICATE-----
|
||||
|
||||
|
||||
https://edenmal.moe/post/2018/GitLab-Keycloak-SAML-2-0-OmniAuth-Provider/
|
||||
mkdocs.yml
@@ -19,10 +19,10 @@ copyright: 'Copyright © 2016 - 2019 David Young'
|
||||
|
||||
|
||||
#theme_dir: mkdocs-material
|
||||
pages:
|
||||
nav:
|
||||
- Home: index.md
|
||||
- Introduction:
|
||||
- README: README.md
|
||||
- README: README-UI.md
|
||||
- CHANGELOG: CHANGELOG.md
|
||||
- whoami: whoami.md
|
||||
- Docker Swarm:
|
||||
@@ -171,16 +171,19 @@ google_analytics:
|
||||
- 'auto'
|
||||
|
||||
extra_javascript:
|
||||
- 'extras/javascript/piwik.js'
|
||||
# - 'extras/javascript/piwik.js'
|
||||
|
||||
# Extensions
|
||||
markdown_extensions:
|
||||
- admonition
|
||||
- codehilite(linenums=true)
|
||||
- toc(permalink=true)
|
||||
- codehilite:
|
||||
linenums: true
|
||||
- toc:
|
||||
permalink: true
|
||||
- footnotes
|
||||
- pymdownx.arithmatex
|
||||
- pymdownx.betterem(smart_enable=all)
|
||||
- pymdownx.betterem:
|
||||
smart_enable: all
|
||||
- pymdownx.caret
|
||||
- pymdownx.critic
|
||||
- pymdownx.details
|
||||
@@ -191,6 +194,7 @@ markdown_extensions:
|
||||
- pymdownx.mark
|
||||
- pymdownx.smartsymbols
|
||||
- pymdownx.superfences
|
||||
- pymdownx.tasklist(custom_checkbox=true)
|
||||
- pymdownx.tasklist:
|
||||
custom_checkbox: true
|
||||
- pymdownx.tilde
|
||||
- meta
|
||||
|
||||
overrides/README-OVERRIDES.md
@@ -0,0 +1 @@
|
||||
blah
|
||||
@@ -1 +0,0 @@
|
||||
This directory exists in case we want to add theme overrides (like favicon)
|
||||
@@ -1,4 +1,4 @@
|
||||
mkdocs>=0.17.5
|
||||
mkdocs-material==2.9.4
|
||||
pymdown-extensions==5.0
|
||||
Markdown==2.6.11
|
||||
mkdocs>=1.0.4
|
||||
mkdocs-material>=4.0.2
|
||||
pymdown-extensions>=6.0
|
||||
Markdown>=3.0.1