mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-11 00:36:29 +00:00

Initial commit

funkypenguin
2017-07-16 00:06:02 +12:00
commit 6424e70220
27 changed files with 842 additions and 0 deletions

1
.gitexclude Normal file

@@ -0,0 +1 @@
mkdocs-material

38
.gitlab-ci.yml Normal file

@@ -0,0 +1,38 @@
stages:
  - build
  - test
  - deploy

image: python:alpine

#before_script:
#  - pip install mkdocs
#  # add your custom theme (https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes) if not inside a theme_dir
#  # - pip install mkdocs-material

build site:
  stage: build
  script:
    - pip install mkdocs
    - mkdocs build
    - mv site public
  artifacts:
    expire_in: 1 day
    paths:
      - public
  only:
    - master

test site:
  stage: test
  script:
    - echo fake result as a placeholder

deploy site:
  image: garland/docker-s3cmd
  stage: deploy
  environment: production
  script:
    - export LC_ALL=C.UTF-8
    - export LANG=C.UTF-8
    - s3cmd --no-mime-magic --access_key=$ACCESS_KEY --secret_key=$SECRET_KEY --acl-public --delete-removed --delete-after --no-ssl --host=$S3HOST --host-bucket='$S3HOSTBUCKET' sync public s3://geeks-cookbook
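The "build site" job above can be reproduced locally before pushing, roughly as follows (a sketch; assumes pip and a working Python environment):

```shell
# Mirror what the CI "build site" job does, on a workstation
pip install mkdocs
mkdocs build    # renders docs/ into site/
mkdocs serve    # live-preview at http://localhost:8000
```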

9
docs/README.md Normal file

@@ -0,0 +1,9 @@
# How to read this book
## Structure
1. "Recipes" are sorted by degree of geekiness required to complete them. Relatively straightforward projects are "beginner", more complex projects are "intermediate", and the really fun ones are "advanced".
2. Each recipe contains enough detail in a single page to take a project to completion.
3. When optional add-ons/integrations to a project are possible (e.g., the addition of "smart LED bulbs" to Home Assistant), these will be reflected as sub-pages of the main project.
## Requirements

3
docs/advanced/about.md Normal file

@@ -0,0 +1,3 @@
# About
This is the advanced section. It's for geeks who are proficient working in the command line, understanding networking, and who enjoy a challenge!

289
docs/advanced/docker.md Normal file

@@ -0,0 +1,289 @@
# Docker Swarm
For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as of Docker 1.13) is the simplest way to achieve redundancy, such that a single Docker host could be turned off without interrupting any of our services.
## Ingredients
* 2 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum per host would be:
    * 1 x vCPU
    * 1GB RAM
    * 10GB HDD
* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
## Preparation
### Install CentOS Atomic hosts
I decided to use CentOS Atomic rather than full-blown CentOS 7, for the following reasons:

1. I want less responsibility for maintaining the system, including ensuring regular software updates and reboots. Atomic's immutable nature means the OS is largely read-only, and updates are "atomic" (haha) procedures which can easily be rolled back if required.
2. For someone used to administering servers individually, Atomic is a PITA. You have to employ [tricky](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) [tricks](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) to get it to install in a non-cloud environment, and it's not designed for tweaking or customizing beyond what cloud-config is capable of. For my purposes, this is good: it forces me to change my thinking - to consider every daemon as a container, and every config as code, to be checked in and version-controlled.
3. I want the design to be as "portable" as possible. While I run it on VPSs now, I may want to migrate it to a "cloud" provider in the future, and I'll want the most portable, reproducible design.

Having installed Atomic, switch the hosts to the docker-latest runtime, and apply any pending updates:
```
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
atomic host upgrade
```
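A quick sanity check after the switch (a sketch; assumes the daemon is up after a reboot):

```shell
systemctl is-active docker-latest                # should report "active"
docker version --format '{{.Server.Version}}'   # confirms the client can reach the daemon
```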
## Setup Swarm
Setting up swarm really is straightforward. You need to ensure that the nodes can talk to each other.
In my case, my nodes are on a shared subnet with other VPSs, so I wanted to ensure that they were not exposed more than necessary. If I were doing this within a cloud infrastructure which provided separation of instances, I wouldn't need to be so specific:
```
# Permit Docker Swarm from other nodes/managers
-A INPUT -s 202.170.164.47 -p tcp --dport 2376 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 2377 -j ACCEPT
-A INPUT -s 202.170.164.47 -p tcp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 7946 -j ACCEPT
-A INPUT -s 202.170.164.47 -p udp --dport 4789 -j ACCEPT
```
Now, to launch my swarm:
```docker swarm init```
Yeah, that was it. Now I have a 1-node swarm.
```
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
202.170.164.47:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]#
```
Right, so I have a 1-node swarm:
```
[root@ds1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leader
[root@ds1 ~]#
```
If I followed the "join" command above, I'd end up with a worker node. In my case, I want another manager, so that I have full HA, so I followed the instruction and ran ```docker swarm join-token manager``` instead:
```
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
202.170.164.47:2377
[root@ds1 ~]#
```
Running the join command on the second host added it as a manager:
````
[root@ds2 ~]# docker swarm join \
>     --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
>     202.170.161.87:2377
This node joined a swarm as a manager.
[root@ds2 ~]#
````
Now ```docker node ls``` confirms both managers are in the swarm:
````
[root@ds2 davidy]# docker node ls
ID                           HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8    ds1.funkypenguin.co.nz  Ready   Active        Leader
xmw49jt5a1j87a6ihul76gbgy *  ds2.funkypenguin.co.nz  Ready   Active        Reachable
[root@ds2 davidy]#
````
## Shared Storage (GlusterFS)
For storage shared between the swarm nodes, carve the remaining space into an LVM volume, format it, and mount it:
````
lvcreate -l 100%FREE -n gfs /dev/atomicos
mkfs.xfs -i size=512 /dev/atomicos/gfs
mkdir -p /srv/glusterfs
echo '/dev/atomicos/gfs /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
````
Then run the GlusterFS server container on each node:
````
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/srv/glusterfs:/var/srv/glusterfs \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
````
Now exec into the container, and "probe" its peer, to establish the gluster cluster:
```
[root@ds1 ~]# docker exec -it glusterfs-server bash
[root@glusterfs-server /]#
```
```
[root@glusterfs-server /]# gluster peer probe ds2
peer probe: success.
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1
Hostname: ds2
Uuid: 9fbc1985-4e8d-4380-9c10-3c699ebcb10c
State: Peer in Cluster (Connected)
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Run ```gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0``` to create the replicated volume:
```
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/srv/glusterfs/gv0 ds2:/var/srv/glusterfs/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
```
Run ```gluster volume start gv0``` to start it:
```
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
```
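Still inside the container, two standard (read-only) gluster CLI commands confirm the volume is healthy:

```shell
gluster volume info gv0     # volume type, brick list, and Started/Stopped status
gluster volume status gv0   # whether each brick's daemon is actually online
```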
Exit out of the container:
```
[root@glusterfs-server /]# exit
exit
[root@ds1 ~]#
```
Create your mountpoint on the host, and mount the gluster volume:
```
mkdir /srv/data
HOSTNAME=`hostname -s`
echo "$HOSTNAME:/gv0 /srv/data glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a && mount
```
Or mount manually:
```
mount -t glusterfs ds1:/gv0 /srv/data/
```
On the secondary:
```
mkdir /srv/data
mount -t glusterfs ds2:/gv0 /srv/data/
```
The relevant /etc/fstab entries:
```
/dev/VG-vda3/gv0 /srv/glusterfs xfs defaults 1 2
ds2:/gv0 /srv/data glusterfs defaults,_netdev 0 0
```
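A quick replication smoke-test (a sketch; assumes /srv/data is mounted on both nodes):

```shell
# On ds1: write a canary file into the gluster mount
echo "hello from ds1" > /srv/data/canary.txt

# On ds2: the same file should be immediately readable
cat /srv/data/canary.txt
rm /srv/data/canary.txt
```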
Install docker-compose:
````
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
````
### Atomic hosts
Deploy the traefik stack:
````
docker stack deploy traefik -c traefik.yml
````
SELinux needs to be dealt with though :( - I had to set it to permissive to get it working. This seemed to work, building the policy module from https://github.com/dpw/selinux-dockersock to allow containers access to the docker socket:
````
mkdir ~/dockersock
cd ~/dockersock
curl -O https://github.com/dpw/selinux-dockersock/raw/master/dockersock.te
curl -O https://github.com/dpw/selinux-dockersock/raw/master/Makefile
make && semodule -i dockersock.pp
````
However... glusterfs still doesn't support SELinux, so until that's sorted, you have to disable SELinux anyway (with ```setenforce 0```), in order for _ANY_ containers to write to the glusterfs FUSE mount.
Something needs to be added to rc.local to make the glusterfs volume mount at boot.
Maybe this works (per https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#selinux):
````
setsebool -P virt_sandbox_use_fusefs on
````
Stupid cloud-init makes the system slow to boot, so mask its services:
````
[root@ds1 ~]# systemctl mask cloud-final.service
Created symlink from /etc/systemd/system/cloud-final.service to /dev/null.
[root@ds1 ~]# systemctl mask cloud-config.service
Created symlink from /etc/systemd/system/cloud-config.service to /dev/null.
[root@ds1 ~]#
````
Added `{"experimental": true}` to /etc/docker-latest/daemon.json, to enable logs of deployed services. I.e., changed this:
```
{
  "log-driver": "journald",
  "signature-verification": false
}
```
To this:
```
{
  "log-driver": "journald",
  "signature-verification": false,
  "experimental": true
}
```
!!! note
    Note the comma after "false" above.
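Since malformed JSON will stop the daemon from starting, it's worth validating daemon.json before restarting. A sketch using a temporary copy; point it at /etc/docker-latest/daemon.json on a real host:

```shell
# Recreate the edited config in /tmp, for illustration
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "journald",
  "signature-verification": false,
  "experimental": true
}
EOF

# json.tool exits non-zero (and prints an error) on invalid JSON
python3 -m json.tool /tmp/daemon.json && echo "daemon.json OK"
```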

0
docs/advanced/gitlab.md Normal file
0
docs/advanced/huginn.md Normal file

226
docs/advanced/keepalived.md Normal file

@@ -0,0 +1,226 @@
# keepalived
## On both hosts
The design for redundant Docker hosts requires a virtual IP for high availability. To enable this, we install the "keepalived" daemon on both hosts:
````yum -y install keepalived````
Below, we'll configure a very basic primary/secondary configuration.
!!! note
    If you have a firewall on your hosts, you'll need to permit the VRRP traffic, as follows (note that for both the INPUT and OUTPUT rules, the destination is 224.0.0.18, a multicast address):
    ````
    # permit keepalived in
    -A INPUT -i eth0 -d 224.0.0.18 -j ACCEPT
    # permit keepalived out
    -A OUTPUT -o eth0 -d 224.0.0.18 -j ACCEPT
    ````
## On the primary
Configure keepalived (note the priority):
````
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
  state MASTER
  interface eth0
  virtual_router_id 42
  priority 200
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass $PASS
  }
  virtual_ipaddress {
    $VIP
  }
}
EOF
systemctl enable keepalived
systemctl start keepalived
````
## On the secondary
Repeat the same on the secondary (all that changes is the priority - the priority of the secondary must be lower than that of the primary):
````
VIP=<YOUR HA IP>
PASS=<PASSWORD-OF-CHOICE>
cat << EOF > /etc/keepalived/keepalived.conf
vrrp_instance DS {
  state MASTER
  interface eth0
  virtual_router_id 42
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass $PASS
  }
  virtual_ipaddress {
    $VIP
  }
}
EOF
systemctl enable keepalived
systemctl start keepalived
````
Check the state of keepalived on both nodes by running
````systemctl status keepalived````
## Confirm HA function
You should now be able to ping your HA IP address. You can test the HA function by running ````tail -f /var/log/messages | grep Keepalived```` on the secondary node, and turning keepalived off/on on the primary node, by running ````systemctl stop keepalived && sleep 10s && systemctl start keepalived````.
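To see exactly when the VIP moves during that test, it can help to ping it continuously from a third host (the VIP below is illustrative - substitute your own; `-O` is iputils-specific):

```shell
VIP=192.0.2.10          # substitute your HA IP
ping -O -i 0.2 "$VIP"   # -O reports "no answer yet" for each missed reply
```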
## Docker
On both hosts, run:
````
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum install docker-ce
sudo systemctl start docker
sudo docker run hello-world
````
## Setup Swarm
````
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (25fw5695wkqxm8mtwqnktwykr) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-e9rw3a8pi53jhlghuyscm52bn \
202.170.161.87:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
202.170.161.87:2377
[root@ds1 ~]#
````
Added the second host to the swarm, then promoted it.
````
[root@ds2 ~]# docker swarm join \
> --token SWMTKN-1-54al7nosz9jzj41a8d6kjhz2yez7zxgbdw362f821j81svqofo-1sjspmbyxqvica5gdb5p4n7mh \
> 202.170.161.87:2377
This node joined a swarm as a manager.
[root@ds2 ~]#
````
## Shared storage (GlusterFS)
Format a partition for the GlusterFS brick, and mount it:
````
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /srv/glusterfs
echo '/dev/vdb1 /srv/glusterfs/ xfs defaults 1 2' >> /etc/fstab
mount -a && mount
````
Then run the GlusterFS server container on each node:
````
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/srv/glusterfs:/var/srv/glusterfs \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
````
Exec into the glusterfs-server container on ds2, probe the peer, then create and start the replicated volume:
````
[root@ds2 ~]# gluster peer probe ds1
gluster volume create gv0 replica 2 ds1:/srv/glusterfs/gv0 ds2:/srv/glusterfs/gv0
gluster volume start gv0
````
Mount the volume on the primary:
````
[root@ds1 ~]# mkdir /srv/data
[root@ds1 ~]# mount -t glusterfs ds1:/gv0 /srv/data/
````
And on the secondary:
````
mkdir /srv/data
mount -t glusterfs ds2:/gv0 /srv/data/
````
The relevant /etc/fstab entries:
````
/dev/VG-vda3/gv0 /srv/glusterfs xfs defaults 1 2
ds2:/gv0 /srv/data glusterfs defaults,_netdev 0 0
````
Install docker-compose:
````
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
````
### Atomic hosts
On Atomic hosts, switch to the docker-latest runtime, and apply updates:
````
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
atomic host upgrade
````
Deploy the traefik stack:
````
docker stack deploy traefik -c traefik.yml
````
SELinux needs to be dealt with though :( - I had to set it to permissive to get it working. This seemed to work: https://github.com/dpw/selinux-dockersock
Something needs to be added to rc.local to make the glusterfs volume mount at boot.
Added `{"experimental": true}` to /etc/docker/daemon.json, to enable logs of deployed services.
Load the ip_vs module at boot:
````
echo "modprobe ip_vs" >> /etc/rc.local
````
For primary/secondary keepalived, run the container on each node (KEEPALIVED_PRIORITY should differ between them):
````
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --net=host \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['202.170.164.47', '202.170.164.48']" \
  -e KEEPALIVED_VIRTUAL_IPS=202.170.164.49 \
  -e KEEPALIVED_PRIORITY=100 \
  osixia/keepalived:1.3.5
````

0
docs/advanced/shaarli.md Normal file

@@ -0,0 +1,113 @@
# Introduction
[Tiny Tiny RSS][ttrss] is a self-hosted, AJAX-based RSS reader, which rose to popularity as a replacement for Google Reader. It supports advanced features, such as:
* Plugins and theming in a drop-in fashion
* Filtering (discard all articles with title matching "trump")
* Sharing articles via a unique public URL/feed
Tiny Tiny RSS requires a database and a webserver - this recipe provides both using Docker, exposed to the world via LetsEncrypt.
# Ingredients
**Required**

1. Webserver (nginx container)
2. Database (postgresql container)
3. TTRSS (ttrss container)
4. Nginx reverse proxy with LetsEncrypt
**Optional**
1. Email server (if you want to email articles from TTRSS)
# Preparation
**Setup filesystem location**
I set up a directory for the ttrss data, at /data/ttrss.
I created docker-compose.yml, as follows:
````
rproxy:
  image: nginx:1.13-alpine
  ports:
    - "34804:80"
  environment:
    - DOMAIN_NAME=ttrss.funkypenguin.co.nz
    - VIRTUAL_HOST=ttrss.funkypenguin.co.nz
    - LETSENCRYPT_HOST=ttrss.funkypenguin.co.nz
    - LETSENCRYPT_EMAIL=davidy@funkypenguin.co.nz
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  volumes_from:
    - ttrss
  links:
    - ttrss:ttrss

ttrss:
  image: tkaefer/docker-ttrss
  restart: always
  links:
    - postgres:database
  environment:
    - DB_USER=ttrss
    - DB_PASS=uVL53xfmJxW
    - SELF_URL_PATH=https://ttrss.funkypenguin.co.nz
  volumes:
    - ./plugins.local:/var/www/plugins.local
    - ./themes.local:/var/www/themes.local
    - ./reader:/var/www/reader

postgres:
  image: postgres:latest
  volumes:
    - /srv/ssd-data/ttrss/db:/var/lib/postgresql/data
  restart: always
  environment:
    - POSTGRES_USER=ttrss
    - POSTGRES_PASSWORD=uVL53xfmJxW

gmailsmtp:
  image: softinnov/gmailsmtp
  restart: always
  environment:
    - user=davidy@funkypenguin.co.nz
    - pass=eqknehqflfbufzbh
    - DOMAIN_NAME=gmailsmtp.funkypenguin.co.nz
````
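Before bringing the stack up, the compose file can be sanity-checked from the same directory (standard docker-compose behaviour):

```shell
docker-compose config --quiet   # prints nothing if the file parses; errors otherwise
```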
Run ````docker-compose up```` in the same directory, and watch the output. The PostgreSQL container will create the "ttrss" database, and ttrss will start using it.
# Login to UI
Log into https://\<your VIRTUALHOST\>. The default user is "admin", and the password is "password".
# Optional - Enable af_psql_trgm plugin for similar post detection
One of the native plugins enables the detection of "similar" articles. This requires the pg_trgm extension enabled in your database.
From the working directory, use ````docker exec```` to get a shell within your postgres container, and run ````psql```` as the postgres user:
````
[root@kvm nginx]# docker exec -it ttrss_postgres_1 /bin/sh
# su - postgres
No directory, logging in with HOME=/
$ psql
psql (9.6.3)
Type "help" for help.
````
Add the trgm extension to your ttrss database:
````
postgres=# \c ttrss
You are now connected to database "ttrss" as user "postgres".
ttrss=# CREATE EXTENSION pg_trgm;
CREATE EXTENSION
ttrss=# \q
````
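The same extension can be created non-interactively; a sketch, assuming the default compose container name `ttrss_postgres_1`:

```shell
docker exec ttrss_postgres_1 su - postgres -c \
  "psql -d ttrss -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'"
```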
[ttrss]: https://tt-rss.org/

1
docs/advanced/traefik.md Normal file

@@ -0,0 +1 @@
docker stack deploy traefik -c docker-compose.yml

@@ -0,0 +1,5 @@
# 2-Factor authentication
## What is it?
## Why do we need it?


@@ -0,0 +1,5 @@
# Beginner
The recipes in the beginner section meet the following criteria:
1. They do not require command-line skills

BIN
docs/images/site-logo.png Normal file

Binary file not shown (PNG, 14 KiB).

3
docs/index.md Normal file

@@ -0,0 +1,3 @@
# Index
This book is a collection of recipes

38
docs/whoami.md Normal file

@@ -0,0 +1,38 @@
# Welcome to Funky Penguin's Geek Cookbook
## Hello world,
I'm [David](https://www.funkypenguin.co.nz/contact/).
I've spent 20+ years working with technology. My current role is **Senior Infrastructure Architect** at [Prophecy Networks Ltd](http://www.prophecy.net.nz) in New Zealand, with a specific interest in networking, systems, open-source, and business management.
I've had a [book published](https://www.funkypenguin.co.nz/book/phplist-2-email-campaign-manager/), and I [blog](https://www.funkypenguin.co.nz/blog/) on topics that interest me.
## Why Funky Penguin?
My first "real" job, out of high-school, was working the IT helpdesk in a typical pre-2000 organization in South Africa. I enjoyed experimenting with Linux, and cut my teeth by replacing the organization's Exchange 5.5 mail platform with a 15-site [qmail-ldap](http://www.nrg4u.com/) cluster, with [amavis](https://en.wikipedia.org/wiki/Amavis) virus-scanning.
One of our suppliers asked me to quote to do the same for their organization. With nothing to lose, and half-expecting to be turned down, I quoted a generous fee, and chose a cheeky company name. The supplier immediately accepted my quote, and the name ("_Funky Penguin_") stuck.
## Technical Documentation
During the same "real" job above, I wanted to deploy [jabberd](https://en.wikipedia.org/wiki/Jabberd14), for internal instant messaging within the organization, and as a means to control the sprawl of ad-hoc instant-messaging among staff, using ICQ, MSN, and Yahoo Messenger.
To get management approval to deploy, I wrote a logger (with web UI) for jabber conversations ([Bandersnatch](https://www.funkypenguin.co.nz/project/bandersnatch/)), and a [75-page user manual](https://www.funkypenguin.co.nz/book/jajc-manual/) (in [Docbook XML](http://www.docbook.org/)) for a spunky Russian WinXP jabber client, [JAJC](http://jajc.jrudevels.org/).
Due to my contributions to [phpList](http://www.phplist.com), I was approached in 2011 by [Packt Publishing](http://www.packtpub.com), to [write a book](https://www.funkypenguin.co.nz/book/phplist-2-email-campaign-manager) about using PHPList.
## Contact Me
Contact me by:
* Email ([davidy@funkypenguin.co.nz](mailto:davidy@funkypenguin.co.nz))
* Twitter ([@funkypenguin](https://twitter.com/funkypenguin))
* Mastodon ([@davidy@funkypenguin.co.nz](https://mastodon.funkypenguin.co.nz/@davidy))
Or by using the form below:
<div class="panel">
<iframe width="100%" height="400" frameborder="0" scrolling="no" src="https://funkypenguin.wufoo.com/forms/z16038vt0bk5txp/"></iframe>
</div>

1
mkdocs-material Submodule

Submodule mkdocs-material added at ea3909dcc1

110
mkdocs.yml Normal file

@@ -0,0 +1,110 @@
site_name: Funky Penguin's Self-Hosted Geek's Cookbook
site_description: 'A short description of my project'
site_author: 'David Young'
site_url: 'http://localhost:8000'

# Repository
# repo_name: 'funkypenguin/geek-cookbook'
# repo_url: 'https://github.com/john-doe/my-project'

# Copyright
copyright: 'Copyright &copy; 2016 - 2017 David Young'

theme: null
theme_dir: 'mkdocs-material/material'

pages:
  - Home: index.md
  - Introduction:
    - README: README.md
    - whoami: whoami.md
  - Docker (Standalone):
    - Getting Started:
      - Basic Setup: beginner/beginner.md
      - LVM-Backed storage: beginner/beginner.md
      - LetsEncrypt Proxy: advanced/about.md
    - Tiny Tiny RSS:
      - Basic: advanced/tiny-tiny-rss.md
      - Plugins: advanced/tiny-tiny-rss.md
      - Themes: advanced/tiny-tiny-rss.md
#    - Home Assistant:
#      - About: advanced/home-assistant/basic.md
#      - Basic: advanced/home-assistant/basic.md
#      - Grafana: advanced/home-assistant/grafana.md
#      - Limitless LED: advanced/home-assistant/limitless-led.md
#      - OwnTracks: advanced/home-assistant/limitless-led.md
  - Docker (HA Swarm):
    - Getting Started:
      - Basic Setup: beginner/beginner.md
      - Persistent Storage: beginner/beginner.md
      - Keepalived: advanced/keepalived.md
    - Tiny Tiny RSS:
      - Basic: advanced/tiny-tiny-rss.md
      - Plugins: advanced/tiny-tiny-rss.md
      - Themes: advanced/tiny-tiny-rss.md
#    - Home Assistant:
#      - About: advanced/home-assistant/basic.md
#      - Basic: advanced/home-assistant/basic.md
#      - Grafana: advanced/home-assistant/grafana.md
#      - Limitless LED: advanced/home-assistant/limitless-led.md
#      - OwnTracks: advanced/home-assistant/limitless-led.md
#  - Huginn: advanced/huginn.md
#  - Nextcloud: advanced/nextcloud.md
#  - OwnTracks: advanced/owntracks.md
#  - Shaarli: advanced/shaarli.md
#  - Wallabag: advanced/wallabag.md

extra:
  logo: 'images/site-logo.png'
  feature:
    tabs: false
  palette:
    primary: 'indigo'
    accent: 'indigo'
  font:
    text: 'Roboto'
    code: 'Roboto Mono'
  social:
    - type: 'github'
      link: 'https://github.com/john-doe'
    - type: 'twitter'
      link: 'https://twitter.com/john-doe'
    - type: 'linkedin'
      link: 'https://de.linkedin.com/in/john-doe'

# Google Analytics
google_analytics:
  - 'UA-XXXXXXXX-X'
  - 'auto'

# Extensions
markdown_extensions:
  - admonition
  - codehilite(linenums=true)
  - toc(permalink=true)
  - footnotes
  - pymdownx.arithmatex
  - pymdownx.betterem(smart_enable=all)
  - pymdownx.caret
  - pymdownx.critic
  - pymdownx.emoji:
      emoji_generator: !!python/name:pymdownx.emoji.to_svg
  - pymdownx.inlinehilite
  - pymdownx.magiclink
  - pymdownx.mark
  - pymdownx.smartsymbols
  - pymdownx.superfences
  - pymdownx.tasklist(custom_checkbox=true)
  - pymdownx.tilde