mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-13 09:46:23 +00:00
Add blog post re layered security
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
This commit is contained in:
@@ -1,5 +1,5 @@
 ---
-date: 2023-03-10
+date: 2023-02-10
 categories:
 - CHANGELOG
 tags:
@@ -0,0 +1,52 @@
---
date: 2023-02-11
categories:
- note
tags:
- kubernetes
- security
title: Why security in-depth is a(n awesome) PITA
description: Is it easy to deploy stuff into your cluster? Ha! 0wn3d. It's SUPPOSED to be a PITA!
---
# Security in depth / zero trust is a 100% (awesome) PITA

Today I spent upwards of half my day deploying a single service into a client's cluster. Here's why I consider this to be a win...

<!-- more -->

Here's how the process went:
1. Discover a [4-year-old GitHub repo](https://github.com/aikoven/foundationdb-exporter) containing the **exact** tool we needed (*a Prometheus exporter for FoundationDB metrics*)
2. Attempt to upload the image into our private repo, running [Harbor](https://goharbor.io/) with vulnerability scanning via [Trivy](https://github.com/aquasecurity/trivy) enforced. Discover that it has 1191 critical CVEs; the upload is blocked.
3. Rebuild the image with the latest Node.js base image; 4 CVEs remain. These are manually whitelisted[^1], and the image can now be added to the repo.
4. The image must be signed using [cosign](https://github.com/sigstore/cosign) on both the dev and prod infrastructure (*separate signing keys are used*). [Connaisseur](https://github.com/sse-secure-systems/connaisseur) prevents unsigned images from running in any of our clusters[^2].
5. The image is in the repo; now to deploy it... add a deployment template to our existing database Helm chart. The deployment pipeline (*via [Concourse CI](https://concourse-ci.org/)*) fails while [kube-scor](https://github.com/zegl/kube-score)ing / [kube-conform](https://github.com/yannh/kubeconform)ing the generated manifests, because they're missing the appropriate probes and securityContexts.
6. Note that even if we had been able to sneak a less-than-secure deployment past kube-score's static linting, [Kyverno](https://kyverno.io/) would have prevented the pod from running!
7. Fix all the invalid / less-than-best-practice elements of the deployment, ensuring resource limits, HPAs, and securityContexts are applied.
8. The manifest deploys (*pipeline is green!*), and the pod immediately crashloops (*it's not very obtuse code!*)
9. Examine Cilium's [Hubble](https://github.com/cilium/hubble), and determine that the pod is trying to talk to FoundationDB (*duh*), and is being blocked by default.
10. Apply the appropriate labels to the deployment / pod to align with the pre-existing regime of [Cilium NetworkPolicies](https://docs.cilium.io/en/latest/security/policy/) permitting ingress/egress to services based on pod labels (*thanks, [Monzo](https://monzo.com/blog/we-built-network-isolation-for-1-500-services)!*)
11. No more dropped sessions in Hubble! But the pod still crashloops. Apply an [Istio AuthorizationPolicy](https://istio.io/latest/docs/reference/config/security/authorization-policy/) to permit mTLS traffic between the exporter and FoundationDB.
12. Now the exporter can talk to FoundationDB! But no metrics are being gathered... why?
13. Apply another update to a separate policy Helm chart (*which **only** contains CiliumNetworkPolicy manifests*), permitting the cluster Prometheus access to the exporter on the port it happens to prefer.
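To give a feel for what steps 5-7 ended up adding, here's a minimal sketch of the probe / securityContext / resources stanzas that linters like kube-score typically insist on. This is illustrative only: the image name, port `9090`, probe paths, and resource numbers are all hypothetical, not the actual chart values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foundationdb-exporter   # hypothetical name
spec:
  selector:
    matchLabels:
      app: foundationdb-exporter
  template:
    metadata:
      labels:
        app: foundationdb-exporter
    spec:
      containers:
        - name: exporter
          image: harbor.example.com/db/foundationdb-exporter:latest  # illustrative
          ports:
            - containerPort: 9090
          # Probes so the scheduler knows the pod is actually alive/ready
          readinessProbe:
            httpGet:
              path: /metrics
              port: 9090
          livenessProbe:
            httpGet:
              path: /metrics
              port: 9090
          # Resource requests/limits keep the HPA and scheduler honest
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              memory: 128Mi
          # The securityContext bits that kube-score / Kyverno care about
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
```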
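Likewise, the label-based network policy regime from steps 10 and 13 can be sketched roughly as follows. The label names and port are hypothetical; a real policy would follow the cluster's existing labelling scheme.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-prometheus-to-exporter   # hypothetical name
spec:
  # Applies to pods carrying the exporter's label
  endpointSelector:
    matchLabels:
      app: foundationdb-exporter
  ingress:
    # Only pods labelled as Prometheus may scrape, and only on this port
    - fromEndpoints:
        - matchLabels:
            app: prometheus
      toPorts:
        - ports:
            - port: "9090"
              protocol: TCP
```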
Finally, I am rewarded with metrics scraped by Prometheus, and exposed in the Grafana dashboard:

![](/images/blog/foundationdb-exporter-grafana-dashboard.png){ loading=lazy }

!!! note
    It's especially gratifying to note that while all these shenanigans were going on, the existing services running in our prod and dev namespaces were completely isolated and unaffected. All changes happened in a PR branch, for which Concourse built a fresh, ephemeral namespace for every commit.

## Why is this a big deal?

I wanted to highlight how many levels of security / validation we employ in order to introduce any change into our clusters, even a simple metrics scraper. It may seem overly burdensome for a simple trial / tool, but my experience has been that "*temporary is permanent*", and the sooner you deploy something **properly**, the more resilient and reliable the whole system is.

## Do you want to be a PITA too?

This is what I love doing (*which is why I'm blogging about it at 11pm!*). If you're looking to augment / improve your Kubernetes layered security posture, [hit me up](https://www.funkypenguin.co.nz/work-with-me/), and let's talk business!
[^1]: We use Ansible for this
[^2]: Yes, another Ansible process!

--8<-- "blog-footer.md"
45
docs/blog/posts/notes/run-minio-in-legacy-filesystem-mode.md
Normal file
@@ -0,0 +1,45 @@
---
date: 2022-09-01
categories:
- note
tags:
- minio
title: How to run Minio in legacy FS mode again
description: Has your bare-metal / single-node Minio deployment started creating .xl.meta files instead of the files you actually intended to transfer? This is happening because of a significant update / deprecation in June 2022
---

# How to run Minio in legacy FS mode again

Has your bare-metal / single-node Minio deployment started creating `.xl.meta` files instead of the files you actually intended to transfer? This is happening because of a significant update / deprecation in June 2022.

Here's a workaround to restore the previous behaviour...

<!-- more -->
## Background

Starting with [RELEASE.2022-06-02T02-11-04Z](https://github.com/minio/minio/releases/tag/RELEASE.2022-06-02T02-11-04Z), MinIO implements a zero-parity erasure-coded backend for single-node single-drive deployments. This feature allows access to [erasure coding dependent features](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html?ref=docs-redirect#minio-erasure-coding) without the requirement of multiple drives.

## .xl.meta instead of files

This unfortunately breaks expected behavior for a large number of existing users, since Minio can no longer be used to provide an S3-compatible layer to transfer files to later be consumed via typical POSIX access.
## Workaround to revert Minio to legacy fs mode

Note that the [docs re pre-existing data](https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-single-node-single-drive.html?ref=docs-redirect#pre-existing-data) indicate that where existing filesystem folders, files, and MinIO backend data are present, MinIO resumes in the legacy filesystem ("Standalone") mode, with no erasure-coding features.

So a simple workaround is to create the following `format.json` in `/path-to-existing-data/.minio.sys/`:

```json
{"version":"1","format":"fs","id":"avoid-going-into-snsd-mode-legacy-is-fine-with-me","fs":{"version":"2"}}
```
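As a quick sketch, you could create that file like so (the data path here is illustrative; substitute your real Minio data directory):

```shell
# Adjust to your real Minio data path (illustrative location used here)
DATA_DIR=/tmp/path-to-existing-data

# Create the .minio.sys marker directory alongside the existing data
mkdir -p "${DATA_DIR}/.minio.sys"

# Write a format.json declaring the legacy "fs" backend
cat > "${DATA_DIR}/.minio.sys/format.json" <<'EOF'
{"version":"1","format":"fs","id":"avoid-going-into-snsd-mode-legacy-is-fine-with-me","fs":{"version":"2"}}
EOF
```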

When Minio starts, it recognizes this as "existing" data (above), and happily starts in legacy mode! 👍🏻

## Summary

The lifespan of Minio's FS overlay mode is limited. The workaround presented here is a temporary way to continue using FS mode, but ultimately the MinIO developers are intent on removing this feature[^1].

[^1]: As it turns out, `RELEASE.2022-10-24T18-35-07Z` was the last version to work with overlay mode at all. If you want to continue using Minio the way you've used it for years, you'll want to stay on this version.

--8<-- "blog-footer.md"
70
docs/blog/posts/reviews/review-nextcloud-24.md
Normal file
@@ -0,0 +1,70 @@
---
date: 2022-08-26
categories:
- Review
tags:
- nextcloud
description: My review of NextCloud 24
title: Review / Nextcloud v24 - Sexy on the outside, boring on the inside
upstream_version: v24
image: /images/nextcloud.jpg
links:
- NextCloud 24 recipe: recipes/nextcloud.md
---

# Nextcloud : Sexy on the outside 🕺, and boring(ly reliable) 🥱 on the inside

The answer (*to "what's sexy on the outside, and boring(ly) reliable on the inside?"*) is...

... Nextcloud 24, which I reviewed this week while modernizing the recipe. Read on for details...

<!-- more -->
| Review details | |
| ----------- | ------------------------------------ |
| :octicons-number-24: Reviewed version | *[{{ page.meta.upstream_version }}]({{ page.meta.upstream_repo }})* |
## Collaboration is boring... 🥱

Back in 2012, the (*overly-geeky*) company I worked for employed [rdiff-backup](https://rdiff-backup.net/) and some hacky scripts on each staff member's laptop to maintain a "shared drive". Fortunately, we upgraded from this over-engineered UX disaster to an early version of OwnCloud. OwnCloud back then was immature: it would occasionally end up in sync loops/conflicts, "lose" staff's shared files, and require painful backup/restores, but at least it had a desktop UI.

A few years later, when NextCloud [forked](https://www.zdnet.com/article/owncloud-founder-forks-popular-open-source-cloud/) from OwnCloud, I was tasked with migrating our design, and a big deal was made out of Nextcloud's "personal" vs "shared" syncing folders. I still remember the pain of upgrading from Nextcloud 7 to Nextcloud 8, and dealing with "non-technical" staff who "just want to see their files dammit!"

I also remember how much better Nextcloud's activity summary made life - at a glance, we could see all the changes to the various shared folders we used, and syncing issues became rare(r).

Look, there's nothing particularly sexy about a file-syncing app. It's not fun to test (by yourself), it doesn't introduce any ground-breaking features, and once you've deployed it, nobody wants to change / upgrade / tweak it, for fear of impacting people's workflow. It's... boring.
## ... but boring is reliable 🪨

Yes, (*sigh*), boring is good. A collaboration platform that gets out of your way and "just works" is exactly what you want, boring as it may be. Take it from me: you do not want to be trying to work out which of your 25 remote users has some sort of local issue which is forcing the other 24 users to re-sync gigabytes of data!

It's been a few years since I published a Docker Swarm recipe for Nextcloud, complete with database backups, full-text search, service discovery and SSL termination. After a [reader pointed out](https://github.com/geek-cookbook/geek-cookbook/issues/228) that the recipe was no longer valid for modern versions of Nextcloud, I refreshed it and made some improvements / simplifications. You can find the latest Docker Swarm recipe for Nextcloud [here][nextcloud].
## Should you try Nextcloud?

TL;DR - It's still boring on the inside. But that's good. The outside, though, is increasingly sexy and well-polished.

In the process of running the latest recipe through its paces in CI, I noticed that the UX has come a long way. Under the hood, NextCloud is much the same, with some extra polish and a few years' more ecosystem maturity. Apps like [Nextcloud Talk](https://nextcloud.com/talk/) (which was beta at the time) are now de facto, and the integration of 3rd-party apps is well-established.

Nextcloud (*now called "Nextcloud Hub II" for some reason!*) no longer looks like a boring, corporate file collaboration suite. The default page is a "Dashboard", which can be extended with "Widgets" which integrate with the various apps (*of which there are over 100!*) which can be installed from their app store.

Tell me this isn't sexy:

![Nextcloud Dashboard](/images/blog/nextcloud_1.jpg){ loading=lazy }
And it's not just Nextcloud apps which can create widgets - you can hook up external services, like this:

![Nextcloud Dashboard Widgets](/images/blog/nextcloud_2.jpg){ loading=lazy }

Here's a quick demo video I made of the admin interface, in case, like me, you like to evaluate your tools based on shiny screencasts:

<iframe width="560" height="315" src="https://www.youtube.com/embed/jXDSDHEb1SA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

So, if collaboration is your thing, or you'd like to try out the 100+ apps now supported by Nextcloud, give it a try. The [recipe][nextcloud] is freshly tested and good-to-go, and if you're a sponsor, you can deploy it automatically using [premix][premix]!

That's it for now - as always, don't be a stranger - hop into Discord and say hi, request a new recipe, or let me know what you thought of Nextcloud!

--8<-- "blog-footer.md"
BIN
docs/images/blog/foundationdb-exporter-grafana-dashboard.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 364 KiB |
BIN
docs/images/blog/nextcloud_1.jpg
Normal file
Binary file not shown.
After Width: | Height: | Size: 161 KiB |
BIN
docs/images/blog/nextcloud_2.jpg
Normal file
Binary file not shown.
After Width: | Height: | Size: 145 KiB |