Mirror of https://github.com/funkypenguin/geek-cookbook/ (synced 2025-12-13 09:46:23 +00:00)
Merge branch 'main' of github.com:geek-cookbook/geek-cookbook
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
@@ -1,5 +1,5 @@
 ---
-date: 2023-03-10
+date: 2023-02-10
 categories:
 - CHANGELOG
 tags:
@@ -0,0 +1,52 @@
---
date: 2023-02-11
categories:
- note
tags:
- kubernetes
- security
title: Why security in-depth is a(n awesome) PITA
description: Is it easy to deploy stuff into your cluster? Ha! 0wn3d. It's SUPPOSED to be a PITA!
---

# Security in depth / zero trust is a 100% (awesome) PITA

Today I spent upwards of half my day deploying a single service into a client's cluster. Here's why I consider this to be a win...

<!-- more -->

Here's how the process went:

1. Discover a [4-year-old GitHub repo](https://github.com/aikoven/foundationdb-exporter) containing the **exact** tool we needed (*a Prometheus exporter for FoundationDB metrics*)
2. Attempt to upload the image into our private repo, running [Harbor](https://goharbor.io/) with vulnerability scanning via [Trivy](https://github.com/aquasecurity/trivy) enforced. Discover that it has 1191 critical CVEs; upload is blocked.
3. Rebuild the image with the latest Node.js base image; 4 CVEs remain. These are manually whitelisted[^1], and the image can now be added to the repo.
4. The image must be signed using [cosign](https://github.com/sigstore/cosign) on both the dev and prod infrastructure (*separate signing keys are used*). [Connaisseur](https://github.com/sse-secure-systems/connaisseur) prevents unsigned images from being run in any of our clusters[^2].
5. The image is in the repo, now to deploy it... add a deployment template to our existing database helm chart. The deployment pipeline (*via [Concourse CI](https://concourse-ci.org/)*) fails while [kube-scor](https://github.com/zegl/kube-score)ing / [kube-conform](https://github.com/yannh/kubeconform)ing the generated manifests, because they're missing the appropriate probes and securityContexts (*see the first sketch after this list*).
6. Note that if we had been able to sneak a less-than-secure deployment past kube-score's static linting, then [Kyverno](https://kyverno.io/) would have prevented the pod from running (*second sketch below*)!
7. Fix all the invalid / less-than-best-practice elements of the deployment, ensuring resource limits, HPAs, and securityContexts are applied.
8. The manifest deploys (*pipeline is green!*), and the pod immediately crashloops (*it's not very obtuse code!*)
9. Examine Cilium's [Hubble](https://github.com/cilium/hubble), and determine that the pod is trying to talk to FoundationDB (*duh*), and is being blocked by default.
10. Apply the appropriate labels to the deployment / pod, to align with the pre-existing regime of [Cilium NetworkPolicies](https://docs.cilium.io/en/latest/security/policy/) permitting ingress/egress to services based on pod labels (*thanks, [Monzo](https://monzo.com/blog/we-built-network-isolation-for-1-500-services)! See the third sketch below.*)
11. No more dropped sessions in Hubble! But the pod still crashloops. Apply an [Istio AuthorizationPolicy](https://istio.io/latest/docs/reference/config/security/authorization-policy/) (*fourth sketch below*) to permit mTLS traffic between the exporter and FoundationDB.
12. Now the exporter can talk to FoundationDB! But no metrics are being gathered... why?
13. Apply another update to a separate policy helm chart (*which **only** contains CiliumNetworkPolicy manifests*), permitting the cluster Prometheus access to the exporter on the port it happens to prefer.
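
To make the above concrete, here are a few illustrative sketches (*not the client's actual manifests; all names, namespaces, labels and ports below are invented for the example*). First, the sort of probes and securityContext that kube-score expects to find on a Deployment:

```yaml
# Sketch: a Deployment fragment carrying the probes, securityContext and
# resource limits that static linting (kube-score / kubeconform) insists on.
# Image, labels and port are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foundationdb-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foundationdb-exporter
  template:
    metadata:
      labels:
        app: foundationdb-exporter
    spec:
      containers:
        - name: exporter
          image: harbor.example.com/library/foundationdb-exporter:rebuilt # hypothetical registry/tag
          ports:
            - name: metrics
              containerPort: 9090 # assumed exporter port
          livenessProbe:
            httpGet:
              path: /metrics
              port: metrics
          readinessProbe:
            httpGet:
              path: /metrics
              port: metrics
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
          resources:
            requests:
              cpu: 10m
              memory: 64Mi
            limits:
              cpu: 100m
              memory: 128Mi
```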
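
Second, the kind of Kyverno ClusterPolicy that would catch an insecure pod at admission time even if it slipped past the linting; this is a simplified sketch modelled on Kyverno's published sample policies, not our actual ruleset:

```yaml
# Sketch: reject any Pod that doesn't declare runAsNonRoot.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-nonroot
spec:
  validationFailureAction: Enforce # block the pod, don't just audit
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must set securityContext.runAsNonRoot."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```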
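
Third, a label-based CiliumNetworkPolicy in the Monzo style, permitting the exporter egress to FoundationDB, and the cluster Prometheus ingress to the exporter:

```yaml
# Sketch: label-based ingress/egress for the exporter. The labels and the
# exporter's port are assumptions; 4500 is FoundationDB's default port.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: foundationdb-exporter
spec:
  endpointSelector:
    matchLabels:
      app: foundationdb-exporter
  egress:
    - toEndpoints:
        - matchLabels:
            app: foundationdb
      toPorts:
        - ports:
            - port: "4500"
              protocol: TCP
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: prometheus
      toPorts:
        - ports:
            - port: "9090"
              protocol: TCP
```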
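
And fourth, an Istio AuthorizationPolicy permitting the exporter's mTLS identity to reach the FoundationDB pods; the namespace and service-account names are invented for illustration:

```yaml
# Sketch: only the exporter's service-account identity may talk to FoundationDB.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-exporter-to-foundationdb
  namespace: database # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: foundationdb
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/database/sa/foundationdb-exporter # hypothetical SA
```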

Finally, I am rewarded with metrics, scraped by Prometheus and exposed in the Grafana dashboard:

![FoundationDB exporter metrics in a Grafana dashboard](images/blog/foundationdb-exporter-grafana-dashboard.png)

!!! note
    It's especially gratifying to note that while all these shenanigans were going on, the existing services running in our prod and dev namespaces were completely isolated and unaffected. All changes happened in a PR branch, for which Concourse built a fresh, ephemeral namespace for every commit.

## Why is this a big deal?

I wanted to highlight how many levels of security / validation we employ in order to introduce any change into our clusters, even a simple metrics scraper. It may seem overly burdensome for a simple trial / tool, but my experience has been that "*temporary is permanent*", and the sooner you deploy something **properly**, the more resilient and reliable the whole system is.

## Do you want to be a PITA too?

This is what I love doing (*which is why I'm blogging about it at 11pm!*). If you're looking to augment / improve your Kubernetes layered security posture, [hit me up](https://www.funkypenguin.co.nz/work-with-me/), and let's talk business!

[^1]: We use Ansible for this.
[^2]: Yes, another Ansible process!

--8<-- "blog-footer.md"
docs/blog/posts/notes/run-minio-in-legacy-filesystem-mode.md (new file, 45 lines)
@@ -0,0 +1,45 @@
---
date: 2022-09-01
categories:
- note
tags:
- minio
title: How to run Minio in legacy FS mode again
description: Has your bare-metal / single-node Minio deployment started creating .xl.meta files instead of the files you actually intended to transfer? This is happening because of a significant update / deprecation in June 2022.
---

# How to run Minio in legacy FS mode again

Has your bare-metal / single-node Minio deployment started creating `.xl.meta` files instead of the files you actually intended to transfer? This is happening because of a significant update / deprecation in June 2022.

Here's a workaround to restore the previous behaviour...

<!-- more -->

## Background

Starting with [RELEASE.2022-06-02T02-11-04Z](https://github.com/minio/minio/releases/tag/RELEASE.2022-06-02T02-11-04Z), MinIO implements a zero-parity erasure-coded backend for single-node, single-drive deployments. This feature allows access to [erasure-coding-dependent features](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html?ref=docs-redirect#minio-erasure-coding) without the requirement of multiple drives.

## .xl.meta instead of files

This unfortunately breaks expected behaviour for a large number of existing users, since Minio can no longer be used to provide an S3-compatible layer for transferring files which are later consumed via typical POSIX access.

## Workaround to revert Minio to legacy FS mode

Note that the [docs re pre-existing data](https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-single-node-single-drive.html?ref=docs-redirect#pre-existing-data) indicate that where existing filesystem folders, files, and MinIO backend data are found, MinIO resumes in the legacy filesystem (“Standalone”) mode, with no erasure-coding features.

So a simple workaround is to create the following `format.json` in `/path-to-existing-data/.minio.sys/`:

```json
{"version":"1","format":"fs","id":"avoid-going-into-snsd-mode-legacy-is-fine-with-me","fs":{"version":"2"}}
```

When Minio starts, it recognizes this as "pre-existing data" (per the docs above), and happily starts in legacy mode! 👍🏻

## Summary

The lifespan of Minio's FS overlay mode is limited. The workaround presented here lets you continue using FS mode for now, but ultimately Minio are intent on removing this feature[^1].

[^1]: As it turns out, `RELEASE.2022-10-24T18-35-07Z` was the last version to work with overlay mode at all. If you want to continue using Minio the way you've used it for years, you'll want to stay on this version.

--8<-- "blog-footer.md"
docs/images/blog/foundationdb-exporter-grafana-dashboard.png (new binary file, 364 KiB)
docs/images/blog/nextcloud_1.jpg (new binary file, 161 KiB)
docs/images/blog/nextcloud_2.jpg (new binary file, 145 KiB)