mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-14 02:06:32 +00:00

Improve formatting on metallb recipe

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
David Young
2022-11-18 12:48:38 +13:00
parent 7b9e9089b6
commit 791c30a294


@@ -32,44 +32,41 @@ You'll need to make some decisions re IP allocations.
### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:

```yaml title="/bootstrap/namespaces/namespace-metallb-system.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
```
### HelmRepository

Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per-repository. In this case, we're using the (*prolific*) [bitnami chart repository](https://github.com/bitnami/charts/tree/master/bitnami), so per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:

```yaml title="/bootstrap/helmrepositories/helmrepository-bitnami.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 15m
  url: https://charts.bitnami.com/bitnami
```
### Kustomization

Now that the "global" elements of this deployment (*Namespace and HelmRepository*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/metallb-system`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-metallb.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: metallb--metallb-system
  namespace: flux-system
spec:
  interval: 15m
  path: ./metallb-system
  prune: true # remove any elements later removed from the above path
@@ -83,8 +80,7 @@ Now that the "global" elements of this deployment (*Namespace and HelmRepository
      kind: Deployment
      name: metallb-controller
      namespace: metallb-system
```
!!! question "What's with that screwy name?"

    > Why'd you call the kustomization `metallb--metallb-system`?
@@ -93,52 +89,20 @@ Now that the "global" elements of this deployment (*Namespace and HelmRepository
### ConfigMap (for HelmRelease)

Now we're into the metallb-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/metallb/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:
```yaml title="/metallb-system/configmap-metallb-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: metallb-helm-chart-value-overrides
  namespace: metallb-system
data:
  values.yaml: |- # (1)!
    # <upstream values go here>
```

1.  Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
--8<-- "kubernetes-why-full-values-in-configmap.md"
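Then work your way through the pasted values, changing whichever are specific to your setup. As a sketch, enabling the speaker's Prometheus ServiceMonitor (`speaker.prometheus.serviceMonitor.enabled` is a key from the upstream chart's `values.yaml`, defaulting to `false`) would look like this within the ConfigMap's `values.yaml` key:

```yaml
# Sketch only: an override nested inside the ConfigMap's values.yaml key,
# indented 4 spaces like the rest of the pasted upstream values.
  values.yaml: |-
    speaker:
      prometheus:
        serviceMonitor:
          enabled: true # upstream default is false
```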
@@ -149,16 +113,15 @@ Then work your way through the values you pasted, and change any which are speci
### ConfigMap (for MetalLB)

Finally, it's time to actually configure MetalLB! As discussed above, I prefer to configure the helm chart to apply config from an existing ConfigMap, so that I isolate my application configuration from my chart configuration (*and make tracking changes easier*). In my setup, I'm using BGP against a pair of pfsense[^1] firewalls, so per the [official docs](https://metallb.universe.tf/configuration/), I use the following configuration, saved in my flux repo:
```yaml title="metallb-system/configmap-metallb-config.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: metallb-config
data:
  config: |
    peers:
    - peer-address: 192.168.33.2
@@ -174,7 +137,7 @@ Finally, it's time to actually configure MetalLB! As discussed above, I prefer t
      avoid-buggy-ips: true
      addresses:
      - 192.168.32.0/24
```
!!! question "What does that mean?"

    In the config referenced above, I define one pool of addresses (`192.168.32.0/24`) which MetalLB is responsible for allocating to my services. MetalLB will then "advertise" these addresses to my firewalls (`192.168.33.2` and `192.168.33.4`), in an eBGP relationship where the firewalls' ASN is `64501` and MetalLB's ASN is `64500`. Provided I'm using my firewalls as my default gateway (*a VIP*), when I try to access one of the `192.168.32.x` IPs from any subnet connected to my firewalls, the traffic will be routed from the firewall to one of the cluster nodes running the pods selected by that service.
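Once MetalLB is running, consuming an address from the pool is just a matter of creating a Service of `type: LoadBalancer`; MetalLB handles the allocation and advertisement. A minimal sketch (the `whoami` app name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami # hypothetical example app
  namespace: default
spec:
  type: LoadBalancer # MetalLB allocates an EXTERNAL-IP from 192.168.32.0/24
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 8080
```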
@@ -182,7 +145,7 @@ Finally, it's time to actually configure MetalLB! As discussed above, I prefer t
!!! note "Dude, that's too complicated!"

    There's an easier way, with some limitations. If you configure MetalLB in L2 mode, all you need to do is to define a range of IPs within your existing node subnet, like this:
    ```yaml title="metallb-system/configmap-metallb-config.yaml"
    apiVersion: v1
    kind: ConfigMap
    metadata:
@@ -199,20 +162,19 @@ Finally, it's time to actually configure MetalLB! As discussed above, I prefer t
### HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy MetalLB into the cluster, with the config and extra ConfigMap we defined above. I save this in my flux repo:
```yaml title="/metallb-system/helmrelease-metallb.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: metallb
  namespace: metallb-system
spec:
  chart:
    spec:
      chart: metallb
      version: 2.x # (1)!
      sourceRef:
        kind: HelmRepository
        name: bitnami
@@ -224,7 +186,9 @@ Lastly, having set the scene above, we define the HelmRelease which will actuall
  - kind: ConfigMap
    name: metallb-helm-chart-value-overrides
    valuesKey: values.yaml # This is the default, but best to be explicit for clarity
```

1.  This recipe was written when the chart was at version 2; it's now at v4.x, which introduces some breaking changes. Stay tuned for an upcoming refresh!
--8<-- "kubernetes-why-not-config-in-helmrelease.md"