mirror of https://github.com/funkypenguin/geek-cookbook/ (synced 2025-12-13 09:46:23 +00:00)

Experiment with PDF generation

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

**docs/kubernetes/loadbalancer/index.md** (new file, 55 lines)

---
title: What loadbalancer to use in self-hosted Kubernetes?
description: Here's a simple way to work out which load balancer you'll need for your self-hosted Kubernetes cluster
---

# Loadbalancing Services

## TL;DR

1. I have multiple nodes (*you'd benefit from [MetalLB](/kubernetes/loadbalancer/metallb/)*)
2. I only need/want one node (*just go with [k3s svclb](/kubernetes/loadbalancer/k3s/)*)

## But why?

In Kubernetes, you don't access your containers / pods "*directly*", other than for debugging purposes. Rather, we have a construct called a "*service*", which is "in front of" one or more pods.

Consider that this is how containers talk to each other under Docker Swarm:

```mermaid
sequenceDiagram
    wordpress->>+mysql: Are you there?
    mysql->>+wordpress: Yes, ready to serve!
```

But **this** is how containers (pods) talk to each other under Kubernetes:

```mermaid
sequenceDiagram
    wordpress->>+mysql-service: Are you there?
    mysql-service->>+mysql-pods: Are you there?
    mysql-pods->>+wordpress: Yes, ready to serve!
```

Why do we do this?

1. A service isn't pinned to a particular node; it's a virtual IP which lives in the cluster and doesn't change as pods/nodes come and go.
2. Using a service "in front of" pods means that rolling updates / scaling of the pods can take place while communication with the service is uninterrupted (*assuming correct configuration*).

Here's some [more technical detail](https://kubernetes.io/docs/concepts/services-networking/service/) on how it works, but what you need to know is that when you want to interact with your containers in Kubernetes (*either from other containers or from outside, as a human*), you'll be talking to **services**.

Also, services are not exposed outside of the cluster by default. There are 3 levels of "exposure" for your Kubernetes services, briefly:

1. ClusterIP (*the service is only available to other services in the cluster - this is the default*)
2. NodePort (*a mostly-random high port on every node is forwarded to the pod*)[^1]
3. LoadBalancer (*some external help is required to forward a particular IP into the cluster, terminating on the node running your pod*)

For anything vaguely useful, only `LoadBalancer` is a viable option. Even though `NodePort` may allow you to access services directly, who wants to remember that they need to access [Radarr][radarr] on `192.168.1.44:34542` and Homer on `192.168.1.44:34532`? Ugh.

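
To make the distinction concrete, here's a minimal sketch of a `LoadBalancer`-type service (*the name, selector and ports are hypothetical, purely for illustration*). With a load balancer in place, Kubernetes assigns the service an external IP from your pool, so you reach Radarr on port 80 of a stable address rather than a random high port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: radarr            # hypothetical name, for illustration only
spec:
  type: LoadBalancer      # rather than the default ClusterIP, or NodePort
  selector:
    app: radarr           # matches the labels on the pods backing the service
  ports:
    - name: http
      port: 80            # the port exposed on the load-balanced IP
      targetPort: 7878    # the port the Radarr pod itself listens on
```
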
Assuming you only had a single Kubernetes node (*say, a small k3s deployment*), you'd want 100% of all incoming traffic to be directed to that node, and so you wouldn't **need** a loadbalancer. You'd just point some DNS entries / firewall NATs at the IP of the cluster node, and be done.

(*This is [the way k3s works](/kubernetes/loadbalancer/k3s/) by default, although it's still called a LoadBalancer*)

--8<-- "recipe-footer.md"

[^1]: It is possible to be prescriptive about which port is used for a NodePort-exposed service, and this is occasionally [a valid deployment strategy](https://github.com/portainer/k8s/#using-nodeport-on-a-localremote-cluster), but you're usually limited to ports between 30000 and 32767.

**docs/kubernetes/loadbalancer/k3s.md** (new file, 28 lines)

---
title: Klipper loadbalancer with k3s
description: Klipper - k3s' lightweight loadbalancer
---

# K3s Load Balancing with Klipper

If your cluster is using K3s, and you have only one node, then you could be adequately served by the [built-in "klipper" loadbalancer provided with k3s](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer).

If you want more than one node in your cluster[^1] (*either now or in future*), I'd steer you towards [MetalLB](/kubernetes/loadbalancer/metallb/) instead.

## How does it work?

When **not** deployed with `--disable servicelb`, every time you create a service of type `LoadBalancer`, k3s will deploy a daemonset (*a collection of pods which run on every host in the cluster*), listening on that given port on the host. Deploying a LoadBalancer service for nginx on ports 80 and 443, for example (*see the sketch below*), would result in **every** cluster host listening on ports 80 and 443, and sending any incoming traffic to the nginx service.

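
For illustration, here's a minimal sketch of such an nginx service (*the name and labels are hypothetical*). Creating it on a default k3s install is what triggers klipper to spin up its daemonset and claim ports 80 and 443 on every host:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx             # hypothetical name, for illustration only
spec:
  type: LoadBalancer      # klipper reacts to services of this type
  selector:
    app: nginx            # matches the labels on your nginx pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```
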
## Well that's great, isn't it?

Yes, to get you started. But consider the following limitations:

1. This magic can only happen **once** per port. So you can't, for example, run two mysql instances on port 3306.
2. Because **every** host listens on the exposed ports, you can't run anything **else** on the hosts which listens on those ports.
3. Having multiple hosts listening on a given port still doesn't solve the problem of how to reliably direct traffic to all hosts, and how to gracefully fail over if one of the hosts fails.

To tackle these issues, you need some more advanced network configuration, along with [MetalLB](/kubernetes/loadbalancer/metallb/).

--8<-- "recipe-footer.md"

[^1]: And seriously, if you're building a Kubernetes cluster, of **course** you'll want more than one host!

**docs/kubernetes/loadbalancer/metallb/index.md** (new file, 288 lines)

---
title: MetalLB - Kubernetes Bare-Metal Loadbalancing
description: MetalLB - Load-balancing for bare-metal Kubernetes clusters, deployed with Helm via Flux
---

# MetalLB on Kubernetes, via Helm

[MetalLB](https://metallb.universe.tf/) offers a network [load balancer](/kubernetes/loadbalancer/) implementation which works on "bare metal" (*as opposed to a cloud provider*).

MetalLB does two jobs:

1. Provides address allocation to services out of a pool of addresses which you define
2. Announces these addresses to devices outside the cluster, using either ARP/NDP (L2) or BGP (L3)

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] If k3s is used, then it was deployed with `--disable servicelb`

    Optional:

    * [ ] Network firewall/router supporting BGP (*ideal but not required*)

## MetalLB Requirements

### Allocations

You'll need to make some decisions re IP allocations (*these feed directly into the MetalLB config, as sketched below*):

* What range of addresses do you want to use for your LoadBalancer service pool? If you're using BGP, this can be a dedicated subnet (*i.e. a /24*); if you're not, this should be a range of IPs within your cluster nodes' existing network space (*i.e. 192.168.1.100-200*).
* If you're using BGP, pick two [private AS numbers](https://datatracker.ietf.org/doc/html/rfc6996#section-5) between 64512 and 65534 inclusive.

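
To make that concrete, here's a sketch of how those decisions end up expressed in MetalLB's config (*the ranges and ASNs shown are just the example values used later in this recipe; the full ConfigMap is covered below*):

```yaml
peers:                             # only needed for BGP mode
  - peer-address: 192.168.33.2     # your BGP-capable firewall/router
    peer-asn: 64501                # the firewall's private ASN
    my-asn: 64500                  # MetalLB's private ASN
address-pools:
  - name: default
    protocol: bgp                  # or "layer2" if you're not using BGP
    addresses:
      - 192.168.32.0/24            # the pool LoadBalancer IPs are allocated from
```
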
### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/namespaces/namespace-metallb-system.yaml`:

??? example "Example Namespace (click to expand)"
    ```yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb-system
    ```

### HelmRepository

Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per repository. In this case, we're using the (*prolific*) [bitnami chart repository](https://github.com/bitnami/charts/tree/master/bitnami), so per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/helmrepositories/helmrepository-bitnami.yaml`:

??? example "Example HelmRepository (click to expand)"
    ```yaml
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: HelmRepository
    metadata:
      name: bitnami
      namespace: flux-system
    spec:
      interval: 15m
      url: https://charts.bitnami.com/bitnami
    ```

### Kustomization

Now that the "global" elements of this deployment (*Namespace and HelmRepository*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/metallb-system`. I create this example Kustomization in my flux repo at `bootstrap/kustomizations/kustomization-metallb.yaml`:

??? example "Example Kustomization (click to expand)"
    ```yaml
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
    kind: Kustomization
    metadata:
      name: metallb--metallb-system
      namespace: flux-system
    spec:
      interval: 15m
      path: ./metallb-system
      prune: true # remove any elements later removed from the above path
      timeout: 2m # if not set, this defaults to interval duration, which is 1h
      sourceRef:
        kind: GitRepository
        name: flux-system
      validation: server
      healthChecks:
        - apiVersion: apps/v1
          kind: Deployment
          name: metallb-controller
          namespace: metallb-system
    ```

!!! question "What's with that screwy name?"
    > Why'd you call the kustomization `metallb--metallb-system`?

    I keep my file and object names as consistent as possible. In most cases, the helm chart is named the same as the namespace, but in some cases, by upstream chart or historical convention, the namespace differs from the chart name. MetalLB is one of these - the helmrelease/chart name is `metallb`, but the typical namespace it's deployed into is `metallb-system`. (*Appending `-system` seems to be a convention used in some cases for applications which support the entire cluster.*) To avoid confusion when I list all kustomizations with `kubectl get kustomization -A`, I give these oddballs a name which identifies both the helmrelease and the namespace.

### ConfigMap (for HelmRelease)

Now we're into the metallb-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/metallb/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo at `metallb-system/configmap-metallb-helm-chart-value-overrides.yaml`:

??? example "Example ConfigMap (click to expand)"
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: metallb-helm-chart-value-overrides
      namespace: metallb-system
    data:
      values.yaml: |-
        ## @section Global parameters
        ## Global Docker image parameters
        ## Please, note that this will override the image parameters, including dependencies, configured to use the global value
        ## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass

        ## @param global.imageRegistry Global Docker image registry
        ## @param global.imagePullSecrets Global Docker registry secret names as an array
        ##
        global:
          imageRegistry: ""
          ## E.g.
          ## imagePullSecrets:
          ##   - myRegistryKeySecretName
        <snip>
          prometheus:
            ## Prometheus Operator service monitors
            ##
            serviceMonitor:
              ## @param speaker.prometheus.serviceMonitor.enabled Enable support for Prometheus Operator
              ##
              enabled: false
              ## @param speaker.prometheus.serviceMonitor.jobLabel Job label for scrape target
              ##
              jobLabel: "app.kubernetes.io/name"
              ## @param speaker.prometheus.serviceMonitor.interval Scrape interval. If not set, the Prometheus default scrape interval is used
              ##
              interval: ""
              ## @param speaker.prometheus.serviceMonitor.metricRelabelings Specify additional relabeling of metrics
              ##
              metricRelabelings: []
              ## @param speaker.prometheus.serviceMonitor.relabelings Specify general relabeling
              ##
              relabelings: []
    ```

--8<-- "kubernetes-why-full-values-in-configmap.md"

Then work your way through the values you pasted, and change any which are specific to your configuration. I'd recommend changing the following (*illustrated in the sketch below*):

* `existingConfigMap`: I prefer to set my MetalLB config independently of the chart config, so I set this to `metallb-config`, which I then define below.
* `commonAnnotations`: Anticipating the future use of Reloader to bounce applications when their config changes, I add the `configmap.reloader.stakater.com/reload: "metallb-config"` annotation to all deployed objects, which will instruct Reloader to bounce the daemonset if the ConfigMap changes.

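
Here's a minimal sketch of how those two overrides might end up looking inside the `values.yaml` key of the ConfigMap above (*everything else you pasted is represented by `<snip>`, and the exact position of these keys depends on the chart version you pasted*):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: metallb-helm-chart-value-overrides
  namespace: metallb-system
data:
  values.yaml: |-
    <snip>
    existingConfigMap: metallb-config   # apply MetalLB config from the separate ConfigMap defined below
    commonAnnotations:
      configmap.reloader.stakater.com/reload: "metallb-config"   # let Reloader bounce pods when the config changes
    <snip>
```
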
### ConfigMap (for MetalLB)

Finally, it's time to actually configure MetalLB! As discussed above, I prefer to configure the helm chart to apply config from an existing ConfigMap, so that I isolate my application configuration from my chart configuration (*and make tracking changes easier*). In my setup, I'm using BGP against a pair of pfSense[^1] firewalls, so per the [official docs](https://metallb.universe.tf/configuration/), I use the following configuration, saved in my flux repo as `metallb-system/configmap-metallb-config.yaml`:

??? example "Example ConfigMap (click to expand)"
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: metallb-config
    data:
      config: |
        peers:
          - peer-address: 192.168.33.2
            peer-asn: 64501
            my-asn: 64500
          - peer-address: 192.168.33.4
            peer-asn: 64501
            my-asn: 64500

        address-pools:
          - name: default
            protocol: bgp
            avoid-buggy-ips: true
            addresses:
              - 192.168.32.0/24
    ```

!!! question "What does that mean?"
    In the config referenced above, I define one pool of addresses (`192.168.32.0/24`) which MetalLB is responsible for allocating to my services. MetalLB will then "advertise" these addresses to my firewalls (`192.168.33.2` and `192.168.33.4`), in an eBGP relationship where the firewalls' ASN is `64501` and MetalLB's ASN is `64500`. Provided I'm using my firewalls as my default gateway (*a VIP*), when I try to access one of the `192.168.32.x` IPs from any subnet connected to my firewalls, the traffic will be routed from the firewall to one of the cluster nodes running the pods selected by that service.

!!! note "Dude, that's too complicated!"
    There's an easier way, with some limitations. If you configure MetalLB in L2 mode, all you need to do is to define a range of IPs within your existing node subnet, like this:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: metallb-config
    data:
      config: |
        address-pools:
          - name: default
            protocol: layer2
            addresses:
              - 192.168.1.240-192.168.1.250
    ```

### HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy MetalLB into the cluster, with the config and extra ConfigMap we defined above. I save this in my flux repo as `metallb-system/helmrelease-metallb.yaml`:

??? example "Example HelmRelease (click to expand)"
    ```yaml
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      chart:
        spec:
          chart: metallb
          version: 2.x
          sourceRef:
            kind: HelmRepository
            name: bitnami
            namespace: flux-system
      interval: 15m
      timeout: 5m
      releaseName: metallb
      valuesFrom:
        - kind: ConfigMap
          name: metallb-helm-chart-value-overrides
          valuesKey: values.yaml # This is the default, but best to be explicit for clarity
    ```

--8<-- "kubernetes-why-not-config-in-helmrelease.md"

## Deploy MetalLB

Having committed the above to your flux repository, you should shortly see a metallb kustomization, and in the `metallb-system` namespace, a controller pod plus a speaker pod for every node:

```bash
root@cn1:~# kubectl get pods -n metallb-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
metallb-controller-779d8686f6-mgb4s   1/1     Running   0          21d   10.0.6.19       wn3    <none>           <none>
metallb-speaker-2qh2d                 1/1     Running   0          21d   192.168.33.24   wn4    <none>           <none>
metallb-speaker-7rz24                 1/1     Running   0          21d   192.168.33.22   wn2    <none>           <none>
metallb-speaker-gbm5r                 1/1     Running   0          21d   192.168.33.23   wn3    <none>           <none>
metallb-speaker-gzgd2                 1/1     Running   0          21d   192.168.33.21   wn1    <none>           <none>
metallb-speaker-nz6kd                 1/1     Running   0          21d   192.168.33.25   wn5    <none>           <none>
root@cn1:~#
```

!!! question "Why are there no speakers on my masters?"

    In some cluster setups, master nodes are "tainted" to prevent workloads running on them and consuming capacity required for "mastering". If this is the case for you, but you actually **do** want to run some externally-exposed workloads on your masters, you'll need to update the `speaker.tolerations` value in the HelmRelease config to include:

    ```yaml
    - key: "node-role.kubernetes.io/master"
      effect: "NoSchedule"
    ```

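
    For clarity, here's a sketch of where that toleration would land among the chart value overrides (*surrounding values omitted; the nesting simply follows the `speaker.tolerations` path mentioned above*):

    ```yaml
    # within the values.yaml key of metallb-helm-chart-value-overrides (sketch only)
    speaker:
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: "NoSchedule"
    ```
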
### How do I know it's working?

If you used my [template repository](https://github.com/geek-cookbook/template-flux) to start off your [flux deployment strategy](/kubernetes/deployment/flux/), then the podinfo helm chart has already been deployed. By default, the podinfo service is in `ClusterIP` mode, so it's only reachable within the cluster.

Edit your podinfo helmrelease configmap (`/podinfo/configmap-podinfo-helm-chart-value-overrides.yaml`), and change this:

``` yaml hl_lines="6"
<snip>
# Kubernetes Service settings
service:
  enabled: true
  annotations: {}
  type: ClusterIP
<snip>
```

To:

``` yaml hl_lines="6"
<snip>
# Kubernetes Service settings
service:
  enabled: true
  annotations: {}
  type: LoadBalancer
<snip>
```

Commit your changes, wait for a reconciliation, and run `kubectl get services -n podinfo`. All going well, you should see that the service now has an external IP assigned from the pool you chose for MetalLB!

--8<-- "recipe-footer.md"

[^1]: I've documented an example of [how to configure BGP between MetalLB and pfSense](/kubernetes/loadbalancer/metallb/pfsense/).

**docs/kubernetes/loadbalancer/metallb/pfsense.md** (new file, 80 lines)

---
title: MetalLB BGP config for pfSense - Kubernetes load balancing
description: Using MetalLB with pfSense and BGP
---

# MetalLB on Kubernetes with pfSense

This is an addendum to the MetalLB recipe, explaining how to configure MetalLB to perform BGP peering with a pfSense firewall.

!!! summary "Ingredients"

    * [X] A [Kubernetes cluster](/kubernetes/cluster/)
    * [X] [MetalLB](/kubernetes/loadbalancer/metallb/) deployed
    * [X] One or more pfSense firewalls
    * [X] Basic familiarity with pfSense operation

## Preparation

Complete the [MetalLB](/kubernetes/loadbalancer/metallb/) installation, including the process of identifying ASNs for both your pfSense firewall and your MetalLB configuration.

Install the FRR package in pfSense, under **System -> Package Manager -> Available Packages**.

### Configure FRR Global/Zebra

Under **Services -> FRR Global/Zebra**, enable FRR, set your router ID (*this will be your router's peer IP in the MetalLB config*), and set a master password (*because apparently you have to, even though we don't use it*):

*(screenshot: FRR Global/Zebra settings)*

### Configure FRR BGP

Under **Services -> FRR BGP**, globally enable BGP, and set your local AS and router ID:

*(screenshot: FRR BGP settings)*

### Configure FRR BGP Advanced

Use the tabs at the top of the FRR configuration to navigate to "**Advanced**"...

*(screenshot: FRR tab navigation)*

... and scroll down to **eBGP**. Check the checkbox titled "**Disable eBGP Require Policy**":

*(screenshot: Disable eBGP Require Policy checkbox)*

!!! question "Isn't disabling a policy check a Bad Idea(tm)?"
    If you're an ISP, sure. If you're only using eBGP to share routes between MetalLB and pfSense, then applying policy is an unnecessary complication.[^1]

### Configure BGP neighbors

#### Peer Group

It's useful to bundle our configuration within a "peer group" (*a collection of settings which applies to all neighbors who are members of that group*), so start off by creating a neighbor with the name "**metallb**" (*this will become a peer-group*). Set the remote AS (*because you have to*), and leave the rest of the settings at their defaults.

!!! question "Why bother with a peer group?"
    > If we're not changing any settings, why are we bothering with a peer group?

    We may later want to change settings which affect all the peers, such as prefix lists, route-maps, etc. We're doing this now for the benefit of our future selves 💪

#### Individual Neighbors

Now add each node running MetalLB as a BGP neighbor. Pick the peer-group you created above, and configure each neighbor's ASN:

*(screenshot: BGP neighbor configuration)*

## Serving

Once you've added your neighbors, you should be able to use the FRR tab navigation (*it's weird, I know!*) to get to Status / BGP, and identify your neighbors, and all the routes learned from them. In the screenshot below, you'll note that **most** routes are learned from all the neighbors - those are services backed by a daemonset, running on all nodes. The `192.168.32.3/32` route, however, is only received from `192.168.33.22`, meaning only one node is running the pods backing this service, so only that node's speaker is advertising the route to pfSense:

*(screenshot: FRR BGP status, showing routes received from each MetalLB neighbor)*

### Troubleshooting

If you're not receiving any routes from MetalLB, or if the neighbors aren't in an established state, here are a few suggestions for troubleshooting:

1. Confirm on pfSense that the BGP connections (*TCP port 179*) are not being blocked by the firewall
2. Examine the MetalLB speaker logs in the cluster, by running `kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb`
3. SSH to the pfSense box, start a shell, and launch the FRR shell by running `vtysh`. Now you're in a Cisco-like console where commands like `show ip bgp sum` and `show ip bgp neighbors <neighbor ip> received-routes` will show you interesting debugging things.

--8<-- "recipe-footer.md"

[^1]: If you decide to deploy some policy with route-maps, prefix-lists, etc., it's all found under **Services -> FRR Global/Zebra** 🦓