Compare commits: `260f0662c3...main` (132 commits)
@@ -0,0 +1,58 @@
# 📡 UniFi Gateway BGP Setup

This section describes the configuration of the **UniFi gateway** (UDM/Pro/SE) for BGP peering with the RKE2 cluster. This is the prerequisite for Cilium to be able to announce LoadBalancer IPs (VIPs) directly into the home network.

## ⚙️ 1. FRR Configuration (bgp.conf)

On the UniFi gateway we use **FRR (Free Range Routing)**. The configuration in `bgp.conf` defines the gateway as a BGP peer.

### Key parameters:
- **Router ASN:** `65100` (the UniFi gateway) 🏛️
- **Router ID:** `192.168.1.1` (the gateway's LAN IP)
- **Peer group:** `RKE2`
- **Remote ASN:** `65200` (the RKE2 cluster) ⚓
- **Neighbor:** `192.168.250.175` (the IP of the ASUS node)

### Example configuration:
```frr
router bgp 65100
bgp router-id 192.168.1.1
neighbor RKE2 peer-group
neighbor RKE2 remote-as 65200
neighbor 192.168.250.175 peer-group RKE2
!
address-family ipv4 unicast
neighbor RKE2 activate
neighbor RKE2 next-hop-self
redistribute connected
exit-address-family
```

---

## 🛠️ 2. Activation via the UniFi Web UI

In recent UniFi OS versions, a manual installation via the shell is no longer necessary. The BGP configuration can be uploaded or entered directly through the web interface.

### Procedure:
1. In your UniFi controller, navigate to **Settings > Network**.
2. Locate the **BGP** section (see the screenshot below).
3. Upload the configuration from `bgp.conf`, or enter the values manually.



---

## 🔍 3. Verification

Once both the UniFi gateway and Cilium in the cluster are configured, the status can be checked:

- **In the UniFi UI:** The neighbor status should be shown as `Connected` or `Established`. ✅
- **In the cluster:**
```bash
cilium bgp peers
```
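
If the session does not come up, it can also be checked from the gateway side. A minimal sketch, assuming SSH access to the UniFi gateway and that FRR's `vtysh` shell is available there:

```bash
# On the UniFi gateway: summary of all BGP sessions.
# The neighbor should be listed as Established with a prefix count.
vtysh -c "show ip bgp summary"

# Routes learned from the cluster node (the announced VIPs)
vtysh -c "show ip bgp neighbors 192.168.250.175 routes"
```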

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -9,7 +9,8 @@ router bgp 65100
no bgp ebgp-requires-policy
neighbor RKE2 peer-group
neighbor RKE2 remote-as 65200
neighbor 192.168.1.238 peer-group RKE2
bgp listen range 192.168.250.0/24 peer-group RKE2
neighbor 192.168.250.175 peer-group RKE2
address-family ipv4 unicast
neighbor RKE2 activate
neighbor RKE2 next-hop-self
@@ -0,0 +1,67 @@
# ⚓ RKE2 Server Installation

This guide describes the installation and configuration of an **RKE2 server** (Next Generation Rancher Kubernetes Engine) on the ASUS node. The setup is specifically prepared for the use of **Cilium** and **Envoy Gateway**.

## 🛠️ 1. RKE2 Installation
Run the following command directly on the target server:

```bash
# Install the RKE2 server binary
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="server" sh
```

---

## ⚙️ 2. Cluster Configuration
We configure RKE2 so that neither the default CNI (Canal) nor kube-proxy gets installed, making room for **Cilium** (eBPF). The NGINX ingress is disabled as well, since we use the **Envoy Gateway**.

```bash
# Create the configuration directory
sudo mkdir -p /etc/rancher/rke2

# Write the configuration file
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
# Set the CNI to "none" so Cilium can be installed manually
cni: none

# Disable kube-proxy (Cilium takes over via eBPF)
disable-kube-proxy: true

# Disable default components
disable:
- rke2-canal         # disables the Canal CNI
- rke2-ingress-nginx # disables the NGINX ingress
EOF
```

---

## 🚀 3. Starting the Service
Once the configuration is in place, the RKE2 service can be enabled and started.

```bash
# Enable the service and start it immediately
sudo systemctl enable --now rke2-server.service

# Check the status
sudo systemctl status rke2-server.service
```

---

## 🔑 4. Configuring Access
To manage the cluster from your workstation, you need the `kubeconfig`.

1. **Read the kubeconfig:**
```bash
sudo cat /etc/rancher/rke2/rke2.yaml
```
2. **Save it locally:** Copy the contents into your local `~/.kube/config` (or a separate file) and change the `server` URL from `127.0.0.1` to the IP of the ASUS node (`192.168.1.238`).
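
One way to do this from the workstation (a sketch, assuming SSH access to the node as a sudo-capable user; the target filename is arbitrary):

```bash
# Fetch the kubeconfig and point it at the node's LAN IP
ssh <user>@192.168.1.238 "sudo cat /etc/rancher/rke2/rke2.yaml" > ~/.kube/rke2.yaml
sed -i.bak 's/127.0.0.1/192.168.1.238/' ~/.kube/rke2.yaml

# Use it for the current shell session
export KUBECONFIG=~/.kube/rke2.yaml
kubectl cluster-info
```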

---

## ✅ Next Steps
While the server is now running, the node stays in the `NotReady` state because no CNI is installed yet. Continue with the installation of **Cilium** in section `03`. 🚀
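
This is easy to verify; until Cilium is running, the node reports `NotReady`:

```bash
# Expected before the CNI installation: STATUS "NotReady"
kubectl get nodes -o wide
```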

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -1,22 +0,0 @@
run directly on the server

# Install RKE2
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="server" sh

# Create the configuration directory
sudo mkdir -p /etc/rancher/rke2

# Create the configuration file
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
cni: none
disable-kube-proxy: true
disable:
- rke2-canal
- rke2-ingress-nginx # (since we want to use Envoy)
EOF

# Enable and start the RKE2 service
sudo systemctl enable --now rke2-server.service

# Copy the kubeconfig file to the workstation
sudo cat /etc/rancher/rke2/rke2.yaml
@@ -0,0 +1,66 @@
# 🌐 Network & Gateway Setup

This section describes the installation of the **CNI (Cilium)** and the **Envoy Gateway**. This setup forms the backbone of the cluster's modern networking (eBPF & Gateway API).

## 🛠️ 1. Required CLI Tools
Before you start, install the necessary tools on your local machine:

```bash
brew install cilium-cli kubernetes-cli helm egctl switcher cmctl
```

> **Tip:** Use `switcher` (or the alias `s`) to switch quickly between your Kubernetes contexts. 🔄

---

## 🛡️ 2. Cilium CNI Installation
Cilium is installed as the **CNI (Container Network Interface)** and takes over all routing as well as the BGP announcements.

```bash
cilium install \
  --version 1.18.5 \
  --set bgpControlPlane.enabled=true \
  --set bgpControlPlane.defaultInstance.type=cluster \
  --set kubeProxyReplacement=true \
  --set operator.replicas=1 \
  --set gatewayAPI.enabled=true
```

### ✨ Why these flags?
- `bgpControlPlane.enabled`: Enables the built-in BGP peering 📡.
- `kubeProxyReplacement`: Uses eBPF instead of iptables for maximum performance ⚡.
- `gatewayAPI.enabled`: Enables support for the modern Kubernetes Gateway API 🚀.

### 🔍 Checking the status
```bash
cilium status
```

---

## 🚀 3. Envoy Gateway Installation
The **Envoy Gateway** serves as the next-generation ingress controller and implements the Gateway API.

```bash
helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.6.1 \
  -n envoy-gateway-system \
  --create-namespace
```
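
A quick check that the controller is up and running:

```bash
# The controller pods should reach the Running state
kubectl get pods -n envoy-gateway-system

# Once the GatewayClass (see section 04) is applied, it should show up as Accepted
kubectl get gatewayclass
```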

### 📊 Dashboard (optional)
You can open the Envoy Gateway dashboard locally to visualize the status of your gateways:
```bash
egctl experimental dashboard envoy-gateway
```

---

## 🏁 Summary
After these steps your cluster has:
1. An eBPF-based network (Cilium) 🛡️.
2. BGP capability for LoadBalancer IPs 📡.
3. A modern API gateway (Envoy) 🚀.

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -1,22 +0,0 @@
# Install the required tools
brew install cilium-cli kubernetes-cli helm egctl switcher cmctl

# !!! switch to the right cluster !!! I use switcher for this

# Install Cilium
cilium install \
--version 1.18.5 \
--set bgpControlPlane.enabled=true \
--set bgpControlPlane.defaultInstance.type=cluster \
--set kubeProxyReplacement=true \
--set operator.replicas=1 \
--set gatewayAPI.enabled=true

# Verify Cilium
cilium status

# Install Envoy
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.6.1 -n envoy-gateway-system --create-namespace

# Envoy Gateway dashboard
egctl experimental dashboard envoy-gateway
@@ -0,0 +1,45 @@
# 🌐 Envoy Gateway & BGP Preparation

This section describes the configuration of **BGP (Border Gateway Protocol)** via **Cilium**, enabling the cluster to announce virtual IP addresses (VIPs) in the local network.

## 📡 1. BGP Peering with UniFi
For the cluster to provide LoadBalancer IPs, it has to talk to the physical router (UniFi Dream Machine/Pro).

- **Local ASN:** `65200`
- **Remote ASN (router):** `65100`
- **Peer address:** `192.168.1.1`

### Components in `asus-bgp.yaml`:
1. **CiliumBGPAdvertisement:** Declares that LoadBalancer IPs are announced via BGP.
2. **CiliumBGPPeerConfig:** Configures the BGP parameters (IPv4 unicast, graceful restart).
3. **CiliumBGPClusterConfig:** Ties the instance `asus-pn51-e1` to the UniFi peer.

---

## 🏗️ 2. IP Address Pool
We define a dedicated range for our gateways so that they receive fixed IPs in the home network (see the sketch below):

- **Pool name:** `envoy-gateway-pool`
- **Range:** `192.168.201.240/28` (i.e. `.240` through `.255`)
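
Based on the pool definition that appears later in this diff, the resource looks roughly like this (a sketch; the API version is an assumption — older Cilium releases use `cilium.io/v2alpha1`, newer ones `cilium.io/v2`):

```bash
# Minimal IP pool manifest, matching the fields visible in asus-bgp.yaml
cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2alpha1   # assumption: cilium.io/v2 on newer Cilium releases
kind: CiliumLoadBalancerIPPool
metadata:
  name: envoy-gateway-pool
spec:
  blocks:
    - cidr: "192.168.201.240/28"
  serviceSelector:
    matchLabels: {}   # empty selector: the pool applies to all LoadBalancer services
EOF
```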

---

## 🚀 3. Gateway API Resources
In addition, the first resources for the **Envoy Gateway** are prepared:

1. **GatewayClass:** Defines `envoy-gateway-class` as the default controller.
2. **External Gateway:** Creates an initial gateway in the `default` namespace for ports 80 and 443.

---

## 🛠️ Applying
The configuration can simply be applied with:

```bash
kubectl apply -f asus-bgp.yaml
```

> **Note:** Make sure BGP is enabled on your UniFi router and that the ASNs match the configuration.
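
After applying, the new resources and the resulting session can be inspected (`cilium bgp routes` is available in recent cilium-cli releases):

```bash
# The BGP CRDs created by asus-bgp.yaml
kubectl get ciliumbgpclusterconfigs,ciliumbgppeerconfigs,ciliumbgpadvertisements

# Session state and the routes currently advertised to the UniFi peer
cilium bgp peers
cilium bgp routes advertised ipv4 unicast
```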

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -57,7 +57,7 @@ metadata:
  name: "envoy-gateway-pool"
spec:
  blocks:
-   - cidr: "192.168.200.240/28"
+   - cidr: "192.168.201.240/28"
  serviceSelector:
    matchLabels: {}
@@ -0,0 +1,80 @@
# 🛠️ Installation: Base Apps & Tools

This documentation describes the installation of the basic infrastructure components in the cluster.

## 🔑 1. Phase Secrets Operator
The operator enables the secure synchronization of secrets from the Phase console into the cluster.

```bash
# Add the repo & update
helm repo add phase https://helm.phase.dev && helm repo update

# Install the operator
helm install phase-secrets-operator phase/phase-kubernetes-operator --set image.tag=v1.3.0

# Create the service token for access (namespace: default)
kubectl create secret generic phase-service-token \
  --from-literal=token=<PHASE_SERVICE_TOKEN> \
  --type=Opaque \
  --namespace=default
```
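
A quick sanity check after the install (the `grep` filter is a convenience, not an exact label match):

```bash
# The Helm release should be in the "deployed" state
helm status phase-secrets-operator

# The operator pod and the service-token secret should both exist
kubectl get pods -n default | grep -i phase
kubectl get secret phase-service-token -n default
```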

---

## 🔐 2. Cert-Manager
Automatic certificate management with native support for the **Gateway API**.

```bash
# Add the repo
helm repo add jetstack https://charts.jetstack.io && helm repo update

# Install with Gateway API support
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true \
  --set "config.enableGatewayAPI=true"

# Apply the ClusterIssuer for Cloudflare
kubectl apply -f manifests/cloudflare-cluster-issuer.yaml
```
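
The referenced `cloudflare-cluster-issuer.yaml` is not shown in this diff. A minimal sketch of what such a ClusterIssuer typically looks like (the issuer name, e-mail, and token secret are assumptions):

```bash
# Hypothetical ClusterIssuer for the ACME DNS-01 challenge via Cloudflare.
# Assumes a secret "cloudflare-api-token" with the key "api-token" exists.
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com # placeholder
    privateKeySecretRef:
      name: cloudflare-issuer-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
EOF
```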

---

## 🌍 3. External DNS
Synchronizes Kubernetes resources (Services, Ingress, Gateways) with the DNS provider (UniFi).

```bash
# Add the repo
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/

# Install into a dedicated namespace
kubectl create ns external-dns
helm upgrade --install external-dns external-dns/external-dns \
  --namespace external-dns \
  --version 1.19.0 \
  -f external-dns-values.yaml
```
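
Whether records are actually being written is easiest to see in the controller logs (the deployment name is assumed to match the release name):

```bash
# Watch external-dns reconcile; look for record CREATE/UPDATE events
kubectl -n external-dns logs deploy/external-dns --tail=50 -f
```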

---

## 🚀 4. Envoy Gateway (Ingress & L7 Load Balancing)
The modern gateway for cluster traffic, based on the **Gateway API**.

- Replaces classic ingress controllers.
- Enables fine-grained control via `HTTPRoute` and `GRPCRoute`.
- Integrates with Cilium eBPF for maximum performance.

---

## 📊 Component Summary

| Tool | 📦 Purpose | 🌐 Namespace |
| :--- | :--- | :--- |
| **Phase** | Secret management | `default` (operator) |
| **Cert-Manager** | TLS certificates (ACME/Cloudflare) | `cert-manager` |
| **External-DNS** | DNS sync (UniFi) | `external-dns` |
| **Envoy Gateway** | Ingress & API gateway | `envoy-gateway-system` |

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -1,31 +0,0 @@
# Phase-Secrets-Operator

helm repo add phase https://helm.phase.dev && helm repo update

helm install phase-secrets-operator phase/phase-kubernetes-operator --set image.tag=v1.3.0

kubectl create secret generic phase-service-token \
--from-literal=token=<TOKEN> \
--type=Opaque \
--namespace=default

# Install cert-manager
# 1. Add the repository and update
helm repo add jetstack https://charts.jetstack.io && helm repo update

# 2. Install with Gateway API support
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true \
--set "config.enableGatewayAPI=true"

kubectl apply -f manifests

# Install External DNS

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/

kubectl create ns external-dns

helm upgrade --install external-dns external-dns/external-dns --namespace external-dns --version 1.19.0 -f external-dns-values.yaml
@@ -0,0 +1,65 @@
# 🚀 Argo CD Installation & SSO Configuration

This documentation describes the installation of Argo CD in the homelab, with a focus on SSO (Authentik) and automated secret management.

## 📦 1. Preparation & Installation

First, the repository has to be added and the base resources (namespace & gateway) created:

```bash
# Add the repo
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Prepare the groundwork (gateway & namespace)
kubectl apply -f argo-prepare.yaml

# Install via Helm
helm upgrade --install argocd argo/argo-cd --namespace argocd -f argo-values.yaml
```
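
Before SSO is wired up, you can log in with the built-in `admin` account; the initial password is stored in a secret:

```bash
# Read the generated initial admin password (user: admin)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```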

---

## 🔐 2. SSO Integration (Authentik)

Authentication is handled by **Authentik** via **OIDC**.

### Configuration in `argo-values.yaml`:
- **Dex** is preconfigured as the internal connector.
- The groups `ArgoCD Admins` and `ArgoCD Viewers` are taken over directly from Authentik.

### RBAC (permissions):
| Authentik Group | Argo CD Role | Description |
| :--- | :--- | :--- |
| `ArgoCD Admins` | `role:admin` | Full access to all cluster resources 👑 |
| `ArgoCD Viewers` | `role:readonly` | Read-only access to applications 👁️ |

---

## 🔑 3. Secret Management (Phase)

We use **Phase** (`secrets.phase.dev`) to securely synchronize sensitive data such as OIDC secrets and Git credentials into the cluster.

- The `PhaseSecret` operator watches the resource `argocd-phase-secret`.
- It automatically creates the Kubernetes secret `argocd-authentik-client-secret`.
- The variables `$AUTHENTIK_CLIENT_ID` and `$AUTHENTIK_CLIENT_SECRET` are injected into Dex at runtime.

---

## 🌐 4. Networking (Gateway API)

Reachability is handled by the **Envoy Gateway**:
- **Hostname:** `argocd.k8s.hnrx.net`
- **Infrastructure:** Uses an `HTTPRoute` for the web UI (port 80) and a `GRPCRoute` for CLI/API communication (port 443).
- **TLS:** Certificates are issued automatically via cert-manager (Cloudflare DNS-01).
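
Since CLI/API traffic runs over the `GRPCRoute`, the CLI login uses the same hostname (a sketch; `--grpc-web` is a common fallback if plain gRPC is blocked along the path):

```bash
# Log in via the gateway hostname; SSO users authenticate in the browser
argocd login argocd.k8s.hnrx.net --sso --grpc-web
```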

---

## ⚡ 5. Performance & Stability

To prevent crashes of the `repo-server`, explicit resource limits have been set:
- **Repo server:** 512Mi memory / 500m CPU
- **Controller:** 1Gi memory / 500m CPU

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -29,9 +29,9 @@ configs:
    dex.config: |
      connectors:
        - config:
-           issuer: ${AUTHENTIK_ISSUER_URL}
-           clientID: ${AUTHENTIK_CLIENT_ID}
-           clientSecret: ${AUTHENTIK_CLIENT_SECRET}
+           issuer: $AUTHENTIK_ISSUER_URL
+           clientID: $AUTHENTIK_CLIENT_ID
+           clientSecret: $AUTHENTIK_CLIENT_SECRET
            insecureEnableGroups: true
            scopes:
              - openid
@@ -49,13 +49,13 @@ configs:
      g, ArgoCD Viewers, role:readonly
  secret:
    extra:
-     dex.authentik.clientSecret: "${AUTHENTIK_CLIENT_SECRET}"
+     dex.authentik.clientSecret: $AUTHENTIK_CLIENT_SECRET
  cmp:
    credentialTemplates:
      https-creds:
        url: https://git.hnrx.net
-       username: ${GIT_USER}
-       password: ${GIT_PASSWORD}
+       username: $GIT_USER
+       password: $GIT_PASSWORD

dex:
@@ -93,8 +93,26 @@ server:
      - matches:
          - method:
              type: Exact
-             service: "cluster.argoproj.v1alpha1.repositorieservice"
+             service: "cluster.argoproj.v1alpha1.repositoryservice"
              method: "List"
        backendRefs:
          - name: argocd-server
            port: 443

repoServer:
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi

controller:
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 200m
      memory: 512Mi
@@ -1,5 +0,0 @@
# Argo CD installation with SSO via Authentik

helm repo add argo https://argoproj.github.io/argo-helm

helm upgrade --install argocd argo/argo-cd --namespace argocd -f argo-values.yaml
@@ -1,10 +0,0 @@
# Interim Status
The cluster already has the following installed:
- Cilium networking with BGP
- Envoy Gateway API
- Cert-Manager
- External DNS for records in the UniFi DNS server

## Installing further infrastructure components

These are managed with ArgoCD.
@@ -0,0 +1,68 @@
# 🎡 ArgoCD Bootstrapping (App-of-Apps)

This section describes the transition from manual installation to fully automated GitOps management of the entire cluster.

## 🏗️ 1. The Concept
We use the **bootstrap principle** to instruct ArgoCD to manage itself and all other applications. Once the bootstrap resources are applied, ArgoCD scans the repository and rolls out the defined stacks automatically.

### 📊 Architecture Overview
```mermaid
graph TD
    Root[ArgoCD Bootstrap] --> |Deploy| AS1[ApplicationSet: Cluster Infra]
    Root --> |Deploy| AS2[ApplicationSet: Homelab Apps]

    AS1 --> |Scans 08_cluster_infrastructure/*| AppInfra[Core Tools: Monitoring, VPA, etc.]
    AS2 --> |Scans 09_homelab_apps/*| AppHome[User Apps: Immich, Ghostfolio, etc.]

    subgraph "Git Repository"
        D8[Directory 08]
        D9[Directory 09]
    end

    AS1 -.-> D8
    AS2 -.-> D9
```

---

## 🛠️ 2. Bootstrap Components

### 📂 Projects (`argocd-project-*.yaml`)
Before applications can be created, the logical groupings (**projects**) must exist in ArgoCD:
- **cluster-infra:** For system-level tools.
- **homelab:** For user applications.

### 🤖 Automation (`ApplicationSets`)
We use `ApplicationSet` resources to create applications dynamically based on the directory structure in Git (a sketch follows below):
- **`argocd-apps.yaml`:** Picks up all folders in `08_cluster_infrastructure/`.
- **`homelab-apps.yaml`:** Picks up all folders in `09_homelab_apps/`.
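
A minimal sketch of what such an `ApplicationSet` with a Git directory generator can look like (the repo URL is taken from this repository's project definition; the template details are assumptions):

```bash
# Hypothetical shape of argocd-apps.yaml: one Application per folder in 08_cluster_infrastructure/
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-infra
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://git.hnrx.net/homelab/rke2-single-node.git
        revision: main
        directories:
          - path: 08_cluster_infrastructure/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: cluster-infra
      source:
        repoURL: https://git.hnrx.net/homelab/rke2-single-node.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
EOF
```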

### 🚀 Shared Gateways
This directory also defines the **central gateways** shared by all applications (a sketch follows below):
- `shared-gateway.yaml`: Internal traffic (`*.k8s.hnrx.net`).
- `shared-external-gateway.yaml`: External traffic (`*.hnrx.net`).
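
A sketch of what `shared-gateway.yaml` plausibly contains — an HTTPS listener with a wildcard hostname, terminating TLS with a cert-manager-managed certificate, and routes allowed from all namespaces (the listener and secret names are assumptions):

```bash
# Hypothetical shared Gateway for internal traffic (*.k8s.hnrx.net)
cat <<'EOF' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: default
spec:
  gatewayClassName: envoy-gateway-class
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.k8s.hnrx.net"
      tls:
        mode: Terminate
        certificateRefs:
          - name: k8s-hnrx-net-tls # assumed certificate secret
      allowedRoutes:
        namespaces:
          from: All # HTTPRoutes may attach from any namespace
EOF
```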

---

## 🚀 3. Execution (Bootstrapping)

To kick off the bootstrapping, all resources in this folder are applied to the cluster once, manually:

```bash
# 1. Create the projects
kubectl apply -f argocd-project-cluster-infra.yaml
kubectl apply -f argocd-project-homelab.yaml

# 2. Deploy the shared gateways
kubectl apply -f shared-gateway.yaml
kubectl apply -f shared-external-gateway.yaml

# 3. Activate the ApplicationSets (the "magic" moment)
kubectl apply -f argocd-apps.yaml
kubectl apply -f homelab-apps.yaml
```

From this point on, ArgoCD is in control. Every new directory in `08_` or `09_` is automatically detected and deployed as a new application. 🚀
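
Whether the handover worked is visible within a few seconds:

```bash
# The two generators plus the Applications they stamped out
kubectl -n argocd get applicationsets
kubectl -n argocd get applications
```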

---
*Last updated on 06 March 2026 by Gemini CLI*
@@ -12,6 +12,7 @@ spec:
  - https://git.hnrx.net/homelab/rke2-single-node.git
  - https://github.com/kubernetes/autoscaler.git
  - https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
+ - https://github.com/rancher/local-path-provisioner.git
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
@@ -0,0 +1,13 @@
# 🏗️ Cluster Infrastructure

This folder contains all central infrastructure components required to operate the cluster.

## 🤖 GitOps Automation
All subfolders in this directory are picked up automatically by the ArgoCD **ApplicationSet** `cluster-infra` and deployed into the cluster.

### Current components:
- **`nfs-csi`:** Enables the use of NFS storage for PersistentVolumes.
- **`vpa`:** Vertical Pod Autoscaler for automatically adjusting resource requests.

---
*Managed via ArgoCD GitOps*
@@ -0,0 +1,32 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: local-path-provisioner
  namespace: argocd
spec:
  project: cluster-infra
  destination:
    server: https://kubernetes.default.svc
    namespace: local-path-storage
  sources:
    - repoURL: 'https://github.com/rancher/local-path-provisioner.git'
      targetRevision: v0.0.36
      path: deploy/chart/local-path-provisioner
      helm:
        releaseName: local-path-provisioner
        values: |
          storageClass:
            name: local-path
            reclaimPolicy: Retain # safer: prevents deletion of DB data when the PVC is deleted
            defaultClass: false
          # This is where you define where the SSD lives on your Ryzen node
          nodePathMap:
            - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
              paths:
                - /opt/local-path-provisioner
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
@@ -8,7 +8,7 @@ spec:
  source:
    path: vertical-pod-autoscaler/charts/vertical-pod-autoscaler
    repoURL: https://github.com/kubernetes/autoscaler.git
-   targetRevision: vertical-pod-autoscaler-chart-0.8.0
+   targetRevision: master
    helm:
      releaseName: vertical-pod-autoscaler
      values: |
@@ -0,0 +1,13 @@
# 🏠 Homelab Applications

This folder hosts all user applications and self-hosted services of the homelab.

## 🤖 GitOps Automation
Every subfolder in this directory is detected automatically by the ArgoCD **ApplicationSet** `homelab-apps` and rolled out as a standalone Application in the cluster.

### Deployment structure:
- Applications are grouped in the `homelab` project by default.
- ArgoCD looks for `values.yaml` files in the respective directories.

---
*Managed via ArgoCD GitOps*
@@ -0,0 +1,88 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: forgejo
  namespace: argocd
spec:
  destination:
    namespace: forgejo
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  sources:
    - path: .
      repoURL: oci://code.forgejo.org/forgejo-helm/forgejo
      targetRevision: 17.0.1
      helm:
        values: |
          gitea:
            admin:
              username: 'mh-admin'
              password: 'start123'
              email: 'matthias@hnrx.de'
            oauth:
              - name: 'Authentik'
                provider: 'openidConnect'
                key: 'SFcyq6DYywWMcG2VxFjC6bAcsC5BSHmlUaDIvSan'
                secret: 'AZNCzbYDFExH8EUIBUOtRqv3MClA83N87TzQKJ2sAmNNdwbrU0pKVXJq4cOkxWugoG7AnizcAdlzl4n5FicIUWxPUvRRhkRchRqRoiimLg20KNqRjmll2SUoPsE0RhxK'
                autoDiscoverUrl: 'https://auth.hnrx.net/application/o/githnrxnet/.well-known/openid-configuration'
            config:
              actions:
                ENABLED: true
                LOG_RETENTION_DAYS: 365
                ARTIFACT_RETENTION_DAYS: 90
              api:
                MAX_RESPONSE_ITEMS: 100
              mailer:
                ENABLED: true
                SMTP_ADDR: 'smtp.gmail.com'
                SMTP_PORT: "465"
                FROM: 'matthias.hinrichs@gmail.com'
                USER: 'matthias.hinrichs'
                PASSWD: "kuid ogzo mnej hbvj"
                PROTOCOL: smtps
              migrations:
                ALLOWED_DOMAINS: "*.hnrx.net"
                ALLOW_LOCALNETWORKS: true
              openid:
                ENABLE_OPENID_SIGNIN: false
              database:
                DB_TYPE: postgres
                HOST: postgresql-test1-pgbouncer.everest-db.svc:5432
                NAME: postgres
                USER: postgres
                PASSWD: 3yZ64SU8sLqS-MijJh.aPJ59
                SSL_MODE: require
              picture:
                GRAVATAR_SOURCE: gravatar
              server:
                LANDING_PAGE: explore
                OFFLINE_MODE: false
              service:
                ENABLE_NOTIFY_MAIL: true
              service.explore:
                DISABLE_USERS_PAGE: false
                DISABLE_ORGANIZATIONS_PAGE: false
              webhook:
                ALLOWED_HOST_LIST: "*.hnrx.net"
                SKIP_TLS_VERIFY: true
                DELIVER_TIMEOUT: 30

          postgresql:
            enabled: false
          persistence:
            enabled: true
            storageClass: nfs-csi
            existingClaim: gitea-shared-storage
          httpRoute:
            enabled: true
            hostnames:
              - forgejo.k8s.hnrx.net
            parentRefs:
              - name: shared-gateway
                namespace: default

  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
@@ -0,0 +1,19 @@
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: everest-ui
  namespace: everest-system
spec:
  parentRefs:
    - name: shared-gateway # adjust to your gateway
      namespace: default
  hostnames:
    - "everest.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: everest
          port: 8080
@@ -13,7 +13,7 @@ spec:
  source:
    path: .
    repoURL: oci://ghcr.io/databasus/charts/databasus
-   targetRevision: 2.16.3
+   targetRevision: 3.32.0
    helm:
      values: |
        persistence:
@@ -0,0 +1,25 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dawarich
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: dawarich
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  source:
    path: .
    repoURL: https://git.hnrx.net/k8s/dawarich.git
    targetRevision: main
    directory:
      recurse: true
      exclude: 'renovate.json'
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
@@ -17,5 +17,6 @@ spec:
  syncPolicy:
    automated:
      selfHeal: true
+     prune: true
    syncOptions:
      - CreateNamespace=true
@@ -0,0 +1,25 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: immich-app
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: immich-app
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  source:
    path: .
    repoURL: https://git.hnrx.net/k8s/immich-app.git
    targetRevision: main
    directory:
      recurse: true
      exclude: 'renovate.json'
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
@@ -0,0 +1,56 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: keel
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: kube-system
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  source:
    repoURL: https://charts.keel.sh
    chart: keel
    targetRevision: 1.2.0
    helm:
      values: |
        image:
          tag: 0.21.1
        basicauth:
          enabled: true
          user: admin
          password: admin
        service:
          enabled: true
          type: LoadBalancer
          externalPort: 9300
          clusterIP: ""

  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: keel-route
  namespace: kube-system
spec:
  parentRefs:
    - name: shared-gateway # your Envoy gateway
      namespace: default
  hostnames:
    - "keel.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: keel # Keel service name
          port: 9300 # Keel listens on port 9300 internally
@@ -13,31 +13,80 @@ spec:
  source:
    path: .
    repoURL: oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack
-   targetRevision: 80.14.4
+   targetRevision: 84.1.0
    helm:
      values: |
        kubeProxy:
          enabled: false
        grafana:
          envValueFrom:
            GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET:
              secretKeyRef:
                name: kube-prometheus-secret
                key: GENERIC_OAUTH_CLIENT_SECRET
          grafana.ini:
            server:
              root_url: https://grafana.k8s.hnrx.net
            auth.generic_oauth:
              enabled: true
              name: "authentik"
              allow_sign_up: true
              auto_login: false # set to true if the standard login form should be skipped
              client_id: "4JtTfw2apna4ZnnXgPH6mnDfLCPoW6qy5fXiC03z"
              scopes: "openid profile email"
              auth_url: "https://auth.hnrx.net/application/o/authorize/"
              token_url: "https://auth.hnrx.net/application/o/token/"
              api_url: "https://auth.hnrx.net/application/o/userinfo/"

              role_attribute_path: "contains(groups, 'Grafana Admins') && 'Admin' || contains(groups, 'Grafana Editors') && 'Editor' || 'Viewer'"
          adminPassword: "DeinSicheresPasswort"
          sidecar:
            datasources:
              enabled: true
          additionalDataSources:
            - name: Loki
              type: loki
              access: proxy
              # Since Grafana and Loki share a namespace, the service name is enough
              url: http://loki.kube-prometheus-stack.svc.cluster.local:3100
              version: 1
              editable: true
              jsonData:
                # Raises the line count in the Explorer (useful for Traefik logs)
                maxLines: 1000
          dashboards:
            default: # name of the dashboard provider
              traefik-dashboard:
                gnetId: 11462 # the ID from grafana.com
                revision: 1 # optional: dashboard version
                datasource: Prometheus
              traefik-2-dashboard:
                gnetId: 17346 # the ID from grafana.com
                revision: 1 # optional: dashboard version
                datasource: Prometheus
          persistence:
            enabled: true
            size: 10Gi
            storageClassName: nfs-csi
          ingress:
            enabled: false
            hosts:
              - grafana.k8s.hnrx.net
            parentRefs:
              - name: shared-gateway
                namespace: default
          routes:
            enabled: true
            hostnames:
              - grafana.k8s.hnrx.net
            parentRefs:
              - name: shared-gateway
                namespace: default
            httpsRedirect: true
        prometheus:
          prometheusSpec:
            additionalScrapeConfigs:
              - job_name: 'crowdsec'
                static_configs:
                  - targets: ['192.168.200.21:6060']
              - job_name: "traefik-synology"
                metrics_path: /metrics
                static_configs:
                  - targets: ["192.168.200.20:8082"]
                # Optional: add labels so Traefik dashboards
                # can find the data more easily
                relabel_configs:
                  - target_label: job
                    replacement: traefik
                  - target_label: instance
                    replacement: synology-nas
            storageSpec:
              volumeClaimTemplate:
                spec:
@@ -60,3 +109,85 @@ spec:
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grafana-route
  namespace: kube-prometheus-stack
spec:
  parentRefs:
    - name: shared-gateway
      namespace: default
  hostnames:
    - "grafana.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: kube-prometheus-stack-grafana
          port: 80

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: prometheus-route
  namespace: kube-prometheus-stack
spec:
  parentRefs:
    - name: shared-gateway
      namespace: default
  hostnames:
    - "prometheus.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: prometheus-operated
          port: 9090

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: alertmanager-route
  namespace: kube-prometheus-stack
spec:
  parentRefs:
    - name: shared-gateway
      namespace: default
  hostnames:
    - "alertmanager.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: alertmanager-operated
          port: 9093
---
apiVersion: secrets.phase.dev/v1alpha1
kind: PhaseSecret
metadata:
  name: kube-prometheus-secret
  namespace: kube-prometheus-stack
spec:
  phaseApp: 'prometheus-stack' # The name of your Phase application
  phaseAppEnv: 'production' # OPTIONAL - The Phase App Environment to fetch secrets from
  phaseAppEnvPath: '/' # OPTIONAL - Path within the Phase application environment to fetch secrets from
  phaseHost: 'https://phase.hnrx.net' # OPTIONAL - URL of a Phase Console instance
  authentication:
    serviceToken:
      serviceTokenSecretReference:
        secretName: 'phase-service-token' # Name of the Phase Service Token with access to your application
        secretNamespace: 'default'
  managedSecretReferences:
    - secretName: 'kube-prometheus-secret' # Name of the Kubernetes managed secret that Phase will sync
      secretNamespace: 'kube-prometheus-stack'
@@ -0,0 +1,99 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loki
  namespace: argocd
spec:
  project: homelab
  source:
    repoURL: https://grafana.github.io/helm-charts
    chart: loki
    targetRevision: 6.52.0
    helm:
      values: |
        deploymentMode: SingleBinary

        # These sections must explicitly be set to 0
        # so they do not block SingleBinary mode
        backend:
          replicas: 0
        read:
          replicas: 0
        write:
          replicas: 0

        loki:
          auth_enabled: false
          commonConfig:
            replication_factor: 1
          storage:
            type: 'filesystem'
          schemaConfig:
            configs:
              - from: "2024-01-01"
                store: tsdb
                object_store: filesystem
                schema: v13
                index:
                  prefix: index_
                  period: 24h
          limits_config:
            # Allows higher peaks when sending (burst)
            ingestion_burst_size_mb: 20
            # Raises the sustained rate (the default is often 4MB)
            ingestion_rate_mb: 10
            # Allows larger log payloads
            max_line_size: 256kb
            # Prevents errors for "old" logs from the backlog
            reject_old_samples: true
            reject_old_samples_max_age: 168h
            # Enables time-based deletion
            retention_period: 30d # keep logs for 30 days
          compactor:
            working_directory: /var/loki/compactor
            # The compactor physically deletes the data from the NFS
            retention_enabled: true
            delete_request_cancel_period: 24h
            delete_request_store: filesystem

        singleBinary:
          replicas: 1
          persistence:
            enabled: true
            size: 10Gi
            storageClass: nfs-csi
        gateway:
          enabled: false
        # Resource optimization for the homelab
        resultsCache:
          enabled: false
        chunksCache:
          enabled: false
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-prometheus-stack
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: loki-ingest-route
  namespace: kube-prometheus-stack
spec:
  parentRefs:
    - name: shared-gateway
      namespace: default
  hostnames:
    - "loki.k8s.hnrx.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: loki
          port: 3100
@@ -0,0 +1,82 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: percona-everest
  namespace: argocd
spec:
  destination:
    namespace: everest-system
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  source:
    chart: everest
    repoURL: https://percona.github.io/percona-helm-charts/
    targetRevision: 1.13.0
    helm:
      parameters:
        - name: dbNamespace.enabled
          value: "false"
        - name: upgrade.preflightChecks
          value: "false"

  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - RespectIgnoreDifferences=true
      # To prevent issues with synchronising some CRDs.
      - ServerSideApply=true

  ignoreDifferences:
    # If `server.jwtKey` is not set, the chart will generate a random key.
    # As a result, the Secret will always be out of sync, since ArgoCD will
    # rerender it on each sync.
    - group: ""
      jsonPointers:
        - /data
      kind: Secret
      name: everest-jwt
      namespace: everest-system
    # If `server.initialAdminPassword` is not set, the chart will generate a random password.
    # As a result, the Secret will always be out of sync, since ArgoCD will
    # rerender it on each sync. Moreover, this Secret may be managed externally, for example, using `everestctl`.
    - group: ""
      jsonPointers:
        - /data
      kind: Secret
      name: everest-accounts
      namespace: everest-system
    # If OLM is deployed without cert-manager, the below TLS certificates are randomly generated.
    # As a result, the Secret will always be out of sync, since ArgoCD will
    # rerender it on each sync.
    - group: ""
      jsonPointers:
        - /data
      kind: Secret
      name: packageserver-service-cert
      namespace: everest-olm
    - group: apiregistration.k8s.io
      jqPathExpressions:
        - .spec.caBundle
        - .metadata.annotations
      kind: APIService
      name: v1.packages.operators.coreos.com
    # If `operator.webhook.certs` are not set explicitly, the chart will generate random certificates.
    # As a result, the TLS Secret and Mutating/Validating webhook configurations (caBundle) will always appear out of sync.
    - group: ""
      jsonPointers:
        - /data
      kind: Secret
      name: webhook-server-cert
      namespace: everest-system
    - group: admissionregistration.k8s.io
      jqPathExpressions:
        - .webhooks[].clientConfig.caBundle
      kind: MutatingWebhookConfiguration
      name: everest-operator-mutating-webhook-configuration
    - group: admissionregistration.k8s.io
      jqPathExpressions:
        - .webhooks[].clientConfig.caBundle
      kind: ValidatingWebhookConfiguration
      name: everest-operator-validating-webhook-configuration
@@ -15,9 +15,9 @@ spec:
    server: {{ $.Values.spec.destination.server }}
  project: homelab
  source:
-   repoURL: https://releases.rancher.com/server-charts/stable
+   repoURL: https://releases.rancher.com/server-charts/latest
    chart: rancher
-   targetRevision: v2.13.1
+   targetRevision: v2.14.1
    helm:
      values: |
        hostname: rancher.k8s.hnrx.net
@@ -1,7 +1,84 @@
- # RKE2 Single-Node-Cluster for Homelab
+ # ⚓ RKE2 Single-Node Cluster | Homelab

- This is how I set up my RKE2 single node cluster for my homelab.
+ This repository contains the complete infrastructure configuration (Infrastructure as Code) for my RKE2-based single-node Kubernetes cluster on an **ASUS PN51**.

- ## External dependencies
- - Authentik instance for SSO
- - Phase Secrets Manager to keep all secrets in a safe space
+ ## 🚀 Overview
+ The goal of this project is a highly automated cluster following the **"set and forget"** principle. From IP announcement via BGP to automatic certificate issuance, everything is mapped in GitOps workflows.

### 📊 System Architecture
```mermaid
graph TD
    subgraph "External World / Home Network"
        Router[UniFi Dream Machine]
        Auth[Authentik SSO]
        Vault[Phase Secrets]
    end

    subgraph "ASUS PN51 Node (openSUSE Leap)"
        RKE2[⚓ RKE2 Engine]
        Cilium[🛡️ Cilium eBPF & BGP]
        EG[🚀 Envoy Gateway API]
        Argo[🎡 ArgoCD GitOps]
    end

    Router <-->|BGP Peering| Cilium
    Cilium --> EG
    EG -->|Traffic Control| Apps[Immich, n8n, etc.]
    Argo -->|Sync| RKE2
    Vault -->|Inject Secrets| Argo
    Auth -->|OIDC| Argo
```

---

## 🛠️ Tech Stack
| Layer | Component | Description |
| :--- | :--- | :--- |
| **OS** | openSUSE Leap 16.0 | Stable foundation for the ASUS PN51 |
| **K8S** | RKE2 (v1.34.4) | Security-focused Kubernetes distribution |
| **Network** | Cilium (eBPF) | High-performance networking & BGP |
| **Gateway** | Envoy Gateway | Modern Gateway API instead of classic Ingress |
| **GitOps** | Argo CD | Fully automated application delivery |
| **Secrets** | Phase | Secure injection of environment variables |
| **DNS** | External-DNS | Syncs hostnames with UniFi & Cloudflare |
| **Certs** | Cert-Manager | TLS via Let's Encrypt (DNS-01 challenge) |

---

## 📂 Repository Structure
The configuration is laid out chronologically, following the installation order:

- `01_unifi_gateway_setup/`: Router preparation (BGP config).
- `02_rke2_installation/`: Base Kubernetes installation on the node.
- `03_netzwerk_und_gateway/`: Deployment of Cilium & the Envoy Gateway controller.
- `04_envoy_gateway_preparation/`: BGP IP pools and the initial shared gateways.
- `05_base_apps_and_tools/`: Helpers such as Cert-Manager, External-DNS & Phase.
- `06_argocd_installation/`: Installation of Argo CD including SSO integration.
- `07_bootstrap_argocd/`: The "app-of-apps" bootstrap for the automation.
- `08_cluster_infrastructure/`: Core services (NFS storage, VPA, etc.).
- `09_homelab_apps/`: User applications (Immich, Ghostfolio, n8n, etc.).

---

## 🏗️ Getting Started
To set up the cluster from scratch, follow the detailed `installation_instructions.md` in folders `01` through `07`.

From step `07` onwards, ArgoCD takes over the management:
1. **ApplicationSets** scan the folders `08` and `09`.
2. Every new application automatically gets:
   - A LoadBalancer VIP (via BGP).
   - A DNS record (via External-DNS).
   - A TLS certificate (via Cert-Manager).
   - SSO protection (via Authentik/OIDC).

---

## 🔑 External Dependencies
The following services are required for the full feature set:
- **UniFi Router:** For BGP peering and local DNS.
- **Authentik:** Central identity provider (SSO).
- **Phase.dev:** Cloud console for secret management.
- **Cloudflare:** DNS provider for the ACME DNS-01 challenge.

---
*Documentation & configuration maintained by Gemini CLI | As of: 06 March 2026*
@@ -0,0 +1,23 @@
{
  "extends": [
    "config:recommended"
  ],
  "packageRules": [
    {
      "matchDatasources": [
        "docker",
        "kubernetes-api",
        "kubernetes"
      ]
    },
    {
      "matchUpdateTypes": [
        "minor",
        "patch",
        "pin",
        "digest"
      ],
      "automerge": true
    }
  ]
}