# Multi-Homing Guide
Network segmentation for TrueNAS-hosted Kubernetes clusters using multiple NICs. This guide walks through a common pattern: separating internal cluster traffic from a DMZ-facing ingress using Traefik.
## Why Multi-Home?
A single-NIC cluster puts all traffic on one network — cluster control plane, inter-node communication, and external-facing services share the same subnet. Multi-homing lets you:
- Isolate ingress traffic — DMZ-facing services (Traefik, Nginx) get a dedicated NIC on a separate subnet
- Separate storage traffic — iSCSI/NFS traffic on a dedicated high-throughput network
- Enforce network policies — firewall rules per subnet at the router level
- Reduce blast radius — a compromised ingress pod can't reach internal cluster IPs directly
## Architecture
```mermaid
graph TB
    subgraph Internet
        client[External Client]
    end
    subgraph "Router / Firewall"
        fw[pfSense / OPNsense / UniFi]
    end
    subgraph "TrueNAS SCALE"
        subgraph "Internal Network (vlan100 — 192.168.100.0/24)"
            cp1[Control Plane 1<br/>192.168.100.50]
            cp2[Control Plane 2<br/>192.168.100.51]
            cp3[Control Plane 3<br/>192.168.100.52]
            w1[Worker 1<br/>192.168.100.60]
            w2[Worker 2<br/>192.168.100.61]
        end
        subgraph "DMZ Network (vlan200 — 10.0.200.0/24)"
            w1dmz[Worker 1<br/>10.0.200.60]
            w2dmz[Worker 2<br/>10.0.200.61]
        end
    end
    client --> fw
    fw --> w1dmz
    fw --> w2dmz
    fw --> cp1
    fw --> cp2
    fw --> cp3
    w1 -.- w1dmz
    w2 -.- w2dmz
```
- vlan100 (Internal) — etcd, kubelet, inter-node communication, Omni SideroLink. Not routable from the internet.
- vlan200 (DMZ) — Traefik ingress only. Firewall allows inbound HTTP/HTTPS from the internet, nothing else.
- Workers have NICs on both networks. Control planes only need the internal network.
## MachineClass Configuration
### Control Planes (internal only)
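Control planes stay on the internal network only, so their MachineClass needs no `additional_nics` block. A sketch mirroring the worker class below (the sizing values are illustrative):

```yaml
cpus: 4
memory: 8192
disk_size: 100
pool: default
network_interface: vlan100
advertised_subnets: "192.168.100.0/24"
# No additional_nics — control planes are internal-only
```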
### Workers (internal + DMZ)
```yaml
cpus: 4
memory: 8192
disk_size: 100
pool: default
network_interface: vlan100
advertised_subnets: "192.168.100.0/24"
additional_nics:
  - network_interface: vlan200
    type: VIRTIO
```
The `advertised_subnets` field pins etcd and kubelet to the internal network (vlan100). Without it, Kubernetes might try to use the DMZ interface for cluster communication, which would fail or create a split-brain.
Note: If you omit `advertised_subnets`, the provider auto-detects the primary NIC's subnet from TrueNAS and applies it automatically.
## DHCP Reservations
Set up static DHCP leases in your router for predictable IPs. All NICs use deterministic MAC addresses derived from the machine request ID, so DHCP reservations survive reprovision. The provider logs each NIC's MAC address at creation:
```
VM NIC MAC address (deterministic) — stable across reprovision for DHCP reservations
mac=02:ab:cd:xx:xx:xx vm_name=omni_talos_worker_1 network_interface=vlan100 role=primary
attached additional NIC network_interface=vlan200 mac=02:ef:01:yy:yy:yy vm_name=omni_talos_worker_1
```
| VM | vlan100 (Internal) | vlan200 (DMZ) |
|---|---|---|
| cp-1 | 192.168.100.50 | — |
| cp-2 | 192.168.100.51 | — |
| cp-3 | 192.168.100.52 | — |
| worker-1 | 192.168.100.60 | 10.0.200.60 |
| worker-2 | 192.168.100.61 | 10.0.200.61 |
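If your router happens to run dnsmasq, reservations like the table above can be sketched as `dhcp-host` entries (the MACs here are placeholders; substitute the values the provider logs at creation):

```
# Placeholder MACs — use the deterministic MACs from the provider log
dhcp-host=02:ab:cd:00:00:01,cp-1,192.168.100.50
dhcp-host=02:ab:cd:00:00:04,worker-1,192.168.100.60
dhcp-host=02:ef:01:00:00:04,worker-1-dmz,10.0.200.60
```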
## Firewall Rules
On your router/firewall, create rules for the DMZ subnet:
| Direction | Source | Destination | Ports | Action |
|---|---|---|---|---|
| Inbound | Any | 10.0.200.0/24 | 80, 443 | Allow |
| Inbound | Any | 10.0.200.0/24 | * | Deny |
| Outbound | 10.0.200.0/24 | 192.168.100.0/24 | * | Deny |
| Outbound | 10.0.200.0/24 | Any | 80, 443 | Allow |
This ensures:

- External traffic can reach Traefik on ports 80/443
- DMZ cannot initiate connections to the internal network
- DMZ can make outbound HTTPS calls (for webhooks, API calls, etc.)
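On a Linux-based firewall, the rule table above could be expressed in nftables roughly as follows. This is a sketch, not a drop-in ruleset; pfSense, OPNsense, and UniFi users create the equivalent rules in the GUI instead:

```
table inet dmz_policy {
  chain forward {
    type filter hook forward priority filter; policy drop;

    # Return traffic for already-established flows
    ct state established,related accept

    # Inbound: HTTP/HTTPS into the DMZ; everything else hits the drop policy
    ip daddr 10.0.200.0/24 tcp dport { 80, 443 } accept

    # DMZ may never initiate connections to the internal network
    ip saddr 10.0.200.0/24 ip daddr 192.168.100.0/24 drop

    # DMZ outbound HTTP/HTTPS (webhooks, API calls)
    ip saddr 10.0.200.0/24 tcp dport { 80, 443 } accept
  }
}
```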
## Traefik Deployment
Deploy Traefik as a DaemonSet on worker nodes, bound to the DMZ interface. This uses MetalLB to assign a LoadBalancer IP from the DMZ subnet.
### MetalLB Configuration
First, configure MetalLB with an IP pool from the DMZ subnet. Apply this as an Omni cluster config patch or directly via kubectl:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dmz-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.200.100-10.0.200.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: dmz-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - dmz-pool
  interfaces:
    - eth1 # The DMZ interface inside Talos
```
Finding the interface name: Talos names interfaces `eth0`, `eth1`, etc. in the order they appear. The primary NIC (vlan100) is `eth0`; the DMZ NIC (vlan200) is `eth1`. Verify with `talosctl get addresses` on a worker node.
### Traefik Helm Values
```yaml
deployment:
  kind: DaemonSet
nodeSelector:
  # Only schedule on workers (which have the DMZ NIC)
  node-role.kubernetes.io/worker: ""
service:
  type: LoadBalancer
  annotations:
    metallb.universe.tf/address-pool: dmz-pool
ports:
  web:
    port: 8000
    exposedPort: 80
    protocol: TCP
  websecure:
    port: 8443
    exposedPort: 443
    protocol: TCP
# Traefik binds to all interfaces by default.
# Incoming traffic arrives via MetalLB on the DMZ IP.
# Internal services are reached via the internal network.
```
Install with Helm:
```bash
helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik -n traefik --create-namespace -f traefik-values.yaml
```
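Once Traefik is running, individual services can be exposed through its IngressRoute CRD. A minimal sketch, assuming a hypothetical `whoami` Service and a placeholder hostname (the apiVersion is `traefik.io/v1alpha1` on Traefik v3; older v2 charts use `traefik.containo.us/v1alpha1`):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami            # hypothetical backend Service
  namespace: default
spec:
  entryPoints:
    - websecure           # matches the websecure port in the Helm values
  routes:
    - match: Host(`app.example.com`)   # placeholder hostname
      kind: Rule
      services:
        - name: whoami
          port: 80
```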
## DNS Configuration
Point your public DNS at your router's WAN IP. On the router, NAT/port-forward ports 80 and 443 from the WAN IP to 10.0.200.100 (the MetalLB DMZ IP).
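As a concrete illustration, assuming the placeholder hostname `app.example.com` and a WAN address of `203.0.113.10` (substitute your own):

```
; Public zone: the A record targets the router's WAN IP,
; which NATs ports 80/443 through to the MetalLB DMZ IP
app.example.com.  3600  IN  A  203.0.113.10
```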
## Verifying the Setup
1. Check NICs inside Talos, e.g. with `talosctl -n 192.168.100.60 get addresses`. You should see IPs on both `eth0` (internal) and `eth1` (DMZ).
2. Check that etcd is on the internal network, e.g. with `talosctl -n 192.168.100.50 etcd members`. All etcd peer URLs should use `192.168.100.x` addresses, not `10.0.200.x`.
3. Check that Traefik is listening on the DMZ with `kubectl get svc -n traefik`. The `EXTERNAL-IP` should be from the DMZ pool (e.g., `10.0.200.100`).
4. Test end-to-end, e.g. `curl -v http://10.0.200.100` from outside the DMZ, or via your public hostname once DNS is in place.
## Variations
### Storage Network (3 NICs)
Add a third NIC for dedicated NFS/iSCSI storage traffic:
```yaml
network_interface: vlan100
advertised_subnets: "192.168.100.0/24"
additional_nics:
  - network_interface: vlan200 # DMZ
    type: VIRTIO
  - network_interface: vlan300 # Storage (MTU 9000 on switch)
    type: VIRTIO
```
### Dual-Stack (IPv4 + IPv6)
If your internal network is dual-stack, include both internal CIDRs in `advertised_subnets`:
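A sketch, assuming a hypothetical ULA prefix `fd00:100::/64` on the internal network and that the provider accepts a comma-separated list of CIDRs:

```yaml
network_interface: vlan100
# Hypothetical IPv6 prefix; assumes comma-separated CIDRs are accepted
advertised_subnets: "192.168.100.0/24,fd00:100::/64"
```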
This pins etcd and kubelet to the internal network for both address families.