Networking Guide¶
Network configuration for TrueNAS-hosted Kubernetes clusters — bridge setup, DHCP reservations, load balancer IPs, VIP, and router-specific guides.
Architecture Overview¶
```
┌─────────────┐      ┌─────────────────────────────────┐
│   Router    │      │          TrueNAS SCALE          │
│  (UniFi /   │      │                                 │
│  pfSense /  │◄─────┤ br100 (VLAN 100)                │
│  OPNsense)  │      │  ├─ omni_cp_1     (DHCP .50)    │
│             │      │  ├─ omni_cp_2     (DHCP .51)    │
│ DHCP Server │      │  ├─ omni_cp_3     (DHCP .52)    │
│ Gateway .1  │      │  ├─ omni_worker_1 (DHCP .60)    │
│             │      │  └─ omni_worker_2 (DHCP .61)    │
└─────────────┘      └─────────────────────────────────┘

VIP:     192.168.100.254     (Talos, floats between CP nodes)
MetalLB: 192.168.100.201-250 (advertised via L2 ARP)
DHCP:    192.168.100.50-200  (managed by router)
```
All VMs share a single Layer 2 broadcast domain via the TrueNAS bridge. The router provides DHCP and acts as the default gateway. MetalLB and VIP operate at Layer 2 using gratuitous ARP — no special router configuration needed.
TrueNAS Bridge Setup¶
Option A: Bridge (Recommended)¶
A bridge groups one or more physical NICs into a virtual switch. VMs connect to this bridge.
- TrueNAS UI > Network > Interfaces > Add
- Type: Bridge
- Bridge Members: select your physical NIC (e.g., `enp5s0`)
- If using VLANs: the physical NIC should be a trunk port carrying your VLAN tags

Set `DEFAULT_NETWORK_INTERFACE=br0` (or `br100` for a VLAN-tagged bridge) on the provider.
Option B: VLAN Interface¶
If your physical NIC is a trunk port and you want VMs on a specific VLAN without a bridge:
- TrueNAS UI > Network > Interfaces > Add
- Type: VLAN
- Parent Interface: your physical NIC
- VLAN Tag: `100`

Set `DEFAULT_NETWORK_INTERFACE=vlan100` on the provider.
Option C: Physical NIC¶
Pass a physical NIC directly. Only one VM can use it (or use macvtap). Not recommended for multi-VM clusters.
IP Address Planning¶
Plan your subnet before deploying. Every range must be non-overlapping.
Recommended /24 Layout¶
| Range | Count | Purpose | Managed By |
|---|---|---|---|
| `.1` | 1 | Gateway | Router |
| `.2-.49` | 48 | Infrastructure (NAS, switches, APs) | DHCP reservations |
| `.50-.200` | 151 | DHCP pool (VMs, devices) | Router DHCP server |
| `.201-.250` | 50 | MetalLB / LoadBalancer Services | MetalLB L2 |
| `.251-.253` | 3 | Reserved (future use) | — |
| `.254` | 1 | Kubernetes API VIP | Talos VIP |
| `.255` | 1 | Broadcast | — |
Critical: Your router's DHCP range must stop at `.200` (or wherever you choose) to leave room above it for MetalLB and the VIP. If the DHCP range extends to `.254`, there's nowhere to put load balancer IPs without conflicts. Configure your DHCP server to end its range before the MetalLB block starts.

The MetalLB range (`.201-.250`) is how your Kubernetes Services get externally-accessible IPs on your LAN. When you create a `LoadBalancer` Service, MetalLB assigns an IP from this pool and announces it via ARP. Any device on the same VLAN can reach it — this is how you expose Ingress, dashboards, and applications to your home network.
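As a sketch, a `LoadBalancer` Service that draws from this pool looks like the following (the Service name and selector are placeholders; the `metallb.io/loadBalancerIPs` annotation is optional and only needed to pin a specific pool IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress          # placeholder name
  annotations:
    # Optional: pin a specific IP from the pool instead of auto-assignment
    metallb.io/loadBalancerIPs: 192.168.100.201
spec:
  type: LoadBalancer
  selector:
    app: my-ingress         # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```

Without the annotation, MetalLB simply picks the next free IP from the configured pool.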
Smaller /25 or /26 Networks¶
Adjust proportionally. The key constraint is: DHCP range + MetalLB range + VIPs must all fit within the subnet with no overlap.
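For example, one workable split of a /26 (192.168.100.0/26, usable hosts .1-.62); the exact boundaries are illustrative:

```
.1        Gateway (router)
.2-.9     Infrastructure reservations
.10-.40   DHCP pool (router DHCP server)
.41-.55   MetalLB pool
.56-.61   Reserved
.62       Kubernetes API VIP (Talos)
```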
DHCP Reservations (Stable VM IPs)¶
VMs use DHCP by default. For stable IPs, create DHCP reservations on your router — the router assigns a fixed IP based on the VM's MAC address. The VM itself still uses DHCP; it doesn't need static network config.
This is preferred over static IP configuration because:

- The router is the single source of truth for IP assignments
- No Talos machine config patches needed
- Works with any DHCP server
- Easy to change IPs without reprovisioning VMs
Finding the MAC Address¶
The provider logs each VM's MAC address during provisioning. Since v0.13.0, the primary NIC uses a deterministic MAC derived from the machine request ID — this MAC is stable across reprovisions, so DHCP reservations survive:
```
VM NIC MAC address (deterministic) — stable across reprovision for DHCP reservations
mac=02:ab:cd:ef:01:23 vm_name=omni_cluster_workers_abc123 network_interface=br100 role=primary
```
You can also find it in TrueNAS UI: Virtualization > click VM > Devices > NIC.
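The exact derivation is internal to the provider, but the general technique is simple enough to sketch: hash the stable ID and lead with a locally-administered octet. This is a minimal illustration, not the provider's actual algorithm:

```python
import hashlib

def deterministic_mac(machine_request_id: str) -> str:
    """Derive a stable, locally-administered unicast MAC from a stable ID.

    The first octet 0x02 sets the locally-administered bit and clears the
    multicast bit, so the result cannot collide with vendor-assigned OUIs.
    """
    digest = hashlib.sha256(machine_request_id.encode()).digest()
    # 0x02 prefix + first 5 digest bytes -> 6-octet MAC
    return ":".join(f"{b:02x}" for b in (0x02, *digest[:5]))

# The same ID always maps to the same MAC, which is why DHCP
# reservations keyed on the MAC survive a reprovision.
print(deterministic_mac("abc123"))
```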
UniFi Controller¶
- Network Application > Client Devices — find the VM by MAC or current IP
- Click the client > Settings (gear icon)
- Enable Fixed IP Address
- Enter the desired IP (must be within your DHCP range, e.g., `.50-.200`)
- The reservation takes effect on the VM's next DHCP renewal (reboot or wait for lease expiry)
UniFi Note: Fixed IP assignments are DHCP reservations, not true static IPs. The VM still uses DHCP — UniFi just always gives it the same address.
pfSense¶
- Services > DHCP Server > select the VLAN interface
- Scroll to DHCP Static Mappings
- Click Add
- Enter MAC address and desired IP
- Save and Apply Changes
OPNsense¶
- Services > ISC DHCPv4 > select the interface
- Scroll to DHCP Static Mappings
- Click Add (+)
- Enter MAC address, IP, and hostname
- Save and Apply
dnsmasq / Pi-hole¶
Add to /etc/dnsmasq.d/dhcp-reservations.conf:
```
dhcp-host=00:a0:98:18:c4:af,192.168.100.51,omni-cp-1
dhcp-host=00:a0:98:22:b3:de,192.168.100.52,omni-cp-2
dhcp-host=00:a0:98:33:f1:ab,192.168.100.60,omni-worker-1
```

Restart dnsmasq: `sudo systemctl restart dnsmasq`
ISC DHCP Server¶
Add to /etc/dhcp/dhcpd.conf inside the subnet block:
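For example (MACs and IPs borrowed from the dnsmasq example above; adjust to your own reservations):

```
host omni-cp-1 {
  hardware ethernet 00:a0:98:18:c4:af;
  fixed-address 192.168.100.51;
}

host omni-worker-1 {
  hardware ethernet 00:a0:98:33:f1:ab;
  fixed-address 192.168.100.60;
}
```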
Restart: `sudo systemctl restart isc-dhcp-server`
MetalLB (LoadBalancer Services)¶
MetalLB provides LoadBalancer type Services in bare-metal Kubernetes clusters. In Layer 2 mode, it responds to ARP requests for Service IPs — no BGP or special router config needed.
Why IPs Must Be Outside DHCP Range¶
MetalLB uses gratuitous ARP to announce Service IPs. If a MetalLB IP overlaps with the DHCP range:
- Router hands `.205` to a phone via DHCP
- MetalLB also claims `.205` for an Ingress Service
- Both devices respond to ARP for `.205`
- Traffic randomly goes to the phone or the Service
- Intermittent failures, impossible to debug without packet captures

Solution: Reserve a block outside DHCP for MetalLB. In the recommended layout: `.201-.250`.
Installation¶
```sh
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```
Configuration¶
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.201-192.168.100.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
```
Important: The `L2Advertisement` resource is required. Without it, MetalLB allocates IPs but doesn't announce them — Services get an External IP but it's unreachable.
Verification¶
```sh
# Check MetalLB assigned an IP
kubectl get svc -A | grep LoadBalancer

# From another device on the same VLAN, verify ARP
arping -I eth0 192.168.100.201
# Should see replies from the node running the MetalLB speaker
```
Control Plane VIP (Kubernetes API HA)¶
Talos supports a shared virtual IP for the Kubernetes API server. One control plane node holds the VIP at a time; on failure, another takes over via etcd leader election (~1 minute failover).
How It Works¶
- Talos runs a VIP manager on each control plane node
- The etcd leader sends gratuitous ARP for the VIP address
- All traffic to the VIP hits the current leader
- If the leader fails, a new etcd leader is elected and takes over the VIP
- Clients experience ~1 minute interruption during failover
Setup¶
Apply as an Omni config patch on the control plane machine set (not per-machine — all CP nodes need the same VIP config):
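A minimal patch, following the shape in the Talos VIP documentation (the fixed interface name `eth0` is an assumption; match it to your actual primary NIC, or select by MAC with a `deviceSelector`):

```yaml
machine:
  network:
    interfaces:
      - interface: eth0   # adjust to the actual primary interface name
        dhcp: true
        vip:
          ip: 192.168.100.254
```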
Requirements¶
- All control plane nodes on the same Layer 2 network (same bridge/VLAN)
- VIP outside DHCP range and MetalLB pool
- Minimum 3 control plane nodes for HA (1 node works but has no failover)
- VIP is unavailable during initial bootstrap until etcd is running
- Use for Kubernetes API only — Talos API should be accessed per-node
Using the VIP¶
After cluster creation, your kubeconfig endpoint should point to the VIP:
```sh
# Omni typically sets this automatically. To verify:
kubectl config view | grep server
# Should show: https://192.168.100.254:6443
```
See Talos VIP documentation for advanced options.
UniFi-Specific Guide¶
Dedicated Kubernetes VLAN (Recommended)¶
Do not use UniFi Auto-Scale Network for Kubernetes VMs. Create a dedicated VLAN with a fixed subnet.
Step-by-Step¶
1. Create the network
    - UniFi Console > Settings > Networks > Create New Network
    - Name: `Kubernetes` (or `K8s-Cluster`)
    - Router: your UDM/USG
    - VLAN ID: `100` (or any unused ID)
    - Gateway/Subnet: `192.168.100.1/24`
    - DHCP Range: `192.168.100.50` to `192.168.100.200` (stop at .200 — leave .201+ for MetalLB/VIP)
    - DHCP DNS: your DNS servers (or leave default)
2. Configure the switch port for TrueNAS
    - UniFi Console > Devices > your switch > Ports
    - Find the port connected to TrueNAS
    - Port Profile: either `All` (trunk all VLANs) or create a custom profile that includes VLAN 100
    - If TrueNAS has multiple NICs, you can dedicate one to this VLAN
3. Create the bridge on TrueNAS
    - TrueNAS UI > Network > Interfaces > Add
    - Type: Bridge, or VLAN if you want a tagged interface
    - For VLAN: Parent = physical NIC, VLAN Tag = 100
4. Set the provider config
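For the VLAN-100 bridge created above, the provider setting from the earlier bridge section would be, e.g.:

```
DEFAULT_NETWORK_INTERFACE=br100
```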
Why Not Auto-Scale Network?¶
UniFi Auto-Scale dynamically creates VLANs and subnets. Problems for Kubernetes:
| Issue | Impact |
|---|---|
| Unpredictable subnets | Can't pre-configure MetalLB IP pool |
| DHCP range not configurable | Can't carve out static ranges for LB |
| Subnet may change | MetalLB config becomes invalid |
| No per-VLAN DHCP customization | Can't set lease times or DNS per-VLAN |
Auto-Scale is designed for transient consumer devices (phones, IoT, guests). Kubernetes clusters need stable, predictable networking.
Recommendation: Auto-Scale for everything else, dedicated fixed VLAN for Kubernetes.
pfSense / OPNsense Guide¶
VLAN Setup¶
- Interfaces > VLANs > Add
- Parent: the interface connected to TrueNAS
- VLAN Tag: `100`
- Interfaces > Assignments — assign the new VLAN as an interface (e.g., `OPT1`)
- Interfaces > OPT1 — enable, set static IP `192.168.100.1/24`
- Services > DHCP Server > OPT1
    - Enable DHCP
    - Range: `192.168.100.50` to `192.168.100.200` (end before .201 — reserve .201+ for MetalLB)
    - DNS: your preferred DNS servers
- Firewall > Rules > OPT1 — add rules to allow traffic (at minimum: allow all from the K8s VLAN, or be more restrictive)
Inter-VLAN Routing¶
By default, pfSense/OPNsense routes between VLANs. If you want to access MetalLB Services from your main LAN, no extra config is needed — the firewall handles routing.
If you want isolation (K8s VLAN can't reach other VLANs), add firewall rules to block inter-VLAN traffic except specific ports.
Mikrotik / RouterOS Guide¶
VLAN and DHCP¶
```
# Create VLAN on trunk port
/interface vlan add name=vlan100 vlan-id=100 interface=ether1

# Assign IP
/ip address add address=192.168.100.1/24 interface=vlan100

# DHCP pool — STOP at .200, leave .201+ for MetalLB and VIP
/ip pool add name=k8s-dhcp ranges=192.168.100.50-192.168.100.200

# DHCP server (named so the reservation below can reference it)
/ip dhcp-server add name=dhcp-k8s interface=vlan100 address-pool=k8s-dhcp
/ip dhcp-server network add address=192.168.100.0/24 gateway=192.168.100.1 dns-server=1.1.1.1,8.8.8.8

# DHCP reservation example
/ip dhcp-server lease add mac-address=00:A0:98:18:C4:AF address=192.168.100.51 server=dhcp-k8s
```
Multiple Clusters on One TrueNAS¶
Use separate VLANs per cluster for network isolation:
| Cluster | VLAN | Subnet | Bridge | MetalLB Range |
|---|---|---|---|---|
| Production | 100 | 192.168.100.0/24 | `br100` | `.201-.250` |
| Staging | 101 | 192.168.101.0/24 | `br101` | `.201-.250` |
| Dev | 102 | 192.168.102.0/24 | `br102` | `.201-.250` |
Each cluster's MachineClass targets a different bridge:
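The exact MachineClass schema depends on the provider version; as a hypothetical sketch, only the bridge name in the provider data changes per cluster (the field name `network_interface` is an assumption mirroring `DEFAULT_NETWORK_INTERFACE`):

```yaml
# Hypothetical sketch; verify field names against your provider's docs
metadata:
  name: staging-workers
spec:
  provider:
    data: |
      network_interface: br101   # staging cluster bridge
```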
VMs on different VLANs can't communicate at Layer 2 — full isolation without firewall rules.
Troubleshooting¶
VM has no IP address¶
- Check bridge exists: SSH to TrueNAS, run `ip link show br100` — should show `UP`
- Check DHCP server: from TrueNAS, `dhcping -s 192.168.100.1` (or check router DHCP logs)
- Test manually: create an Alpine VM on the same bridge, run `ip link set eth0 up && udhcpc -i eth0`
- Check VLAN tagging: if using tagged VLANs, verify the switch port is configured as a trunk carrying that VLAN
VM gets IP but can't reach internet¶
- Check gateway: `ip route` on the VM should show a default route via your gateway
- Check DNS: `nslookup google.com` — if this fails, DNS is wrong
- Check firewall: your router may block traffic from the K8s VLAN. Add a permit rule.
- Check NAT: pfSense/OPNsense need an outbound NAT rule for the K8s VLAN if it's not on the default LAN
MetalLB IPs not reachable from other VLANs¶
- Layer 2 only: MetalLB L2 mode only works within the same broadcast domain. From another VLAN, traffic must be routed through the gateway.
- Check routing: Your router must have a route to the MetalLB subnet. If MetalLB IPs are on the same subnet as the nodes, the router already knows the route.
- Check firewall: The router may block traffic to the MetalLB range. Add a permit rule for the destination IPs.
- Verify ARP: from a device on the same VLAN, `arping 192.168.100.201` — should get a reply from a node MAC.
VIP not working¶
- Check etcd: `talosctl -n <cp-node-ip> etcd members` — all CP nodes should be listed
- Check VIP config: `talosctl -n <cp-node-ip> get addresses` — should show the VIP
- Same L2 required: VIP uses gratuitous ARP — all CP nodes must be on the same bridge/VLAN
- Bootstrap timing: VIP is unavailable until etcd has quorum. On a fresh cluster, wait for all CP nodes to join.
- ARP cache: if the VIP just moved between nodes, clients may have stale ARP entries. Wait 1-2 minutes or clear the entry: `arp -d 192.168.100.254`
Jumbo Frames (MTU 9000)¶
Jumbo frames significantly improve throughput for iSCSI and NFS storage networks by reducing per-packet overhead. If your storage NIC is on a dedicated VLAN or bridge, you can set MTU 9000 on the additional NIC.
Prerequisites¶
The entire path must support the same MTU — any mismatch causes dropped packets:
- TrueNAS host interface/bridge — set MTU 9000 on the bridge or VLAN interface in TrueNAS Network settings
- Physical switch — enable jumbo frames on the relevant switch ports
- VM NIC — set `mtu: 9000` in the MachineClass `additional_nics` config (the provider handles the rest)
MachineClass Configuration¶
The provider does two things when `mtu` is set:

- TrueNAS side: passes `mtu: 9000` to the `vm.device.create` NIC attributes so the virtual NIC is created with the correct MTU
- Talos side: generates a machine config patch that sets the MTU on the corresponding interface inside the VM, using the NIC's MAC address for reliable interface matching:
```json
{
  "machine": {
    "network": {
      "interfaces": [
        {
          "deviceSelector": {"hardwareAddr": "00:a0:98:xx:xx:xx"},
          "mtu": 9000
        }
      ]
    }
  }
}
```
Verifying MTU¶
After the VM boots, verify from inside a Talos node:
```sh
# Check interface MTU
talosctl -n <node-ip> get links

# Test end-to-end with a large ping (requires the storage network to be routable)
# 8972 = 9000 minus 20-byte IP header and 8-byte ICMP header
ping -M do -s 8972 <truenas-storage-ip>
```
Note: Only set `mtu` on additional NICs used for storage networks. The primary NIC should use the default MTU (1500) unless your entire network supports jumbo frames.