TrueNAS Setup Guide¶
Step-by-step instructions for configuring TrueNAS SCALE to work with the Omni infrastructure provider. This covers everything that needs to be set up on the TrueNAS side — the provider handles the rest automatically.
For the overall deployment walkthrough (including Omni account, provider installation, and cluster creation), see the Getting Started guide.
Prerequisites¶
- TrueNAS SCALE 25.04+ (Fangtooth) — check in Dashboard or System > General
- A ZFS pool with available space
- Admin access to the TrueNAS web UI
1. Network Bridge¶
VMs need a network interface to communicate. A bridge lets VMs share your NAS's physical network connection.
- Go to Network > Interfaces
- Click Add
- Set Type to Bridge
- Bridge Members: select your primary network interface (the one your NAS uses for its IP — look for names like `enp5s0`, `eno1`, `eth0`)
- Name: leave the default (usually `br0`)
- DHCP: enable
- Click Save
Warning
Creating a bridge on your primary interface briefly interrupts the NAS's network connection. This is normal — the NAS's IP moves to the bridge. After reconnecting, both the NAS and all VMs share this bridge.
- Apply the network changes when prompted
- After reconnecting, verify the bridge appears under Network > Interfaces with an IP address
Record the bridge name (e.g., `br0`) — you'll use it as `DEFAULT_NETWORK_INTERFACE` in the provider config and `network_interface` in MachineClass configs.
Storage Network Bridge (Optional)¶
If you want a dedicated storage network (for NFS or iSCSI traffic between your cluster and TrueNAS), create a second bridge on a separate physical NIC or VLAN:
- Go to Network > Interfaces > Add
- Set Type to Bridge
- Bridge Members: select the dedicated storage NIC (e.g., `enp6s0`)
- Name: e.g., `br-storage`
- Assign a static IP on your storage subnet (e.g., `10.10.10.1/24`)
- Click Save and apply
Use this bridge in your MachineClass additional_nics config:
```yaml
additional_nics:
  - network_interface: br-storage
    mtu: 9000  # optional: jumbo frames for better throughput
```
Jumbo Frames (MTU 9000)¶
For storage bridges, jumbo frames significantly improve NFS/iSCSI throughput. All devices on the path must use the same MTU — the bridge, the physical switch ports, and the VM NICs.
- Go to Network > Interfaces
- Click your storage bridge (e.g., `br-storage`)
- Set MTU to `9000`
- Click Save and apply
- Configure the same MTU on your physical switch ports
The provider handles the VM side automatically when you set `mtu: 9000` in `additional_nics`.
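To confirm the jumbo-frame path works end to end, you can ping across the storage bridge with fragmentation disabled. This is a sketch assuming a Linux client and the example storage-bridge IP `10.10.10.1` from above:

```shell
# An MTU-9000 link carries ICMP payloads of at most
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
PAYLOAD=$((9000 - 28))
echo "$PAYLOAD"   # prints 8972

# From a VM on the storage bridge, ping the NAS with "don't fragment" set;
# if any hop has a smaller MTU, this fails instead of silently fragmenting:
# ping -c 3 -M do -s "$PAYLOAD" 10.10.10.1
```

If the ping fails while a normal `ping` succeeds, a switch port or NIC along the path is still at MTU 1500.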
2. NFS Share (for Persistent Storage)¶
If your Kubernetes apps need persistent storage, the simplest option is NFS. TrueNAS serves the share, and your cluster mounts it.
Create the Dataset¶
- Go to Datasets
- Select your pool (e.g., `tank`)
- Click Add Dataset
- Name: `k8s-nfs`
- Share Type: leave as Generic (NFS is configured separately)
- Click Save
Enable the NFS Service¶
- Go to System > Services
- Find NFS in the list
- Toggle it on
- Check Start Automatically so it survives reboots
- Click the pencil icon to edit NFS settings:
- Number of Servers: leave default (or increase to 16 for better concurrency)
- Enable NFSv4: recommended
- Click Save
Create the NFS Share¶
- Go to Shares > NFS
- Click Add
- Path: `/mnt/tank/k8s-nfs` (or wherever your dataset is)
- Maproot User: `root`
- Maproot Group: `wheel`
- Authorized Networks: add your cluster subnet (e.g., `192.168.1.0/24`) — this restricts who can mount the share
- Click Save
- If prompted to enable the NFS service, confirm
Verify¶
From any machine on your network:
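A minimal check, assuming a Linux client with NFS utilities installed (replace the IP and path with yours):

```shell
TRUENAS_IP=192.168.1.100   # placeholder: your TrueNAS address
# List the exports the NAS advertises; /mnt/tank/k8s-nfs should appear:
showmount -e "$TRUENAS_IP"
# Optionally test-mount and unmount the share:
sudo mount -t nfs "$TRUENAS_IP:/mnt/tank/k8s-nfs" /mnt
sudo umount /mnt
```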
You can now use this share with democratic-csi or manual NFS PV definitions.
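For the manual route, a static NFS PersistentVolume could look like this sketch (the name, size, and IP are placeholders to adapt; the path matches the share created above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: truenas-nfs-pv          # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany             # NFS supports shared read-write mounts
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100       # your TrueNAS IP (placeholder)
    path: /mnt/tank/k8s-nfs
```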
3. iSCSI Service (for Block Storage)¶
iSCSI provides block-level storage — significantly faster than NFS for databases and random I/O workloads. It is used with democratic-csi in iSCSI mode.
Enable the iSCSI Service¶
- Go to System > Services
- Find iSCSI in the list
- Toggle it on
- Check Start Automatically
Configure iSCSI (for democratic-csi)¶
If using democratic-csi in iSCSI mode, the driver creates targets and zvols automatically. You just need the service running. The driver handles:
- Creating a zvol for each PersistentVolume
- Creating an iSCSI target and extent
- Mapping the zvol to the target
No manual target or extent configuration is needed — democratic-csi does it all via SSH or API.
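As a sketch of what the driver side looks like, key names below follow the democratic-csi `freenas-iscsi` example configs; all values are placeholders to verify against the democratic-csi documentation:

```yaml
driver: freenas-iscsi
httpConnection:
  protocol: https
  host: 192.168.1.100             # your TrueNAS IP (placeholder)
  port: 443
  apiKey: <TRUENAS_API_KEY>       # placeholder: TrueNAS API key
  allowInsecure: false
zfs:
  datasetParentName: tank/k8s-iscsi   # parent dataset for per-PV zvols
iscsi:
  targetPortal: "192.168.1.100:3260"
  namePrefix: csi-
```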
Talos Extension¶
Your Talos nodes need the iscsi-tools extension to connect to iSCSI targets. Add it to your MachineClass:
Or via Omni config patch:
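As a hedged sketch (verify the field name against your Omni version's cluster-template reference), recent Omni cluster templates accept a `systemExtensions` list:

```yaml
# Omni cluster-template fragment. systemExtensions availability depends on
# your Omni version; the extension name itself is the standard one.
kind: Cluster
name: talos-on-truenas        # placeholder cluster name
systemExtensions:
  - siderolabs/iscsi-tools
```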
4. SSH Access (for democratic-csi SSH Mode)¶
democratic-csi's SSH-based drivers execute ZFS commands directly on TrueNAS. This is the most battle-tested mode.
Create a Dedicated User¶
Don't use root — create a dedicated service account:
- Go to Credentials > Local Users
- Click Add
- Username: `csi`
- Full Name: `CSI Storage Driver`
- Password: set a strong password (or use SSH key auth — see below)
- Home Directory: `/nonexistent`
- Shell: `bash`
- Click Save
Grant Sudo Access¶
The CSI driver needs to run ZFS commands as root:
- Go to Credentials > Local Users > click `csi` > Edit
- Enable Permit Sudo (TrueNAS SCALE 25.04+)
- Alternatively, add the user to the `sudoers` group, or add a sudoers entry under System > Advanced > Allowed Sudo Commands
Alternatively, use SSH key authentication (more secure):
SSH Key Authentication (Recommended)¶
- Generate an SSH key pair on the machine where democratic-csi will run (or use your existing key):
- Copy the public key (`~/.ssh/csi-truenas.pub`)
- In TrueNAS: Credentials > Local Users > click `csi` > Edit
- Paste the public key into the SSH Public Key field
- Click Save
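The key pair in step 1 can be generated like this (Ed25519, no passphrase; the path matches the Verify command in this guide):

```shell
# Create a dedicated key for the CSI driver; the .pub file is what you
# paste into the TrueNAS user's SSH Public Key field.
ssh-keygen -t ed25519 -N "" -C "democratic-csi" -f ~/.ssh/csi-truenas
```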
Enable SSH Service¶
- Go to System > Services
- Find SSH in the list
- Toggle it on
- Check Start Automatically
- Click the pencil icon:
- Allow TCP Port Forwarding: disable (not needed)
- Password Login Groups: leave empty if using key auth only
- Click Save
Verify¶
```shell
ssh -i ~/.ssh/csi-truenas csi@<truenas-ip> "sudo zfs list"
# Should show your ZFS pools and datasets
```
5. API Key¶
The provider connects to TrueNAS via WebSocket with an API key — required in all deployments (as of v0.14.0 / TrueNAS 25.10, which removed implicit Unix-socket auth).
Recommended: dedicated non-root user in builtin_administrators¶
Create a user dedicated to the provider and add it to the builtin_administrators group. This is better than using the root user's key:
- The provider's API audit trail is separated from interactive admin activity.
- The key can be revoked by deleting just the provider user, without touching `root`.
- No password on the user means a smaller attack surface than root, which typically has a console password.
Why builtin_administrators membership (not a scoped privilege):
The provider uploads Talos ISOs to TrueNAS via the /_upload HTTP endpoint (filesystem.put with a pipe-based multipart request). This endpoint enforces the SYS_ADMIN account attribute on top of the regular role system. SYS_ADMIN is granted only by membership in builtin_administrators — no custom privilege or granular role combination substitutes for it. This was verified empirically on TrueNAS SCALE 25.10.1; see upstream bug reports for the specifics.
Users not in builtin_administrators can call every JSON-RPC method the provider needs (VM lifecycle, dataset CRUD, filesystem queries) — but ISO upload fails with HTTP 403. The provider does not currently have a fallback, so builtin_administrators membership is required.
Setup¶
Create the user:
- Credentials > Local Users > Add
- Username: `omni-provider` (or similar)
- Full Name: `Omni Infra Provider`
- Password Disabled: ✅ check (API-only, no interactive login)
- Shell: `nologin`
- Create New Primary Group: ✅ check
- Save
Add the user to builtin_administrators:
- Credentials > Groups > builtin_administrators > Edit
- Under Members, add the `omni-provider` user.
- Save.
Or via midclt (requires root / another admin):
```shell
# Get current member list + the new user's id
GROUP_ID=$(sudo midclt call group.query '[["name","=","builtin_administrators"]]' | jq '.[0].id')
USER_ID=$(sudo midclt call user.query '[["username","=","omni-provider"]]' | jq '.[0].id')
CURRENT=$(sudo midclt call group.query '[["name","=","builtin_administrators"]]' | jq -c '.[0].users')
NEW=$(echo "$CURRENT" | jq -c ". + [$USER_ID]")
sudo midclt call group.update "$GROUP_ID" "{\"users\": $NEW}"
```
Create the API key for this user:
- Credentials > API Keys > Add
- Name: `omni-infra-provider`
- Username: select `omni-provider`
- Save and copy the key immediately — it's shown only once.
Use this key as `TRUENAS_API_KEY` in your provider config.
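Assuming the env-style configuration this guide's variable names suggest, the provider would be pointed at TrueNAS roughly like this (both values are placeholders):

```shell
# Provider environment sketch: the variable names are the ones this guide
# references; the key value is a placeholder — never commit a real one.
export TRUENAS_API_KEY="1-xxxxxxxxxxxxxxxx"
export DEFAULT_NETWORK_INTERFACE="br0"
```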
Can scoped privileges work instead?¶
Empirically, no — not in TrueNAS 25.10.1. A custom privilege with these 13 roles covers every JSON-RPC method the provider calls:
```
READONLY_ADMIN, VM_READ, VM_WRITE, VM_DEVICE_READ, VM_DEVICE_WRITE,
DATASET_READ, DATASET_WRITE, DATASET_DELETE,
POOL_READ, DISK_READ, NETWORK_INTERFACE_READ,
FILESYSTEM_ATTRS_READ, FILESYSTEM_DATA_WRITE
```
A user with exactly these roles (and nothing else) can provision and deprovision VMs end-to-end EXCEPT for the Talos ISO upload, which fails at /_upload with HTTP 403 because SYS_ADMIN is missing.
This is a TrueNAS upstream bug — FILESYSTEM_DATA_WRITE should reasonably cover HTTP file-upload, not just the JSON-RPC filesystem.put method — but until upstream fixes it, builtin_administrators membership is required.
Do not¶
- Do not use the `root` user's API key. Create a dedicated user instead.
- Do not attach `FULL_ADMIN` as a role to a custom privilege in TrueNAS 25.10.1. It triggers an infinite-recursion middleware bug that breaks all auth for users bound to that privilege; recovery requires editing the privilege via `midclt` to remove the offending roles.
Going further¶
For the full hardening story (key rotation, network isolation, secret storage, TLS, container-level controls, ZFS encryption, monitoring), see Security Hardening.
Verification Checklist¶
Before deploying the provider, verify everything is set up:
| Item | How to Check | Expected |
|---|---|---|
| TrueNAS version | Dashboard | 25.04+ (Fangtooth) |
| ZFS pool | Storage | Pool visible with free space |
| Network bridge | Network > Interfaces | Bridge has an IP, VMs can reach it |
| NFS service | System > Services | Running, auto-start enabled |
| NFS share | Shares > NFS | Share visible, path correct |
| iSCSI service (if needed) | System > Services | Running, auto-start enabled |
| SSH service (if needed) | System > Services | Running, CSI user can connect |
| API key (if remote) | Credentials > API Keys | Key created and saved |
Once everything checks out, proceed to install the provider.
Common Mistakes¶
| Mistake | Symptom | Fix |
|---|---|---|
| Using pool name `tank/k8s` instead of `tank` | `pool not found` error | The `pool` field must be a top-level ZFS pool. Use `dataset_prefix` for nested paths. |
| Bridge not created | VMs have no network | Create a bridge under Network > Interfaces |
| NFS service not running | Pods stuck in `ContainerCreating` | Enable NFS under System > Services |
| NFS share not authorized for cluster subnet | `mount: permission denied` | Add your cluster subnet to Authorized Networks on the share |
| SSH key not added to CSI user | democratic-csi can't connect | Paste public key into the user's SSH Public Key field |
| MTU mismatch | Dropped packets, poor storage performance | Set the same MTU on the bridge, switch ports, and VM NIC config |
| Using TrueNAS SCALE < 25.04 | Provider fails at startup | Upgrade to 25.04+ (Fangtooth) |