CSI Driver User Guide
This guide is for developers and users of a Kubernetes cluster who want to use the TrueNAS CSI driver to create and submit requests for storage to use in pods. It covers creating PersistentVolumeClaims, mounting storage volumes, and using the features available to TrueNAS storage users.
For an overview of the Kubernetes/CSI driver integration, see TrueNAS CSI Driver.
For reference material, including a glossary of terms, see CSI Driver Reference.
The Kubernetes Cluster Administrators Guide provides instructions on configuring StorageClasses and the Kubernetes integration with the CSI driver.
The process involves creating a YAML file locally and then submitting it with kubectl commands to create the PVC.
Kubernetes uses the CSI driver to send the information from the PVC YAML file to TrueNAS. TrueNAS creates the storage volume based on that information and returns it to Kubernetes through the CSI driver, where it can be mounted and used in a pod.
Developers/users should follow this process to set up storage they can mount in their Kubernetes pods.
Replace my-app-data with the name of your PVC in the commands in this section.
- Create a PersistentVolumeClaim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteMany # For NFS (multiple pods can access)
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 10Gi # Request 10GB of storage
- Apply the PersistentVolumeClaim.
kubectl apply -f my-app-data.yaml
Where my-app-data is the name of the YAML file (and the PVC) created locally by the developer/user.
- Check the PVC status.
kubectl get pvc my-app-data
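The output is similar to the following once TrueNAS has provisioned the volume and the claim is Bound (the VOLUME name and AGE values are illustrative):
NAME          STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-app-data   Bound    pvc-1a2b3c4d-example   10Gi       RWX            truenas-nfs    15s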
- Mount the volume in a pod. Create a pod manifest (for example, pod.yaml) that references the PVC.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html # Where to mount in container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-app-data # Reference the PVC
- Apply the pod manifest.
kubectl apply -f pod.yaml
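You can optionally wait for the pod to be ready before testing the mount:
kubectl wait --for=condition=Ready pod/my-app --timeout=120s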
- Verify the mount. These commands check that the pod is running, verify the mount inside the container, and then write test data to the volume and read it back.
# Check pod is running
kubectl get pod my-app
# Verify mount inside container
kubectl exec my-app -- df -h /usr/share/nginx/html
# Write test data
kubectl exec my-app -- sh -c "echo 'Hello from TrueNAS' > /usr/share/nginx/html/index.html"
# Read it back
kubectl exec my-app -- cat /usr/share/nginx/html/index.html
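If the mount or the test write fails, the events reported for the claim and the pod usually show where provisioning or attachment went wrong:
# Check provisioning and attach events
kubectl describe pvc my-app-data
kubectl describe pod my-app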
Check the current size. The CAPACITY column shows the current size.
kubectl get pvc my-app-data
Edit the PVC to request a larger size. Change the storage size, then save and exit.
kubectl edit pvc my-app-data
# Change:
#   resources:
#     requests:
#       storage: 10Gi
# To:
#   resources:
#     requests:
#       storage: 20Gi
# Save and exit
Alternatively, use kubectl patch:
kubectl patch pvc my-app-data -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
Wait for the expansion. Watch the CAPACITY column increase in size.
kubectl get pvc my-app-data -w # Watch CAPACITY column increase to 20Gi
Note: For iSCSI volumes, you might need to restart the pod for the filesystem resize to take effect.
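For the standalone pod used earlier in this guide, restarting means deleting and recreating it; if your workload is managed by a Deployment, a rolling restart does the same job (the deployment name below is illustrative):
# Recreate the standalone example pod
kubectl delete pod my-app
kubectl apply -f pod.yaml
# Or, for a Deployment-managed workload
kubectl rollout restart deployment/my-app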
Create a VolumeSnapshot, and then apply it.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-app-snapshot-20250102
  namespace: default
spec:
  volumeSnapshotClassName: truenas-snapshot-class
  source:
    persistentVolumeClaimName: my-app-data
Apply it:
kubectl apply -f snapshot.yaml
Verify the snapshot.
kubectl get volumesnapshot my-app-snapshot-20250102
Check in TrueNAS.
Navigate to Datasets, locate and select the dataset on the table, then click View Snapshots on the Data Protection card to open the Snapshots screen. Search for the snapshot. The snapshot name format is:
pool/dataset@snapshot-name
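If you have shell access to the TrueNAS system, you can also confirm the snapshot exists from the command line; the pool/dataset path below is illustrative:
zfs list -t snapshot -r tank/k8s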
You can restore data by creating a new volume from a snapshot or by cloning an existing volume. To create a new volume from a snapshot, define a PVC that uses the snapshot as its dataSource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-restore
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: my-app-snapshot-20250102
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
Apply it:
kubectl apply -f restore-pvc.yaml
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/my-app-restore --timeout=120s
Result: New volume with data from snapshot
To clone an existing volume, define a PVC that uses the source PVC as its dataSource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-clone
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: my-app-data # Source PVC to clone
    kind: PersistentVolumeClaim
Apply it:
kubectl apply -f clone-pvc.yaml
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/my-app-clone --timeout=120s
Result: New independent volume with copy of source data
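Each restored or cloned claim is backed by its own PersistentVolume; you can list them alongside the source claim to confirm they provisioned independently:
kubectl get pvc my-app-data my-app-restore my-app-clone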
The TrueNAS CSI driver supports advanced storage operations including snapshot-based backup and restore, volume cloning, and multi-protocol deployments. These features are available to all cluster users with the appropriate StorageClass configured by the cluster administrator.
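Snapshots and clones also depend on the snapshot CRDs and a VolumeSnapshotClass being installed in the cluster; you can check what is available before relying on these features:
kubectl get storageclass
kubectl get volumesnapshotclass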
The following is a backup workflow:
Create Pre-Upgrade Snapshot.
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-before-upgrade
  namespace: production
spec:
  volumeSnapshotClassName: truenas-snapshot-class
  source:
    persistentVolumeClaimName: postgres-data
EOF
Perform Application Upgrade.
kubectl set image deployment/postgres postgres=postgres:15 -n production
Restore if the upgrade fails. Stop the application and then create a new volume from the snapshot. Next, update the deployment to use the restored volume and then restart the application.
# Stop the application
kubectl scale deployment postgres --replicas=0 -n production
# Create new volume from snapshot
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data-restored
  namespace: production
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: truenas-iscsi
  resources:
    requests:
      storage: 100Gi
  dataSource:
    name: db-before-upgrade
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
# Update deployment to use restored volume (patches the deployment's "data" volume to reference the restored claim)
kubectl patch deployment postgres -n production -p '{"spec":{"template":{"spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"postgres-data-restored"}}]}}}}'
# Restart application
kubectl scale deployment postgres --replicas=1 -n production
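After scaling back up, you can confirm the rollout completed and that the restored pod is running (the app=postgres label is illustrative):
kubectl rollout status deployment/postgres -n production
kubectl get pods -n production -l app=postgres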
Volume cloning creates an instant copy of an existing volume using ZFS copy-on-write, making it ideal for spinning up staging or test environments with real production data.
Benefits:
- Instant copy via ZFS clones
- Minimal storage overhead (copy-on-write)
- Staging has real production data for testing
- No impact on production volume
# Clone production volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-staging
  namespace: staging
spec:
  accessModes: [ReadWriteMany]
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 50Gi
  dataSource:
    name: app-data-prod
    kind: PersistentVolumeClaim
EOF
Note: A PVC dataSource of kind PersistentVolumeClaim must refer to a PVC in the same namespace as the new claim. To place a clone of production data in a separate staging namespace as shown here, the cluster must support cross-namespace data sources; otherwise, create the clone in the source namespace.
Some applications benefit from using both protocols. For example, a content management system:
# Media files - NFS for shared access
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-media
spec:
  accessModes: [ReadWriteMany]
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 100Gi
---
# Database - iSCSI for performance
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-database
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: truenas-iscsi
  resources:
    requests:
      storage: 20Gi
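A single workload can then mount both claims. The following is a minimal sketch; the cms name, labels, image, and mount paths are illustrative placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
        - name: cms
          image: nginx # Placeholder; use your CMS image
          volumeMounts:
            - name: media
              mountPath: /var/www/media # Shared media files on NFS
            - name: database
              mountPath: /var/lib/db # Database files on iSCSI
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: cms-media
        - name: database
          persistentVolumeClaim:
            claimName: cms-database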

