Mount an NFS Volume

The following steps guide you through the process of connecting Network File Storage (NFS) with Persistent Volume Claims (PVCs) in a Managed Kubernetes cluster.

Note: Network File Storage (NFS) Kubernetes integration is currently available on a request basis. To access this product, please contact your sales representative or IONOS Cloud Support.

Prerequisites:

  • Ensure that the NFS volume and the Managed Kubernetes node pool are connected to the same private LAN.

  • Node pools can obtain their IP addresses in the private LAN only via Dynamic Host Configuration Protocol (DHCP). Each private LAN has its own subnet, which the DHCP server distributes.

  • The subnet of a private LAN becomes visible via the API when you attach a server to the LAN with a NIC, or when you open a node shell on the Kubernetes cluster and inspect its network interfaces (see the sketch after this list).
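
For example, you can open an ephemeral node shell with kubectl debug and list the node's interfaces. This is a minimal sketch; it assumes kubectl access to the cluster and uses a placeholder node name:

# Start a throwaway debug pod that shares the node's network namespace,
# then list the addresses to find the private LAN subnet.
kubectl debug node/<node-name> -it --image=busybox -- ip addr show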

To connect NFS with PVCs in a Managed Kubernetes cluster via the DCD (Data Center Designer), follow these steps:

  1. Drag a vCPU Server into the workspace to add a new server in the DCD.

  2. Click Add NIC. This action creates a new Network Interface Controller (NIC) with a new private LAN. Note the LAN number.

  3. Click PROVISION CHANGES.

  4. Once the changes are provisioned, inspect the server’s NIC to find its primary IP in the private LAN (for example, 10.7.228.11). This reveals the private LAN’s DHCP subnet (for example, 10.7.228.0/24).

  5. Provision a Kubernetes Cluster.

  6. Provision a node pool for the Kubernetes Cluster and attach the previously created private LAN. Ensure that DHCP is enabled.

  7. Provision an NFS Cluster with the same private LAN attached.

  8. Assign a static IP to the NFS cluster within the same subnet identified earlier. For example, 10.7.228.5/24.

  9. Provision an NFS Share. For more information, see Create Shares.

  10. Add the 10.7.228.0/24 subnet and a client group to establish the permissions that Kubernetes, and any other hosts within that subnet, need to access NFS. A connectivity check is sketched after this list.

  11. Delete the vCPU Server created in the first step.
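
Before deploying a provisioner, you can optionally verify that a node can reach the NFS cluster. This sketch assumes the example static IP 10.7.228.5 from step 8 and a placeholder node name:

# Ping the NFS cluster from within the node's network namespace.
kubectl debug node/<node-name> -it --image=busybox -- ping -c 3 10.7.228.5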

Deploy an NFS Provisioner on Your Kubernetes Cluster

Choose one of the following NFS provisioners:

  • nfs-subdir-external-provisioner

  • csi-driver-nfs

Both provisioners create a custom StorageClass configured with an NFS server. Use the static IP assigned earlier (for example, 10.7.228.5) as the server and /SHARE_UUID/SHARE_NAME as the share path.

Configure StorageClass for Managed Kubernetes

Managed Kubernetes requires the StorageClass to include the soft mount option; without it, PersistentVolumes may not mount correctly. Apply it with the following command (shown for a StorageClass named nfs-client):

kubectl patch storageclass nfs-client -p '{"mountOptions": ["soft"]}'
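
To confirm that the patch took effect, print the StorageClass's mount options; the output should include "soft":

kubectl get storageclass nfs-client -o jsonpath='{.mountOptions}'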

Using nfs-subdir-external-provisioner

Refer to the nfs-subdir-external-provisioner documentation for detailed instructions.
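
As a minimal sketch, the provisioner can be installed with Helm. The release name, server IP, and share path below are example values to adapt to your setup:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.7.228.5 \
  --set nfs.path=/SHARE_UUID/SHARE_NAME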

When creating PVCs, specify the custom StorageClass name (nfs-client by default). The PVCs are then provisioned by the specified provisioner, as shown below.
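
For example, a claim against the default nfs-client StorageClass might look like this (name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi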

Using csi-driver-nfs

Install the helm chart for csi-driver-nfs:

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.8.0
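
Afterwards, check that the driver's controller and node pods are running in kube-system (pod names may vary by chart version):

kubectl get pods -n kube-system | grep csi-nfs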

Create a StorageClass with the necessary parameters:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: <nfs-ip>      # static IP of the NFS cluster, for example 10.7.228.5
  share: <share-name>   # share path in the form /SHARE_UUID/SHARE_NAME
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
  - soft                # mount option required by Managed Kubernetes (see above)
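
Save the manifest and apply it (the file name is a placeholder):

kubectl apply -f nfs-csi-storageclass.yaml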

Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name-of-your-pvc>
spec:
  storageClassName: nfs-csi   # the StorageClass created above
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi            # placeholder size

Result: The Managed Kubernetes cluster can now mount NFS volumes as PVCs.
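
To verify the mount end to end, you can reference the claim from a pod. This is a minimal sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: nfs-data
          mountPath: /data
  volumes:
    - name: nfs-data
      persistentVolumeClaim:
        claimName: <name-of-your-pvc>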
