Kubernetes NFS Persistent Volumes - multiple claims on same volume? Claim stuck in pending?

Question

Use case:

I have an NFS directory available and I want to use it to persist data for multiple deployments & pods.

I have created a PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: mynfs.com   # NFS server hostname/IP (not a URL)
    path: /server/mount/point

I want multiple deployments to be able to use this PersistentVolume, so my understanding of what is needed is that I need to create multiple PersistentVolumeClaims which will all point at this PersistentVolume.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi

I believe this creates a 50Mi claim on the PersistentVolume. When I run kubectl get pvc, I see:

NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
nfs-pvc-1   Bound      nfs-pv    10Gi        RWX           35s

I don't understand why the capacity shows as 10Gi rather than the 50Mi I requested.

When I then change the PersistentVolumeClaim YAML to create a second PVC named nfs-pvc-2, I get this:

NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
nfs-pvc-1   Bound      nfs-pv    10Gi        RWX           35s
nfs-pvc-2   Pending                                        10s

nfs-pvc-2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?

When I delete nfs-pvc-1, I see the same thing:

NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
nfs-pvc-2   Pending                                        10s

Again, is this normal?

What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?


Answer

Basically you can't do what you want, as the relationship between a PVC and a PV is one-to-one: once a PV is bound to a claim, no other claim can bind to it, which is why nfs-pvc-2 stays Pending. Deleting nfs-pvc-1 only moves the PV to Released; it is not automatically made available to nfs-pvc-2.

If NFS is the only storage you have available and you would like multiple PVs/PVCs on one NFS export, use dynamic provisioning with a default StorageClass.

It's not in official K8s yet, but this one is in the incubator and I've tried it and it works well: <https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client>
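
Roughly, the flow is: deploy the nfs-client provisioner pointed at your NFS server, then create a StorageClass that references it and mark that class as the cluster default. A minimal sketch of such a StorageClass is below; the provisioner value fuseim.pri/ifs is just the name used in that project's example manifests and has to match whatever PROVISIONER_NAME you configure when deploying the provisioner.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # make this the cluster's default StorageClass, so PVCs that don't set
    # storageClassName are provisioned from it
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs      # must match PROVISIONER_NAME of the deployed nfs-client provisioner
parameters:
  archiveOnDelete: "false"       # remove the backing directory when the PVC is deleted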

This simplifies your volume provisioning enormously: you only need to take care of the PVC, and the provisioner creates the PV for you as a directory on the NFS export/server you have defined.
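
For example (a sketch, assuming the default StorageClass above is in place; the names nfs-data-1 and app-1 are made up), each deployment gets its own small claim and mounts it, and the provisioner creates a matching PV backed by a directory on the NFS export:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-data-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  # no storageClassName: the default StorageClass is used
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-data-1

Repeat the same pattern with nfs-data-2 for the next deployment; each PVC binds to its own dynamically provisioned PV, all of them backed by directories on the same NFS export.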