Expanding K8s PVs in EKS on AWS

If that post title isn't a mouthful...

I'm excited to be moving a few EKS clusters into real-world production use after a few months of preparation. Besides my Raspberry Pi Dramble project (which is pretty low-key), these are the only production-grade Kubernetes clusters I've dealt with, and I've learned a lot. Enough that I'm working on a new book.

Anyway, back to the main topic: As of Kubernetes 1.11, you can auto-expand PVs from most cloud providers, AWS included. And since EKS now runs Kubernetes 1.11.x, you can have your EBS PVs automatically expand just by increasing the size in the PVC's spec.resources.requests.storage (e.g. from 10Gi to 20Gi).

To make this work, though, you need to check a few things:

Make sure you have the proper setting on your StorageClass

You need to make sure the StorageClass you're using has the allowVolumeExpansion setting enabled, e.g.:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Edit your PVC

Increase the storage request of your PVC by editing the PVC (note that you cannot decrease the request; shrinking a volume is a much weirder use case, and isn't trivial to do on most storage systems!):

$ kubectl edit pvc -n [namespace] [claim-name]
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi # increased from 10Gi
  storageClassName: standard

Save the edit, then wait, and monitor the PV associated with the PVC:

$ kubectl get pv pvc-d2adc816-d0c7-11e8-80aa-0ef7083fecf8 --watch

Once it's done, you're ready for the final step!
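You can also check the PVC itself while you wait; once the underlying EBS volume has been expanded, Kubernetes adds a FileSystemResizePending condition to the PVC, meaning the filesystem will be grown the next time the volume is mounted by a Pod:

```shell
# Inspect the PVC's status conditions; look for FileSystemResizePending
# once the EBS volume itself has finished expanding.
kubectl describe pvc -n [namespace] [claim-name]

# Or pull just the conditions out with jsonpath:
kubectl get pvc -n [namespace] [claim-name] \
  -o jsonpath='{.status.conditions}'
```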

Delete/restart the Pod(s) which mount the PVC

This may seem unintuitive, but in order for Kubernetes to expand the actual filesystem on the disk to fill the newly-available free space, it has to detach the volume (so the running Pod has to be terminated), expand the filesystem, then attach the volume to a new Pod:

$ kubectl delete pod -n [namespace] [pod identifier]

After a minute or two (or longer if the volume is huge), the new pod should be up with the now-larger PVC attached.
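If you want to double-check, you can inspect the filesystem from inside the new pod; the mount path here (/data) is just an example, and depends on whatever mountPath your Pod spec defines for the volume:

```shell
# /data is a placeholder; substitute your Pod's actual mountPath.
kubectl exec -n [namespace] [pod identifier] -- df -h /data
```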

No need to use fdisk or anything arcane like that... just make the claim request larger, wait for it to expand, then delete pods using it, and things come back with the new larger size. Nice!
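If you'd rather script the resize than open an interactive kubectl edit session, a kubectl patch one-liner does the same thing (the claim name and size here are just examples):

```shell
# Equivalent to the interactive edit above: bump the storage
# request on the PVC in one scriptable command.
kubectl patch pvc -n [namespace] [claim-name] \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```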

Comments

Another pro-tip: at least with EBS on AWS, you can only resize a given volume once every 6 hours. I guess AWS considers this operation to be expensive (computationally, bandwidth-wise, or something else) and they don't want you building your own elastic volume solution that increases size by tiny bits!

I'm not sure, but I believe the storage class must have allowVolumeExpansion set to true before the initial pvc is created for resizing to work. I say I'm not sure because I just tried to resize a volume created on EKS Kube 1.10, where allowVolumeExpansion could not be set to true.

When I modified the storage class and tried to expand a volume, it failed. With kubectl edit it fails silently; via the Kube Dashboard, it fails with an error: "persistentvolumeclaims XXX is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize."

Note that you have to be running Kubernetes 1.11 or later for any of this to work; and yes, allowVolumeExpansion must already be set to true in the StorageClass before you can expand any PVs using that StorageClass. I don't believe K8s 1.10 or earlier allows adding that setting to the StorageClass at all.
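An easy way to check whether a given StorageClass will allow expansion before you attempt a resize (the standard name matches the example earlier in the post):

```shell
# Prints "true" if the StorageClass supports volume expansion;
# prints nothing if allowVolumeExpansion is unset.
kubectl get storageclass standard \
  -o jsonpath='{.allowVolumeExpansion}'
```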