# Check allowVolumeExpansion setting
Check whether allowVolumeExpansion: true is set in the storage class of the PV you want to expand.
$ kubectl get storageclass ibmc-block-bronze -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2019-02-26T07:00:03Z"
  labels:
    app: ibmcloud-block-storage-plugin
    chart: ibmcloud-block-storage-plugin-1.5.0
    heritage: Tiller
    release: ibm-block-storage-plugin
  name: ibmc-block-bronze
  resourceVersion: "52901588"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ibmc-block-bronze
  uid: xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
parameters:
  billingType: hourly
  classVersion: "2"
  fsType: ext4
  iopsPerGB: "2"
  sizeRange: '[20-12000]Gi'
  type: Endurance
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: Immediate
If allowVolumeExpansion is not set, use the expansion method described in the "How to increase volume when allowVolumeExpansion is not set" section below.
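Alternatively, if the storage plugin supports online resizing, the flag can simply be turned on. The commands below are a minimal sketch (the class name is the one from this example, and whether the resize actually succeeds still depends on the plugin version):
# List the allowVolumeExpansion setting of every storage class
$ kubectl get storageclass -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion
# Enable expansion on a class (only takes effect if the provisioner supports resizing)
$ kubectl patch storageclass ibmc-block-bronze -p '{"allowVolumeExpansion": true}'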
# Change PVC to the capacity you want to expand
Edit the PVC bound to the PV you want to expand, change the requested storage (200Gi in this example) to the capacity you want, and save.
spec:
...
resources:
requests:
storage: 200Gi
$ kubectl edit pvc elasticsearch-data-elasticsearch-data-2
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    ibm.io/provisioning-status: complete
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: ibmc-block-retain-silver
    volume.beta.kubernetes.io/storage-provisioner: ibm.io/ibmc-block
  creationTimestamp: "2019-xx-xxTxx:xx:xxZ"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    component: elasticsearch
    region: jp-tok
    role: data
    zone: seo01
  name: elasticsearch-data-elasticsearch-data-2
  namespace: zcp-system
  resourceVersion: "xxxxx"
  selfLink: /api/v1/namespaces/zcp-system/persistentvolumeclaims/elasticsearch-data-elasticsearch-data-2
  uid: 1af63cb4-3997-11e9-8301-9a4341108516
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: ibmc-block-retain-silver
  volumeMode: Filesystem
  volumeName: pvc-1af63cb4-3997-11e9-8301-9a4341108516
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 200Gi
  phase: Bound
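As a non-interactive alternative to kubectl edit, the same change can be applied with kubectl patch; a sketch using the PVC name from this example and a 220Gi target size:
$ kubectl patch pvc elasticsearch-data-elasticsearch-data-2 -n zcp-system -p '{"spec":{"resources":{"requests":{"storage":"220Gi"}}}}'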
# Check that the capacity has been increased
- Check PVC capacity
$ kubectl get pvc -w
NAME                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
elasticsearch-data-test-elasticsearch-data-test-0   Bound    pvc-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   200Gi      RWO            ibmc-block-retain-silver   21h
elasticsearch-data-test-elasticsearch-data-test-1   Bound    pvc-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   200Gi      RWO            ibmc-block-retain-silver   21h
elasticsearch-data-test-elasticsearch-data-test-2   Bound    pvc-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   200Gi      RWO            ibmc-block-retain-silver   21h
elasticsearch-data-test-elasticsearch-data-test-2   Bound    pvc-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   220Gi      RWO            ibmc-block-retain-silver   21h
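If the capacity does not change, kubectl describe usually shows why; for example, a FileSystemResizePending condition means the filesystem will only be resized once the pod is restarted. A sketch using the example PVC name:
$ kubectl describe pvc elasticsearch-data-test-elasticsearch-data-test-2 -n zcp-system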
- Check PV capacity
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                           STORAGECLASS               REASON   AGE
pvc-xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   220Gi      RWO            Retain           Bound    zcp-system/elasticsearch-data-test-elasticsearch-data-test-2   ibmc-block-retain-silver            21h
- Go into the pod and check
$ kubectl exec -it elasticsearch-data-test-2 bash
[root@elasticsearch-data-test-2 elasticsearch]# df -h
Filesystem                                      Size  Used  Avail  Use%  Mounted on
overlay                                          98G  6.1G    87G    7%  /
tmpfs                                            64M     0    64M    0%  /dev
tmpfs                                           7.9G     0   7.9G    0%  /sys/fs/cgroup
/dev/mapper/docker_data                          98G  6.1G    87G    7%  /etc/hosts
shm                                              64M     0    64M    0%  /dev/shm
/dev/mapper/3600a09803830446d463f4c454857636c   216G   60M   216G    1%  /usr/share/elasticsearch/data
tmpfs                                           7.9G   12K   7.9G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs                                           7.9G     0   7.9G    0%  /proc/acpi
tmpfs                                           7.9G     0   7.9G    0%  /proc/scsi
tmpfs                                           7.9G     0   7.9G    0%  /sys/firmware
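The same check can be done without an interactive shell; a sketch using the example pod name and mount path:
$ kubectl exec elasticsearch-data-test-2 -- df -h /usr/share/elasticsearch/data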
- Check on the CSP console
The example below uses the IBM Cloud console.
1. First, check the VolumeID of the PV
spec:
  ...
  flexVolume:
    ...
    options:
      ...
      VolumeID: "131379026"
$ kubectl get pv -o yaml pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    ibm.io/dm: /dev/dm-1
    ibm.io/mountpath: /var/data/kubelet/plugins/kubernetes.io/flexvolume/ibm/ibmc-block/mounts/pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f
    ibm.io/mountstatus: mounted
    ibm.io/network-storage-id: "131379026"
    ibm.io/nodename: 10.178.218.161
    pv.kubernetes.io/provisioned-by: ibm.io/ibmc-block
    volume.beta.kubernetes.io/storage-class: ibmc-block-retain-silver
  creationTimestamp: "2020-03-25T08:32:34Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    CapacityGb: "200"
    Datacenter: seo01
    IOPS: "4"
    StorageType: Endurance
    billingType: hourly
    failure-domain.beta.kubernetes.io/region: jp-tok
    failure-domain.beta.kubernetes.io/zone: seo01
    ibm-cloud.kubernetes.io/iaas-provider: softlayer
  name: pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f
  resourceVersion: "118820860"
  selfLink: /api/v1/persistentvolumes/pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f
  uid: 804748e8-8d72-4ae2-8a35-beff13292390
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 200Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: elasticsearch-data-test-elasticsearch-data-test-2
    namespace: zcp-system
    resourceVersion: "118684400"
    uid: b4e6876b-011c-48c1-9a9b-d84172078d8f
  flexVolume:
    driver: ibm/ibmc-block
    fsType: ext4
    options:
      Lun: "17"
      TargetPortal: 161.26.102.71
      VolumeID: "131379026"
      volumeName: pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ibmc-block-retain-silver
  volumeMode: Filesystem
status:
  phase: Bound
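Instead of scanning the full yaml, the VolumeID can be extracted directly with jsonpath; a sketch using the example PV name:
$ kubectl get pv pvc-b4e6876b-011c-48c1-9a9b-d84172078d8f -o jsonpath='{.spec.flexVolume.options.VolumeID}'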
2. Check the changed capacity on the IBM console
Reference Page
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
=====================================================================================================================================================================
< How to increase volume when allowVolumeExpansion is not set >
# Modify the LUN of the PV
1. In the IBM Cloud™ console (https://cloud.ibm.com), click Classic Infrastructure > Storage > Block Storage.
2. Select a LUN (Logical Unit Number) from the list and click Actions > Modify LUN.
3. Enter the new storage size in GB units.
The new size must be larger than the current PV size and can be increased up to 12,000 GB (12 TB).
4. Review the options and new pricing.
You can change the storage class currently applied to the PV.
5. Select the I have read the Master Services Agreement... checkbox and click Place Order.
6. After about 5 minutes, check that the changed capacity has been applied.
The steps above increase the actual storage capacity, but pods that are already mounting the volume still see the old capacity.
Apply the patch below to the PV and restart the connected pods; they will then mount the volume with the increased capacity.
$ kubectl patch pv <PV name> -p '{"metadata": {"annotations":{"ibm.io/autofix-resizefs":"true"}}}'
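To restart the connected pods, delete them so that their controller recreates them, or restart the whole workload; a sketch (the pod name is from this example, the StatefulSet name is a placeholder):
# Recreate a single pod (its StatefulSet controller restarts it)
$ kubectl delete pod elasticsearch-data-test-2 -n zcp-system
# Or restart the entire workload (kubectl 1.15 or later)
$ kubectl rollout restart statefulset <StatefulSet name> -n zcp-system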
<Added> How to update the capacity defined in the PV and PVC so that the increased capacity is displayed
This is only needed so that the increased capacity shows up in kubectl get pv and kubectl get pvc.
If you do not mind seeing the old capacity, you can skip this.
[If PV’s Reclaim Policy is Retain]
1. PVC Backup
Back up the PVC bound to the PV
$ kubectl get pvc <PVC name> -o yaml > <filename to save>.yaml
2. Modify the PVC yaml file <If you changed the Endurance IOPS, change the storage class as well>
Change the three fields below to the increased capacity and save.
metadata > annotations > kubectl.kubernetes.io/last-applied-configuration
spec > resources > requests > storage
status > capacity > storage
$ vi <saved PVC filename>.yaml
...
  kubectl.kubernetes.io/last-applied-configuration: |
    {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=k8s/prod/pvc.yaml --namespace=zcp-system --record=true","volume.beta.kubernetes.io/storage-class":"ibmc-block-retain-bronze"},"labels":{"billingType":"monthly"},"name":"pvc-test","namespace":"zcp-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"30Gi"}}}}
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: ibmc-block-retain-bronze
  volumeName: pvc-#########
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  phase: Bound
...
3. PVC Delete
$ kubectl delete pvc --force --grace-period=0 <PVC name>
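If the PVC stays in Terminating after this command (for example, a pod is still mounting it), clearing its finalizers forces the deletion; use with care:
$ kubectl patch pvc <PVC name> -p '{"metadata":{"finalizers":null}}'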
4. Modify the PV <If you changed the Endurance IOPS, change the storage class as well>
Change the two parts below to the increased capacity and save
labels > CapacityGb
spec > capacity > storage
Delete the claimRef section
$ kubectl edit pv <PV name>
...
  labels:
    CapacityGb: "30"
    Datacenter: seo01
    IOPS: "2"
    StorageType: Endurance
    billingType: hourly
...
spec:
  ...
  capacity:
    storage: 30Gi
  ...
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: test-delete
    namespace: zcp-system
    resourceVersion: "########"
    uid: #####-####
...
5. PV patch
$ kubectl patch pv <PV name> -p '{"metadata": {"annotations":{"ibm.io/autofix-resizefs":"true"}}}'
6. PVC create
Create the PVC from the backed-up PVC yaml
$ kubectl create -f <saved PVC filename>.yaml
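Then confirm that the recreated PVC is Bound to the existing PV with the increased capacity:
$ kubectl get pvc <PVC name> -n zcp-system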
[If PV’s Reclaim Policy is Delete]
<Caution> If you delete the PVC first instead of following the order below, the underlying storage will be deleted.
1. PV, PVC backup
$ kubectl get pv <PV name> -o yaml > <filename to save>.yaml
$ kubectl get pvc <PVC name> -o yaml > <filename to save>.yaml
2. Modify the PV yaml <If you changed the Endurance IOPS, change the storage class as well>
Change the two parts below to the increased capacity and save
labels > CapacityGb
spec > capacity > storage
Delete the claimRef section
$ vi <saved PV filename>.yaml
...
  labels:
    CapacityGb: "30"
    Datacenter: seo01
    IOPS: "2"
    StorageType: Endurance
    billingType: hourly
...
spec:
  ...
  capacity:
    storage: 30Gi
  ...
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: test-delete
    namespace: zcp-system
    resourceVersion: "########"
    uid: #####-####
...
3. Modify the PVC yaml file <If you changed the Endurance IOPS, change the storage class as well>
Change the following 3 parts to the increased capacity and save
metadata > annotations > kubectl.kubernetes.io/last-applied-configuration
spec > resources > requests > storage
status > capacity > storage
$ vi <saved PVC filename>.yaml
...
  kubectl.kubernetes.io/last-applied-configuration: |
    {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=k8s/prod/pvc.yaml --namespace=zcp-system --record=true","volume.beta.kubernetes.io/storage-class":"ibmc-block-bronze"},"labels":{"billingType":"monthly"},"name":"pvc-test","namespace":"zcp-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"30Gi"}}}}
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: ibmc-block-bronze
  volumeName: pvc-#########
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  phase: Bound
...
4. PV Delete
$ kubectl delete pv --force --grace-period=0 <PV name>
# If the status stays Terminating even after running the above command, run the patch below
$ kubectl patch pv <PV name> -p '{"metadata":{"finalizers":null}}'
5. PVC Delete
$ kubectl delete pvc --force --grace-period=0 <PVC name>
6. PV create
Create the PV from the backed-up PV yaml
$ kubectl create -f <saved PV filename>.yaml
7. PVC create
Create the PVC from the backed-up PVC yaml
$ kubectl create -f <saved PVC filename>.yaml
8. PV patch
$ kubectl patch pv <PV name> -p '{"metadata": {"annotations":{"ibm.io/autofix-resizefs":"true"}}}'
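Finally, confirm that both objects report the increased capacity:
$ kubectl get pv <PV name>
$ kubectl get pvc <PVC name> -n zcp-system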
<References>
https://cloud.ibm.com/docs/infrastructure/BlockStorage?topic=BlockStorage-expandingcapacity