
vSphere 7 with Kubernetes Part 6 – Cloud Native Storage with vSphere CSI for Persistent Volumes

 


In part 1 of this blog series, I covered how to create a storage tag and storage policy to be used for Workload Management.

In part 2 of this blog series, I covered how to enable Workload Management on vSphere 7.

In part 3 of this blog series, we set up the Content Library.

In part 4 of this blog series, we enabled the Harbor Image Registry.

In part 5 of this blog series, we deployed a Tanzu Kubernetes Grid (TKG) cluster.

In this sixth part of the blog series, I am going to cover how to patch a vSphere 7 with Kubernetes storage class to be the 'default', so that persistent container volumes can be created.

This configuration will allow you to successfully create Persistent Volumes (PVs) and attach them via Persistent Volume Claims (PVCs) in a vSphere 7 with Kubernetes environment.

A quick brief on what Cloud Native Storage is: it is a solution that provides comprehensive data management for stateful applications.

When you use Cloud Native Storage, you can create containerized stateful applications capable of surviving restarts and outages. Stateful containers leverage storage exposed by vSphere, using primitives such as standard volumes, persistent volumes, and dynamic provisioning.

With Cloud Native Storage, you can create persistent container volumes independent of virtual machine and container life cycle. vSphere storage backs the volumes, and you can set a storage policy directly on the volumes. After you create the volumes, you can review them and their backing virtual disks in the vSphere Client, and monitor their storage policy compliance.

 

Before I made this change in my environment, all stateful pods remained in the 'Pending' state.

Describing such a pod shows the relevant errors:

ExternalProvisioning – persistentvolume-controller: waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator.

ProvisioningFailed – csi.vsphere.vmware.com: failed to provision volume with StorageClass "projectpacpolicy".

 

For reference, 'projectpacpolicy' in the error message above is the storage policy in my environment, which is enabled with tag-based placement rules.

When we create a Namespace in a vSphere 7 with Kubernetes environment, we have to provide a storage policy. Here is how the storage policy looks in the Namespace.

 

Here is how the storage class normally looks when you query the API server for storage class information.
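As a quick sketch, the storage class information can be pulled with kubectl like this (the policy name 'projectpacpolicy' is from my environment; substitute your own):

```shell
# List all storage classes visible in the current context
kubectl get sc

# Dump the full definition of a single storage class
kubectl get sc projectpacpolicy -o yaml
```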

Now, we will see how to change this storage class to be the 'default'.

Note: I couldn't change the default storage class by logging in with kubectl as the administrator@vsphere.local user.

Therefore, I had to log in to the 'Supervisor Master' as root to change the default storage class.

To log in to the Supervisor Master, first log in to vCenter:

  1. SSH into the vCenter and enable the shell
  2. Run /usr/lib/vmware-wcp/decryptK8Pwd.py to get the IP address and root password

Once you run the above command, you will get the IP address of your Supervisor Master and its root login password.

Next, SSH as root into the supervisor master.
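The steps above can be sketched as follows (the vCenter hostname is a placeholder for illustration; use your own, and fill in the IP printed by the script):

```shell
# 1. SSH to the vCenter Server Appliance and switch from appliancesh to the shell
ssh root@vcenter.example.com    # hypothetical vCenter FQDN
shell

# 2. Retrieve the Supervisor Master IP address and root password
/usr/lib/vmware-wcp/decryptK8Pwd.py

# 3. SSH into the Supervisor Master using the IP and password printed above
ssh root@<supervisor-master-ip>
```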

 

Once you are on the Supervisor Master, you can run 'kubectl patch storageclass' as below to enable the 'default' storage class.

Make sure you replace the storage class name with your own; you can get it via 'kubectl get sc'.
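The patch sets the standard Kubernetes default-class annotation on the storage class. A minimal sketch, using my 'projectpacpolicy' class as the example name:

```shell
# Annotate the storage class so Kubernetes treats it as the cluster default.
# Replace 'projectpacpolicy' with the name shown by 'kubectl get sc'.
kubectl patch storageclass projectpacpolicy \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Verify: the class should now show '(default)' next to its name
kubectl get sc
```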

Once the above command executes successfully, you will notice that the storage class is patched to 'default'.

Next, I describe the storage class to get more detailed information.

Now, I create a deployment with a stateful pod for a PostgreSQL DB container.
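A hypothetical manifest along these lines would create the claim and a PostgreSQL deployment that mounts it. All names, the storage size, and the password value here are illustrative assumptions, not the exact manifest from my environment:

```yaml
# PVC for the PostgreSQL data directory.
# With the default storage class patched in, storageClassName can be
# omitted and the claim is bound using the default class automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          env:
            - name: POSTGRES_PASSWORD
              value: example-password    # example only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc
```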

Notice that a PVC was created for the stateful PostgreSQL pod.

Have a look at events on the PostgreSQL pod with PVC.
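The claim and its events can be checked with commands like these (substitute your own PVC name):

```shell
# Check the claim and its binding status
kubectl get pvc

# Review provisioning and attach events for the claim
kubectl describe pvc <pvc-name>
```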

The PVC can be seen in vCenter as well, under Cloud Native Storage.

 

Hope you enjoyed this post, I’d be very grateful if you’d help to share it on Social Media. Thank you!