vSphere Cloud Provider
October 30, 2017
Table of Contents
1. Get Started
1.1 Introduction
2. vSphere Cloud Provider
2.1 Overview
2.2 Kubernetes Storage Support
2.3 Storage Policy Based Management for dynamic provisioning of volumes
2.4 High Availability of Kubernetes Cluster
3. Deployment
3.1 Prerequisites
3.2 Kubernetes Anywhere
3.3 Configurations on Existing Kubernetes Cluster
3.4 Best Practices
4. Applications & Examples
4.1 Running Stateful application - Guestbook App
4.2 Deploying Sharded MongoDB Cluster
4.3 Deploying S3 Stateful Containers - Minio
5. Miscellaneous
5.1 FAQs
5.2 Known Issues
1. Get Started
Get Started
1.1 Introduction
Introduction
Containers have changed the way applications are packaged and deployed. Not only are containers
efficient from an infrastructure utilization point of view, they also provide strong isolation between
processes on the same host. They are lightweight and, once packaged, can run anywhere. Docker is the most
commonly used container runtime technology, and this user guide outlines how vSphere is compatible
with the Docker ecosystem.
Containers are ephemeral by nature, so data that needs to be persisted has to survive
the restart/re-scheduling of a container.
When containers are re-scheduled, they can die on one host and get scheduled on a
different host. In such a case the storage should also be shifted and made available on the new host
for the container to start gracefully.
The application should not have to worry about the volume/data; the underlying infrastructure
should handle the complexity of unmounting and mounting.
Certain applications have a strong sense of identity (for example, Kafka, Elasticsearch) and the
disk used by a container with a certain identity is tied to it. If a container with a
certain ID gets re-scheduled for some reason, it is important that only the disk associated with that ID is
re-attached on the new host.
API Resources:
Volumes
Persistent Volumes
Persistent Volume Claims
Storage Class
StatefulSets
2.1 Overview
Overview
Containers are stateless and ephemeral, but applications are stateful and need persistent storage.
vSphere adds this persistent storage support to Kubernetes through an interface called the Cloud Provider.
A cloud provider is an interface which helps extend Kubernetes with a cluster of instances managed
by virtualization technologies or public/private cloud platforms, along with the networking required for these
instances.
The Kubernetes cloud provider is an interface to integrate nodes (i.e. hosts), load balancers and
networking routes. This interface allows Kubernetes to use various cloud and virtualization
solutions as the base infrastructure to run on.
A datastore is an abstraction which hides storage details and provides a uniform interface for storing
persistent data. Datastores enable simplified storage management with features like grouping them
into folders. Depending on the backend storage, a datastore can be vSAN, VMFS or NFS.
Kubernetes volumes are defined in the Pod specification. They reference VMDK files, and these VMDK files
are mounted as volumes when the container is running. When the Pod is deleted, the Kubernetes
volume is unmounted and the data in the VMDK files persists.
vSphere is a cloud provider which implements the Instances interface and supports the following Kubernetes
storage primitives:
Volumes
Persistent Volumes (PV)
Volumes
A Pod can specify a vsphereVolume as a Kubernetes Volume, and a vSphere VMDK is then mounted as a
Volume into your Pod. The contents of a volume are preserved when it is unmounted. Both
VMFS and vSAN datastores are supported.
All the example yamls can be found here unless otherwise specified. Please download these examples.
Here is an example of how to create a VMDK le and how a Pod can use it.
Create VMDK
First SSH into ESXi and then use the following command to create a VMDK on datastore1:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
#vsphere-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[datastore1] volumes/myDisk"
      fsType: ext4
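To try this out, the Pod can be created with kubectl; the following is a minimal sketch, assuming the manifest is saved locally under the file name shown in the comment above and that the cluster was deployed with the vSphere Cloud Provider enabled.

# Create the Pod and verify that the VMDK is attached and mounted
kubectl create -f vsphere-volume-pod.yaml
kubectl get pod test-vmdk
kubectl describe pod test-vmdk    # the Volumes section should show the vsphereVolume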
The Persistent Volumes API resource addresses the limitation that volumes defined in a Pod specification
are tied to that Pod: PVs have a lifecycle independent of Pods and are not created when a Pod runs.
PVs are units of storage provisioned in advance; they are Kubernetes objects backed by some storage,
vSphere in this case. PVs are created and deleted using kubectl commands.
To use these PVs, the user needs to create PersistentVolumeClaims (PVCs), which are simply requests
for PVs. A claim must specify the access mode and storage capacity; once a claim is created, a PV is
automatically bound to it. Kubernetes binds a PV to a PVC based on access mode and
storage capacity, but a claim can also specify a volume name, selectors and volume class for a better
match. This PV-PVC design not only abstracts storage provisioning and consumption but also
ensures security through access control.
Note:
All the example yamls can be found here unless otherwise specified. Please download these examples.
Here is an example of how to use PV and PVC to add persistent storage to your Pods.
Create VMDK
First SSH into ESXi and then use the following command to create a VMDK:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
#vsphere-volume-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4
In the above example, datastore1 is located in the root folder. If the datastore is a member of a Datastore
Cluster or located in a sub-folder, the folder path needs to be provided in the volumePath as below:
vsphereVolume:
  volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"
The Source section of the kubectl describe pv output for this volume looks like the following:
Source:
    Type:       vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath: [datastore1] volumes/myDisk
    FSType:     ext4
No events.
#vsphere-volume-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
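As a quick check, the PV and PVC can be created and their binding verified; this is a sketch assuming the manifests are saved locally under the file names shown in the comments above.

kubectl create -f vsphere-volume-pv.yaml
kubectl create -f vsphere-volume-pvc.yaml
kubectl get pv pv0001      # STATUS should change from Available to Bound
kubectl get pvc pvc0001    # should report Bound to pv0001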
#vsphere-volume-pvcpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc0001
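The Pod can then be created and checked in the same way; a sketch assuming the file name from the comment above.

kubectl create -f vsphere-volume-pvcpod.yaml
kubectl get pod pvpod
kubectl describe pod pvpod   # Volumes section should reference claim pvc0001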
StatefulSets
StatefulSets are valuable for applications which require stable identifiers or stable storage.
vSphere Cloud Provider supports StatefulSets, and vSphere volumes can be consumed by
StatefulSets.
Note:
All the example yamls can be found here unless otherwise specified. Please download these examples.
Create a storage class that will be used by the volumeClaimTemplates of a StatefulSet.
#simple-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
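The StorageClass can be created and listed as follows; a sketch assuming the file name above.

kubectl create -f simple-storageclass.yaml
kubectl get storageclass thin-disk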
Create a StatefulSet that consumes storage from the StorageClass created above.
#simple-statefulset.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 14
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: thin-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
This will create a Persistent Volume Claim for each replica and provision a volume for each claim unless an
existing volume can be bound to the claim.
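A quick way to observe this is to create the StatefulSet and list the claims it generates; a sketch assuming the file name above.

kubectl create -f simple-statefulset.yaml
kubectl get statefulset web
kubectl get pvc    # one claim per replica, named www-web-0, www-web-1, ...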
Overview
One of the most important features of vSphere for storage management is policy based management.
Storage Policy Based Management (SPBM) is a storage policy framework that provides a single unified
control plane across a broad range of data services and storage solutions. SPBM enables vSphere
administrators to overcome upfront storage provisioning challenges, such as capacity planning,
differentiated service levels and managing capacity headroom.
As discussed previously, a StorageClass specifies a provisioner and parameters. Using these
parameters, you can define the policy for the PV that will be dynamically provisioned.
You can specify an existing vCenter Storage Policy Based Management (SPBM) policy to configure a
persistent volume with that SPBM policy; the storagePolicyName parameter is used for this.
Note:
SPBM policy based provisioning of persistent volumes will be available in the 1.7.x release.
All the example yamls can be found here unless otherwise specified. Please download these
examples.
#vsphere-volume-spbm-policy.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagePolicyName: gold
The admin specifies the SPBM policy, gold, as part of the storage class definition for dynamic volume
provisioning. When a PVC is created, the persistent volume will be provisioned on the compatible
datastore with the maximum free space that satisfies the gold storage policy requirements.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagePolicyName: gold
  datastore: VSANDatastore
The admin can also specify a custom datastore where the volume should be provisioned, along
with the SPBM policy name. When a PVC is created, the vSphere Cloud Provider checks whether the user
specified datastore satisfies the gold storage policy requirements. If so, it provisions the
persistent volume on the user specified datastore. If not, it reports an error that the user specified
datastore is not compatible with the gold storage policy requirements.
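To exercise this flow, create the storage class and then a claim that requests it; the claim file name below is a placeholder (a concrete claim manifest appears later in this section).

kubectl create -f vsphere-volume-spbm-policy.yaml
kubectl create -f <pvc-requesting-class-fast>.yaml   # file name assumed
kubectl describe pvc <claim-name>   # events show either successful provisioning or an incompatibility error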
The official vSAN policy documentation describes in detail each of the individual storage
capabilities that are supported by vSAN. The user can specify these storage capabilities as part of the
storage class definition based on application needs.
cacheReservation: Flash capacity reserved as read cache for the container object. Specified as a
percentage of the logical size of the virtual machine disk (VMDK) object. Reserved flash capacity
cannot be used by other objects. Unreserved flash is shared fairly among all objects. Use this
option only to address specific performance issues.
diskStripes: The minimum number of capacity devices across which each replica of an object is
striped. A value higher than 1 might result in better performance, but also results in higher use
of system resources. Default value is 1. Maximum value is 12.
forceProvisioning: If the option is set to Yes, the object is provisioned even if the Number of
failures to tolerate, Number of disk stripes per object, and Flash read cache reservation policies
specified in the storage policy cannot be satisfied by the datastore.
hostFailuresToTolerate: Defines the number of host and device failures that a virtual machine
object can tolerate. For n failures tolerated, each piece of data written is stored in n+1 places,
including parity copies if using RAID 5 or RAID 6.
iopsLimit: Defines the IOPS limit for an object, such as a VMDK. IOPS is calculated as the
number of I/O operations, using a weighted size. If the system uses the default base size of 32
KB, a 64-KB I/O represents two I/O operations.
objectSpaceReservation: Percentage of the logical size of the virtual machine disk (VMDK)
object that must be reserved, or thick provisioned, when deploying virtual machines. Default
value is 0%. Maximum value is 100%.
Note:
#vsphere-volume-sc-vsancapabilities.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"
Here a persistent volume will be created with the vSAN capabilities hostFailuresToTolerate set to 2
and cachereservation set to 20%, i.e. 20% read cache reserved for the storage object. The persistent volume
will also be a zeroedthick disk.
The official vSAN policy documentation describes in detail each of the individual storage
capabilities that are supported by vSAN and can be configured on the virtual disk. You can also specify
the datastore in the StorageClass as shown in the following example. The volume will be created on the
datastore specified in the storage class. This field is optional. If it is not specified, as in the first example,
the volume will be created on the datastore specified in the vSphere config file used to initialize the
vSphere Cloud Provider.
#vsphere-volume-sc-vsancapabilities.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"
Note: If you do not apply a storage policy during dynamic provisioning on a vSAN datastore, it will use
a default Virtual SAN policy.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc-vsan
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Note: The VMDK is created inside the kubevols folder in the datastore mentioned in the vSphere
cloud provider configuration. The cloud provider config is created during setup of the Kubernetes cluster
on vSphere.
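To confirm where the disk was placed, the kubevols folder on the datastore can be listed from an ESXi shell; a sketch, assuming datastore1 is the datastore named in the cloud provider configuration.

# On an ESXi host with access to the datastore
ls /vmfs/volumes/datastore1/kubevols/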
Create a Pod which uses the Persistent Volume Claim with the storage class.
#vsphere-volume-pvcscpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc-vsan
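The Pod can be created and its volume attachment checked as before; a sketch assuming the file name above.

kubectl create -f vsphere-volume-pvcscpod.yaml
kubectl get pod pvpod
kubectl describe pod pvpod    # Volumes section should reference claim pvcsc-vsan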
Kubernetes ensures that all the Pods are restarted if a node goes down. The persistent
storage API objects ensure that the same PVs are mounted back to the new Pods on restart or if they are
recreated.
But what happens if the node is a VM and its physical host fails? vSphere HA leverages
multiple ESXi hosts configured as a cluster to provide rapid recovery from outages and cost-effective
high availability for applications running in virtual machines. vSphere HA provides a base level of
protection for your virtual machines by restarting them in the event of a host failure.
Applications running on Kubernetes on vSphere can take advantage of vSphere Availability, ensuring
resilient and highly available applications.
Node VM Failure: A node VM failure will cause Kubernetes to recreate the pod to run the containers.
The vSphere Cloud Provider will mount the disk to a live node and unmount the disk from the dead node
automatically. The validation description is as follows:
Shutdown one Kubernetes node VM. This will cause Kubernetes to remove the node VM from
the Kubernetes cluster.
The Kubernetes cluster will recreate the pod on an idle node in the original cluster after the
simulated node failure. Kubernetes vSphere Cloud Provider will:
Mount the disks from the shutdown node VM to the idle node.
Unmount the disks from the powered off node VM.
Fix the issue of the node VM (if any) and power it on. Kubernetes will add the node back to the
original cluster and it will be available for new pod creation.
Physical Host Failure: Powering off one of the ESXi hosts will cause vSphere HA to restart
the node VM on one of the running ESXi hosts. The node in the Kubernetes cluster will temporarily change
to UNKNOWN. After less than two minutes, the node will be available in the cluster again. No pod recreation
is required.
[Table: Failure, Components, Result and Behavior, Recovery Time]
Please note that the recovery time depends on the hardware. For additional details, please refer to this
blog.
3. Deployment
Deployment
3.1 Prerequisites
Following is the list of prerequisites for running Kubernetes with vSphere Cloud Provider:
Kubernetes Anywhere
There are several deployment mechanisms available to deploy Kubernetes. Kubernetes-Anywhere is
one of them. Please refer to this link.
The user can pull this image and run a container from it to get into the deployment wizard. After the user
fills in the questionnaire, Kubernetes-Anywhere creates a Terraform deployment script to install the
Kubernetes cluster on vSphere. After the cluster is deployed, the cluster config file is available at
/opt/kubernetes-anywhere/phase1/vsphere/.tmp/kubeconfig.json.
Make sure to copy this file before stopping the deployment container. This file is used to access the
Kubernetes cluster using kubectl.
Please refer to the installation steps in the getting started guide for deploying Kubernetes using
Kubernetes-Anywhere.
For each of the virtual machine nodes that will be participating in the cluster, follow the steps below
using the govc tool.
govc ls /datacenter/vm/<vm-folder-name>
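The property that the note below refers to can be set on each node VM with govc; a sketch, assuming the GOVC_URL and credential environment variables are already exported and <vm-path> is one of the paths returned by the govc ls command above.

govc vm.change -vm '<vm-path>' -e="disk.enableUUID=1"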
Note: If the Kubernetes node VMs are created from a template VM, then disk.EnableUUID=1 can be set on
the template VM. VMs cloned from this template will automatically inherit this property.
vSphere Cloud Provider requires the following minimal set of privileges to interact with vCenter.
Please refer to the vSphere Documentation Center for the steps to create a Custom Role, User and
Role Assignment.
Roles, Privileges, Entities, and Propagate to Children:

manage-k8s-node-vms
Privileges: Resource.AssignVMToPool, System.Anonymous, System.Read, System.View,
VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk,
VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk,
VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete
Entities: Cluster, Hosts, VM Folder
Propagate to Children: Yes

manage-k8s-volumes
Privileges: Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View
Entities: Datastore
Propagate to Children: No

k8s-system-read-and-spbm-profile-view
Privileges: StorageProfile.View, System.Anonymous, System.Read, System.View
Entities: vCenter
Propagate to Children: No

ReadOnly
Privileges: System.Anonymous, System.Read, System.View
Entities: Datacenter, Datastore Cluster, Datastore Storage Folder
Propagate to Children: No
Step-5 Create the vSphere cloud config file (vsphere.conf). The cloud config template can be found here.
This config file needs to be placed in a shared directory which is accessible from the kubelet
container, controller-manager pod, and API server pod.
[Global]
user = "vCenter username for cloud provider"
password = "password"
server = "IP/FQDN for vCenter"
port = "443" #Optional
insecure-flag = "1" #set to 1 if the vCenter uses a self-signed cert
datacenter = "Datacenter name"
datastore = "Datastore name" #Datastore to use for provisioning volumes using storage classes/dynamic provisioning
working-dir = "vCenter VM folder path in which node VMs are located"
vm-name = "VM name of the Master Node" #Optional
vm-uuid = "UUID of the Node VM" # Optional
[Disk]
scsicontrollertype = pvscsi
Note: The vm-name parameter is introduced in the 1.6.4 release. Both vm-uuid and vm-name are optional
parameters. If vm-name is specified then vm-uuid is not used. If neither is specified then kubelet
will get vm-uuid from /sys/class/dmi/id/product_serial and query vCenter to find the node
VM's name.
vsphere.conf for Worker Nodes: (Only applicable to release 1.6.4 and above. For older releases this
file should have all the parameters specified in the master node's vsphere.conf file.)
[Global]
vm-name = "VM name of the Worker Node"
vm-name is a recently added configuration parameter. It is optional. When this
parameter is present, the vsphere.conf file on the worker node does not need vCenter
credentials.
Note: vm-name was added in release 1.6.4. Prior releases do not support this parameter.
working-dir can be set to empty (working-dir = ""), if the node VMs are located in the root VM
folder.
vm-uuid is the VM instance UUID of the virtual machine. vm-uuid can be set to empty (vm-uuid =
""). If set to empty, it will be retrieved from the /sys/class/dmi/id/product_serial file on the virtual
machine (requires root access).
vm-uuid can be retrieved from the node virtual machines using the command shown below. The value
will be different on each node VM.
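As a rough sketch (the exact reformatting into canonical UUID form is not shown), the instance UUID source can be read on each node VM; root access is required, as noted above.

sudo cat /sys/class/dmi/id/product_serial
# The output typically has the form "VMware-xx xx xx ... xx"; reformat it into the
# canonical UUID form (8-4-4-4-12 hex digits) before using it as vm-uuid.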
datastore is the default datastore used for provisioning volumes using storage classes. If the
datastore is located in a storage folder or is a member of a datastore cluster, make sure to
specify the full datastore path. Make sure the vSphere Cloud Provider user has the Read privilege set on
the datastore cluster or storage folder to be able to find the datastore.
For datastore located in the datastore cluster, specify datastore as mentioned below
datastore = "DatastoreCluster/datastore1"
For datastore located in the storage folder, specify datastore as mentioned below
datastore = "DatastoreStorageFolder/datastore1"
Add the following flags to the kubelet running on every node and to the API server and controller-manager manifests:

--cloud-provider=vsphere
--cloud-config=<Path of the vsphere.conf file>
Manifest files for the API server and controller-manager are generally located at /etc/kubernetes.
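As an illustration of where these flags go, a controller-manager manifest under /etc/kubernetes would carry them in its command section; this is a sketch, not a complete manifest, and the config file path is an assumption.

# excerpt from a kube-controller-manager manifest (illustrative only)
command:
- kube-controller-manager
- --cloud-provider=vsphere
- --cloud-config=/etc/kubernetes/vsphere.conf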
Note: After enabling the vSphere Cloud Provider, Node names will be set to the VM names from the
vCenter Inventory.
Best Practices
This section describes the vSphere-specific configurations:
vSphere HA
vSphere Cloud Provider supports vSphere HA. To ensure high availability of node VMs, it is
recommended to enable HA on the cluster. Details
It is recommended to place Kubernetes node VMs in the resource pool to ensure guaranteed
performance. Details
Guestbook is a PHP application with Redis as the backend. In this section, we demonstrate how to use
Kubernetes deployed on vSphere to run the Guestbook application with persistent storage. At the end of
this demo, you will have a sample guestbook app running inside Kubernetes where the data resides
inside VMDKs managed by vSphere.
The data in the VMDK is independent of the lifecycle of the pods and persists even if pods are deleted.
Storage Setup
The backing VMDK files are needed when dynamic provisioning is not used. The VMDK files need to
exist before creating the service in Kubernetes that uses them.
Log in to ESXi (if more than one ESXi host is used, make sure to use a common/shared datastore) and create
the volumes:
# VMFS
cd /vmfs/volumes/datastore1/
# VSAN
cd /vmfs/volumes/vsanDatastore/
# VMFS
mkdir kubevols # Not needed but good hygiene
# VSAN
/usr/lib/vmware/osfs/bin/osfs-mkdir kubevols # Needed
cd kubevols
vmkfstools -c 2G redis-slave.vmdk
vmkfstools -c 2G redis-master.vmdk
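With the VMDKs in place, the guestbook manifests from the downloaded examples can be applied from a machine with kubectl access; the file name below is an assumption about how the example is packaged.

kubectl create -f guestbook-all-in-one.yaml    # file name assumed; use the guestbook manifests from the examples bundle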
$ kubectl get pv
This should trigger creation of storage volumes on the datastore configured for the vSphere Cloud
Provider.
[root@promc-2n-dhcp41:/vmfs/volumes/57f5768f-856aa050-fdec-e0db55248054/kubevols] ls *-pvc-*
kubernetes-dynamic-pvc-91c9cd60-9a7c-11e6-a431-00505690d64d-flat.vmdk
kubernetes-dynamic-pvc-a6abf43a-9a7c-11e6-a431-00505690d64d-flat.vmdk
kubernetes-dynamic-pvc-91c9cd60-9a7c-11e6-a431-00505690d64d.vmdk
kubernetes-dynamic-pvc-a6abf43a-9a7c-11e6-a431-00505690d64d.vmdk
All the dynamically provisioned volumes will be created in the kubevols directory inside the datastore.
Verify Guestbook Application
The following steps verify that the Pods, storage and the application are running correctly.
Get the IP address of one of the nodes. This can also be obtained from vCenter or from the console
output of kube-up.
Combine the IP and port and head to the URL (for example: http://10.20.105.59:31531). Try out the app and
leave a few messages. To check the attach status of the VMDKs on ESXi, head to vCenter (Settings page or
Recent Tasks window).
This section describes the steps to create persistent storage for containers to be consumed by
MongoDB services on vSAN. After these steps are completed, Cloud Provider will create the virtual
disks (volumes in Kubernetes) and mount them to the Kubernetes nodes automatically. The virtual
disks are created with the vSAN default policy.
Define StorageClass
A StorageClass provides a mechanism for administrators to describe the classes of storage they
offer. Different classes map to quality-of-service levels, backup policies, or arbitrary policies
determined by the cluster administrators. The following YAML defines a platinum-level StorageClass.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: platinum
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
Note: Although all volumes are created on the same vSAN datastore, you can adjust the policy
according to actual storage capability requirements by modifying the vSAN policy in vCenter Server.
The user can also specify vSAN storage capabilities in the StorageClass definition based on
application needs. Please refer to the vSAN storage capability section in the vSphere Cloud Provider document.
A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request a specific size and
access modes (for example, a volume can be mounted once read/write or many times read-only). The following
YAML claims a 128GB volume with read and write capability.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc128gb
  annotations:
    volume.beta.kubernetes.io/storage-class: "platinum"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Gi
The following YAML specifies a MongoDB 3.4 image that uses the volume from Step 2 and mounts it at path
/data/db.
spec:
  containers:
  - image: mongo:3.4
    name: mongo-ps
    ports:
    - name: mongo-ps
      containerPort: 27017
      hostPort: 27017
    volumeMounts:
    - name: pvc-128gb
      mountPath: /data/db
  volumes:
  - name: pvc-128gb
    persistentVolumeClaim:
      claimName: pvc128gb
Storage was created and provisioned from vSAN for containers for the MongoDB service by using
dynamic provisioning in YAML files. Storage volumes were claimed as persistent ones to preserve the
data on the volumes. All mongo servers are combined into one Kubernetes pod per node.
In Kubernetes, as each pod gets one IP address assigned, each service within a pod must have a
distinct port. As the mongos are the services by which you access your shard from other applications,
the standard MongoDB port 27017 is assigned to them.
Please refer to this Reference Architecture for a detailed understanding of how persistent storage for
containers is consumed by MongoDB services on vSAN.
Download the YAML files for deploying MongoDB on Kubernetes with the vSphere Cloud Provider from
here.
To understand the configuration mentioned in these YAMLs, please refer to this link.
Execute the following commands to deploy a sharded MongoDB cluster on Kubernetes with the vSphere Cloud
Provider.
Create StorageClass
kubectl create -f https://raw.githubusercontent.com/vmware/kubernetes/kube-examples/kube-examples/mongodb-shards/node03-deployment.yaml
Create Services
This case study describes the process to deploy a distributed Minio server on Kubernetes. This example
uses the official Minio Docker image from Docker Hub.
#minio-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: miniosc
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
A headless Service controls the domain within which StatefulSets are created. The domain managed by
this Service takes the form $(service name).$(namespace).svc.cluster.local (where cluster.local is the
cluster domain), and the pods in this domain take the form
$(pod-name-{i}).$(service name).$(namespace).svc.cluster.local. This is required to get a DNS-resolvable URL
for each of the pods created within the StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
  - port: 9000
    name: minio
  selector:
    app: minio
A StatefulSet provides a deterministic name and a unique identity to each pod, making it easy to
deploy stateful distributed applications. To launch distributed Minio you need to pass drive locations
as parameters to the minio server command. Then, you'll need to run the same command on all the
participating pods. StatefulSets offer a perfect way to handle this requirement. This is the StatefulSet
description.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        image: minio/minio:RELEASE.2017-05-05T01-14-51Z
        args:
        - server
        - http://minio-0.minio.default.svc.cluster.local/data
        - http://minio-1.minio.default.svc.cluster.local/data
        - http://minio-2.minio.default.svc.cluster.local/data
        - http://minio-3.minio.default.svc.cluster.local/data
        ports:
        - containerPort: 9000
          hostPort: 9000
        # These volume mounts are persistent. Each pod in the StatefulSet
        # gets a volume mounted based on this field.
        volumeMounts:
        - name: data
          mountPath: /data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: miniosc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
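After creating the Service and StatefulSet above, the pods and their claims can be verified; a sketch assuming the manifests were saved locally (file names assumed).

kubectl create -f minio-headless-service.yaml   # file name assumed
kubectl create -f minio-statefulset.yaml        # file name assumed
kubectl get pods -l app=minio                   # minio-0 through minio-3
kubectl get pvc                                 # one 5Gi claim per pod: data-minio-0 ... data-minio-3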
Now that you have a Minio StatefulSet running, you may either want to access it internally (within the
cluster) or expose it as a Service onto an external (outside of your cluster, maybe the public internet) IP
address, depending on your use case. You can achieve this using Services.
There are three major Service types: the default type is ClusterIP, which exposes a service to connections
from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio deployment by using NodePort. This is the service description.
#minio_NodePort.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  ports:
  - port: 9000
    nodePort: 30000
  selector:
    app: minio
Access Minio
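A minimal way to reach the service, assuming the NodePort manifest above and the IP address of any node:

kubectl create -f minio_NodePort.yaml
# Then browse to http://<node-ip>:30000 and log in with the access/secret keys
# from the StatefulSet definition (minio / minio123).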
5. Miscellaneous
Miscellaneous
5.1 FAQs
Where can I find the required Roles and Privileges for the vCenter User
for vSphere Cloud Provider?
Please refer to this section.
Where can I find the list of vSAN, VMFS and NFS features supported
by vSphere Cloud Provider?
Please refer to this section. Please report if you find any features missing.
Can we have a setting to ensure all dynamic PVs will have the default
policy Retain (instead of delete)? Or can we request the desired
policy from the moment we request the PV via the PVC?
If the volume was dynamically provisioned, then the default reclaim policy is set to delete. This means
that, by default, when the PVC is deleted, the underlying PV and storage asset will also be deleted. If
you want to retain the data stored on the volume, then you must change the reclaim policy from
delete to retain after the PV is provisioned. You cannot directly set the retain policy from the PVC request
for dynamically provisioned volumes. Details
Release 1.7
An admin updating the SPBM policy name in vCenter could cause confusion/inconsistencies. Link
Two or more PVs could show different policy names but with the same policy ID. Link
Release 1.6.5
Node status becomes NodeReady from NodeNotSchedulable after Failover. Link
Release 1.5.7
Node status becomes NodeReady from NodeNotSchedulable after Failover. Link
Kubernetes-Anywhere
Destroying a Kubernetes cluster using Kubernetes-Anywhere is flaky. Link