Persistent storage is currently the more common way to back a database. If the database behind a dynamic web page goes down, it can be restarted this way and the data will still exist.
A Persistent Volume (PV) is a piece of cluster storage that has been provisioned by an administrator or dynamically provisioned through a Storage Class. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any particular Pod that uses the PV. This API object captures the details of the storage implementation. Before continuing, download the files for this section of the tutorial.
NFS-Server
Now you need to install the NFS server on VM1, VM2, and VM3; follow the command below,
sudo apt-get install nfs-kernel-server nfs-common
On VM1, set up the shared directory, for example /var/nfsshare/ ,
sudo mkdir -p /var/nfsshare
sudo chmod -R 777 /var/nfsshare/
Open /etc/exports and add the line below,
/var/nfsshare 172.16.98.0/24(rw,sync,no_root_squash,no_all_squash)
On the local end, VM1 will have a directory /var/nfsshare/ , and VM2 and VM3 can mount it through this path.
Now restart the services; follow the commands below,
systemctl restart rpcbind
systemctl restart nfs-server
systemctl status rpcbind
systemctl status nfs-server
Next, create a mount point for the clients (VM2, VM3),
sudo mkdir -p /mnt/nfs/var/nfsshare
Here are the commands to mount and unmount the share (172.16.98.154 is the address of the NFS server VM1),
mount -t nfs 172.16.98.154:/var/nfsshare /mnt/nfs/var/nfsshare
umount /mnt/nfs/var/nfsshare
After mounting, cd into the /mnt/nfs/var/nfsshare path and use touch to test,
touch a b c d
You can see the result on the clients VM2 and VM3, because the server VM1 is kept in sync; run ls on both the client and the server to see the files created by the touch command.
Kubernetes Network Storage
To provide network (persistent) storage in Kubernetes, edit 1.yaml to create a 5G PersistentVolume named my-pv; 2.yaml then asks for 1G of space from my-pv. If the request succeeds, a PersistentVolumeClaim named my-pvc is created, and 3.yaml mounts my-pvc into a Pod named task-pv-pod, binding it to the httpd default document path.
gedit 1.yaml &
gedit 2.yaml &
gedit 3.yaml &
gedit pv.yaml &
gedit pvc.yaml &
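Although the exact contents of 1.yaml, 2.yaml, and 3.yaml are not reproduced here, they might look like the following sketch. The names my-pv, my-pvc, and task-pv-pod, the 5G/1G sizes, and the NFS server address come from this tutorial; the access mode, volume name, and mount path (the httpd image's default document root) are assumptions:

```yaml
# 1.yaml -- a 5Gi PersistentVolume backed by the NFS share (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany        # assumption: shared read-write access
  nfs:
    server: 172.16.98.154  # the NFS server from this tutorial
    path: /var/nfsshare
---
# 2.yaml -- a PersistentVolumeClaim requesting 1Gi of that space
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# 3.yaml -- a Pod mounting my-pvc at the httpd default document path
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: my-volume                # assumption: volume name
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: task-pv-container
      image: httpd
      volumeMounts:
        - name: my-volume
          mountPath: /usr/local/apache2/htdocs  # httpd image docroot
```

With manifests of this shape, applying 2.yaml binds my-pvc to my-pv, and any file written to /var/nfsshare on the server becomes visible through the web server in task-pv-pod.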
After editing, you can apply the YAML files,
kubectl apply -f 1.yaml
kubectl get pv
kubectl apply -f 2.yaml
kubectl get pvc
kubectl apply -f 3.yaml
kubectl get pods
In the commands above, running 1.yaml establishes the PV; running 2.yaml shows that my-pv and my-pvc have been bound together; and running 3.yaml creates the Pod, whose information you can then check. Now you can see the task-pv-pod details by running,
kubectl get pods -o wide
And you can try to carry out a test using the command curl IP/filename.html , where IP is the Pod address shown above.
ConfigMaps
ConfigMaps allow you to decouple configuration artifacts from image content to keep
containerized applications portable. This section provides a series of usage examples demonstrating how to create ConfigMaps.
The first step is to create a YAML file and start the ConfigMap service: create a file configmap.yaml and apply it with the following commands,
kubectl apply -f configmap.yaml
kubectl get configmap
kubectl get cm
kubectl describe cm cm-demo
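A minimal sketch of what configmap.yaml could contain — the name cm-demo comes from the describe command above, while the data entries are purely illustrative assumptions:

```yaml
# configmap.yaml -- minimal sketch; only the name cm-demo is from the
# tutorial, the data entry below is an illustrative placeholder
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo
data:
  example.key: "example-value"
```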
After that, you can generate a ConfigMap from files. Create a folder testcm containing two files, mysql.conf and redis.conf, with the contents as follows.
After creating the files, follow the command below,
kubectl create configmap cm-demo1 --from-file=testcm
The third step is to generate a ConfigMap directly on the command line,
kubectl create configmap cm-demo3 --from-literal=db.host=localhost --from-literal=db.port=3306
To use a ConfigMap, you can create testpod.yaml and start the Pod; it initializes the Pod's environment variables from the ConfigMap. Run the following commands,
kubectl apply -f testpod.yaml
kubectl logs testcm1-pod
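A possible shape for testpod.yaml, assuming it consumes the cm-demo3 ConfigMap with the db.host and db.port literals created above; the pod name testcm1-pod matches the logs command, while the image, container name, and environment variable names are assumptions:

```yaml
# testpod.yaml -- sketch of a Pod whose environment variables are
# initialized from the cm-demo3 ConfigMap; busybox image is an assumption
apiVersion: v1
kind: Pod
metadata:
  name: testcm1-pod
spec:
  restartPolicy: Never
  containers:
    - name: testcm1
      image: busybox
      command: ["/bin/sh", "-c", "env"]  # print the environment, then exit
      env:
        - name: DB_HOST                  # assumed variable name
          valueFrom:
            configMapKeyRef:
              name: cm-demo3
              key: db.host
        - name: DB_PORT                  # assumed variable name
          valueFrom:
            configMapKeyRef:
              name: cm-demo3
              key: db.port
```

With a Pod of this shape, kubectl logs testcm1-pod would show DB_HOST and DB_PORT among the printed environment variables.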