NFS (Network File System) Storage for Persistent Volume Claims, Part 2

OK, this is the second part of this post, where we talk about mounting the external volume and creating a simple PV and PVC in your Kubernetes cluster.

If you followed the first part of this post, you may have noticed that every time a node or the whole cluster is restarted, the mounted volume disappears and you have to mount it again:

mount 10.1.6.6:/var/nfs/general  /mnt/nfs/home

where that IP is your NFS server's private IP address, in case you are working with VMs.
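As a quick sanity check (assuming the same server IP as above), you can see whether the share is currently mounted and what the NFS server exports:

# list the NFS mounts currently active on this node
mount | grep nfs

# ask the NFS server which paths it exports (showmount comes with nfs-utils)
showmount -e 10.1.6.6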

To make the volume come back no matter whether the VM is restarted or deallocated, you have to modify the fstab file.

cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Sep 20 13:48:33 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=2c32741d-19d6-4f09-bdbc-a28db4e1d338 /     xfs  defaults     0 0
UUID=efd62c66-c0c1-45b1-848f-ec800f6243e7 /boot xfs  defaults     0 0

fstab is your operating system's file system table, so we have to add a line for the NFS volume.

nano /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Sep 20 13:48:33 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=2c32741d-19d6-4f09-bdbc-a28db4e1d338 /  xfs    defaults        0 0
UUID=efd62c66-c0c1-45b1-848f-ec800f6243e7 /boot xfs defaults        0 0
10.1.6.6:/var/nfs/general/01_mysql    /var/nfs/general/01_mysql  nfs    rw    0    0

We have added the line 10.1.6.6:/var/nfs/general/01_mysql /var/nfs/general/01_mysql nfs rw 0 0, which specifies the server IP with the remote folder being shared, the local mount point, the filesystem type (nfs), the mount options, and the dump and fsck fields.

These correspond to the fstab columns:

file system    mount point    type    options    dump    pass
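Before rebooting, it is worth confirming that the new entry mounts cleanly; a minimal check (the mkdir is only needed if the local mount point does not exist yet):

# create the local mount point if it is missing
sudo mkdir -p /var/nfs/general/01_mysql

# mount everything listed in /etc/fstab; an error here means the new entry is wrong
sudo mount -a

# confirm the NFS share is now mounted
df -h | grep nfs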

Now we have to repeat this process on each node, including the master node, as in the sketch below.
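If you have several nodes, a small loop like this can save some typing. It assumes password-less SSH and sudo on hypothetical hosts called node1 and node2, and the same server IP and paths used above:

# hypothetical node hostnames; adjust to your cluster
for host in node1 node2; do
  ssh "$host" "sudo mkdir -p /var/nfs/general/01_mysql && \
    echo '10.1.6.6:/var/nfs/general/01_mysql /var/nfs/general/01_mysql nfs rw 0 0' | sudo tee -a /etc/fstab && \
    sudo mount -a"
done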

Finally, we will be able to deploy a new PV and a PVC.

This will be our pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /var/nfs/general
    server: 10.1.6.6
    readOnly: false

and our pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Basically, we now have a PersistentVolume linked to our NFS server, so we can apply both manifests:

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
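If everything went well, the claim should bind to the volume; a quick check:

# the PVC should end up with STATUS Bound, backed by kube-pv
kubectl get pv,pvc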

Now you can use this volume in your future services or create new volumes.
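As an illustration of how the claim is consumed, here is a minimal sketch of a Pod that mounts kube-pvc; the pod name, image, and mount path are made up for the example:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod            # hypothetical name, only for this example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: nfs-storage
      mountPath: /data          # example path inside the container
  volumes:
  - name: nfs-storage
    persistentVolumeClaim:
      claimName: kube-pvc       # the claim created above
EOF

Anything the pod writes under /data ends up on the NFS share, so it survives pod restarts and rescheduling to other nodes.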

You have to clean up your containers' logs

Sometimes we set our containers running on a server and just leave them working with a simple docker run "something": for example GitLab, a web application, Jenkins, etc.

But sometimes we forget to clean up the logs they generate, and those logs grow and grow until they can become a disk space problem.

Problems this can cause:

  • Your container stops working correctly
  • You cannot even log into the host server
  • You cannot get into the container either

Usually what we do is a docker system prune, but this does not solve the log problem.

So we have to go a bit further and find out which logs are the heaviest and which ones we need to clear.

Likewise, a good practice is to set up a cron job that does this task periodically (there is a sketch of one at the end of this section).

First, check the overall disk usage:

df -h

Now that we know we have a space problem, we need to find out which logs are causing it so we can clean them up. We look for them first with:

du -d1 -h /var/lib/docker/containers | sort -h
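If you prefer to go straight to the log files themselves, a rough alternative (the 100 MB threshold is just an example) is:

# list container JSON logs larger than 100 MB
sudo find /var/lib/docker/containers -name "*-json.log" -size +100M -exec ls -lh {} \;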

Now we go into that container's folder and run the following command:

cat /dev/null > your-heaviest-container-json.log

And with this we free up space without needing to pause the service.
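For the cron job mentioned earlier, a minimal sketch could look like this; the schedule and the 500 MB threshold are assumptions, adjust them to your server:

# /etc/cron.d/truncate-docker-logs (hypothetical file)
# every day at 03:00, truncate container JSON logs bigger than 500 MB
0 3 * * * root find /var/lib/docker/containers -name "*-json.log" -size +500M -exec truncate -s 0 {} \;

A longer-term fix is to configure log rotation in the Docker daemon itself (the max-size and max-file options of the json-file log driver), but the cron approach works without touching the daemon configuration.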

Removing a node from your Kubernetes cluster

Unfortunately, sometimes we have to remove a node from our Kubernetes cluster for many reasons, such as version incompatibilities, kernel issues, or saving money.

Here are the steps to do this task and downgrade the version on your node.

  • On your master node, identify the node you want to remove.
  • Remove the node.
  • Then go to the node and downgrade the Kubernetes version.

On the master node

kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.12.8
node1    Ready    <none>   32d   v1.12.8
node2    Ready    <none>   41h   v1.17.0

We will remove node2 to correct the Kubernetes version.

kubectl drain node2 --ignore-daemonsets --delete-local-data
kubectl delete node node2
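After the delete, node2 should no longer appear in the node list:

kubectl get nodes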

On the node that you have removed

Log in via SSH, then run this:

sudo docker rm $(sudo docker ps -a -q)      # remove all containers on this node
sudo docker rmi $(sudo docker images -q)    # remove all local images
sudo kubeadm reset                          # revert what kubeadm init/join did
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*   # remove the Kubernetes packages
sudo yum autoremove
sudo rm -rf ~/.kube                         # remove the leftover kubectl configuration

Finally, reinstall the correct Kubernetes version and attach the node to the cluster again; you can see this article: http://www.regner.com.mx/bajar-la-version-de-kuberntes-en-un-cluster-base-centos-7/
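As a reminder of the rejoin step, a minimal sketch (assuming kubeadm has been reinstalled on the node) is to generate a fresh join command on the master and run it on the node:

# on the master: print a join command with a fresh token
kubeadm token create --print-join-command

# on the node: run the command printed above, which looks something like
# sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>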

Why you get this error: unable to upgrade connection when running kubectl exec

error: unable to upgrade connection: <h3>Unauthorized</h3>

If you have already built a Kubernetes cluster from scratch and you have your proxy working on port 8001, you probably need to access the cluster through the ~/.kube/config file instead of granting everybody access to the master node.
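For example, a common way to do this (assuming you can SSH to the master as a user that already has a working kubeconfig there) is to copy that file to your workstation and let kubectl talk to the API server directly:

# copy the kubeconfig from the master to your workstation
mkdir -p ~/.kube
scp <user>@<master-ip>:~/.kube/config ~/.kube/config

# kubectl now authenticates with the credentials in that file
kubectl get pods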

Assuming that you expose your API by:

kubectl proxy --port=8001 --address 0.0.0.0 --accept-hosts '.*'

You can get the Unauthorized error when you try to get into a pod with:

kubectl exec -ti your_pod -- /bin/bash
error: unable to upgrade connection: <h3>Unauthorized</h3>

This is because there are several filters applied in the proxy configuration. To avoid this error, you can expose your API using --disable-filter=true, like this:

kubectl proxy --disable-filter=true --port=8001 --address 0.0.0.0 --accept-hosts '.*'

Just be careful, because a warning like this appears:

W0109 03:41:13.851953  129738 proxy.go:140] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious

This can be useful to let your team test, but it is not recommended in production.
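If you still need the proxy with the filter disabled, one way to reduce the exposure is to bind it to a private address of the machine running it instead of 0.0.0.0 (10.1.6.6 is just the example private address used in this post):

# listen only on the private interface instead of every interface
kubectl proxy --disable-filter=true --port=8001 --address 10.1.6.6 --accept-hosts '.*'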