Mike Barkas

Software Technologies

Setup NFS for Minikube

December 29, 2019

This article is an overview of how to set up a Network File System (NFS) server as persistent storage for a Minikube client. This configuration is a basic example; it is not secure and should not be used on a public network. It is intended for developing and testing apps and services in a Kubernetes development environment.

The host machine in this article is running CentOS 8 Linux.

Process Overview

  • Install and configure NFS on the Linux host
  • Configure Minikube to be the NFS client
  • Create a Pod to verify file share

Server Configuration

Installing and configuring the host machine as a server requires root privileges. Either use sudo for every command or use the root account. The example commands below use the root account.

Install the NFS packages needed for the server.

dnf install nfs-utils

Run NFS as a service with systemd so it persists between reboots. Enable and start the nfs-server unit with systemctl:

systemctl enable --now nfs-server

Verify the status of the server within systemd:

systemctl status nfs-server

Partial example output showing the service is active:

nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Fri 2019-11-22 08:52:33 EST;
Process: 1710 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (>
Process: 1677 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)

For development and testing, this configuration should be adequate. If additional settings are needed, the configuration files are located at:

/etc/nfs.conf
/etc/nfsmount.conf

NFS Exports

Now that the server is running, we need to expose directories that will host files on the network. NFS uses the term exports to define which directories are available and to whom.

Create a directory to hold the shared files. A common place to create it is either /mnt or /srv on the host computer. I like to use /srv on the server and mount directories under /mnt on the client. The naming convention for the directory structure is up to you. This is a simple example with nfsshare.

mkdir -p /srv/nfsshare

Configure the NFS exports with the newly created nfsshare directory.

The exports settings file describes the available directories and the clients that can access them. It also contains the access restrictions for those clients.

Use Vim (or your preferred editor) to create or edit the exports settings file:

vim /etc/exports

The basic layout of an exports entry is:

/server/directory/share client(settings)

The client can be defined using a hostname or an IP address. The IP address can be expressed in different ways, such as a single address or a subnet, to identify which client computers are allowed to connect.

Example Exports settings:

# An IP address of a specific machine
/srv/nfsshare/www  192.168.1.100(rw,sync,no_root_squash)

# Using CIDR notation for a subnet for read only
/srv/nfsshare/backup  192.168.1.0/24(ro,sync,no_root_squash)

There are many options for permissions and security. This development environment will have minimal settings to get the server working and is not secure for a public network.

In the Minikube development environment we can open access to all IPs using a wildcard:

/srv/nfsshare/data *(rw,sync,no_subtree_check,no_root_squash,insecure)

This will help make configuration simple for the Minikube client.

For more information on NFS export settings: https://linux.die.net/man/5/exports

After the exports file is created or edited, you will need to reload the settings. Run the exportfs command with the re-export option:

exportfs -ra

Then you can verify what is available with:

exportfs -v


Server Firewall Configuration

If your host machine has a firewall running, you will need to allow the NFS service through it. You could also disable the firewall, but that is not recommended.

On CentOS 8 the firewall can be configured with the firewall-cmd command.

With firewall-cmd you can list the available services; you will need to add the services NFS requires, make them permanent, and then reload the settings.

List the available services and note that nfs is in the list:

firewall-cmd --get-services

Next, add the nfs, mountd, and rpc-bind services to the firewall and make them permanent.

firewall-cmd --add-service nfs --permanent
firewall-cmd --add-service mountd --permanent
firewall-cmd --add-service rpc-bind --permanent

Then reload the firewall settings with:

firewall-cmd --reload

Verify the NFS service is in the firewall settings with:

firewall-cmd --list-all

The server should now be configured. The next step is to set up Minikube as an NFS client to share the persistent files.


Client Configuration

The Linux computer is the host and the NFS server. Minikube is installed on the host computer and acts as the NFS client. You will need to SSH into the Minikube instance and configure it to be the NFS client.

Minikube uses the boot2docker image, so you could SSH in manually. The username is docker and the password is tcuser.

You can get the Minikube IP address with the minikube ip command.

Minikube provides a convenient command to SSH into the instance:

minikube ssh
$

You are now logged into the Minikube VM as the docker user.

Switch to the root user to configure a mount as a persistent volume.

$ su -
#

After switching to the root user, the command prompt will change to #

It is a good idea to verify the Minikube VM can communicate with the NFS server host. Use ping with the IP address of the Linux host that is running the NFS server. Limit ping to two packets with -c 2 so it will stop on its own.

# ping -c 2 192.168.1.100
PING 192.168.1.100 (192.168.1.100): 56 data bytes
64 bytes from 192.168.1.100: seq=0 ttl=63 time=0.322 ms
64 bytes from 192.168.1.100: seq=1 ttl=63 time=0.301 ms

--- 192.168.1.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.301/0.311/0.322 ms
#

Now that the client and server can communicate, let's set up the NFS client in Minikube.

Create a mount in Minikube as the NFS client. Kubernetes will handle mapping the mounted directories into your containers through volumes and volumeMounts.

Create an NFS mount:

mount -t nfs 192.168.1.100:/srv/nfsshare/data /mnt

The mount command needs the source, which is the NFS server IP address (change it for your system) followed by the absolute path to the shared directory. This mounts the share at /mnt in Minikube so the Pods will have access.
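To confirm the share is mounted, you can check the mount table from inside the Minikube VM. A quick check, assuming the server IP and export path used above:

# Run inside the Minikube VM
mount | grep nfs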


Persistent Volume In A Pod

To verify a Pod will have access to the files from the NFS server, create a Persistent Volume (PV) that uses the nfs type in its spec, and create a Persistent Volume Claim (PVC) that uses that new PV, as in the example manifests below. Also make sure there are files in the NFS server to test with.
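A minimal sketch of the PV and PVC manifests, assuming the server IP and export path from earlier and a claim name that matches the Pod spec below; adjust the names, capacity, and addresses for your system:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: /srv/nfsshare/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv-claim
spec:
  # An empty storageClassName keeps Minikube's default
  # StorageClass from dynamically provisioning a volume
  # instead of binding to the PV above.
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Save these in a file such as nfs-pv.yaml and apply them with kubectl create -f nfs-pv.yaml before creating the Pod.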

Create a Pod to verify the Container has access to files served by the NFS server. Use an existing Pod or use this example for testing.

kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod
spec:
  volumes:
    - name: nfs-pv
      persistentVolumeClaim:
        claimName: nfs-pv-claim
  containers:
    - name: nfs-client
      image: centos:7
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: nfs-pv
          mountPath: "/nfsshare"

Save the above in a file called nfs-pod.yaml. This Pod runs a container that sleeps long enough for testing the NFS client and server.

Start this Pod with kubectl using the -f file option:

kubectl create -f nfs-pod.yaml
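You can confirm the Pod has reached the Running state before opening a shell in it:

kubectl get pod nfs-pod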

Once the Pod is running, use an interactive shell to examine the shared directory from the NFS server.

This command uses exec to run an interactive shell (-it) in the Container, asking for a Bash shell (/bin/bash).

kubectl exec -it nfs-pod -- /bin/bash

Now verify the contents of the NFS shared volume by listing them with ls.

ls /nfsshare

Example command and output:

kubectl exec -it nfs-pod -- /bin/bash
[root@nfs-pod /]# ls /nfsshare
file-one.txt  file-three.txt  file-two.txt  test-folder
[root@nfs-pod /]# exit
exit

Verify there are files in the NFS server to test read access. Also try creating a file from inside the container and verify it is visible from the server, as shown below.
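For example, to test write access, create a file from inside the container and then look for it in the export directory on the server. The file name here is only an illustration:

# Inside the container
touch /nfsshare/created-in-pod.txt

# On the CentOS host
ls /srv/nfsshare/data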

Now that the NFS share has been created and verified, the same concept can be applied to more complex file-sharing needs in Kubernetes Deployments.


Remove And Clean Up

Remember to delete the example PVs and Pods after testing the NFS client and server communication in the Container. Delete the Pod first, then the PVC, and finally the PV.
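Assuming the example names used above, the cleanup would look like:

kubectl delete pod nfs-pod
kubectl delete pvc nfs-pv-claim
kubectl delete pv nfs-pv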