NetApp Persistent Storage in Kubernetes: Using ONTAP and NFS
This post is part of a multi-part series on how to use NetApp storage platforms to present persistent volumes in Kubernetes.
Kubernetes is an open source project for automating deployment, operations, and scaling of containerized applications that came out of Google in June 2014. The community around Kubernetes has since exploded, and it has become one of the leading container deployment solutions.
A problem many run into when using containerized applications is what to do
with their data. Data written inside a container is ephemeral and only exists
for the lifetime of the container it's written in. To solve this problem,
Kubernetes offers a PersistentVolume
subsystem that abstracts the details of
how storage is provided from how it is consumed.
The Kubernetes PersistentVolume
API provides several plugins for integrating
your storage into Kubernetes for containers to consume. In this post, we’ll
focus on how to use the NFS plugin with ONTAP. More specifically, we will
use a slightly modified version of the NFS example
in the Kubernetes source code.
Environment
ONTAP
For this post, a single node clustered Data ONTAP 8.3 simulator was used. The setup and commands used are no different than what would be used in a production setup using real hardware.
Kubernetes
In this setup, Kubernetes 1.2.2 was used in a single master and single node setup running on VirtualBox using Vagrant. For tutorials on how to run Kubernetes in nearly any configuration and on any platform you can imagine, check out the Kubernetes Getting Started guides.
Setup
ONTAP
The setup for ONTAP consists of the following steps.
- Create a Storage Virtual Machine (SVM) to host your NFS volumes
- Enable NFS for the SVM created
- Create a data LIF for Kubernetes to use
- Create an export policy to allow the Kubernetes hosts to connect
- Create an NFS volume for Kubernetes to use
Of course, you can skip any of these steps if your environment already has the corresponding pieces in place.
Here is an example that follows these steps:
Create a Storage Virtual Machine (SVM) to host your NFS volumes
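The original listing is roughly equivalent to the following; the SVM name, root volume name, and aggregate here are illustrative and should match your cluster:

```
vserver create -vserver kubernetes -rootvolume kubernetes_root -aggregate aggr1 -rootvolume-security-style unix
```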
Enable NFS for the SVM created
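Enabling NFS is a single command; here we enable NFSv3 for the SVM created above (SVM name is the same illustrative one):

```
vserver nfs create -vserver kubernetes -v3 enabled
```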
Create a data LIF for Kubernetes to use
The values specified in this example are specific to our ONTAP simulator. Update them to match your environment.
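A sketch of the LIF creation command; the node, port, address, and netmask are placeholders for the simulator's values:

```
network interface create -vserver kubernetes -lif kubernetes_nfs_lif -role data -data-protocol nfs -home-node vsim-01 -home-port e0c -address 192.168.0.50 -netmask 255.255.255.0
```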
Create an export policy to allow the Kubernetes hosts to connect
In this case, we are allowing any host to connect by specifying 0.0.0.0/0 for
clientmatch. You are unlikely to want this in production; instead, set the
value to match the IP range of your Kubernetes hosts.
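A rule along these lines, added to the SVM's default export policy, matches the description above (the SVM and policy names are assumptions):

```
vserver export-policy rule create -vserver kubernetes -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any
```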
Create an NFS volume for Kubernetes to use
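Volume creation might look like this; the volume name, size, and junction path are illustrative:

```
volume create -vserver kubernetes -volume kubernetes_vol -aggregate aggr1 -size 20GB -junction-path /kubernetes_vol -policy default
```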
Kubernetes
Now that we have an NFS volume, we need to let Kubernetes know about it. To do
this, we will create a PersistentVolume
and a PersistentVolumeClaim
.
Create a PersistentVolume
definition and save it as nfs-pv.yaml
.
nfs-pv.yaml
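A minimal definition along these lines works; the server address and path are placeholders for your SVM's data LIF address and the volume's junction path, and the capacity should match your volume size:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Address of the SVM's data LIF and the NFS volume's junction path
    server: 192.168.0.50
    path: "/kubernetes_vol"
```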
Then create a PersistentVolumeClaim
that uses the PersistentVolume
and save
it as nfs-pvc.yaml
.
nfs-pvc.yaml
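A claim matching the PersistentVolume sketched above might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```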
Now that we have a PersistentVolume
definition and a PersistentVolumeClaim
definition, we need to create them in Kubernetes.
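Both objects are created with kubectl; the get commands let you confirm that the claim has bound to the volume:

```
kubectl create -f nfs-pv.yaml
kubectl create -f nfs-pvc.yaml
kubectl get pv
kubectl get pvc
```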
At this point, we can spin up a container that uses the PersistentVolumeClaim
we just created. To show this in action, we’ll continue using the
NFS example from the Kubernetes source code.
First, we'll set up a "fake" backend that updates an index.html
file every 5
to 10 seconds with the current time and the hostname of the pod doing the update.
Save the “fake” backend as nfs-busybox-rc.yaml
.
nfs-busybox-rc.yaml
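The upstream example's replication controller, lightly adapted to reference the claim created above, looks roughly like this:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        # Write the current date and pod hostname to index.html every 5-10 seconds
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        volumeMounts:
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
```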
Create the “fake” backend in Kubernetes.
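Creating it is one command; listing the pods confirms the replicas are running:

```
kubectl create -f nfs-busybox-rc.yaml
kubectl get pods -l name=nfs-busybox
```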
Next, we’ll create a web server that also uses the NFS mount to serve the
index.html
file being generated by the “fake” backend.
The web server consists of a pod definition and a service definition.
Save the pod definition as nfs-web-rc.yaml
.
nfs-web-rc.yaml
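The web server's replication controller mounts the same claim into nginx's document root; a sketch:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # Serve the NFS-backed index.html from nginx's document root
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
```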
Save the service definition as nfs-web-service.yaml
.
nfs-web-service.yaml
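The service simply exposes port 80 on the web frontend pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
```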
Create the web server in Kubernetes.
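As before, both definitions are created with kubectl:

```
kubectl create -f nfs-web-rc.yaml
kubectl create -f nfs-web-service.yaml
```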
Now that everything is set up and running, we can verify that it is working as
expected. Using the busybox container we launched earlier, we can make a request
to nginx
to check that the data is being served properly.
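From one of the busybox pods, a request to the nfs-web service's cluster IP should return output along these lines (the pod name and service IP shown here are illustrative and will differ in your environment):

```
$ kubectl get services nfs-web
$ kubectl exec nfs-busybox-gaqxs -- wget -qO- http://10.0.0.208
Tue Apr 12 19:56:18 UTC 2016
nfs-busybox-gaqxs
```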
As can be seen in this example, when we made a request to nginx, the last pod
to have updated the index.html file was nfs-busybox-gaqxs at
Tue Apr 12 19:56:18 UTC 2016. We can continue making requests to nginx
and watch this data get updated every 5 to 10 seconds.