NetApp Persistent Storage in Kubernetes: Using ONTAP and iSCSI
This post is part of a multi-part series on how to use NetApp storage platforms to present persistent volumes in Kubernetes. The other posts in this series are:
The Kubernetes PersistentVolume API provides several plugins for integrating your storage into Kubernetes for containers to consume. In this post, we'll focus on how to use the iSCSI plugin with ONTAP.
Environment
ONTAP
For this post, a single-node clustered Data ONTAP 8.3 simulator was used. The setup and commands are no different from what would be used in a production deployment on real hardware.
Kubernetes
In this setup, Kubernetes 1.2.2 was used in a single master and single node setup running on VirtualBox using Vagrant. For tutorials on how to run Kubernetes in nearly any configuration and on any platform you can imagine, check out the Kubernetes Getting Started guides.
Setup
ONTAP
The setup for ONTAP consists of the following steps.
- Create a Storage Virtual Machine (SVM) to host your iSCSI volumes
- Enable iSCSI for the SVM created
- Create a data LIF for Kubernetes to use
- Create an initiator group
- Add the Kubernetes host(s) to the initiator group
- Create a volume for iSCSI LUNs
- Create an iSCSI LUN for Kubernetes to use
- Map the iSCSI LUN to the initiator group
Of course, you can skip any of these steps if the corresponding pieces are already in place in your environment.
Here is an example that follows these steps:
Create a Storage Virtual Machine (SVM) to host your iSCSI volumes
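A minimal sketch of the command, assuming a hypothetical SVM named `svm_iscsi` and an aggregate named `aggr1` (adjust both to match your cluster):

```shell
vserver create -vserver svm_iscsi -rootvolume svm_iscsi_root \
    -aggregate aggr1 -rootvolume-security-style unix
```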
Enable iSCSI for the SVM created
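Assuming a hypothetical SVM named `svm_iscsi`, enabling the iSCSI protocol looks like:

```shell
vserver iscsi create -vserver svm_iscsi
```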
Create a data LIF for Kubernetes to use
The values specified in this example are specific to our ONTAP simulator. Update the appropriate values to match your environment.
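One possible form of the command; the SVM name `svm_iscsi`, LIF name `iscsi_lif1`, node name `ontap-01`, port `e0c`, and IP address are all placeholders for our simulator setup:

```shell
network interface create -vserver svm_iscsi -lif iscsi_lif1 \
    -role data -data-protocol iscsi \
    -home-node ontap-01 -home-port e0c \
    -address 10.0.0.10 -netmask 255.255.255.0
```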
Create an initiator group
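A sketch, assuming a hypothetical SVM named `svm_iscsi` and an igroup named `kube_igroup`; `-ostype linux` matches our Linux-based Kubernetes nodes:

```shell
lun igroup create -vserver svm_iscsi -igroup kube_igroup \
    -protocol iscsi -ostype linux
```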
Add the Kubernetes host(s) to the initiator group
For each node in our Kubernetes cluster, we need to add its InitiatorName to the igroup. The initiator name can be found in the file /etc/iscsi/initiatorname.iscsi. If this file does not exist, it's likely that the iSCSI utilities have not been installed; see the Kubernetes setup section for how to do this.
In our setup, the InitiatorName is iqn.1994-05.com.redhat:27cc6d4e6da. Update the appropriate values to match your environment.
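Using our node's InitiatorName from above, and assuming the hypothetical names `svm_iscsi` and `kube_igroup`:

```shell
lun igroup add -vserver svm_iscsi -igroup kube_igroup \
    -initiator iqn.1994-05.com.redhat:27cc6d4e6da
```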
Create a volume for iSCSI LUNs
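A sketch, assuming a hypothetical volume named `kube_vol` on aggregate `aggr1` in an SVM named `svm_iscsi`; size and space guarantee are illustrative:

```shell
volume create -vserver svm_iscsi -volume kube_vol \
    -aggregate aggr1 -size 10g -space-guarantee none
```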
Create an iSCSI LUN for Kubernetes to use
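Assuming the hypothetical volume path `/vol/kube_vol` and a LUN named `kube_lun0`; the size is illustrative:

```shell
lun create -vserver svm_iscsi -path /vol/kube_vol/kube_lun0 \
    -size 5g -ostype linux
```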
Map the iSCSI LUN to the initiator group
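Assuming the hypothetical names used throughout (`svm_iscsi`, `/vol/kube_vol/kube_lun0`, `kube_igroup`); LUN ID 0 matters later, since the Kubernetes volume definition references it:

```shell
lun map -vserver svm_iscsi -path /vol/kube_vol/kube_lun0 \
    -igroup kube_igroup -lun-id 0
```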
Now that you have an iSCSI LUN to use in Kubernetes, we need to get the IQN of our SVM, because we'll need it in later steps when using the storage in Kubernetes.
Run the following command and take note of the Target Name. In our example below, that value is iqn.1992-08.com.netapp:sn.7dcf3853018611e6a3590800278b2267:vs.2.
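Assuming a hypothetical SVM named `svm_iscsi`, the Target Name appears in the output of:

```shell
vserver iscsi show -vserver svm_iscsi
```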
Kubernetes
To start, we need to install the needed iSCSI utilities on our Kubernetes nodes. In our setup, the Vagrant box is running Fedora 23, and the package to install is iscsi-initiator-utils. Install the appropriate package for the OS running on your Kubernetes nodes.
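On Fedora 23 this is a single dnf command, run on each node:

```shell
sudo dnf install -y iscsi-initiator-utils
```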
In our example, we did not set up any authentication for the iSCSI LUN we created; if we had, we would also need to edit /etc/iscsi/iscsid.conf to match that configuration.
Next, we need to let Kubernetes know about our iSCSI LUN. To do this, we will create a PersistentVolume and a PersistentVolumeClaim.
Create a PersistentVolume definition and save it as iscsi-pv.yaml.
iscsi-pv.yaml
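A sketch of such a definition. The iqn is the Target Name we noted earlier; the targetPortal address and the capacity are assumptions matching the hypothetical data LIF and LUN size used in this walkthrough, and lun 0 matches the LUN ID from the mapping step:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    # Address of the SVM's iSCSI data LIF (placeholder for our simulator)
    targetPortal: 10.0.0.10:3260
    # Target Name of the SVM, from "vserver iscsi show"
    iqn: iqn.1992-08.com.netapp:sn.7dcf3853018611e6a3590800278b2267:vs.2
    # LUN ID assigned when the LUN was mapped to the igroup
    lun: 0
    fsType: ext4
    readOnly: false
```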
Then create a PersistentVolumeClaim that uses the PersistentVolume, and save it as iscsi-pvc.yaml.
iscsi-pvc.yaml
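A minimal claim sketch; the requested storage is an assumption sized to match the volume above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iscsi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```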
Now that we have a PersistentVolume definition and a PersistentVolumeClaim definition, we need to create them in Kubernetes.
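Assuming the file names from above:

```shell
kubectl create -f iscsi-pv.yaml
kubectl create -f iscsi-pvc.yaml
# Verify that the claim has bound to the volume
kubectl get pv
kubectl get pvc
```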
At this point, we can spin up a container that uses the PersistentVolumeClaim we just created.
First, we'll set up a pod that we can use to write the current time and the pod's hostname to an output.txt file. Save the pod definition as iscsi-busybox.yaml.
iscsi-busybox.yaml
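One possible definition: a busybox container that sleeps so we can exec into it, with the claim mounted at /mnt/iscsi (the mount path is an assumption for this walkthrough):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-busybox
spec:
  containers:
    - name: busybox
      image: busybox
      # Keep the container alive so we can exec commands in it
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - name: iscsi-vol
          mountPath: /mnt/iscsi
  volumes:
    - name: iscsi-vol
      persistentVolumeClaim:
        claimName: iscsi-pvc
```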
Create the pod in Kubernetes.
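Assuming the file name from above:

```shell
kubectl create -f iscsi-busybox.yaml
# Wait for the pod to reach the Running state
kubectl get pod iscsi-busybox
```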
Now that we’ve created our pod with the iSCSI volume attached, we can write data to the volume to verify that everything is working as expected.
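For example, assuming the /mnt/iscsi mount path from the pod definition used in this walkthrough:

```shell
# Append the current date and the pod's hostname to output.txt
kubectl exec iscsi-busybox -- sh -c \
    'date >> /mnt/iscsi/output.txt; hostname >> /mnt/iscsi/output.txt'
# Read the file back
kubectl exec iscsi-busybox -- cat /mnt/iscsi/output.txt
```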
As can be seen, we have written the current date and time to output.txt. Next, we'll stop this instance of the pod, create a new one, and verify that our data is still there.
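Assuming the same pod definition and mount path as above:

```shell
kubectl delete pod iscsi-busybox
kubectl create -f iscsi-busybox.yaml
# Once the new pod is Running, the data written earlier
# should still be on the iSCSI-backed volume
kubectl exec iscsi-busybox -- cat /mnt/iscsi/output.txt
```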