TransWikia.com

Kubernetes: How to mount volumes into Windows pods?

Server Fault Asked by rabejens on February 4, 2021

I set up a small Kubernetes cluster for testing, and for storage I use glusterfs. No CSI, I just create the volumes manually for now.

Now, I added a Windows node to the cluster. It works fine so far, but I can’t get any volumes to mount in the container.

I did a very simple pod definition:

---
apiVersion: v1
kind: Pod
metadata:
  name: windows-test
  labels:
    name: windows-test
spec:
  containers:
    - name: sample
      image: mcr.microsoft.com/windows/servercore:ltsc2019
      resources:
        limits:
          cpu: "1.0"
          memory: "1Gi"
      command:
        - cmd
        - "/c"
        - "dir"
        - "/s"
        - 'c:'
      volumeMounts:
        - mountPath: "foo"
          name: testvol
  nodeSelector:
    "kubernetes.io/os": windows
  volumes:
    - name: testvol
      glusterfs:
        path: wintest
        endpoints: glusterfs-cluster
        readOnly: false

When I run this, the pod never starts and when I describe it, I get:

  Warning  FailedMount  <invalid>                      kubelet, node09    MountVolume.SetUp failed for volume "testvol" : mount failed: only cifs mount is supported now, fstype: "glusterfs", mounting source ("10.93.111.35:wintest"), target ("c:\var\lib\kubelet\pods\6d207138-09fb-4e1f-86e7-6a71cef5ce7e\volumes\kubernetes.io~glusterfs\testvol"), with options (["auto_unmount" "backup-volfile-servers=10.93.111.31:10.93.111.32:10.93.111.33:10.93.111.34:10.93.111.35:10.93.111.36:10.93.111.37:10.93.111.38" "log-file=\var\lib\kubelet\plugins\kubernetes.io\glusterfs\testvol\windows-test-glusterfs.log" "log-level=ERROR"]), the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod windows-test

It seems that Windows pods do not support glusterfs volumes, only "cifs". Can anyone guide me on how to mount a volume via CIFS? The only information I found in the docs was about the azure-file storage, but I couldn't figure out whether that can also mount on-prem CIFS shares.

One Answer

I think you could start with this.

CIFS Flexvolume Plugin for Kubernetes

Installing

The flexvolume plugin is a single shell script named cifs. This script must be available on the Kubernetes master and on each of the Kubernetes nodes. By default, Kubernetes searches for third-party volume plugins in /usr/libexec/kubernetes/kubelet-plugins/volume/exec/. The plugin directory can be configured with the kubelet's --volume-plugin-dir parameter; run ps aux | grep kubelet to learn the location of the plugin directory on your system (see #1). The cifs script must be located in a subdirectory named fstab~cifs/. The directory name fstab~cifs/ is mapped to the Flexvolume driver name fstab/cifs.

On the Kubernetes master and on each Kubernetes node run the following commands:

VOLUME_PLUGIN_DIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
mkdir -p "$VOLUME_PLUGIN_DIR/fstab~cifs"
cd "$VOLUME_PLUGIN_DIR/fstab~cifs"
curl -L -O https://raw.githubusercontent.com/fstab/cifs/master/cifs
chmod 755 cifs

The cifs script requires a few executables to be available on each host system:

- mount.cifs, on Ubuntu this is in the cifs-utils package.
- jq, on Ubuntu this is in the jq package.
- mountpoint, on Ubuntu this is in the util-linux package.
- base64, on Ubuntu this is in the coreutils package.

To check if the installation was successful, run the following command:

VOLUME_PLUGIN_DIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
$VOLUME_PLUGIN_DIR/fstab~cifs/cifs init

It should output a JSON string containing "status": "Success". This command is also run by Kubernetes itself when the cifs plugin is detected on the file system.
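If init fails, a common cause is one of the helper tools missing from the node. A quick sketch for checking that each required executable is on the PATH (the tool list comes from the requirements above; the output format is just illustrative):

```shell
# Verify the external tools the cifs flexvolume script shells out to.
# Any "MISSING" line means init/mount will fail on that node.
for cmd in mount.cifs jq mountpoint base64; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "found:   $cmd"
    else
        echo "MISSING: $cmd"
    fi
done
```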

Running

The plugin takes the CIFS username and password from a Kubernetes Secret. To create the secret, you first have to convert your username and password to base64 encoding:

echo -n username | base64
echo -n password | base64
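As a sanity check, the encoding should round-trip: decoding the base64 string must give back the original value. The string below is the example value used in the secret.yml that follows, not a real credential:

```shell
# Round-trip check: encode, decode, and compare with the original.
user_plain='example'
user_b64=$(printf '%s' "$user_plain" | base64)
echo "$user_b64"        # ZXhhbXBsZQ== -- the value to paste into secret.yml
decoded=$(printf '%s' "$user_b64" | base64 -d)
[ "$decoded" = "$user_plain" ] && echo "round-trip OK"
```

Note that printf '%s' (like echo -n) avoids encoding a trailing newline, which would silently corrupt the credential.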

Then, create a file secret.yml and use the output of the above commands as username and password:

apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: 'ZXhhbXBsZQ=='
  password: 'bXktc2VjcmV0LXBhc3N3b3Jk'

Apply the secret:

kubectl apply -f secret.yml

You can check if the secret was installed successfully using kubectl describe secret cifs-secret.

Next, create a file pod.yml with a test pod (replace //server/share with the network path of your CIFS share):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"

Start the pod:

kubectl apply -f pod.yml

You can verify that the volume was mounted successfully using kubectl describe pod busybox.
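If you would rather not embed the share details in every pod, the same flexVolume stanza can also be wrapped in a PersistentVolume and consumed through a claim. This is only a sketch using the names from above; the capacity, access mode, and object names are placeholders you would adjust for your share:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cifs-share            # placeholder name
spec:
  capacity:
    storage: 1Gi              # placeholder; CIFS does not enforce a size
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: "fstab/cifs"
    fsType: "cifs"
    secretRef:
      name: "cifs-secret"
      namespace: "default"    # PV-level secretRef needs the namespace
    options:
      networkPath: "//server/share"
      mountOptions: "dir_mode=0755,file_mode=0644,noperm"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cifs-share-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: cifs-share      # bind directly to the PV above
```

A pod would then reference the share through a persistentVolumeClaim volume instead of the inline flexVolume block.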


Additionally, there is another GitHub tutorial, Kubernetes CIFS Volume Driver, that takes a similar approach.


Hope you find this useful.

Answered by Jakub on February 4, 2021
