
Migration of Kubernetes master node from one server to another

Server Fault question, asked by WantIt on February 4, 2021

I have a bare-metal server that hosts the Kubernetes master node, and I need to move that master node to a fresh bare-metal server. How can I move or migrate it?

I’ve done my research, but most of the results relate to GCP clusters, where you move four directories from the old node to the new node and change the IP. That question was asked 5 years ago and is outdated by now:

/var/etcd
/srv/kubernetes
/srv/sshproxy
/srv/salt-overlay

What’s the proper way to move it, assuming we are using the most recent Kubernetes version, 1.17?

One Answer

Following the GitHub issue mentioned in the comments and the question IP address changes in Kubernetes Master Node:

1. Verify your etcd data directory by inspecting the etcd pod in the kube-system namespace (see the check after this step).

These are the default values for k8s v1.17.0 created with kubeadm:

    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data

2. Preparation:

  • copy /etc/kubernetes/pki from Master1 to the new Master2:
    #create a backup directory on Master2
    mkdir ~/backup

    #on Master1, copy all key/crt files to Master2
    sudo scp -r /etc/kubernetes/pki <user>@Master2_IP:~/backup

    #remove the certificates that embed Master1's IP address; kubeadm init
    #will regenerate them on Master2:
    ./etcd/peer.crt
    ./apiserver.crt

    rm ~/backup/pki/{apiserver.*,etcd/peer.*}
  • move the pki directory to /etc/kubernetes:
    sudo cp -r ~/backup/pki /etc/kubernetes/
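
As an optional sanity check on Master2, you can confirm that the IP-bound certificates are really gone before step 7; this is a minimal sketch using the paths from above:

    # apiserver.* and etcd/peer.* must be absent so that kubeadm init regenerates them
    sudo ls /etc/kubernetes/pki/apiserver.* /etc/kubernetes/pki/etcd/peer.* 2>/dev/null \
      && echo "remove these files before kubeadm init" \
      || echo "OK - kubeadm init will regenerate them"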

3. On Master1 create etcd snapshot:

Verify your API version:

kubectl exec -it etcd-master1 -n kube-system -- etcdctl version

etcdctl version: 3.4.3
API version: 3.4
  • using the current etcd pod:
    kubectl exec -it etcd-master1 -n kube-system -- etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
  • or using local etcdctl binaries on Master1:
    ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
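
Optionally, verify the snapshot before copying it; etcdctl can print its hash, revision and size (same pod and path as above):

    # show basic integrity information about the snapshot
    kubectl exec -it etcd-master1 -n kube-system -- etcdctl --write-out=table snapshot status /var/lib/etcd/snapshot1.db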

4. Copy created snapshot from Master1 to Master2 backup directory:

sudo scp /var/lib/etcd/snapshot1.db <user>@Master2_IP:~/backup
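
A simple way to confirm the copy arrived intact is to compare checksums on both machines (paths as used above):

    # on Master1
    sudo sha256sum /var/lib/etcd/snapshot1.db
    # on Master2 - the hash should match
    sha256sum ~/backup/snapshot1.db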

5. On Master2, prepare a kubeadm config that reflects the Master1 configuration (the values Master1 was initialized with can be read from the cluster, as shown after the config):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: x.x.x.x
  bindPort: 6443
nodeRegistration:
  name: master2
  taints: []     # Removing all taints from Master2 node.
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
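
If you are unsure which values Master1 was initialized with (kubernetesVersion, podSubnet, serviceSubnet, and so on), kubeadm stores its ClusterConfiguration in a ConfigMap that you can read from the still-running cluster:

    # dump the configuration kubeadm recorded during the original init
    kubectl -n kube-system get configmap kubeadm-config -o yaml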

6. On Master2, restore the snapshot:

  • using the etcd:3.4.3-0 docker image (run from the ~/backup directory, so the snapshot is visible at /backup/snapshot1.db inside the container):
    docker run --rm \
        -v $(pwd):/backup \
        -v /var/lib/etcd:/var/lib/etcd \
        --env ETCDCTL_API=3 \
        k8s.gcr.io/etcd:3.4.3-0 \
        /bin/sh -c "etcdctl snapshot restore '/backup/snapshot1.db' ; mv /default.etcd/member/ /var/lib/etcd/"
  • or using etcdctl binaries (run from the directory containing snapshot1.db):
    ETCDCTL_API=3 etcdctl snapshot restore './snapshot1.db' ; mv ./default.etcd/member/ /var/lib/etcd/
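
Before moving on to kubeadm init, it is worth checking that the restored data landed where the etcd pod expects it (a quick sanity check, assuming the default data directory):

    # the member directory restored from the snapshot should contain snap/ and wal/
    sudo ls -l /var/lib/etcd/member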

7. Initialize Master2:

    sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd --config kubeadm-config.yaml
    # kubeadm-config.yaml is the file prepared in step 5.
  • notice:

[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 master2_IP]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
.
.
.  
  Your Kubernetes control-plane has initialized successfully!
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Afterwards, verify the k8s objects (short example):
    kubectl get nodes
    kubectl get pods -o wide
    kubectl get pods -n kube-system -o wide
    systemctl status kubelet
  • If all deployed k8s objects (pods, deployments, etc.) have been moved to your new Master2 node, drain and delete the old Master1 node (see the note on drain flags right after this list):
    kubectl drain Master1
    kubectl delete node Master1
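
Note that on a populated cluster a plain kubectl drain usually stops on DaemonSet-managed pods and pods with local data; a hedged example with the commonly needed flags (node name as in the placeholder above):

    # --delete-local-data is the kubectl 1.17 flag name (newer versions call it --delete-emptydir-data)
    kubectl drain Master1 --ignore-daemonsets --delete-local-data
    kubectl delete node Master1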

Note:

In addition, please consider Creating Highly Available clusters. With that setup you can have more than one master, and you can add or remove additional control-plane nodes in a much safer way; a minimal sketch of that flow follows.
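
For reference, a minimal sketch of the kubeadm HA flow; the endpoint, token, hash and certificate key below are placeholders, not values from this answer:

    # initialize the first control plane behind a stable endpoint (e.g. a load balancer)
    sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

    # join an additional control-plane node using the values printed by the init above
    sudo kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <key>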

Answered by Mark on February 4, 2021
