K8S Cluster IP Change Procedure

2022. 8. 1. 11:14 · kubernetes | docker

1. PVC List Backup

# kubectl get pvc -A > pvc_list.txt

 

2. configMap Backup

#!/bin/bash
# Dump every ConfigMap in the listed namespaces to <namespace>/<name>.yaml
namespaces="kube-public kube-system minio monitoring sodaflow"
for ns in $namespaces
do
  mkdir -p $ns
  configmaps=$(kubectl -n $ns get cm -o name | cut -d'/' -f 2)
  for cm in $configmaps; do
    kubectl -n $ns get cm $cm -o yaml > $ns/$cm.yaml
  done
done
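After the script runs, a quick per-namespace count helps confirm nothing was missed. This is a sketch, not part of the original procedure; the namespace list mirrors the script above:

```shell
# Count the dumped ConfigMap manifests in each namespace directory
# (directories are created by the backup script above).
for ns in kube-public kube-system minio monitoring sodaflow; do
  printf '%s: %s manifests\n' "$ns" "$(ls "$ns"/*.yaml 2>/dev/null | wc -l)"
done
```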

 

3. Configuration Backup

Back up /etc/kubernetes and /var/lib/kubelet.
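A tar-based sketch of this backup. The destination path and archive naming are illustrative, and the loop is guarded so it skips directories that do not exist on the machine running it:

```shell
BACKUP_DIR=/tmp/k8s_config_backup   # illustrative destination; use your NAS path
mkdir -p "$BACKUP_DIR"
for dir in /etc/kubernetes /var/lib/kubelet; do
  if [ -d "$dir" ]; then
    # -p preserves permissions; archive name is derived from the path
    tar -czpf "$BACKUP_DIR/$(echo "$dir" | tr / _).tar.gz" -C / "${dir#/}"
  fi
done
ls "$BACKUP_DIR"
```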

 

4. Stop Services: kubelet, nfs, rpcbind, docker

# systemctl stop kubelet
# systemctl stop nfs
# systemctl stop rpcbind
# systemctl stop docker

5. PVC Backup

# mv <nas_storage>/nfs/sodaflow <nas_storage>/nfs/sodaflow_org
# mkdir -p <nas_storage>/nfs/sodaflow

6. Flush all iptables chains

# iptables -F          # flush the filter table chains
# iptables -t nat -F   # flush the nat table as well (kube-proxy programs rules here)
# iptables -L          # list chains to verify the flush

 

7. Start docker & nfs service

# systemctl start docker
# systemctl start nfs

 

8. Check /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS=--root-dir="/nas_storage/k8s_images/k8s"
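The override can be verified with a simple grep. Demonstrated here on a sample file rather than the live /etc/sysconfig/kubelet, so it can be run anywhere:

```shell
# Sample copy of the expected line (the real file is /etc/sysconfig/kubelet).
cat > /tmp/kubelet.sysconfig <<'EOF'
KUBELET_EXTRA_ARGS=--root-dir="/nas_storage/k8s_images/k8s"
EOF
# The check itself: complain if the root-dir override is missing.
grep -q 'root-dir' /tmp/kubelet.sysconfig && echo 'root-dir override present'
```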

 

9. Modify Cluster Configuration for kubeadm init: config.yaml

Replace the IP address and domain name in ${SETUP_HOME}/etc/config.yaml:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: spsd.docker.io:150001
kubernetesVersion: v1.18.19
controlPlaneEndpoint: "<new_ip_addr>:6443"  # replace with the new ip address if it changed
apiServer:
  certSANs:
  - <new hostname>     # replace with the new hostname if it changed
  - <new domain name>  # replace with the new dns name if it changed
  - <new ip address>   # replace with the new ip address if it changed
networking:
  podSubnet: 6.2.0.0/16
  serviceSubnet: 6.5.0.0/16
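Swapping the address into config.yaml can also be scripted with sed. The addresses below are examples only, and the demo works on a scratch copy rather than ${SETUP_HOME}/etc/config.yaml:

```shell
# Scratch copy standing in for ${SETUP_HOME}/etc/config.yaml.
cat > /tmp/config.yaml <<'EOF'
controlPlaneEndpoint: "192.168.100.30:6443"
EOF
OLD_IP=192.168.100.30   # example old address
NEW_IP=192.168.100.31   # example new address
# Note: dots in $OLD_IP act as regex wildcards here; exact enough for a demo.
sed -i.bak "s/${OLD_IP}/${NEW_IP}/g" /tmp/config.yaml
grep controlPlaneEndpoint /tmp/config.yaml
```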

 

10. Reset & Init

Reset the old cluster info and initialize the new cluster:

 

# kubeadm reset
# cd <path_to_config.yaml>
# kubeadm init --config ./config.yaml --upload-certs --v=5 --ignore-preflight-errors=all

Copy the new cluster info to the user's home directory:

# mkdir -p ~/.kube
# cp -f /etc/kubernetes/admin.conf ~/.kube/config
# chown -R <user>:<group> ~/.kube

11. Remove the node-role.kubernetes.io/master taint from all nodes

This allows the scheduler to place Pods on every node, including the control plane.

# kubectl taint nodes --all node-role.kubernetes.io/master-

 

 

12. Install calico

Modify ${SETUP_HOME}/etc/calico-v3.19-spsd.yaml

Find IP_AUTODETECTION_METHOD and set its value to the system's real NIC:

IP_AUTODETECTION_METHOD: "interface=<nic_name>"   ex) IP_AUTODETECTION_METHOD: "interface=p1p1"
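The real NIC name can be read from the default route. `ip route get` is standard iproute2; the interface printed (if any) depends on the host, so treat this as a discovery aid rather than part of the procedure:

```shell
# Print the interface used to reach an external address; empty if the host
# has no such route (e.g. an isolated container).
nic=$(ip -o route get 1.1.1.1 2>/dev/null | sed -n 's/.* dev \([^ ]*\).*/\1/p')
echo "IP_AUTODETECTION_METHOD: \"interface=${nic}\""
```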

Install calico network

# kubectl apply -f calico-v3.19-spsd.yaml

 

13. Join worker node

From the worker node, reset and remove the old cluster info:

# kubeadm reset
# systemctl stop kubelet
# systemctl stop nfs
# systemctl stop rpcbind
# systemctl stop docker
# rm -rf /etc/kubernetes /var/lib/calico /var/lib/kubelet
# systemctl start docker
# systemctl start nfs

From the master node, generate the join command:

# kubeadm token create --print-join-command
kubeadm join <master_node_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:[hash]

From the worker node, run the join command:

# kubeadm join <master_node_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:[hash]

ex) kubeadm join 192.168.100.31:6443 --token m4tnqa.bbyipzsgp6a2n6s7 --discovery-token-ca-cert-hash sha256:a27da077aa81424df5e...

 

Nexus

Create PVC for Nexus:

# cd ${SETUP_HOME}/setup/03_nexus/yamls
# kubectl apply -f nexus_data_pvc.yaml
# kubectl get pvc  # check pvc

Restore Nexus data from backup:

# cd <nas_storage>/nfs/sodaflow/sodaflow-nexus-data-volume-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-nexus-data-volume-pvc-*/nexus .

Create Nexus Registry Secret:

# cd ${SETUP_HOME}/setup/03_nexus/files
# bash nexus-registry-secret.sh

Install Nexus:

# cd ${SETUP_HOME}/setup/03_nexus/yamls
# kubectl apply -f install_nexus.yaml

Check Pod Running:

# kubectl get po | grep devainexus
devainexus-0                              1/1    Running  ...

 

Minio

Create PVC for Minio:

# cd ${SETUP_HOME}/setup/04_minio/yamls
# kubectl apply -f minio_data_pvc.yaml

 

Restore Minio data from backup:

# cd <nas_storage>/nfs/sodaflow/minio-minio-data-volume-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/minio-minio-data-volume-pvc-*/minio .

Install Minio Server:

# cd ${SETUP_HOME}/setup/04_minio/yamls
# kubectl apply -f install_minio.yaml

Check Pod Running:

# kubectl get po -n minio
minio-74f.....778-degsrd      1/1       Running    ....
minio-make-bucket-job-...     0/1       Completed  ....

 

MariaDB

Create PVC for MariaDB:

# cd ${SETUP_HOME}/setup/09_mariadb-9.3.2/yamls
# kubectl apply -f install_mariadb_pvc.yaml

Check PVC Creation

# kubectl get pvc | grep mariadb
mariadb-primary-volume                ......
mariadb-secondary-volume              ......

Restore Data from Backup:

// Restore Primary Volume:
# cd <nas_storage>/nfs/sodaflow/sodaflow-mariadb-primary-volume-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-mariadb-primary-volume-pvc-*/data .

// Restore Secondary Volume:
# cd <nas_storage>/nfs/sodaflow/sodaflow-mariadb-secondary-volume-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-mariadb-secondary-volume-pvc-*/data .

Install MariaDB:

# cd ${SETUP_HOME}/setup/09_mariadb-9.3.2/yamls
# kubectl apply -f install_mariadb.yaml

 

Check Pods Running:

# kubectl get po | grep mariadb
mariadb-cluster-primary-0                   1/1     Running ...
mariadb-cluster-secondary-0                 1/1     Running ...

 

Polyaxon

Create PVC for Polyaxon:

# cd ${SETUP_HOME}/setup/10_polyaxon-1.8.4/yamls
# kubectl apply -f artifacts_pvc.yaml
# kubectl apply -f data_pvc.yaml

// Check PVC Creation
# kubectl get pvc | grep polyaxon
polyaxon-artifacts-store
polyaxon-postgresql-data

Restore Data from Backup:

// Restore artifacts store
# cd <nas_storage>/nfs/sodaflow/sodaflow-polyaxon-artifacts-store-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-polyaxon-artifacts-store-pvc-*/* .

// Restore postgresql data
# cd <nas_storage>/nfs/sodaflow/sodaflow-polyaxon-postgresql-data-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-polyaxon-postgresql-data-pvc-*/data .

Install Polyaxon 1:

# cd ${SETUP_HOME}/setup/10_polyaxon-1.8.4/yamls
# kubectl apply -f install_polyaxon_postgres.yaml

Install Polyaxon 2:

# kubectl apply -f install_polyaxon_server.yaml

Check Pods Running:

# kubectl get po | grep polyaxon
polyaxon-admin-user-mn...                 0/1     Completed  ...
polyaxon-clean-runs-df...                 0/1     Completed  ...
polyaxon-polyaxon-api-...-...             1/1     Running    ...
polyaxon-polyaxon-gateway-...-...         1/1     Running    ...
polyaxon-polyaxon-operator-...-...        1/1     Running    ...
polyaxon-polyaxon-streams-...-...         1/1     Running    ...
polyaxon-postgresql-0                     1/1     Running    ...
polyaxon-sync-db-.....                    0/1     Completed  ...

 

 

Gitlab

 

Create PVC for Gitlab:

# cd ${SETUP_HOME}/setup/11_gitlab/yamls
# kubectl apply -f postgresql/gitlab_postgresql_pvc.yaml
# kubectl apply -f gitlab_pvc.yaml
# kubectl apply -f gitlab_redis.yaml

// Check PVC
# kubectl get pvc | grep gitlab
gitlab-postgresql
gitlab-redis
gitlab-server

Restore Data from Backup:

# cd <nas_storage>/nfs/sodaflow/sodaflow-gitlab-postgresql-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-gitlab-postgresql-pvc-*/. .

# cd <nas_storage>/nfs/sodaflow/sodaflow-gitlab-redis-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-gitlab-redis-pvc-*/. .

# cd <nas_storage>/nfs/sodaflow/sodaflow-gitlab-server-pvc-*
# cp -rp <nas_storage>/nfs/sodaflow_org/sodaflow-gitlab-server-pvc-*/. .

CAUTION: Make sure hidden files and folders, such as .ssh, are restored as well.
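The `/.` source form used in the copy commands above is what makes this work: copying `src/.` brings dot-entries along, while `src/*` would silently skip them. A throwaway demo:

```shell
# Throwaway directories; .ssh stands in for any hidden payload.
rm -rf /tmp/cpdemo && mkdir -p /tmp/cpdemo/src/.ssh /tmp/cpdemo/dest
touch /tmp/cpdemo/src/.ssh/authorized_keys
# Copying "src/." includes hidden entries; "src/*" would not.
cp -rp /tmp/cpdemo/src/. /tmp/cpdemo/dest
ls -a /tmp/cpdemo/dest   # .ssh is present
```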

Install Postgresql Initial:

# cd ${SETUP_HOME}/setup/11_gitlab/yamls
# kubectl apply -f postgresql/gitlab_postgresql_initial.yaml

Check Gitlab & Redis Pods Running:

# kubectl get po | grep gitlab
gitlab-postgresql-0                    1/1    Running ...
gitlab-redis-......-....               1/1    Running ...

Apply Postgresql Update:

# cd ${SETUP_HOME}/setup/11_gitlab
# bash install_postgresql_update.sh
service/gitlab-postgresql unchanged
statefulset.apps/gitlab-postgresql configured
All Done.

Install Gitlab Server:

# cd ${SETUP_HOME}/setup/11_gitlab
# bash install_gitlab.sh

Check Pods Running:

# kubectl get po | grep gitlab
gitlab-postgresql-0                    1/1    Running ...
gitlab-redis-......-....               1/1    Running ...
gitlab-server-......-....              1/1    Running ...

Apply Gitlab Update:

# cd ${SETUP_HOME}/setup/11_gitlab
# bash install_gitlab_update.sh

Check Pods Running:

# kubectl get po | grep gitlab
gitlab-postgresql-0                    1/1    Running ...
gitlab-redis-......-....               1/1    Running ...
gitlab-server-......-....              1/1    Running ...

Check Gitlab Access:

Try connecting to http://[SparklingSo_DA_DNS_NAME]/gitlab
