Install Kubernetes 1.23 on the Docker runtime
This post records my experience installing Kubernetes 1.23 on virtual machines in China.
Prerequisites
Server types used in a Kubernetes deployment:
- Master: the Kubernetes master is where control API calls for pods, replication controllers, services, nodes, and the other components of a Kubernetes cluster are executed.
- Node: a node is a system that provides the run-time environment for containers. A set of container pods can span multiple nodes.
Minimum requirements for a viable setup:
- Memory: 2 GiB or more of RAM per machine.
- CPUs: at least 2 CPUs on the control-plane machine.
- Internet connectivity for pulling the required container images (a private registry can also be used).
- Full network connectivity between all machines in the cluster, whether private or public.
Upgrade all installed packages to their latest versions

```shell
sudo apt update
sudo apt -y full-upgrade
# reboot if the upgrade requires it
[ -f /var/run/reboot-required ] && sudo reboot -f
# set the timezone to Asia/Shanghai
sudo timedatectl set-timezone Asia/Shanghai
```
Disable swap

```shell
# comment out any line in /etc/fstab that contains the word "swap"
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# turn off swap immediately
sudo swapoff -a
```
Kubernetes recommends turning off swap space on nodes for several reasons. Here are some of the main reasons:
- Performance: Kubernetes workloads typically require a lot of memory and CPU resources. Swap space is slower than physical memory, and swapping can cause significant performance degradation, particularly when running high-performance workloads that require low latency. By turning off swap space, the system can avoid the overhead associated with swapping, which can lead to improved performance.
- Predictability: Kubernetes is designed to provide predictable and consistent performance for containerized applications. When swap space is enabled, it introduces an additional layer of unpredictability into the system, as the operating system may swap out container memory to disk, causing performance issues for containerized applications. By disabling swap, the system can provide more consistent and predictable performance for Kubernetes workloads.
- OOM behavior: When a system is low on memory, the kernel uses the Out of Memory (OOM) killer to terminate processes in order to free up memory. When swap space is enabled, the OOM killer may swap out container memory to disk instead of terminating processes, leading to unpredictable behavior and potential application failures. By disabling swap, the system can avoid this issue and allow the OOM killer to behave predictably.
Overall, turning off swap space on Kubernetes nodes can help improve the predictability and performance of containerized workloads, particularly in high-performance environments. However, it’s important to ensure that the system has enough physical memory to handle the workload, as disabling swap without enough memory can lead to out-of-memory errors and system crashes.
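A quick sanity check after the steps above (reads kernel state only, so it is safe to run anywhere):

```shell
# swapon prints nothing when no swap device is active
swapon --show
# SwapTotal should read 0 kB once swap is fully disabled
grep -i '^SwapTotal' /proc/meminfo
```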
```shell
# Enable kernel modules
sudo modprobe overlay
sudo modprobe bridge
sudo modprobe br_netfilter
# Add kernel-level settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
```
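To confirm the settings took effect, read them back from /proc/sys (the bridge keys only exist once br_netfilter is loaded):

```shell
# each enabled setting should print 1
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "bridge-nf-call-iptables not present (is br_netfilter loaded?)"
```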
Install kubelet, kubeadm, and kubectl
1. Add the Aliyun Kubernetes repository

```shell
sudo apt -y install curl apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
2. Install Kubernetes version 1.23.6

```shell
sudo apt update
# list all available versions
sudo apt list -a kubeadm
# install the specific 1.23.6 version
sudo apt -y install vim git curl wget kubelet=1.23.6-00 kubeadm=1.23.6-00 kubectl=1.23.6-00
# prevent the packages from being upgraded automatically
sudo apt-mark hold kubelet kubeadm kubectl
# Confirm the installation by checking the versions:
kubectl version --client && kubeadm version
```
Install the Docker container runtime
Refer to my blog post on installing Docker.
Prepare essential images (China only)
Use kubeadm to pull images from Aliyun
In China, the Kubernetes images cannot be pulled directly from Google's registry, so pull them from the Aliyun mirror instead.
```shell
sudo kubeadm config images pull \
  --kubernetes-version=1.23.6 \
  --image-repository registry.aliyuncs.com/google_containers
```
```
# output
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
```
Use “docker tag” to rename the images to “k8s.gcr.io/”
```shell
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6 k8s.gcr.io/kube-apiserver:v1.23.6 \
  && docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6 k8s.gcr.io/kube-scheduler:v1.23.6 \
  && docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6 k8s.gcr.io/kube-proxy:v1.23.6 \
  && docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6 k8s.gcr.io/kube-controller-manager:v1.23.6 \
  && docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0 \
  && docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns:v1.8.6 \
  && docker tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
# check image status
docker images
```
```
# output
REPOSITORY                                                        TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver                                         v1.23.6   8fa62c12256d   6 weeks ago    135MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.6   8fa62c12256d   6 weeks ago    135MB
k8s.gcr.io/kube-controller-manager                                v1.23.6   df7b72818ad2   6 weeks ago    125MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.6   df7b72818ad2   6 weeks ago    125MB
k8s.gcr.io/kube-proxy                                             v1.23.6   4c0375452406   6 weeks ago    112MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.6   4c0375452406   6 weeks ago    112MB
k8s.gcr.io/kube-scheduler                                         v1.23.6   595f327f224a   6 weeks ago    53.5MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.6   595f327f224a   6 weeks ago    53.5MB
k8s.gcr.io/etcd                                                   3.5.1-0   25f8c7f3da61   6 months ago   293MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   6 months ago   293MB
k8s.gcr.io/coredns                                                v1.8.6    a4ca41631cc7   7 months ago   46.8MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   7 months ago   46.8MB
k8s.gcr.io/pause                                                  3.6       6270bb605e12   9 months ago   683kB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   9 months ago   683kB
```
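The long `docker tag` chain can also be written as a loop, which is easier to keep in sync when the Kubernetes version changes (a sketch; adjust the image list to match what `kubeadm config images list` reports for your version):

```shell
# Retag each Aliyun mirror image back to the k8s.gcr.io name kubeadm expects.
SRC=registry.aliyuncs.com/google_containers
DST=k8s.gcr.io
for img in kube-apiserver:v1.23.6 kube-controller-manager:v1.23.6 \
           kube-scheduler:v1.23.6 kube-proxy:v1.23.6 \
           etcd:3.5.1-0 coredns:v1.8.6 pause:3.6; do
  docker tag "$SRC/$img" "$DST/$img"
done
```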
(Optional) pull images from my personal Tencent repository
```shell
# log in to my Tencent repository account
docker login --username=100011942350 ccr.ccs.tencentyun.com
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-apiserver:v1.23.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-apiserver:v1.23.6 k8s.gcr.io/kube-apiserver:v1.23.6 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-scheduler:v1.23.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-scheduler:v1.23.6 k8s.gcr.io/kube-scheduler:v1.23.6 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-proxy:v1.23.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-proxy:v1.23.6 k8s.gcr.io/kube-proxy:v1.23.6 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-controller-manager:v1.23.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_kube-controller-manager:v1.23.6 k8s.gcr.io/kube-controller-manager:v1.23.6 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_etcd:3.5.1-0 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_coredns:v1.8.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_coredns:v1.8.6 k8s.gcr.io/coredns:v1.8.6 &&
docker pull ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_pause:3.6 &&
docker tag ccr.ccs.tencentyun.com/tcr.finuks/k8s.gcr.io_pause:3.6 k8s.gcr.io/pause:3.6
```
Initialize the Kubernetes master node
Add hostnames to the “/etc/hosts” file
Add the Kubernetes master node and worker nodes to this file so their hostnames can be resolved:
```
192.168.122.113 frederick.k8s.master
192.168.122.114 frederick.k8s.worker1
```
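A quick check that both entries made it into /etc/hosts (hostnames here match the example above; substitute your own):

```shell
for host in frederick.k8s.master frederick.k8s.worker1; do
  grep -q "$host" /etc/hosts && echo "$host: ok" || echo "$host: missing"
done
```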
Initialize the master node with kubeadm

```shell
sudo kubeadm init --kubernetes-version=1.23.6 \
  --apiserver-advertise-address=192.168.122.135 \
  --cri-socket unix:///run/cri-dockerd.sock \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --control-plane-endpoint=k8s-cluster.fukangsun.com
# If the init fails, remember to reset before retrying:
sudo kubeadm reset
# Configure kubectl using the commands in the output:
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check cluster status
kubectl cluster-info
```
- kubeadm init: initializes a new Kubernetes cluster with kubeadm.
- --kubernetes-version=1.23.6: the Kubernetes version to install; here, 1.23.6.
- --apiserver-advertise-address=192.168.122.135: the IP address the API server advertises to the other nodes in the cluster.
- --cri-socket unix:///run/cri-dockerd.sock: the CRI (Container Runtime Interface) socket to use; here, the cri-dockerd socket.
- --image-repository registry.aliyuncs.com/google_containers: the container image repository to pull the Kubernetes component images from.
- --service-cidr=10.1.0.0/16: the CIDR range for Kubernetes services.
- --pod-network-cidr=10.244.0.0/16: the CIDR range for the pod network.
- --control-plane-endpoint=k8s-cluster.fukangsun.com: a shared endpoint (DNS name) for the Kubernetes control plane.
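The same flags can be captured in a kubeadm config file, which is easier to version-control and reuse (a sketch using the kubeadm v1beta3 API; the addresses, endpoint, and file name follow the example above and should be adjusted to your environment):

```yaml
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.122.135
nodeRegistration:
  criSocket: unix:///run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: k8s-cluster.fukangsun.com
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
```

Then initialize with `sudo kubeadm init --config kubeadm-config.yaml` instead of passing the flags on the command line.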
Join other nodes to the master node
Network plugin installation
Dashboard management web UI installation
Add a third-party Docker repository source for Kubernetes
API port reverse proxy with nginx