This article walks through deploying a vanilla Kubernetes v1.23.6 cluster on CentOS 7, using Docker as the container runtime and Flannel as the CNI. The cluster is only for my own study and testing and resources are limited, so a highly available (multi-master) deployment is not covered.
I have previously written some articles on Kubernetes basics and other cluster setup approaches; feel free to check them out if needed.
 
1. Preparation

1.1 Cluster node information

All machines are 8C8G virtual machines with a 100G disk.
IP              Hostname
10.31.8.1       tiny-flannel-master-8-1.k8s.tcinternal
10.31.8.11      tiny-flannel-worker-8-11.k8s.tcinternal
10.31.8.12      tiny-flannel-worker-8-12.k8s.tcinternal
10.8.64.0/18    podSubnet
10.8.0.0/18     serviceSubnet
 
1.2 Check the MAC address and product_uuid

All nodes within the same Kubernetes cluster must have unique MAC addresses and product_uuid values. Check them before starting cluster initialization:

ip link
ifconfig -a
sudo cat /sys/class/dmi/id/product_uuid
 
1.3 Configure passwordless SSH login (optional)

If the cluster nodes have more than one network interface, make sure every node can reach the others over the correct interface.
su root
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
cat >> ~/.ssh/config <<EOF
Host tiny-flannel-master-8-1.k8s.tcinternal
    HostName 10.31.8.1
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
Host tiny-flannel-worker-8-11.k8s.tcinternal
    HostName 10.31.8.11
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
Host tiny-flannel-worker-8-12.k8s.tcinternal
    HostName 10.31.8.12
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
EOF
 
1.4 Modify the hosts file

cat >> /etc/hosts <<EOF
10.31.8.1  tiny-flannel-master-8-1 tiny-flannel-master-8-1.k8s.tcinternal
10.31.8.11 tiny-flannel-worker-8-11 tiny-flannel-worker-8-11.k8s.tcinternal
10.31.8.12 tiny-flannel-worker-8-12 tiny-flannel-worker-8-12.k8s.tcinternal
EOF
 
1.5 Disable swap

swapoff -a
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
 
1.6 Configure time synchronization

Either ntp or chrony works here, whichever you prefer. For the upstream time source you can use Alibaba Cloud's ntp1.aliyun.com or the National Time Service Center's ntp.ntsc.ac.cn.

Synchronize with ntp:

yum install ntpdate -y
ntpdate ntp.ntsc.ac.cn
hwclock
 
Synchronize with chrony:

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service
systemctl status chronyd.service

# Edit the configuration and replace the default time sources
vim /etc/chrony.conf

# Default servers before editing:
$ grep server /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

# After pointing chrony at ntp.ntsc.ac.cn:
$ grep server /etc/chrony.conf
server ntp.ntsc.ac.cn iburst

systemctl restart chronyd.service
chronyc sourcestats -v
chronyc sources -v
 
1.7 Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
 
1.8 Configure the firewall

Communication between cluster nodes and service exposure require a large number of ports, so for convenience we simply disable the firewall:

systemctl disable firewalld.service
 
1.9 Configure netfilter parameters

Here we mainly need to load the br_netfilter kernel module and let iptables see bridged IPv4 and IPv6 traffic, so that containers inside the cluster can communicate properly.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
 
1.10 Disable IPv6 (optional)

Although recent Kubernetes versions already support dual-stack networking, this deployment does not involve any IPv6 communication, so IPv6 support is disabled:

grubby --update-kernel=ALL --args=ipv6.disable=1
 
1.11 Configure IPVS (optional)

IPVS is a component designed specifically for load-balancing scenarios. The IPVS mode of kube-proxy improves scalability by reducing its reliance on iptables: instead of hooking into the iptables PREROUTING chain, it creates a dummy interface called kube-ipvs0. When the number of load-balancing rules in a cluster grows large, IPVS delivers noticeably better forwarding performance than iptables.

Note: on Linux kernel 4.19 and later, the nf_conntrack module replaces the old nf_conntrack_ipv4 module (an adjusted example for newer kernels follows the commands below).
 
sudo yum install ipset ipvsadm -y

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF

sudo sysctl --system

$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

$ cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4
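The block above targets the stock CentOS 7 3.10 kernel. For nodes running Linux kernel 4.19 or later, per the note above, a minimal sketch of the same configuration simply swaps nf_conntrack_ipv4 for nf_conntrack:

# Only needed on kernels >= 4.19, where nf_conntrack_ipv4 no longer exists
modprobe -- nf_conntrack

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF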
 
2. Install a container runtime

2.1 Install Docker

The detailed official documentation can be found here. Since dockershim was removed in the newly released 1.24, pay attention to the choice of container runtime when installing version 1.24 or later. The version we are installing here is below 1.24, so we keep using Docker.

The details of installing Docker are covered in an earlier article of mine, so I will not repeat them here.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
 
2.2 Configure the cgroup driver

CentOS 7 uses systemd to initialize the system and manage processes. The init process creates and uses a root control group (cgroup) and acts as the cgroup manager. systemd is tightly integrated with cgroups and assigns a cgroup to every systemd unit. The container runtime and the kubelet can also be configured to use cgroupfs, but running cgroupfs alongside systemd means having two different cgroup managers on the same system, which tends to make it unstable. It is therefore better to configure both the container runtime and the kubelet to use systemd as the cgroup driver. For Docker this means setting the native.cgroupdriver=systemd option.

See the official documentation on cgroup drivers:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
And the configuration notes for Docker:
https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#docker
 
sudo mkdir /etc/docker

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

$ docker info | grep systemd
 Cgroup Driver: systemd
 
2.3 About the kubelet cgroup driver

The official Kubernetes documentation explains in detail how to set the kubelet's cgroup driver. Note in particular that starting from version 1.22, if the kubelet's cgroup driver is not set manually, kubeadm defaults it to systemd.
Note:  In v1.22, if the user is not setting the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd.
 
A fairly simple way to specify the kubelet's cgroup driver is to add a cgroupDriver field to kubeadm-config.yaml:

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
 
After initialization we can inspect the kubeadm-config ConfigMap directly to see the cluster's configuration.
$ kubectl describe configmaps kubeadm-config -n kube-system
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}

BinaryData
====

Events:  <none>
 
Of course, since the version we are installing is above 1.22.0 and we are already using systemd, there is no need to configure this again.

3. Install the three kube tools

The corresponding official documentation is here:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
 
The three kube tools are kubeadm, kubelet and kubectl. Their roles are:

kubeadm: the command used to bootstrap the cluster.
kubelet: the agent that runs on every node in the cluster and starts Pods and containers.
kubectl: the command-line tool used to communicate with the cluster.
 
A few things to note:

kubeadm does not manage kubelet or kubectl for us, and the same holds the other way round; the three tools are independent of one another and none of them manages the others.
The kubelet version must be lower than or equal to the API server version, otherwise compatibility problems are likely (see the quick version check after this list).
kubectl does not need to be installed on every node, nor does it have to be installed on a cluster node at all; you can install it on your own local machine and, together with a kubeconfig file, use the kubectl command line to manage the corresponding cluster remotely.
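As an optional quick sanity check of the version-skew rule (not part of the original steps), once the three binaries are installed you can compare their versions:

kubeadm version
kubelet --version
kubectl version --client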
 
Installation on CentOS 7 is straightforward: we can use the officially provided yum repository directly. Note that this step normally also involves setting the SELinux state, but since we already disabled SELinux earlier, we skip that here.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Or use the Aliyun mirror instead of the Google repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Disable repo_gpgcheck and retry without GPG checking if the check fails
sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/kubernetes.repo
sudo yum install -y kubelet kubeadm kubectl --nogpgcheck --disableexcludes=kubernetes

# List the available versions and pin a specific one
sudo yum list --nogpgcheck kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes
sudo yum install -y kubelet-1.23.6-0 kubeadm-1.23.6-0 kubectl-1.23.6-0 --nogpgcheck --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
 
4. Initialize the cluster

4.1 Write the configuration file

Once all of the steps above have been completed on every node of the cluster, we can start creating the Kubernetes cluster. Since high availability is not involved this time, the initialization is done directly on the target master node.

$ kubeadm config images list
I0507 14:14:34.992275   20038 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

$ kubeadm config print init-defaults > kubeadm-flannel.conf
 
Since networks in mainland China usually cannot reach Google's k8s.gcr.io registry, we change the imageRepository parameter in the configuration file to the Aliyun mirror.
The kubernetesVersion field specifies the Kubernetes version to install.
The localAPIEndpoint parameter must be changed to the IP and port of our master node; this becomes the apiserver address of the initialized cluster.
The serviceSubnet and dnsDomain parameters can normally be left at their defaults; here I changed them to suit my own needs.
The name parameter under nodeRegistration is changed to the hostname of the corresponding master node.
A new configuration block is added to enable IPVS; see the official documentation for details.
 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.8.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: tiny-flannel-master-8-1.k8s.tcinternal
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.6
networking:
  dnsDomain: flan-cluster.tclocal
  serviceSubnet: 10.8.0.0/18
  podSubnet: 10.8.64.0/18
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
 
4.2 Initialize the cluster

If we now list the images referenced by the configuration file again, we can see that they have switched to the Aliyun mirror versions.

$ kubeadm config images list --config kubeadm-flannel.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

$ kubeadm config images pull --config kubeadm-flannel.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

$ kubeadm init --config kubeadm-flannel.conf
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...(a large amount of output omitted here)...
 
When we see the following output, the cluster has been initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.31.8.1:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d7160866920c0331731ad3c1c31a6e5b6c788b5682f86971cacaa940211db9ab
 
4.3 Configure kubeconfig

Right after a successful initialization we cannot query the cluster yet; we first need to configure the kubeconfig so that kubectl can connect to the apiserver and read cluster information.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

# Enable kubectl bash completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
 
As mentioned earlier, kubectl does not have to be installed inside the cluster. In fact, any machine that can reach the apiserver can have kubectl installed on it; configure the kubeconfig following the same steps and you can use the kubectl command line to manage the cluster remotely.
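For example, a minimal sketch of remote management from a workstation outside the cluster; the path ~/.kube/config-tiny-flannel is a hypothetical name chosen only for illustration:

scp root@10.31.8.1:/etc/kubernetes/admin.conf ~/.kube/config-tiny-flannel
kubectl --kubeconfig ~/.kube/config-tiny-flannel get nodes
# or make it the default for the current shell
export KUBECONFIG=~/.kube/config-tiny-flannel
kubectl cluster-info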
 
Once this is configured, we can run the usual commands to view the cluster information.

$ kubectl cluster-info
Kubernetes control plane is running at https://10.31.8.1:6443
CoreDNS is running at https://10.31.8.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes -o wide
NAME                                     STATUS     ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
tiny-flannel-master-8-1.k8s.tcinternal   NotReady   control-plane,master   79s   v1.23.6   10.31.8.1     <none>        CentOS Linux 7 (Core)   3.10.0-1160.62.1.el7.x86_64   docker://20.10.14

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE   IP          NODE                                     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-2clkj                                          0/1     Pending   0          86s   <none>      <none>                                   <none>           <none>
kube-system   coredns-6d8c4cb4d-8mznz                                          0/1     Pending   0          86s   <none>      <none>                                   <none>           <none>
kube-system   etcd-tiny-flannel-master-8-1.k8s.tcinternal                      1/1     Running   0          91s   10.31.8.1   tiny-flannel-master-8-1.k8s.tcinternal   <none>           <none>
kube-system   kube-apiserver-tiny-flannel-master-8-1.k8s.tcinternal            1/1     Running   0          92s   10.31.8.1   tiny-flannel-master-8-1.k8s.tcinternal   <none>           <none>
kube-system   kube-controller-manager-tiny-flannel-master-8-1.k8s.tcinternal   1/1     Running   0          90s   10.31.8.1   tiny-flannel-master-8-1.k8s.tcinternal   <none>           <none>
kube-system   kube-proxy-dkvrn                                                 1/1     Running   0          86s   10.31.8.1   tiny-flannel-master-8-1.k8s.tcinternal   <none>           <none>
kube-system   kube-scheduler-tiny-flannel-master-8-1.k8s.tcinternal            1/1     Running   0          92s   10.31.8.1   tiny-flannel-master-8-1.k8s.tcinternal   <none>           <none>
 
4.4 Add the worker nodes

Next we add the remaining two nodes as workers to run workloads. Simply run the command printed at the end of the successful initialization on each remaining node to join it to the cluster:

$ kubeadm join 10.31.8.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:d7160866920c0331731ad3c1c31a6e5b6c788b5682f86971cacaa940211db9ab
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
 
If you did not save the output of the successful initialization, that is not a problem: we can use kubeadm to list or regenerate the token and the CA certificate hash (a one-step alternative is shown after the commands below).

$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-08T06:27:34Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

$ kubeadm token create
pyab3u.j1a9ld7vk03znbk8

$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2022-05-08T06:27:34Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
pyab3u.j1a9ld7vk03znbk8   23h         2022-05-08T06:34:28Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

# Compute the discovery-token-ca-cert-hash from the cluster CA certificate
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d6cdc5a3bc40cbb0ae85776eb4fcdc1854942e2dd394470ae0f2f97714dd9fb9
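Alternatively, kubeadm can print a complete join command (a new token plus the CA certificate hash) in one step; the output line below is only a placeholder illustration, the actual token will differ:

$ kubeadm token create --print-join-command
# kubeadm join 10.31.8.1:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:d7160866920c0331731ad3c1c31a6e5b6c788b5682f86971cacaa940211db9ab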
 
After the nodes are added, listing the cluster nodes again shows two additional nodes, but they are still in the NotReady state; the next step is to deploy a CNI.

$ kubectl get nodes
NAME                                      STATUS     ROLES                  AGE     VERSION
tiny-flannel-master-8-1.k8s.tcinternal    NotReady   control-plane,master   7m49s   v1.23.6
tiny-flannel-worker-8-11.k8s.tcinternal   NotReady   <none>                 2m58s   v1.23.6
tiny-flannel-worker-8-12.k8s.tcinternal   NotReady   <none>                 102s    v1.23.6
 
5. Install the CNI

5.1 Prepare the manifest file

Flannel is probably one of the most beginner-friendly of the many open-source CNI plugins: it is simple to deploy, its principles are easy to understand, and documentation is plentiful online.
 $ wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
 
We need to modify a few parameters in kube-flannel.yml to adapt it to our cluster, as sketched below.
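The original post does not spell out the exact edits here; as an assumption based on the cluster parameters above, the key change in the stock kube-flannel.yml is the Network field inside the net-conf.json section of the kube-flannel-cfg ConfigMap, which defaults to 10.244.0.0/16 and should be set to the podSubnet we passed to kubeadm (10.8.64.0/18):

  net-conf.json: |
    {
      "Network": "10.8.64.0/18",
      "Backend": {
        "Type": "vxlan"
      }
    }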
5.2 Deploy Flannel

Once the modifications are done, we can deploy it directly:

$ kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

$ kubectl get pods -A
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-np7q2                                          1/1     Running   0          14m
kube-system   coredns-6d8c4cb4d-z8f5b                                          1/1     Running   0          14m
kube-system   etcd-tiny-flannel-master-8-1.k8s.tcinternal                      1/1     Running   0          14m
kube-system   kube-apiserver-tiny-flannel-master-8-1.k8s.tcinternal            1/1     Running   0          14m
kube-system   kube-controller-manager-tiny-flannel-master-8-1.k8s.tcinternal   1/1     Running   0          14m
kube-system   kube-flannel-ds-9fq4z                                            1/1     Running   0          12m
kube-system   kube-flannel-ds-ckstx                                            1/1     Running   0          7m18s
kube-system   kube-flannel-ds-qj55x                                            1/1     Running   0          8m25s
kube-system   kube-proxy-bncfl                                                 1/1     Running   0          14m
kube-system   kube-proxy-lslcm                                                 1/1     Running   0          7m18s
kube-system   kube-proxy-pmwhf                                                 1/1     Running   0          8m25s
kube-system   kube-scheduler-tiny-flannel-master-8-1.k8s.tcinternal            1/1     Running   0          14m

# Follow the flannel pod logs if anything looks wrong
$ kubectl logs -f -l app=flannel -n kube-system
 
6. Deploy a test workload

With the cluster deployed, we deploy an nginx instance to verify that everything works. First we create a namespace called nginx-quic, then a deployment called nginx-quic-deployment inside that namespace to run the pods, and finally a service to expose it. For easy testing we expose the port with a NodePort service first.

$ cat nginx-quic.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-quic
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service
  namespace: nginx-quic
spec:
  selector:
    app: nginx-quic
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30088
  type: NodePort
 
After deployment, we check the status directly:

$ kubectl apply -f nginx-quic.yaml
namespace/nginx-quic created
deployment.apps/nginx-quic-deployment created
service/nginx-quic-service created

$ kubectl get deployment -o wide -n nginx-quic
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                          SELECTOR
nginx-quic-deployment   2/2     2            2           48s   nginx-quic   tinychen777/nginx-quic:latest   app=nginx-quic

$ kubectl get service -o wide -n nginx-quic
NAME                 TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
nginx-quic-service   NodePort   10.8.4.218   <none>        8080:30088/TCP   62s   app=nginx-quic

$ kubectl get pods -o wide -n nginx-quic
NAME                                     READY   STATUS    RESTARTS   AGE   IP          NODE                                      NOMINATED NODE   READINESS GATES
nginx-quic-deployment-696d959797-jm8w5   1/1     Running   0          73s   10.8.66.2   tiny-flannel-worker-8-12.k8s.tcinternal   <none>           <none>
nginx-quic-deployment-696d959797-lwcqz   1/1     Running   0          73s   10.8.65.2   tiny-flannel-worker-8-11.k8s.tcinternal   <none>           <none>

$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30088 rr
  -> 10.8.65.2:80                 Masq    1      0          0
  -> 10.8.66.2:80                 Masq    1      0          0
TCP  10.8.4.218:8080 rr
  -> 10.8.65.2:80                 Masq    1      0          0
  -> 10.8.66.2:80                 Masq    1      0          0
TCP  10.8.64.0:30088 rr
  -> 10.8.65.2:80                 Masq    1      0          0
  -> 10.8.66.2:80                 Masq    1      0          0
TCP  10.8.64.1:30088 rr
  -> 10.8.65.2:80                 Masq    1      0          0
  -> 10.8.66.2:80                 Masq    1      0          0
TCP  10.31.8.1:30088 rr
  -> 10.8.65.2:80                 Masq    1      0          0
  -> 10.8.66.2:80                 Masq    1      0          0
 
Finally we run the tests. By default, this nginx-quic image returns the source IP and port of the request as seen inside the nginx container:

$ curl 10.8.66.2:80
10.8.64.0:38958
$ curl 10.8.65.2:80
10.8.64.0:46484
$ curl 10.8.4.218:8080
10.8.64.0:26305
$ curl 10.31.8.1:30088
10.8.64.0:6519
$ curl 10.31.8.1:30088
10.8.64.0:50688
$ curl 10.31.8.11:30088
10.8.65.1:41032
$ curl 10.31.8.12:30088
10.8.66.0:11422