Kubernetes 1.22.0 + containerd 1.4.11 single-node installation

This guide uses CentOS 8.3 and IPVS as the kube-proxy mode. The hosts are:

192.168.188.10 master
192.168.188.4  node
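Name resolution between the two machines makes later steps (joining nodes, debugging) easier. A minimal sketch of /etc/hosts entries, assuming the hostnames k8s-master and k8s-node1 (taken from the node names used later in this guide; adjust to your actual hostnames):

```bash
# Hypothetical hostnames; IPs match the two machines listed above
cat >> /etc/hosts <<EOF
192.168.188.10 k8s-master
192.168.188.4  k8s-node1
EOF
```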
First, initialize the base system configuration:
```bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
dnf install epel-release -y
dnf install vim net-tools lrzsz rsyslog wget bind-utils -y
dnf makecache; dnf update -y
setenforce 0
sed -ri 's/^SELINUX=(.*)/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld; systemctl disable firewalld
```
Disable swap:

```bash
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
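A quick check that swap is really off:

```bash
swapon --show   # no output means no active swap devices
free -h         # the Swap line should show 0B
```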
Load the br_netfilter kernel module and set the required kernel parameters:

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
# Make traffic crossing the bridge also pass through the iptables/netfilter framework
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# 1 enables IP forwarding, 0 disables it
net.ipv4.ip_forward = 1
# Limit on the number of VMAs (virtual memory areas) a single process may own
# Check the current value: sysctl -a | grep vm.max_map_count
# For a permanent change this can also go in /etc/sysctl.conf
vm.max_map_count = 262144
EOF

# modprobe br_netfilter    loads the module
# modprobe -r br_netfilter removes it
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
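To confirm the module is loaded and the parameters took effect:

```bash
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.max_map_count
```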
Configure the IPVS modules so they load persistently. Install the tools first, then write the module list to /etc/sysconfig/modules/ipvs.modules (see the sketch below):

```bash
dnf install ipvsadm ipset -y
```
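A conventional sketch of /etc/sysconfig/modules/ipvs.modules; the exact module list is an assumption (these are the modules kube-proxy's IPVS mode typically needs; on older kernels nf_conntrack is named nf_conntrack_ipv4):

```bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
# Confirm the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
```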
Configure the Docker yum repository and install containerd

```bash
# Set up the Docker CE yum repository
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo dnf install -y yum-utils device-mapper-persistent-data lvm2

# Use containerd as the container runtime
## Install containerd
sudo dnf update -y && sudo dnf install -y containerd.io

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
```
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```
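If the generated config.toml already contains a SystemdCgroup entry (newer containerd releases emit one; a freshly generated 1.4 config may not), the value can be flipped in place; otherwise add it by hand under the runc.options table:

```bash
# Assumes the key already exists with the default value "false"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -n 'SystemdCgroup' /etc/containerd/config.toml
sudo systemctl restart containerd
```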
### Add the following to /etc/containerd/config.toml

```toml
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
      endpoint = ["https://registry.aliyuncs.com/k8sxio"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."shanyao.harbor.com"]
      endpoint = ["shanyao.harbor.domain"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."shanyao.harbor.com".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."shanyao.harbor.com".auth]
      username = "admin"
      password = "Shanyao@2022"
```
### Configure the containerd CNI network plugin

```bash
# CNI network interface plugins
wget https://github.com/containerd/containerd/releases/download/v1.4.3/cri-containerd-cni-1.4.3-linux-amd64.tar.gz
tar -tf cri-containerd-cni-1.4.3-linux-amd64.tar.gz
sudo tar -C / -xzf cri-containerd-cni-1.4.3-linux-amd64.tar.gz

vim /etc/profile
# Insert the following line
export PATH=$PATH:/usr/local/bin:/usr/local/sbin
source ~/.bashrc

ctr version
Client:
  Version:  v1.4.3
  Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
  Go version: go1.15.5
Server:
  Version:  v1.4.3
  Revision: 269548fa27e0089a8b8278fc4fc781d7f65a939b
  UUID: d1724999-91b3-4338-9288-9a54c9d52f70

# Install crictl
wget https://github.com/kubernetes-sigs/cri-tools/releases/crictl-v1.22.0-linux-amd64.tar.gz
tar -C /usr/local/bin/ -xf crictl-v1.22.0-linux-amd64.tar.gz

# Restart containerd
sudo systemctl restart containerd; sudo systemctl enable containerd
```
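crictl needs to know where the containerd socket is before it can be used. A minimal /etc/crictl.yaml sketch (assuming the default socket path), followed by a test pull that should be served by the k8s.gcr.io mirror configured above (the pause image tag here is just an example):

```bash
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF

crictl pull k8s.gcr.io/pause:3.5
crictl images | grep pause
```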
Configure the Kubernetes yum repository and install kubeadm, kubelet, and kubectl:

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
dnf install -y kubelet kubeadm kubectl
systemctl enable kubelet
```
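The repository above installs the newest available 1.x packages; to match the 1.22.0 target of this guide, the versions can be pinned explicitly (a sketch; list the available versions first):

```bash
dnf list kubelet kubeadm kubectl --showduplicates | sort -r | head
dnf install -y kubelet-1.22.0 kubeadm-1.22.0 kubectl-1.22.0
systemctl enable --now kubelet
```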
Print the default kubeadm init configuration:

```bash
$ kubeadm config print init-defaults > kubeadm.yaml
```
After editing, the configuration looks roughly like the following (single-master setup). `cat kubeadm.yaml`:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.188.10   # apiserver address; single master, so use the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock   # point this at the containerd socket
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switched to the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # Pod CIDR; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
mode: ipvs   # or iptables
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```
The documentation for the resource objects in this manifest is scattered; to see all the fields of the objects above, refer to the corresponding godoc: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
Pre-pull the images

Run this on the master only.
```bash
# List the images that will be used; if everything is fine, you should get the list below
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.22.0
registry.aliyuncs.com/google_containers/pause:3.5
registry.aliyuncs.com/google_containers/etcd:3.5.0-0
registry.aliyuncs.com/google_containers/coredns:v1.8.4

# The upstream equivalents of these images are:
k8s.gcr.io/kube-apiserver:v1.22.0
k8s.gcr.io/kube-controller-manager:v1.22.0
k8s.gcr.io/kube-scheduler:v1.22.0
k8s.gcr.io/kube-proxy:v1.22.0
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

# Pull the images locally ahead of time
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.8.4

# You can switch registries by changing imageRepository in kubeadm.yaml, e.g. imageRepository: k8s.gcr.io
```
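Once the pull completes, the images should be visible through the CRI (assuming crictl was configured earlier):

```bash
crictl images | grep registry.aliyuncs.com/google_containers
```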
Initialize the master node (run only on the master):

```bash
kubeadm init --config kubeadm.yaml
```

If initialization succeeds, it prints output like the following:
```
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.188.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
```
Next, follow the hints above to configure kubectl client authentication:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
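Alternatively, when working as root, kubectl can simply be pointed at the admin kubeconfig; persisting the variable in the shell profile is optional:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
```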
⚠️ Note: at this point `kubectl get nodes` will show the node as NotReady because the network plugin has not been installed yet.

If an error occurs during initialization, fix the issue indicated by the error message, run `kubeadm reset`, and then run the init command again.
Add slave nodes to the cluster

Operate on: all slave nodes (k8s-slave).

On each slave node, run the following command. It is printed at the end of a successful `kubeadm init`; replace it with the actual command produced by your own init:

```bash
kubeadm join 192.168.188.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
```
Network plugin

Download the flannel manifest:

```bash
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
Modify the configuration to specify the NIC name (around line 190 of the file), adding one line:

```yaml
# vi kube-flannel.yml
...
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0   # On machines with multiple NICs, specify the internal NIC; if unset, flannel uses the first NIC it finds
  resources:
    requests:
      cpu: "100m"
...
```
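The Network range inside the manifest's net-conf.json must match the podSubnet configured in kubeadm.yaml (10.244.0.0/16); a quick sanity check (the exact field layout depends on the flannel manifest version):

```bash
grep -A 6 'net-conf.json' kube-flannel.yml
# Expect:  "Network": "10.244.0.0/16"
```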
Install the flannel network plugin:

```bash
# Pull the image first; this can be slow from inside China
$ crictl pull quay.io/coreos/flannel:v0.11.0-amd64
# Install flannel
$ kubectl create -f kube-flannel.yml
```
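Then confirm the flannel DaemonSet pods come up on every node (this manifest version deploys into kube-system with the app=flannel label; newer manifests use a dedicated kube-flannel namespace):

```bash
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes   # nodes should turn Ready once flannel is running
```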
Make the master schedulable (remove the taint)

By default the master node does not schedule workload pods. To let the master participate in pod scheduling as well, run:

```bash
$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
```
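To confirm the taint was removed:

```bash
kubectl describe node k8s-master | grep -i taints
# Expect: Taints: <none>
```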
Operate on: the master node (k8s-master).

```bash
$ kubectl get nodes   # check whether all cluster nodes are Ready
```
Add a new node

Get the join command from the master:

```bash
kubeadm token create --print-join-command
# Check token expiry times
kubeadm token list
```
Generate a token

Generate a permanent (never-expiring) token:

```bash
[root@kmaster ~]# kubeadm token create --ttl 0
```
View the CA certificate hash
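One common way to compute the discovery-token-ca-cert-hash from the cluster CA certificate (a sketch; assumes the default certificate path used by kubeadm):

```bash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```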
Remove a node

```bash
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1
```
```bash
# On the node being removed, run the cleanup command
kubeadm reset
# Check again
kubectl get nodes
```
Create a test nginx service

```bash
$ kubectl run test-nginx --image=nginx:alpine
```
Check that the pod was created successfully, then access the pod IP to verify it is reachable:

```bash
$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1
```
```
$ curl 10.244.1.2
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
```
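To reach the test pod through a cluster Service instead of the pod IP, it can also be exposed (a sketch; the Service name is arbitrary, and `kubectl run` labels the pod run=test-nginx, which the Service selector picks up):

```bash
kubectl expose pod test-nginx --port=80 --target-port=80 --name=test-nginx-svc
kubectl get svc test-nginx-svc   # then curl the CLUSTER-IP shown
```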