- Table of Contents
1. Server planning
2. Preparation: system configuration, kernel tuning and yum repo setup (all nodes)
3. keepalived setup (master nodes)
4. haproxy setup (master nodes)
5. Docker installation (all nodes)
6. Installing kubeadm, kubelet and kubectl (all nodes)
7. Installing the k8s cluster (on the master holding the VIP) and extending the certificate validity to 100 years
8. Installing the cluster network (master nodes)
9. Joining the other nodes to the cluster
10. Deploying the dashboard (k8s-master-01)
11. Deploying ingress (0.30.0)
十二.部署metric
十三.部署kubernetes dns缓存及其reloader插件实现配置pod滚动更新
十四.k8s缩容扩容维护
十五.k8s常见操作十六.参考下一篇,关于应用部署实际应用案例
1. Server planning (based on Tencent Cloud servers)
k8s-master-01 10.206.16.14 master
k8s-master-02 10.206.16.15 master
k8s-master-03 10.206.16.16 master
k8s-node-01 10.206.16.8 node
k8s-node-02 10.206.16.9 node
k8s-harbor 10.206.16.4 harbor
vip 10.206.16.18 (how to request a VIP on Tencent Cloud: https://cloud.tencent.com/document/product/215/36694)
OS: CentOS 8; Kubernetes version: 1.16; machine spec: 4 cores / 16 GB RAM
2. Preparation
1. Set the hostnames
hostnamectl set-hostname k8s-master-01   (run on each host, using its name from the planning table)
2. Edit the hosts file (on all hosts)
cat <<EOF >>/etc/hosts
10.206.16.18 master.k8s.io k8s-vip
10.206.16.14 master01.k8s.io k8s-master-01
10.206.16.15 master02.k8s.io k8s-master-02
10.206.16.16 master03.k8s.io k8s-master-03
10.206.16.8 node01.k8s.io k8s-node-01
10.206.16.9 node02.k8s.io k8s-node-02
10.206.16.4 k8s-harbor
EOF
3. Disable the firewall, SELinux and swap (on all hosts)
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
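The two sed commands above only edit the config files; as a hedged set of commands to apply the changes to the running system immediately (assuming firewalld is the active firewall, which is disabled again in the Docker section below):
swapoff -a                                                # turn swap off for the current boot
setenforce 0                                              # switch SELinux to permissive until reboot
systemctl stop firewalld && systemctl disable firewalld   # stop and disable the firewall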
4. Configure kernel parameters (on all hosts)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
5. Configure resource limits (on all hosts)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
6. Configure yum repositories (on all hosts)
yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos8_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache
7. Configure the Kubernetes repo (on all hosts)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8. Configure the Docker repo (on all hosts)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
9. Install related packages (on all hosts)
yum install -y conntrack-tools libseccomp libtool-ltdl
10. Configure time synchronization on all hosts (Tencent Cloud machines already have the upstream time server set in the chrony config file; the service only needs to be started)
yum install -y chrony
systemctl start chronyd
systemctl status chronyd
systemctl enable chronyd
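A hedged way to confirm that chrony is actually synchronizing:
chronyc sources -v    # the selected upstream server is marked with '*'
chronyc tracking      # shows the current offset from the reference clock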
3. Deploy keepalived (on the three master machines)
1. Install keepalived (on the three masters)
yum install -y conntrack-tools libseccomp libtool-ltdl keepalived
Create the haproxy health-check script used by keepalived:
echo '#!/bin/bash
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
systemctl start haproxy
sleep 3
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
systemctl stop keepalived
fi
fi
' > /etc/keepalived/check_haproxy.sh
chmod +x /etc/keepalived/check_haproxy.sh
2. Configuration (the other two masters are configured similarly; only change state to BACKUP and use a different priority value; the remaining fields are not explained here)
k8s-master-01 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
#script "killall -0 haproxy"
script "/etc/keepalived/check_haproxy.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 250
nopreempt             # non-preemptive mode
preempt_delay 10      # delay preemption by 10 seconds
advert_int 1          # advertisement interval, 1s by default
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.14   # this node's private IP
unicast_peer{                 # private IPs of the other two masters
10.206.16.15
10.206.16.16
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
k8s-master-02 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
#script "killall -0 haproxy"
script "/etc/keepalived/check_haproxy.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
nopreempt
preempt_delay 10
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.15   # this node's private IP
unicast_peer{
10.206.16.14
10.206.16.16
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
k8s-master-03 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
#script "killall -0 haproxy"
script "/etc/keepalived/check_haproxy.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 150
nopreempt
preempt_delay 10
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.16
unicast_peer{
10.206.16.14
10.206.16.15
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
3. Start and check
Enable at boot:
systemctl enable keepalived.service
Start the service:
systemctl start keepalived.service
Check the status:
systemctl status keepalived.service
After startup, check the NIC on k8s-master-01 (for example with ip addr show eth0); the VIP should be bound:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 52:54:00:f2:57:46 brd ff:ff:ff:ff:ff:ff
inet 10.206.16.14/20 brd 10.206.31.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 10.206.16.18/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fef2:5746/64 scope link
valid_lft forever preferred_lft forever
Try stopping the keepalived service on k8s-master-01 and check whether the VIP fails over to another master; then start keepalived on k8s-master-01 again and check whether the VIP can float back. If so, the configuration is working.
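A minimal, hedged test sequence (VIP and interface names as in the planning table above):
systemctl stop keepalived                  # on k8s-master-01
ip addr show eth0 | grep 10.206.16.18      # on k8s-master-02/03: the VIP should now be bound there
systemctl start keepalived                 # on k8s-master-01 again
ip addr show eth0 | grep 10.206.16.18      # on k8s-master-01: note that with nopreempt set, the VIP may only move back after the current holder's keepalived restarts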
4. Set up haproxy (on the three masters)
1. Install haproxy (on the three masters)
yum install -y haproxy
2. Configuration (on the three masters)
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master01.k8s.io 10.206.16.14:6443 check
server master02.k8s.io 10.206.16.15:6443 check
server master03.k8s.io 10.206.16.16:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
EOF
3. Start and check (on the three masters)
# Enable at boot
systemctl enable haproxy
# Start haproxy
systemctl start haproxy
# Check the status
systemctl status haproxy
4. Check that the service ports are listening
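The listing below was presumably produced with something along these lines (netstat is in the net-tools package; ss -lntup works as well):
netstat -lntup | grep haproxy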
tcp 0 0 0.0.0.0:1080 0.0.0.0:* LISTEN 37567/haproxy
tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 37567/haproxy
udp 0 0 0.0.0.0:48413 0.0.0.0:* 37565/haproxy
5. Install Docker (all nodes)
1. Installation
# Step 1: install the necessary system tools
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
$ sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: list the available Docker-CE versions:
$ yum list docker-ce.x86_64 --showduplicates | sort -r
# Step 4: install the specified version of Docker-CE
$ yum makecache
$ yum install -y docker-ce
2. Configuration
Modify the Docker daemon config; the cgroup driver currently recommended for Kubernetes is systemd.
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"insecure-registries" : ["10.206.16.4"],
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "hub-mirror.c.163.com"]
}
EOF
Modify the Docker service unit so that Docker's data directory points at the mounted disk via --graph /data/docker:
mkdir /data/docker
sed -i "s#containerd.sock#containerd.sock --graph /data/docker#g" /lib/systemd/system/docker.service
Add the following Exec statements (in case pods cannot communicate with each other):
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/10-docker.conf <<EOF
[Service]
ExecStartPost=/sbin/iptables --wait -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D FORWARD -s 0.0.0.0/0 -j ACCEPT &> /dev/null || :'
ExecStartPost=/sbin/iptables --wait -I INPUT -i cni0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D INPUT -i cni0 -j ACCEPT &> /dev/null || :'
EOF
Alternatively, run the following on the node:
iptables -P FORWARD ACCEPT
# Also add the following command to /etc/rc.local, so that after a node reboot the default policy of the iptables FORWARD chain does not revert to DROP
sleep 60 && /sbin/iptables -P FORWARD ACCEPT
3. Start Docker
systemctl daemon-reload
systemctl stop firewalld
systemctl disable firewalld
iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
systemctl start docker.service
systemctl enable docker.service
systemctl status docker.service
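A hedged sanity check that the daemon picked up the configuration above:
docker info | grep -i 'cgroup driver'    # should report systemd
docker info | grep -i 'root dir'         # should report /data/docker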
6. Install kubeadm, kubelet and kubectl (on all nodes)
1. Installation
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
systemctl enable kubelet
2. Enable kubectl auto-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
7. Install the k8s cluster (on k8s-master-01, which currently holds the VIP)
1. Create the configuration file:
mkdir /usr/local/kubernetes/manifests -p
cd /usr/local/kubernetes/manifests/
cat > kubeadm-config.yaml <<EOF
apiServer:
certSANs:
- k8s-master-01
- k8s-master-02
- k8s-master-03
- master.k8s.io
- 10.206.16.14
- 10.206.16.15
- 10.206.16.16
- 10.206.16.18
- 127.0.0.1
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF
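Optionally, and as the init output below also suggests, the control-plane images can be pre-pulled so that the init step itself goes faster; a hedged example:
kubeadm config images pull --config kubeadm-config.yaml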
2. Initialize the master node
[root@VM-16-14-centos manifests]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.k8s.io k8s-master-01 k8s-master-02 k8s-master-03 master.k8s.io] and IPs [10.1.0.1 10.206.16.14 10.206.16.14 10.206.16.15 10.206.16.16 10.206.16.18 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.002615 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ee3zom.l4xeahsfqcj9uvvz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \
--discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \
--discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b
3. Configure kubectl access as prompted
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Check the cluster status
[root@VM-16-14-centos manifests]# kubectl get cs
NAME                 AGE
scheduler <unknown>
controller-manager <unknown>
etcd-0 <unknown>
[root@VM-16-14-centos manifests]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 0/1 Pending 0 2m45s
coredns-58cc8c89f4-qdz7t 0/1 Pending 0 2m45s
etcd-k8s-master-01 1/1 Running 0 113s
kube-apiserver-k8s-master-01 1/1 Running 0 99s
kube-controller-manager-k8s-master-01 1/1 Running 0 98s
kube-proxy-wvp9b 1/1 Running 0 2m45s
kube-scheduler-k8s-master-01 1/1 Running 0 2m6s
The coredns pods are in the Pending state here because the network add-on has not been installed yet.
5. Extend the certificate validity to 100 years
5.1 Install the Go environment (version 1.12)
wget https://storage.googleapis.com/golang/go1.12.5.linux-amd64.tar.gz
tar -C /root -xzf go1.12.5.linux-amd64.tar.gz
vim ~/.bashrc
export GOPATH=/root/Go
export GOROOT=/root/go
export PATH=$PATH:$GOROOT/bin
source ~/.bashrc
5.2 Download the Kubernetes source code
cd /root/go/src
git clone https://github.com/kubernetes/kubernetes.git
5.3 cd kubernetes && git checkout v1.16.3 (check out the version you actually installed)
5.4 vim cmd/kubeadm/app/constants/constants.go and multiply the certificate validity by 100 to extend it
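A hedged way to locate the line to change (the constant name may vary slightly between releases; in the v1.16 tree it is roughly CertificateValidity = time.Hour * 24 * 365, which becomes time.Hour * 24 * 365 * 100):
cd /root/go/src/kubernetes
grep -n "CertificateValidity" cmd/kubeadm/app/constants/constants.go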
5.5 make WHAT=cmd/kubeadm
5.6 Back up the original kubeadm binary and the certificate files
cp /usr/bin/kubeadm{,.bak20210707}
cp -r /etc/kubernetes/pki{,.bak20210707}
5.7 Replace kubeadm with the newly built binary
cp _output/bin/kubeadm /usr/bin/kubeadm
5.8 Generate new certificates
cd /etc/kubernetes/pki
kubeadm alpha certs renew all
5.9 Verify the result
kubeadm alpha certs check-expiration
# The following loop is another way to check the certificate expiration dates:
for item in `find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"`;do openssl x509 -in $item -text -noout| grep Not;echo ======================$item===============;done
CERTIFICATE EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
admin.conf Jun 13, 2121 09:31 UTC 99y no
apiserver Jun 13, 2121 09:31 UTC 99y no
apiserver-etcd-client Jun 13, 2121 09:31 UTC 99y no
apiserver-kubelet-client Jun 13, 2121 09:31 UTC 99y no
controller-manager.conf Jun 13, 2121 09:31 UTC 99y no
etcd-healthcheck-client Jun 13, 2121 09:31 UTC 99y no
etcd-peer Jun 13, 2121 09:31 UTC 99y no
etcd-server Jun 13, 2121 09:31 UTC 99y no
front-proxy-client Jun 13, 2121 09:31 UTC 99y no
scheduler.conf Jun 13, 2121 09:31 UTC 99y no
5.10 The other two masters also need the upgrade (run on k8s-master-01)
scp /usr/bin/kubeadm k8s-master-02:/usr/bin/
scp /usr/bin/kubeadm k8s-master-03:/usr/bin/
Then run kubeadm alpha certs renew all on k8s-master-02 and k8s-master-03 respectively.
scp /etc/kubernetes/pki/ca.crt k8s-node-01:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.crt k8s-node-02:/etc/kubernetes/pki/
8. Install the cluster network (on the master)
1. Install the flannel add-on
[root@VM-16-14-centos flannel]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
2. Verify
[root@VM-16-14-centos flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 1/1 Running 0 32m
coredns-58cc8c89f4-qdz7t 1/1 Running 0 32m
etcd-k8s-master-01 1/1 Running 0 31m
kube-apiserver-k8s-master-01 1/1 Running 0 31m
kube-controller-manager-k8s-master-01 1/1 Running 0 31m
kube-flannel-ds-qljzc 1/1 Running 0 63s
kube-proxy-wvp9b 1/1 Running 0 32m
kube-scheduler-k8s-master-01 1/1 Running 0 32m
9. Join the other nodes to the cluster (both the remaining masters and the workers need to join)
1. Join the masters to the cluster
1.1 Copy the keys and related files (run on k8s-master-01)
Set up passwordless SSH:
ssh-keygen -t rsa
ssh-copy-id root@10.206.16.15
ssh-copy-id root@10.206.16.16
Copy the files to k8s-master-02:
ssh root@10.206.16.15 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.15:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.15:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.15:/etc/kubernetes/pki/etcd
Copy the files to k8s-master-03:
ssh root@10.206.16.16 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.16:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.16:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.16:/etc/kubernetes/pki/etcd
1.2 Join the masters
On the other two masters, k8s-master-02 and k8s-master-03, run the join command printed by kubeadm init on k8s-master-01. If you no longer have it, regenerate it on k8s-master-01 with kubeadm token create --print-join-command.
Run the command on k8s-master-02; the --control-plane flag is required so that the node joins as a control-plane (master) node.
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the join command on k8s-master-03:
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
1.3 Check that the masters joined successfully
[root@VM-16-14-centos flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready master 74m v1.16.3
k8s-master-02 Ready master 18m v1.16.3
k8s-master-03 Ready master 87s v1.16.3
[root@VM-16-14-centos flannel]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-ch8pn 1/1 Running 0 74m
kube-system coredns-58cc8c89f4-qdz7t 1/1 Running 0 74m
kube-system etcd-k8s-master-01 1/1 Running 0 73m
kube-system etcd-k8s-master-02 1/1 Running 0 19m
kube-system etcd-k8s-master-03 1/1 Running 0 93s
kube-system kube-apiserver-k8s-master-01 1/1 Running 0 73m
kube-system kube-apiserver-k8s-master-02 1/1 Running 0 19m
kube-system kube-apiserver-k8s-master-03 1/1 Running 0 94s
kube-system kube-controller-manager-k8s-master-01 1/1 Running 1 73m
kube-system kube-controller-manager-k8s-master-02 1/1 Running 0 19m
kube-system kube-controller-manager-k8s-master-03 1/1 Running 0 94s
kube-system kube-flannel-ds-965w9 1/1 Running 0 94s
kube-system kube-flannel-ds-qljzc 1/1 Running 0 42m
kube-system kube-flannel-ds-vjn8d 1/1 Running 1 19m
kube-system kube-proxy-6w9ch 1/1 Running 0 19m
kube-system kube-proxy-p4mt8 1/1 Running 0 94s
kube-system kube-proxy-wvp9b 1/1 Running 0 74m
kube-system kube-scheduler-k8s-master-01 1/1 Running 1 73m
kube-system kube-scheduler-k8s-master-02 1/1 Running 0 19m
kube-system kube-scheduler-k8s-master-03 1/1 Running 0 94s
2. Join the worker nodes to the cluster (run on each of the two nodes)
2.1 Join command:
kubeadm join master.k8s.io:16443 --token hx67nu.7nlxcsvcsa8uy46o --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b
2.2 Verify:
[root@VM-16-14-centos flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready master 80m v1.16.3
k8s-master-02 Ready master 24m v1.16.3
k8s-master-03 Ready master 7m11s v1.16.3
k8s-node-01 Ready <none> 2m58s v1.16.3
k8s-node-02 Ready <none> 101s v1.16.3
[root@VM-16-14-centos flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 1/1 Running 0 80m
coredns-58cc8c89f4-qdz7t 1/1 Running 0 80m
etcd-k8s-master-01 1/1 Running 0 79m
etcd-k8s-master-02 1/1 Running 0 24m
etcd-k8s-master-03 1/1 Running 0 7m20s
kube-apiserver-k8s-master-01 1/1 Running 0 79m
kube-apiserver-k8s-master-02 1/1 Running 0 24m
kube-apiserver-k8s-master-03 1/1 Running 0 7m21s
kube-controller-manager-k8s-master-01 1/1 Running 1 79m
kube-controller-manager-k8s-master-02 1/1 Running 0 24m
kube-controller-manager-k8s-master-03 1/1 Running 0 7m21s
kube-flannel-ds-965w9 1/1 Running 0 7m21s
kube-flannel-ds-nvdhl 1/1 Running 0 111s
kube-flannel-ds-qljzc 1/1 Running 0 48m
kube-flannel-ds-vjn8d 1/1 Running 1 24m
kube-flannel-ds-z9zc2 1/1 Running 0 3m8s
kube-proxy-6w9ch 1/1 Running 0 24m
kube-proxy-fswvz 1/1 Running 0 111s
kube-proxy-p4mt8 1/1 Running 0 7m21s
kube-proxy-wvp9b 1/1 Running 0 80m
kube-proxy-z27lw 1/1 Running 0 3m8s
kube-scheduler-k8s-master-01 1/1 Running 1 79m
kube-scheduler-k8s-master-02 1/1 Running 0 24m
kube-scheduler-k8s-master-03 1/1 Running 0 7m21s
10. Deploy the dashboard (run on k8s-master-01)
Deploy the latest version, v2.0.0-beta6; download the yaml:
cd /usr/local/kubernetes/manifests/
mkdir dashboard && cd dashboard
wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
# Change the Service type to NodePort
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
...
[root@k8s-master-01 dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master-01 dashboard]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-76585494d8-62vp9 1/1 Running 0 6m47s
kubernetes-dashboard-b65488c4-5t57x 1/1 Running 0 6m48s
[root@k8s-master-01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.1.207.27 <none> 8000/TCP 7m6s
kubernetes-dashboard NodePort 10.1.207.168 <none> 443:30001/TCP 7m7s
# From a node, check that https://nodeip:30001 is reachable; note that this should be done in Firefox, accepting the insecure certificate to proceed
2. Create a service account and bind it to the built-in cluster-admin cluster role
vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
[root@VM-16-14-centos dashboard]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Get the token; it is used to log in to the dashboard.
[root@VM-16-14-centos dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-p7wgc
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 0e9f5406-3c26-4141-a233-ff4eaa841401
Type: kubernetes.io/service-account-token
Data
====
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBDdkJ1VWZFZENYbEd3ZGVrc3FldlhXWG94QU0ySjN1M1Y4ZVRJOUZPd1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA3d2djIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwZTlmNTQwNi0zYzI2LTQxNDEtYTIzMy1mZjRlYWE4NDE0MDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.jcCnl8hHWrZcFtd5H17gJdoaDiFsPUos_4oNYQexiXSjFDgy972Bk1qgYV-zHZhu7o_UZyESMRLTlzRFl3W5Eqbhq9fouD0j0DH_qnTGewNTEuByQj5n6uPLloPG5VNCOs1y3TINVj8LdG5q_n6DWfozfn76eNhU9eAnSJZVZ97dGKy_LDykpM9QtJQQkpaF9jSnDPCeoSnSd_1ud1FoQlNS3PAenB54khOmL5gbD6Pf4uJOVUjzxoHk_--gKDW7juVAsaDPbbGftuiM1mIfQ3K02VoNMiG1VB2hlzJ5kWeUn7wpqZpmngzrqBtVj5DJWSpnHAZZef_FFCakKMp5TA
ca.crt: 1025 bytes
11. Deploy the ingress controller (mandatory.yaml)
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data: # so that pods can see the real client IP
compute-full-forwarded-for: 'true'
use-forwarded-headers: 'true'
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
You also need to apply ingress-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
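A hedged way to apply and verify the two manifests in this section (file names as used above):
kubectl apply -f mandatory.yaml
kubectl apply -f ingress-service.yaml
kubectl get pods -n ingress-nginx -o wide    # with a hostNetwork DaemonSet, expect one controller pod per schedulable node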
12. Deploy metrics-server (v0.3.6): mandatory.yaml
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
## ClusterRole aggregated-metrics-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods","nodes"]
verbs: ["get","list","watch"]
---
## ClusterRole metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups: [""]
resources: ["pods","nodes","nodes/stats","namespaces","configmaps"]
verbs: ["get","list","watch"]
---
## ClusterRoleBinding auth-delegator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
## RoleBinding metrics-server-auth-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
## ClusterRoleBinding system:metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
## APIService
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
## Service
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
targetPort: 4443
---
## Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
hostNetwork: true
serviceAccountName: metrics-server
containers:
- name: metrics-server
## image repository changed to a domestic mirror
image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
imagePullPolicy: IfNotPresent
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls ## added
- --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname ## added
ports:
- name: main-port
containerPort: 4443
protocol: TCP
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 1Gi
cpu: 1000m
requests:
memory: 1Gi
cpu: 1000m
volumeMounts:
- name: tmp-dir
mountPath: /tmp
- name: localtime
readOnly: true
mountPath: /etc/localtime
volumes:
- name: tmp-dir
emptyDir: {}
- name: localtime
hostPath:
type: File
path: /etc/localtime
nodeSelector:
kubernetes.io/os: linux
kubernetes.io/arch: "amd64"
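A hedged way to apply the manifest above and confirm the metrics API is working (it may take a minute or two before data shows up):
kubectl apply -f mandatory.yaml
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl top nodes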
13. Install the Kubernetes DNS cache to avoid DNS latency issues
kubectl apply -f https://github.com/feiskyer/kubernetes-handbook/raw/master/examples/nodelocaldns/nodelocaldns-kubenet.yaml
(Reference: https://mp.weixin.qq.com/s/t7nt87JPJnWEVCNBS-sBpw)
14. Install Reloader so that config changes trigger rolling pod updates (a checksum-based approach also works)
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
For detailed usage see:
https://juejin.cn/post/6993128314055426084
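As a minimal, hedged usage example (the deployment name my-app is hypothetical): annotating a Deployment tells Reloader to roll its pods whenever a ConfigMap or Secret it references changes:
kubectl annotate deployment my-app reloader.stakater.com/auto="true"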
15. Scaling the cluster up and down
1. Scaling up
By default the join token expires after 24 hours; after that, to join a new node to the cluster you need to generate a new token, as follows:
# List existing tokens
$ kubeadm token list
# Generate a new token
$ kubeadm token create
Besides the token, the join command also needs a sha256 value, computed as follows:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Assemble the join command from the token and the sha256 value above, or simply use the output of kubeadm token create --print-join-command.
2. Scaling down
kubectl cordon <node name>   # mark the node unschedulable
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets   # evict the pods on the node
kubectl delete node <node name>
3. Reset and rejoin
The old configuration must be cleared first:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker