
Binary Deployment of a Kubernetes 1.23.4 Cluster - Part 6: Deploying the Node Services

Source: Internet. Collected by: 自由互联. Published: 2022-05-17

In this example, the Master and Node components are deployed on the same hosts.

1 Deploying kubelet

1.1 Cluster Plan

Hostname             Role     IP
CFZX55-21.host.com   kubelet  10.211.55.21
CFZX55-22.host.com   kubelet  10.211.55.22

Perform the following on host 21.

1.2 Generate the kubeconfig file for kubelet

Create the following script (kubelet-config.sh):
#!/bin/bash
KUBE_CONFIG="/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig"
KUBE_APISERVER="https://10.211.55.10:7443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/bin/certs/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kubelet-bootstrap \
  --token=$(awk -F "," '{print $1}' /opt/kubernetes/bin/certs/kube-apiserver.token.csv) \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Notes:

On the two occurrences of "kubelet-bootstrap" in the kubectl create clusterrolebinding command:

The first "kubelet-bootstrap" is the name of the ClusterRoleBinding resource created in the cluster; list it with kubectl get clusterrolebinding.

The second, --user=kubelet-bootstrap, sets subjects.kind=User and subjects.name=kubelet-bootstrap on that ClusterRoleBinding.

Inspect the result with kubectl get clusterrolebinding kubelet-bootstrap -o yaml.

Once this binding exists, the username "kubelet-bootstrap" defined in kube-apiserver's kube-apiserver.token.csv file actually means something in the cluster: it is authorized to request node bootstrap certificates.
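The bootstrap token embedded in the kubeconfig comes from the first comma-separated field of kube-apiserver.token.csv. A minimal sketch of that extraction, using a hypothetical token value and a throwaway copy of the file:

```shell
# Hypothetical token.csv line in the usual static-token format: token,user,uid,"groups"
line='0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:node-bootstrapper"'
printf '%s\n' "$line" > /tmp/demo-token.csv

# Same extraction the kubeconfig script uses: field 1 of the CSV
token=$(awk -F "," '{print $1}' /tmp/demo-token.csv)
echo "$token"   # -> 0fb61c46f8991b718eb38d27b605b008
```

The real file lives at /opt/kubernetes/bin/certs/kube-apiserver.token.csv, as referenced in the script above.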

Run the script:

[root@cfzx55-21 k8s-shell]# vim kubelet-config.sh
[root@cfzx55-21 k8s-shell]# chmod +x kubelet-config.sh
[root@cfzx55-21 k8s-shell]# ./kubelet-config.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
[root@cfzx55-21 k8s-shell]#

Copy the generated kubeconfig file to host 22:

[root@cfzx55-21 k8s-shell]# scp /opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kubelet-bootstrap.kubeconfig                                          100% 2102     1.2MB/s   00:00
[root@cfzx55-21 k8s-shell]#
1.3 Create the kubelet startup script

/opt/kubernetes/bin/kubelet-startup.sh

#!/bin/sh
./kubelet \
  --v=2 \
  --log-dir=/data/logs/kubernetes/kubelet \
  --hostname-override=cfzx55-21.host.com \
  --network-plugin=cni \
  --cluster-domain=cluster.local \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig \
  --config=/opt/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/opt/kubernetes/bin/certs \
  --pod-infra-container-image=ibmcom/pause:3.1

Note: To make testing easier in this example, remove the --network-plugin flag for now. The script also invokes ./kubelet with a relative path, so it must run from /opt/kubernetes/bin (the supervisor directory setting below takes care of this).

Configuration file

/opt/kubernetes/cfg/kubelet-config.yml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 192.168.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/bin/certs/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

Create the two files above. Note that cgroupDriver: systemd must match the cgroup driver configured for the container runtime.

[root@cfzx55-21 bin]# vim kubelet-startup.sh
[root@cfzx55-21 bin]# chmod +x kubelet-startup.sh
[root@cfzx55-21 bin]# mkdir -pv /data/logs/kubernetes/kubelet
[root@cfzx55-21 bin]# cd ../cfg/
[root@cfzx55-21 cfg]# vim kubelet-config.yml
[root@cfzx55-21 cfg]#

Create the supervisor program file

/etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-55-21]
command=/opt/kubernetes/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Start the service:

[root@cfzx55-21 cfg]# supervisorctl update
kube-kubelet-55-21: added process group
[root@cfzx55-21 cfg]# supervisorctl status
etcd-server-55-21                RUNNING   pid 1033, uptime 6:49:47
kube-apiserver-55-21             RUNNING   pid 1034, uptime 6:49:47
kube-controller-manager-55-21    RUNNING   pid 3558, uptime 1:16:30
kube-kubelet-55-21               RUNNING   pid 3762, uptime 0:00:31
kube-scheduler-55-21             RUNNING   pid 3486, uptime 1:33:53
[root@cfzx55-21 cfg]#

Copy the files above to host 22:

[root@cfzx55-21 ~]# scp /opt/kubernetes/bin/kubelet-startup.sh root@cfzx55-22:/opt/kubernetes/bin/
root@cfzx55-22's password:
kubelet-startup.sh                                                    100%  451   224.6KB/s   00:00
[root@cfzx55-21 ~]# scp /opt/kubernetes/cfg/kubelet-config.yml root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kubelet-config.yml                                                    100%  620   379.6KB/s   00:00
[root@cfzx55-21 ~]# scp /etc/supervisord.d/kube-kubelet.ini root@cfzx55-22:/etc/supervisord.d/
root@cfzx55-22's password:
kube-kubelet.ini                                                      100%  428   325.6KB/s   00:00

Perform the following on host 22:

# Create the log directory
[root@cfzx55-22 ~]# mkdir -pv /data/logs/kubernetes/kubelet
# Edit both files: change the hostname and the supervisor program name for host 22
[root@cfzx55-22 ~]# vim /opt/kubernetes/bin/kubelet-startup.sh
[root@cfzx55-22 ~]# vim /etc/supervisord.d/kube-kubelet.ini
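Instead of editing by hand, the host-specific strings can be rewritten with sed. A minimal sketch on throwaway copies, assuming (as in this deployment) that the only differences between hosts are the 55-21/55-22 suffix and the cfzx55-21 hostname:

```shell
# Work on throwaway copies to illustrate the substitution
mkdir -p /tmp/node22-demo
printf '[program:kube-kubelet-55-21]\n' > /tmp/node22-demo/kube-kubelet.ini
printf -- '--hostname-override=cfzx55-21.host.com \\\n' > /tmp/node22-demo/kubelet-startup.sh

# Rewrite every 21-specific identifier for host 22
# ("cfzx55-21" contains "55-21", so one expression covers both)
sed -i 's/55-21/55-22/g' /tmp/node22-demo/kube-kubelet.ini /tmp/node22-demo/kubelet-startup.sh

grep -h '' /tmp/node22-demo/*
```

On the real host the same sed would be pointed at /etc/supervisord.d/kube-kubelet.ini and /opt/kubernetes/bin/kubelet-startup.sh.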

Start the service:

[root@cfzx55-22 ~]# supervisorctl update
[root@cfzx55-22 ~]# supervisorctl status
etcd-server-55-22                RUNNING   pid 1013, uptime 6:57:00
kube-apiserver-55-22             RUNNING   pid 1012, uptime 6:57:00
kube-controller-manager-55-22    RUNNING   pid 3256, uptime 1:23:46
kube-kubelet-55-22               RUNNING   pid 3357, uptime 0:00:44
kube-scheduler-55-22             RUNNING   pid 3187, uptime 1:34:20
[root@cfzx55-22 ~]#
1.4 Approve the kubelet certificate requests and join the nodes to the cluster

View the pending kubelet certificate requests:

[root@cfzx55-22 ~]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M   8m23s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU   62s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
[root@cfzx55-22 ~]#

Approve the certificates:

[root@cfzx55-22 ~]# kubectl certificate approve node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M
certificatesigningrequest.certificates.k8s.io/node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M approved
[root@cfzx55-22 ~]# kubectl certificate approve node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU
certificatesigningrequest.certificates.k8s.io/node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU approved

View the nodes:

[root@cfzx55-22 ~]# kubectl get no
NAME                 STATUS     ROLES    AGE     VERSION
cfzx55-21.host.com   NotReady   <none>   2m19s   v1.23.4
cfzx55-22.host.com   NotReady   <none>   2m9s    v1.23.4
[root@cfzx55-22 ~]#

Because no network plugin has been installed yet, the nodes report NotReady.

2 Deploying kube-proxy

2.1 Cluster Plan

Hostname             Role        IP
CFZX55-21.host.com   kube-proxy  10.211.55.21
CFZX55-22.host.com   kube-proxy  10.211.55.22

2.2 Generate the kubeconfig file for kube-proxy

Perform the following on the ops host (200).

/opt/certs/kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "system:masters",
            "OU": "system"            
        }
    ]
}

Generate the certificate:

[root@cfzx55-200 certs]# pwd
/opt/certs
[root@cfzx55-200 certs]# vim kube-proxy-csr.json
[root@cfzx55-200 certs]# cfssl gencert \
> -ca=ca.pem \
> -ca-key=ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> kube-proxy-csr.json | cfssl-json -bare kube-proxy
2022/03/13 14:58:59 [INFO] generate received request
2022/03/13 14:58:59 [INFO] received CSR
2022/03/13 14:58:59 [INFO] generating key: rsa-2048
2022/03/13 14:58:59 [INFO] encoded CSR
2022/03/13 14:58:59 [INFO] signed certificate with serial number 705933654696297683901130256446644781117492665095
2022/03/13 14:58:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@cfzx55-200 certs]# ll kube-proxy*.pem
-rw------- 1 root root 1679 Mar 13 14:58 kube-proxy-key.pem
-rw-r--r-- 1 root root 1415 Mar 13 14:58 kube-proxy.pem
[root@cfzx55-200 certs]#
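With client certificates, the CN becomes the Kubernetes username and the O the group, so it is worth confirming the subject before distributing the files. A self-contained sketch using a throwaway self-signed certificate with the same subject as kube-proxy-csr.json (on the real host, point the second command at kube-proxy.pem instead):

```shell
# Generate a throwaway key/cert carrying the same subject fields
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/C=CN/ST=beijing/L=beijing/O=system:masters/OU=system/CN=system:kube-proxy"

# Inspect the subject of the resulting certificate
openssl x509 -in /tmp/demo.pem -noout -subject
```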

Copy the generated certificate and key to nodes 21 and 22:

[root@cfzx55-200 certs]# scp kube-proxy*.pem root@cfzx55-21:/opt/kubernetes/bin/certs/
root@cfzx55-21's password:
kube-proxy-key.pem                                                         100% 1679   591.9KB/s   00:00
kube-proxy.pem                                                             100% 1415   895.5KB/s   00:00
[root@cfzx55-200 certs]# scp kube-proxy*.pem root@cfzx55-22:/opt/kubernetes/bin/certs/
root@cfzx55-22's password:
kube-proxy-key.pem                                                         100% 1679   587.6KB/s   00:00
kube-proxy.pem                                                             100% 1415   737.5KB/s   00:00
[root@cfzx55-200 certs]#

Create the script (kube-proxy-config.sh):

#!/bin/bash
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.211.55.10:7443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/bin/certs/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/bin/certs/kube-proxy.pem \
  --client-key=/opt/kubernetes/bin/certs/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Run the script:

[root@cfzx55-21 k8s-shell]# vim kube-proxy-config.sh
[root@cfzx55-21 k8s-shell]# chmod +x kube-proxy-config.sh
[root@cfzx55-21 k8s-shell]# ./kube-proxy-config.sh
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@cfzx55-21 k8s-shell]#

Copy the generated kubeconfig file to host 22:

[root@cfzx55-21 cfg]# scp kube-proxy.kubeconfig root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kube-proxy.kubeconfig                                                      100% 6224     2.9MB/s   00:00
[root@cfzx55-21 cfg]#
2.3 Load the ipvs kernel modules

Write the following script (ipvs.sh):

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls "$ipvs_mods_dir" | grep -o "^[^.]*")
do
  # Load the module only if modinfo can resolve it
  if /sbin/modinfo -F filename "$i" &>/dev/null; then
    /sbin/modprobe "$i"
  fi
done
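The grep -o "^[^.]*" expression strips everything from the first dot onward, turning a module filename such as ip_vs.ko.xz into the bare module name that modprobe expects. A quick illustration:

```shell
# Strip the file extension from module filenames, as the script does
echo "ip_vs.ko.xz" | grep -o "^[^.]*"      # -> ip_vs
echo "ip_vs_rr.ko.xz" | grep -o "^[^.]*"   # -> ip_vs_rr
```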

Run it on host 21:

[root@cfzx55-21 k8s-shell]# vim ipvs.sh
[root@cfzx55-21 k8s-shell]# chmod +x ipvs.sh
[root@cfzx55-21 k8s-shell]# ./ipvs.sh
[root@cfzx55-21 k8s-shell]# lsmod | grep ip_vs
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12740  0
nf_conntrack_sip       33780  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26583  4 ip_vs_ftp,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          139264  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@cfzx55-21 k8s-shell]#

Repeat the same steps on host 22 (omitted here).

2.4 Create the kube-proxy startup script

/opt/kubernetes/bin/kube-proxy-startup.sh

#!/bin/sh
./kube-proxy \
  --v=2 \
  --log-dir=/data/logs/kubernetes/kube-proxy \
  --config=/opt/kubernetes/cfg/kube-proxy-config.yml

Create the file, make it executable, and create the log directory:

[root@cfzx55-22 bin]# vim kube-proxy-startup.sh
[root@cfzx55-22 bin]# chmod +x kube-proxy-startup.sh
[root@cfzx55-22 bin]# mkdir -p /data/logs/kubernetes/kube-proxy

Configuration file

/opt/kubernetes/cfg/kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: cfzx55-22.host.com
clusterCIDR: 192.168.0.0/16
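Note that this configuration does not set a proxy mode, so kube-proxy may fall back to iptables even though the ipvs modules are loaded in section 2.3. If ipvs mode is intended, it can be requested explicitly; a sketch of the additional fields (the rr scheduler shown here is an assumption and matches the default):

```yaml
# Optional additions to kube-proxy-config.yml, assuming the ipvs
# kernel modules from section 2.3 are loaded on the host
mode: ipvs
ipvs:
  scheduler: rr
```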

Copy the script and configuration file to host 21:

[root@cfzx55-22 ~]# scp /opt/kubernetes/bin/kube-proxy-startup.sh root@cfzx55-21:/opt/kubernetes/bin/
root@cfzx55-21's password:
kube-proxy-startup.sh                                                      100%  135    79.4KB/s   00:00
[root@cfzx55-22 ~]# scp /opt/kubernetes/cfg/kube-proxy-config.yml root@cfzx55-21:/opt/kubernetes/cfg/
root@cfzx55-21's password:
kube-proxy-config.yml                                                      100%  268   162.6KB/s   00:00
[root@cfzx55-22 ~]#

Make the host-specific edits on host 21:

# Change hostnameOverride to cfzx55-21.host.com
[root@cfzx55-21 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml
# Create the log directory
[root@cfzx55-21 ~]# mkdir -p /data/logs/kubernetes/kube-proxy

Create the supervisor program file

/etc/supervisord.d/kube-proxy.ini

[program:kube-proxy-55-21]
command=/opt/kubernetes/bin/kube-proxy-startup.sh
numprocs=1
directory=/opt/kubernetes/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/kube-proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Start the service and check its status:

[root@cfzx55-21 ~]# supervisorctl status
etcd-server-55-21                RUNNING   pid 1033, uptime 7:42:48
kube-apiserver-55-21             RUNNING   pid 1034, uptime 7:42:48
kube-controller-manager-55-21    RUNNING   pid 3558, uptime 2:09:31
kube-kubelet-55-21               RUNNING   pid 4143, uptime 0:37:23
kube-proxy-55-21                 RUNNING   pid 8899, uptime 0:00:31
kube-scheduler-55-21             RUNNING   pid 3486, uptime 2:26:54
[root@cfzx55-21 ~]#

Copy the ini file to host 22:

[root@cfzx55-21 ~]# scp /etc/supervisord.d/kube-proxy.ini root@cfzx55-22:/etc/supervisord.d/
root@cfzx55-22's password:
kube-proxy.ini                                                             100%  435   245.6KB/s   00:00
[root@cfzx55-21 ~]#

Start the service on host 22:

# Change the program name to kube-proxy-55-22
[root@cfzx55-22 ~]# vim /etc/supervisord.d/kube-proxy.ini
# Start the service
[root@cfzx55-22 ~]# supervisorctl update
kube-proxy-55-22: added process group
[root@cfzx55-22 ~]# supervisorctl status
etcd-server-55-22                RUNNING   pid 1013, uptime 7:44:52
kube-apiserver-55-22             RUNNING   pid 1012, uptime 7:44:52
kube-controller-manager-55-22    RUNNING   pid 3256, uptime 2:11:38
kube-kubelet-55-22               RUNNING   pid 3740, uptime 0:39:37
kube-proxy-55-22                 RUNNING   pid 8714, uptime 0:00:32
kube-scheduler-55-22             RUNNING   pid 3187, uptime 2:22:12
[root@cfzx55-22 ~]#

With that, the Kubernetes cluster deployment is complete.
