
【kubernetes】Cluster High Availability (Binary Deployment)


1. K8s overview


      K8s is mainly divided into master nodes (control nodes) and node nodes (which run the container pods). A master node contains several main components: apiserver, controller manager, scheduler and etcd; a node node generally runs kubelet, kube-proxy, pods and a network plugin, among other things.

A simplified K8s workflow:

  • The user submits the docker container (pod) to run through kubectl;
  • The api server on the master node stores the request in the etcd database;
  • The scheduler scans the cluster and assigns a suitable node machine;
  • The kubelet on that node finds the container it needs to run and runs it on the local machine.

  K8s core features:

  • Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve.
  • Auto-scaling: monitors container CPU load; if the average rises above 80% the number of containers is increased, and if it falls below 10% the number is decreased.
  • Service discovery and load balancing: there is no need to modify your application to use an unfamiliar service-discovery mechanism; K8s gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.
  • Rolling updates and one-click rollback: K8s rolls out changes to the application or its configuration gradually while monitoring application health, so that it never kills all instances at once. If something goes wrong, K8s rolls the change back for you, taking advantage of a growing ecosystem of deployment solutions.
  • Secret and configuration management: for example, database account passwords (test-database passwords) used inside a web container.

    2. Server initialization

    2.1 Basic setup

    Download address for all installation packages: [netdisk link]

    Extraction code: 6qer



    2.2 Configure the hosts file


    [root@k8smaster1 ~]# more /etc/hosts   #configure this on all four nodes
    192.168.2.180 k8smaster1
    192.168.2.181 k8smaster2
    192.168.2.182 k8smaster3
    192.168.2.183 k8snode1

    2.3 Configure passwordless SSH between hosts


    #Generate an SSH key pair
    [root@k8smaster1 ~]# ssh-keygen -t rsa    #press Enter at every prompt, no passphrase; run on k8smaster1 only
    #Install the local SSH public key into the matching account on the remote hosts
    [root@k8smaster1 ~]# ssh-copy-id -i .ssh/id_rsa.pub k8smaster2
    [root@k8smaster1 ~]# ssh-copy-id -i .ssh/id_rsa.pub k8smaster3
    [root@k8smaster1 ~]# ssh-copy-id -i .ssh/id_rsa.pub k8snode1

    2.4 Disable the firewall and SELinux


    #Run the following on all hosts
    [root@k8smaster1 ~]# systemctl stop firewalld ; systemctl disable firewalld
    [root@k8smaster1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    Note: pay particular attention to the swap partition; if one is enabled, disable it.
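
    A minimal way to turn swap off on every host (a sketch; it assumes a swap entry exists in /etc/fstab, so adjust to your systems):

    # swapoff -a                                   #disable swap immediately
    # sed -ri 's/.*swap.*/#&/' /etc/fstab          #comment out the swap entry so it stays off after reboot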

    2.5 Tune kernel parameters


    # Run on all hosts
    #Load the br_netfilter module
    [root@k8smaster1 ~]# modprobe br_netfilter
    #Verify the module loaded successfully:
    [root@k8smaster1 ~]# lsmod |grep br_netfilter
    #Set the kernel parameters
    [root@k8smaster1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    #Apply the kernel parameters just set
    [root@k8smaster1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

    2.6 Configure the yum repositories

    # Replace the YUM repos on all hosts, normally with the Aliyun mirrors

    [root@k8smaster1 ~]# yum install lrzsz -y
    [root@k8smaster1 ~]# mkdir /root/repo.bak
    [root@k8smaster1 ~]# cd /etc/yum.repos.d/
    [root@k8smaster1 ~]# mv * /root/repo.bak/
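
    The backup step above leaves /etc/yum.repos.d empty; one common way to pull in the Aliyun base and EPEL repo files (a sketch that assumes outbound internet access and the standard Aliyun mirror URLs) is:

    [root@k8smaster1 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
    [root@k8smaster1 yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
    [root@k8smaster1 yum.repos.d]# yum clean all && yum makecache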

    2.7 Install iptables

    # Run on all hosts

    [root@k8smaster1 ~]# yum install iptables-services -y
    [root@k8smaster1 ~]# service iptables stop   && systemctl disable iptables
    [root@k8smaster1 ~]# iptables -F
    [root@k8smaster1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
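
    Note that /etc/sysconfig/modules/ipvs.modules is not created anywhere in this article; a minimal version that loads the IPVS kernel modules the last command expects could look like this (module names assumed for CentOS 7):

    # cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF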

    2.8 Install base packages

    # Run on all hosts

    [root@k8smaster1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet rsync

    2.9 Install docker-ce and configure registry mirrors

    # Run on all hosts


    [root@k8smaster1 ~]# yum install docker-ce docker-ce-cli containerd.io -y
    [root@k8smaster1 ~]# systemctl start docker && systemctl enable docker.service && systemctl status docker
    [root@k8smaster1 ~]# tee /etc/docker/daemon.json << 'EOF'
    {
     "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    [root@k8smaster1 ~]# systemctl daemon-reload
    [root@k8smaster1 ~]# systemctl restart docker
    [root@k8smaster1 ~]# systemctl status docker
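
    Since the kubelet deployed later expects the systemd cgroup driver (see the "cgroupDriver" setting in kubelet.json below), it is worth confirming that the daemon.json change took effect:

    [root@k8smaster1 ~]# docker info | grep -i cgroup    #should report: Cgroup Driver: systemd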

    3. Set up the etcd cluster

           etcd is a distributed key-value (non-relational) store; in K8s its main role is to hold the data persisted by the apiserver.

    3.1 Configure the etcd working directories

    These directories will hold the configuration files and the certificate files.

    [root@k8smaster1 ~]# mkdir -p /etc/etcd
    [root@k8smaster1 ~]# mkdir -p /etc/etcd/ssl
    [root@k8smaster2 ~]# mkdir -p /etc/etcd
    [root@k8smaster2 ~]# mkdir -p /etc/etcd/ssl
    [root@k8smaster3 ~]#  mkdir -p /etc/etcd
    [root@k8smaster3 ~]# mkdir -p /etc/etcd/ssl

    3.2 Install the certificate-signing tool cfssl

    Installing it on k8smaster1 only is sufficient.

    [root@k8smaster1 ~]# mkdir /data/work -p
    [root@k8smaster1 ~]# cd /data/work/
    #Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64 to /data/work/
    [root@k8smaster1 work]# ls
    cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
    #Make the files executable
    [root@k8smaster1 work]# chmod +x *
    [root@k8smaster1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
    [root@k8smaster1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    [root@k8smaster1 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

    3.3 Configure the CA certificate

    [root@k8smaster1 work]# vim ca-csr.json   #CA certificate signing request file
    {
      "CN": "kubernetes",
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Hubei",
          "L": "Wuhan",
          "O": "k8s",
          "OU": "system"
        }
      ],
      "ca": {
              "expiry": "87600h"
      }
    }
    [root@k8smaster1 work]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca
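
    If the command succeeds, ca.pem, ca-key.pem and ca.csr are produced in the current directory; an optional quick check:

    [root@k8smaster1 work]# ls ca*.pem ca.csr
    [root@k8smaster1 work]# cfssl certinfo -cert ca.pem    #inspect the generated CA certificate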

    3.4 Generate the etcd certificates

    [root@k8smaster1 work]# vim ca-config.json 
    {
      "signing": {
          "default": {
              "expiry": "87600h"
            },
          "profiles": {
              "kubernetes": {
                  "usages": [
                      "signing",
                      "key encipherment",
                      "server auth",
                      "client auth"
                  ],
                  "expiry": "87600h"
              }
          }
      }
    }

    #Configure the etcd certificate request; change the IPs in hosts to the IPs of your own etcd nodes

    [root@k8smaster1 work]# vim etcd-csr.json
    {
      "CN": "etcd",
      "hosts": [
        "127.0.0.1",
        "192.168.2.180",
        "192.168.2.181",
        "192.168.2.182",
        "192.168.2.199"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [{
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "k8s",
        "OU": "system"
      }]
    }

    #The IPs in the hosts field above are the internal cluster IPs of all etcd nodes; a few spares can be reserved for future expansion

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd

    3.5 Deploy the etcd cluster

     Upload etcd-v3.4.13-linux-amd64.tar.gz to the /data/work directory.

    [root@k8smaster1 work]# pwd
    /data/work
    [root@k8smaster1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
    [root@k8smaster1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3; do scp -r etcd-v3.4.13-linux-amd64/etcd* $i:/usr/local/bin/;done

     #Create the configuration file

    [root@k8smaster1 work]# vim etcd.conf 
    #[Member]
    ETCD_NAME="etcd1"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.2.180:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.2.180:2379,http://127.0.0.1:2379"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.180:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.180:2379"
    ETCD_INITIAL_CLUSTER=";etcd1=https://192.168.2.180:2380,etcd2=https://192.168.2.181:2380,etcd3=https://192.168.2.182:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"

    #Notes:

    ETCD_NAME: node name, unique within the cluster (remember to change it on each node)

    ETCD_DATA_DIR: data directory

    ETCD_LISTEN_PEER_URLS: peer (cluster) communication listen address (change per node)

    ETCD_LISTEN_CLIENT_URLS: client access listen address (change per node)

    ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address (change per node)

    ETCD_ADVERTISE_CLIENT_URLS: client advertise address (change per node)

    ETCD_INITIAL_CLUSTER: cluster node addresses

    ETCD_INITIAL_CLUSTER_TOKEN: cluster token

    ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing cluster

    #Create the systemd service file

    [root@k8smaster1 work]# vim etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
     
    [Service]
    Type=notify
    EnvironmentFile=-/etc/etcd/etcd.conf
    WorkingDirectory=/var/lib/etcd/
    ExecStart=/usr/local/bin/etcd \
      --cert-file=/etc/etcd/ssl/etcd.pem \
      --key-file=/etc/etcd/ssl/etcd-key.pem \
      --trusted-ca-file=/etc/etcd/ssl/ca.pem \
      --peer-cert-file=/etc/etcd/ssl/etcd.pem \
      --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
      --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
      --peer-client-cert-auth \
      --client-cert-auth
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target

     #Copy the configuration file, service file and certificates to the appropriate directories on each server

    [root@k8smaster1 work]# cp ca*.pem etcd*.pem /etc/etcd/ssl/
    [root@k8smaster1 work]# cp etcd.conf /etc/etcd/
    [root@k8smaster1 work]# cp etcd.service /usr/lib/systemd/system/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

    #Start the etcd cluster

    #Create the following directory on k8smaster1-3:

    # mkdir -p /var/lib/etcd/default.etcd

    #On k8smaster2, modify the configuration file as follows:

    #[Member]
    ETCD_NAME="etcd2"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.2.181:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.2.181:2379,http://127.0.0.1:2379"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.181:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.181:2379"
    ETCD_INITIAL_CLUSTER=";etcd1=https://192.168.2.180:2380,etcd2=https://192.168.2.181:2380,etcd3=https://192.168.2.182:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new

    #On k8smaster3, modify the configuration file as follows:

    #[Member]
    ETCD_NAME="etcd2"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.2.182:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.2.182:2379,http://127.0.0.1:2379"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.182:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.182:2379"
    ETCD_INITIAL_CLUSTER=";etcd1=https://192.168.2.180:2380,etcd2=https://192.168.2.181:2380,etcd3=https://192.168.2.182:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new

     #Start the service on k8smaster1-3:

    # systemctl daemon-reload
    # systemctl enable etcd.service
    # systemctl start etcd.service


    Special note: when starting etcd, start the etcd service on k8smaster1 first; it will hang in the starting state. Then start etcd on k8smaster2, and only after that does the etcd member on k8smaster1 come up normally.

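    Once all three members are up, cluster health can be checked with etcdctl (a quick sketch; adjust the endpoints to your own IPs):

    [root@k8smaster1 ~]# ETCDCTL_API=3 /usr/local/bin/etcdctl \
      --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
      --endpoints=https://192.168.2.180:2379,https://192.168.2.181:2379,https://192.168.2.182:2379 \
      endpoint health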

    4. Install the Kubernetes components

    4.1 Deploy the base binaries

    [root@k8smaster1 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
    [root@k8smaster1 work]# cd kubernetes/server/bin/
    [root@k8smaster1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
    [root@k8smaster1 bin]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl $i:/usr/local/bin/;done
    [root@k8smaster1 bin]# scp kubelet kube-proxy k8snode1:/usr/local/bin/
    [root@k8smaster1 bin]# cd /data/work/ 
    #Create the following directories on all machines:
    # mkdir -p /etc/kubernetes/ssl  /var/log/kubernetes

    4.2 Deploy the apiserver component

           The TLS bootstrapping mechanism

           The kubelet component on every node needs to talk to the apiserver, and it does so over TLS using certificates signed by the apiserver's CA, with TLS authentication enabled on the apiserver. Issuing client certificates by hand does not scale once there are many nodes, which is why the TLS bootstrapping mechanism exists: the kubelet starts as a low-privilege user, automatically requests a certificate from the apiserver, and the kubelet's certificate is then signed dynamically by the apiserver.

             Bootstrap configuration is generally preloaded at power-on or system startup to produce a defined environment; likewise, the kubelet on a node loads such a configuration file when it starts, which looks roughly like this:

    apiVersion: v1
    clusters: null
    contexts:
    - context:
        cluster: kubernetes
        user: kubelet-bootstrap
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: kubelet-bootstrap
      user: {}

                TLS provides encrypted communication and RBAC solves the authorization problem, so the workflow is: anything that wants to talk to the apiserver presents a certificate signed by the apiserver CA, establishing trust and a TLS connection; the certificate's CN and O fields supply the user and group that RBAC needs.

    A token.csv file is referenced in the apiserver configuration, with a user predefined in it. That user's token, together with the user identity trusted when the apiserver CA signs certificates, is written into the bootstrap.kubeconfig file used by the kubelet. On its first request, the kubelet uses the user in bootstrap.kubeconfig to establish the TLS connection with the apiserver, and uses the token in bootstrap.kubeconfig to declare its RBAC identity to the apiserver. The token.csv format is as follows:

    3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    #Create the token.csv file

    [root@k8smaster1 work]# cat > token.csv << EOF
    $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF

     #Format: token,username,UID,user group

     #Create the CSR request file; replace the IPs with your own machines' IPs

    [root@k8smaster1 work]# vim kube-apiserver-csr.json 
    {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "192.168.2.180",
        "192.168.2.181",
        "192.168.2.182",
        "192.168.2.183",
        "192.168.2.199",
        "10.255.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Hubei",
          "L": "Wuhan",
          "O": "k8s",
          "OU": "system"
        }
      ]
    }

    Note: if the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Since the certificate will be used by the kubernetes master cluster, include the IPs of all master nodes, plus the first IP of the service network (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.255.0.1).

    #Generate the certificate

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

    #Create the api-server configuration file; replace the IPs with your own

    [root@k8smaster1 work]# vim kube-apiserver.conf 
    KUBE_APISERVER_OPTS=";--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
      ;--anonymous-auth=false \
      ;--bind-address=192.168.2.180 \
      ;--secure-port=6443 \
      ;--advertise-address=192.168.2.180 \
      ;--insecure-port=0 \
      ;--authorization-mode=Node,RBAC \
      ;--runtime-config=api/all=true \
      --enable-bootstrap-token-auth \
      ;--service-cluster-ip-range=10.255.0.0/16 \
      ;--token-auth-file=/etc/kubernetes/token.csv \
      ;--service-node-port-range=30000-50000 \
      ;--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
      ;--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
      ;--client-ca-file=/etc/kubernetes/ssl/ca.pem \
      ;--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
      ;--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
      ;--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
      ;--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
      ;--service-account-issuer=https://kubernetes.default.svc.cluster.local \
      ;--etcd-cafile=/etc/etcd/ssl/ca.pem \
      ;--etcd-certfile=/etc/etcd/ssl/etcd.pem \
      ;--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      ;--etcd-servers=https://192.168.2.180:2379,https://192.168.2.181:2379,https://192.168.2.182:2379 \
      ;--enable-swagger-ui=true \
      ;--allow-privileged=true \
      ;--apiserver-count=3 \
      ;--audit-log-maxage=30 \
      ;--audit-log-maxbackup=3 \
      ;--audit-log-maxsize=100 \
      ;--audit-log-path=/var/log/kube-apiserver-audit.log \
      ;--event-ttl=1h \
      ;--alsologtostderr=true \
      ;--logtostderr=false \      
      ;--log-dir=/var/log/kubernetes \
      ;--v=4"

    Key parameters in the configuration file:

    --logtostderr: log to stderr
    --v: log verbosity level
    --log-dir: log directory
    --etcd-servers: etcd cluster addresses
    --bind-address: listen address
    --secure-port: https secure port
    --advertise-address: address advertised to the cluster
    --allow-privileged: allow privileged containers
    --service-cluster-ip-range: Service virtual IP range
    --enable-admission-plugins: admission control plugins
    --authorization-mode: authorization mode; enables RBAC authorization and node self-management
    --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
    --token-auth-file: bootstrap token file
    --service-node-port-range: default port range for NodePort Services
    --kubelet-client-xxx: client certificate the apiserver uses to access the kubelet
    --tls-xxx-file: apiserver https certificates
    --etcd-xxxfile: certificates for connecting to the etcd cluster
    --audit-log-xxx: audit log settings

    #Create the service startup file

    [root@k8smaster1 work]# vim kube-apiserver.service 
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=etcd.service
    Wants=etcd.service
     
    [Service]
    EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
    ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
    Restart=on-failure
    RestartSec=5
    Type=notify
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target

    Put the apiserver configuration file and service file into the appropriate directories

    [root@k8smaster1 work]# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl
    [root@k8smaster1 work]# cp token.csv kube-apiserver.conf /etc/kubernetes/
    [root@k8smaster1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz ca*.pem kube-apiserver*.pem $i:/etc/kubernetes/ssl;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz token.csv kube-apiserver.conf $i:/etc/kubernetes/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-apiserver.service $i:/usr/lib/systemd/system/;done

    Note: on k8smaster2 and k8smaster3, change the IP addresses in kube-apiserver.conf to the actual local IPs (--bind-address, --advertise-address).

    Start on all master hosts:

    # systemctl daemon-reload
    # systemctl enable kube-apiserver
    # systemctl start kube-apiserver

    Verify that the apiserver is up; a status code of 401 is normal at this point, since the request is not yet authenticated, as shown below:

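    A quick command-line check (192.168.2.180 is this master's apiserver address):

    [root@k8smaster1 work]# curl -k https://192.168.2.180:6443/    #the JSON error response should contain "code": 401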

    4.3 Deploy the kubectl component

     kubectl is the client tool used to operate on K8s resources: create, delete, update, query and so on.

     Create the CSR request file

    [root@k8smaster1 work]# vim admin-csr.json 
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Hubei",
          "L": "Wuhan",
          "O": "system:masters",             
          "OU": "system"
        }
      ]
    }

    Generate the certificate

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
    [root@k8smaster1 work]# cp admin*.pem /etc/kubernetes/ssl/

    Configure the security context

    #Create the kubeconfig file (important)

    kubeconfig is the configuration file for kubectl; it contains everything needed to reach the apiserver, such as the apiserver address, the CA certificate and the client's own certificate. (If a later step complains that the kubeconfig path cannot be found, copy the file to the expected path manually; otherwise ignore this.)

    Set the cluster parameters

    [root@k8smaster1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.180:6443 --kubeconfig=kube.config

    Set the client authentication parameters

    [root@k8smaster1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

    Set the context parameters

    [root@k8smaster1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

    Set the current context

    [root@k8smaster1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
    [root@k8smaster1 work]# mkdir ~/.kube -p
    [root@k8smaster1 work]# cp kube.config ~/.kube/config

    Authorize the kubernetes certificate to access the kubelet API

    [root@k8smaster1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

    Check the cluster



    [root@k8smaster1 work]# kubectl get componentstatuses
    [root@k8smaster1 work]# kubectl get all --all-namespaces
    #Deploy on the other master nodes, k8smaster2 and k8smaster3
    # mkdir /root/.kube/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz /root/.kube/config $i:/root/.kube/;done
    [root@k8smaster1 work]# yum install -y bash-completion
    [root@k8smaster1 work]# source /usr/share/bash-completion/bash_completion
    [root@k8smaster1 work]# source <(kubectl completion bash)
    [root@k8smaster1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
    [root@k8smaster1 work]# source '/root/.kube/completion.bash.inc'
    [root@k8smaster1 work]# source $HOME/.bash_profile

    4.4 Deploy the kube-controller-manager component

     #Create the CSR request file

    [root@k8smaster1 work]# vim kube-controller-manager-csr.json 
    {
        "CN": "system:kube-controller-manager",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "hosts": [
          "127.0.0.1",
          "192.168.2.180",
          "192.168.2.181",
          "192.168.2.182",
          "192.168.2.199"
        ],
        "names": [
          {
            "C": "CN",
            "ST": "Hubei",
            "L": "Wuhan",
            "O": "system:kube-controller-manager",
            "OU": "system"
          }
        ]
    }

     #Generate the certificate

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

    Set the cluster parameters

    [root@k8smaster1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.180:6443 --kubeconfig=kube-controller-manager.kubeconfig

    Set the client authentication parameters

    [root@k8smaster1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

    Set the context parameters

    [root@k8smaster1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

    Set the current context

    [root@k8smaster1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

    #Create the configuration file kube-controller-manager.conf

    [root@k8smaster1 work]# vim kube-controller-manager.conf 
    KUBE_CONTROLLER_MANAGER_OPTS=";--port=0 \
      ;--secure-port=10252 \
      ;--bind-address=127.0.0.1 \
      ;--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
      ;--service-cluster-ip-range=10.255.0.0/16 \
      ;--cluster-name=kubernetes \
      ;--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
      ;--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
      ;--allocate-node-cidrs=true \
      ;--cluster-cidr=10.0.0.0/16 \
      ;--experimental-cluster-signing-duration=87600h \
      ;--root-ca-file=/etc/kubernetes/ssl/ca.pem \
      ;--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
      ;--leader-elect=true \
      ;--feature-gates=RotateKubeletServerCertificate=true \
      ;--controllers=*,bootstrapsigner,tokencleaner \
      ;--horizontal-pod-autoscaler-use-rest-clients=true \
      ;--horizontal-pod-autoscaler-sync-period=10s \
      ;--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
      ;--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
      ;--use-service-account-credentials=true \
      ;--alsologtostderr=true \
      ;--logtostderr=false \
      ;--log-dir=/var/log/kubernetes \
      ;--v=2"

    #Create the service file

    [root@k8smaster1 work]# vim kube-controller-manager.service 
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
    ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    RestartSec=5
    [Install]
    WantedBy=multi-user.target

    Distribute the configuration files and certificates to the appropriate directories (k8smaster1-3)

    [root@k8smaster1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
    [root@k8smaster1 work]# cp kube-controller-manager.kubeconfig  kube-controller-manager.conf /etc/kubernetes/
    [root@k8smaster1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-controller-manager*.pem $i:/etc/kubernetes/ssl/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-controller-manager.kubeconfig  kube-controller-manager.conf $i:/etc/kubernetes/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-controller-manager.service $i:/usr/lib/systemd/system/;done

    Start the service:

    # systemctl daemon-reload 
    # systemctl enable kube-controller-manager
    # systemctl start kube-controller-manager

    4.5 Deploy the kube-scheduler component

     #Create the CSR request file

    [root@k8smaster1 work]# vim kube-scheduler-csr.json 
    {
        "CN": "system:kube-scheduler",
        "hosts": [
          "127.0.0.1",
          "192.168.2.180",
          "192.168.2.181",
          "192.168.2.182",
          "192.168.2.199"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
          {
            "C": "CN",
            "ST": "Hubei",
            "L": "Wuhan",
            "O": "system:kube-scheduler",
            "OU": "system"
          }
        ]
    }

    #Generate the certificate

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

    Set the cluster parameters

    [root@k8smaster1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.180:6443 --kubeconfig=kube-scheduler.kubeconfig

    Set the client authentication parameters

    [root@k8smaster1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

    Set the context parameters

    [root@k8smaster1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

    Set the current context

    [root@k8smaster1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

    #Create the configuration file kube-scheduler.conf

    [root@k8smaster1 work]# vim kube-scheduler.conf 
    KUBE_SCHEDULER_OPTS=";--address=127.0.0.1 \
    --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
    --leader-elect=true \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/var/log/kubernetes \
    --v=2"

    #Create the service startup file

    [root@k8smaster1 work]# vim kube-scheduler.service 
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
     
    [Service]
    EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
    ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
    Restart=on-failure
    RestartSec=5
     
    [Install]
    WantedBy=multi-user.target

    Distribute the configuration files and certificates to the appropriate directories (k8smaster1-3)

    [root@k8smaster1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
    [root@k8smaster1 work]# cp kube-scheduler.kubeconfig   kube-scheduler.conf /etc/kubernetes/
    [root@k8smaster1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-scheduler*.pem $i:/etc/kubernetes/ssl/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-scheduler.kubeconfig   kube-scheduler.conf $i:/etc/kubernetes/;done
    [root@k8smaster1 work]# for i in k8smaster2 k8smaster3;do rsync -vaz kube-scheduler.service $i:/usr/lib/systemd/system/;done

    Start the service:

    # systemctl daemon-reload 
    # systemctl enable kube-scheduler
    # systemctl start kube-scheduler

    4.6 Deploy the kubelet component (node)

    #Upload pause-cordns.tar.gz to the k8snode1 node and load it manually

    [root@k8snode1 ~]# docker load -i pause-cordns.tar.gz

    Create kubelet-bootstrap.kubeconfig

    [root@k8smaster1 work]# cd /data/work/
    [root@k8smaster1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
    [root@k8smaster1 work]# rm -r kubelet-bootstrap.kubeconfig
    [root@k8smaster1 work]#  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.180:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
    [root@k8smaster1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
    [root@k8smaster1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
    [root@k8smaster1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
    [root@k8smaster1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

     #Create the configuration file kubelet.json

    "cgroupDriver": "systemd"要和docker的驱动一致。

    Replace address with the IP address of your own k8snode1.

    [root@k8smaster1 work]# vim kubelet.json 
    {
      "kind": "KubeletConfiguration",
      "apiVersion": "kubelet.config.k8s.io/v1beta1",
      "authentication": {
        "x509": {
          "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
        },
        "webhook": {
          "enabled": true,
          "cacheTTL": "2m0s"
        },
        "anonymous": {
          "enabled": false
        }
      },
      "authorization": {
        "mode": "Webhook",
        "webhook": {
          "cacheAuthorizedTTL": "5m0s",
          "cacheUnauthorizedTTL": "30s"
        }
      },
      "address": "192.168.2.183",
      "port": 10250,
      "readOnlyPort": 10255,
      "cgroupDriver": "systemd",
      "hairpinMode": "promiscuous-bridge",
      "serializeImagePulls": false,
      "featureGates": {
        "RotateKubeletClientCertificate": true,
        "RotateKubeletServerCertificate": true
      },
      "clusterDomain": "cluster.local.",
      "clusterDNS": ["10.255.0.2"]
    }


    [root@k8smaster1 work]# vim kubelet.service 
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    [Service]
    WorkingDirectory=/var/lib/kubelet
    ExecStart=/usr/local/bin/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
      --cert-dir=/etc/kubernetes/ssl \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
      --config=/etc/kubernetes/kubelet.json \
      --network-plugin=cni \
      --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
      --alsologtostderr=true \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --v=2
    Restart=on-failure
    RestartSec=5
     
    [Install]
    WantedBy=multi-user.target

    Configuration parameter notes:

    #Note: --hostname-override: display name, unique within the cluster

    --network-plugin: enable CNI

    --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver

    --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver

    --config: configuration parameters file

    --cert-dir: directory where kubelet certificates are generated

    --pod-infra-container-image: image for the container that manages the Pod network

    #Note: change address in the kubelet.json configuration file to each node's own IP, and start the service on every worker node

    [root@k8snode1 ~]# mkdir /etc/kubernetes/ssl -p
    [root@k8smaster1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json k8snode1:/etc/kubernetes/
    [root@k8smaster1 work]# scp  ca.pem k8snode1:/etc/kubernetes/ssl/
    [root@k8smaster1 work]# scp  kubelet.service k8snode1:/usr/lib/systemd/system/

    #Start the kubelet service

    [root@k8snode1 ~]# mkdir /var/lib/kubelet
    [root@k8snode1 ~]# mkdir /var/log/kubernetes
    [root@k8snode1 ~]#  systemctl daemon-reload
    [root@k8snode1 ~]# systemctl enable kubelet
    [root@k8snode1 ~]# systemctl start kubelet
    [root@k8snode1 ~]#  systemctl status kubelet

    After confirming that the kubelet service started successfully, go to the k8smaster1 node and approve the bootstrap request.

    Running the following commands shows that a worker node has sent a CSR request:

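    The original screenshot is not reproduced here; the commands involved are roughly the following (the CSR name will differ in your environment):

    [root@k8smaster1 work]# kubectl get csr
    [root@k8smaster1 work]# kubectl certificate approve <csr-name>    #approve the pending bootstrap request
    [root@k8smaster1 work]# kubectl get nodes    #k8snode1 appears, NotReady until the network plugin is deployed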

    4.7 Deploy the kube-proxy component (node)

    #Create the CSR request

    [root@k8smaster1 work]# vim kube-proxy-csr.json 
    {
      "CN": "system:kube-proxy",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Hubei",
          "L": "Wuhan",
          "O": "k8s",
          "OU": "system"
        }
      ]
    }

    Generate the certificate

    [root@k8smaster1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    #Create the kubeconfig file

    [root@k8smaster1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.180:6443 --kubeconfig=kube-proxy.kubeconfig
    [root@k8smaster1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
    [root@k8smaster1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
    [root@k8smaster1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

    #Create the kube-proxy configuration file

    [root@k8smaster1 work]# vim kube-proxy.yaml 
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 192.168.2.183
    clientConnection:
      kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
    clusterCIDR: 192.168.2.0/24
    healthzBindAddress: 192.168.2.183:10256
    kind: KubeProxyConfiguration
    metricsBindAddress: 192.168.2.183:10249
    mode: "ipvs"

    #Create the service startup file

    [root@k8smaster1 work]# vim kube-proxy.service 
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
     
    [Service]
    WorkingDirectory=/var/lib/kube-proxy
    ExecStart=/usr/local/bin/kube-proxy \
      --config=/etc/kubernetes/kube-proxy.yaml \
      --alsologtostderr=true \
      --logtostderr=false \
      --log-dir=/var/log/kubernetes \
      --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
     
    [root@k8smaster1 work]# scp  kube-proxy.kubeconfig kube-proxy.yaml k8snode1:/etc/kubernetes/
    [root@k8smaster1 work]# scp  kube-proxy.service k8snode1:/usr/lib/systemd/system/

    #Start the service

    [root@k8snode1 ~]# mkdir -p /var/lib/kube-proxy
    [root@k8snode1 ~]# systemctl daemon-reload
    [root@k8snode1 ~]# systemctl enable kube-proxy
    [root@k8snode1 ~]# systemctl  start kube-proxy
    [root@k8snode1 ~]# systemctl status kube-proxy
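
    Because mode is set to "ipvs" in kube-proxy.yaml, the proxy rules it programs can be inspected with ipvsadm (installed earlier with the base packages):

    [root@k8snode1 ~]# ipvsadm -Ln    #lists the IPVS virtual servers created by kube-proxy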


    4.8 Deploy the calico component (node)

    #Load the offline image archives

    #Upload cni.tar.gz and node.tar.gz to the k8snode1 node and load them manually

    [root@k8snode1 ~]# docker load -i cni.tar.gz
    [root@k8snode1 ~]# docker load -i node.tar.gz

    #Upload the calico.yaml file to the /data/work directory on k8smaster1

    [root@k8smaster1 work]# kubectl apply -f calico.yaml
    [root@k8smaster1 ~]# kubectl get pods -n kube-system
    NAME                READY   STATUS    RESTARTS   AGE
    calico-node-xk7n4   1/1     Running   0          13s
     
    [root@k8smaster1 ~]# kubectl get nodes
    NAME            STATUS   ROLES    AGE   VERSION
    k8snode1   Ready    <none>   73m   v1.20.7


    4.9 Deploy the coredns component (node)

    [root@k8smaster1 ~]# kubectl apply -f coredns.yaml
    [root@k8smaster1 ~]# kubectl get pods -n kube-system
    NAME                       READY   STATUS    RESTARTS   AGE
    calico-node-xk7n4          1/1     Running   0          6m6s
    coredns-7bf4bd64bd-dt8dq   1/1     Running   0          51s
    [root@k8smaster1 ~]# kubectl get svc -n kube-system
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    kube-dns   ClusterIP   10.255.0.2   <none>        53/UDP,53/TCP,9153/TCP   12m

    5. Test the cluster

    5.1 Test the cluster by deploying a tomcat service

    #Upload tomcat.tar.gz and busybox-1-28.tar.gz to k8snode1 and load them manually

    [root@k8snode1 ~]# docker load -i tomcat.tar.gz
    [root@k8snode1 ~]# docker load -i busybox-1-28.tar.gz 
    [root@k8smaster1 ~]# kubectl apply -f tomcat.yaml
     
    [root@k8smaster1 ~]# kubectl get pods
    NAME       READY   STATUS    RESTARTS   AGE
    demo-pod   2/2     Running   0          11m
    [root@k8smaster1 ~]# kubectl apply -f tomcat-service.yaml
    [root@k8smaster1 ~]# kubectl get svc
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    kubernetes   ClusterIP   10.255.0.1       <none>        443/TCP          158m
    tomcat       NodePort    10.255.227.179   <none>        8080:30080/TCP   19m

    Open ip-of-k8snode1:30080 in a browser to reach the service.
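
    Or from the command line (k8snode1 is 192.168.2.183 in this setup):

    [root@k8smaster1 ~]# curl -I http://192.168.2.183:30080    #expect an HTTP response header from tomcat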


    5.2 Verify that coredns works

    [root@k8smaster1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
    # ping www.baidu.com
    PING www.baidu.com (39.156.66.18): 56 data bytes
    64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms
    #The above shows that the pod has network access
    # nslookup kubernetes.default.svc.cluster.local
    Server:   10.255.0.2
    Address:  10.255.0.2:53
    Name: kubernetes.default.svc.cluster.local
    Address: 10.255.0.1
     
    # nslookup tomcat.default.svc.cluster.local
    Server:    10.255.0.2
    Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local
     
    Name:      tomcat.default.svc.cluster.local
    Address 1: 10.255.227.179 tomcat.default.svc.cluster.local

     

    #Note:

    busybox must be the specified 1.28 version, not the latest; with the latest version nslookup fails to resolve the DNS name and IP, with errors like:

    / # nslookup kubernetes.default.svc.cluster.local

    Server: 10.255.0.2

    Address: 10.255.0.2:53

    *** Can't find kubernetes.default.svc.cluster.local: No answer

    *** Can't find kubernetes.default.svc.cluster.local: No answer


    10.255.0.2 is the clusterIP of our coreDNS, which shows coreDNS is configured correctly.

    Internal Service names are resolved through coreDNS.

    6. kube-apiserver high availability

    Upload epel.repo to the /etc/yum.repos.d directory on k8smaster1 so that keepalived and nginx can be installed.

    Also copy epel.repo to k8smaster2, k8smaster3 and k8snode1:

    [root@k8smaster1 ~]# scp /etc/yum.repos.d/epel.repo k8smaster2:/etc/yum.repos.d/
    [root@k8smaster1 ~]# scp /etc/yum.repos.d/epel.repo k8smaster3:/etc/yum.repos.d/
    [root@k8smaster1 ~]# scp /etc/yum.repos.d/epel.repo k8snode1:/etc/yum.repos.d/

    6.1 Nginx active/standby

    Install nginx as an active/standby pair on k8smaster1 and k8smaster2

    [root@k8smaster1 ~]#  yum install nginx keepalived -y  # if your YUM repos have no nginx package, use the one from the provided package set
    [root@k8smaster2 ~]#  yum install nginx keepalived -y

    Modify the nginx configuration file; it is identical on the active and standby nodes.

    [root@k8smaster1 ~]# cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
     
    include /usr/share/nginx/modules/*.conf;
     
    events {
        worker_connections 1024;
    }
     
    # Layer-4 load balancing, providing load balancing for the master apiserver components
    stream {
     
        log_format  main  &#39;$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
     
        access_log  /var/log/nginx/k8s-access.log  main;
     
        upstream k8s-apiserver {
           server 192.168.2.180:6443;   # k8smaster1 APISERVER IP:PORT
           server 192.168.2.181:6443;   # k8smaster2 APISERVER IP:PORT
           server 192.168.2.182:6443;   # k8smaster3 APISERVER IP:PORT
     
        }
        
        server {
           listen 16443; # nginx shares these hosts with the master nodes, so this listen port cannot be 6443 or it would conflict
           proxy_pass k8s-apiserver;
        }
    }
     
    http {
        log_format  main  &#39;$remote_addr - $remote_user [$time_local] "$request" '
                          &#39;$status $body_bytes_sent "$http_referer" '
                          &#39;"$http_user_agent" "$http_x_forwarded_for"';
     
        access_log  /var/log/nginx/access.log  main;
     
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
     
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
     
        server {
            listen       80 default_server;
            server_name  _;
     
            location / {
            }
        }
    }
     
     
    [root@k8smaster2 ~]# cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
     
    include /usr/share/nginx/modules/*.conf;
     
    events {
        worker_connections 1024;
    }
     
    # Layer-4 load balancing, providing load balancing for the master apiserver components
    stream {
     
        log_format  main  &#39;$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
     
        access_log  /var/log/nginx/k8s-access.log  main;
     
        upstream k8s-apiserver {
           server 192.168.2.180:6443;   # k8smaster1 APISERVER IP:PORT
           server 192.168.2.181:6443;   # k8smaster2 APISERVER IP:PORT
           server 192.168.2.182:6443;   # k8smaster3 APISERVER IP:PORT
     
        }
        
        server {
           listen 16443; # nginx shares these hosts with the master nodes, so this listen port cannot be 6443 or it would conflict
           proxy_pass k8s-apiserver;
        }
    }
     
    http {
        log_format  main  &#39;$remote_addr - $remote_user [$time_local] "$request" '
                          &#39;$status $body_bytes_sent "$http_referer" '
                          &#39;"$http_user_agent" "$http_x_forwarded_for"';
     
        access_log  /var/log/nginx/access.log  main;
     
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
     
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
     
        server {
            listen       80 default_server;
            server_name  _;
     
            location / {
            }
        }
    }

    6.2 Keepalived configuration

    The active (MASTER) keepalived
    [root@k8smaster1 ~]# cat /etc/keepalived/keepalived.conf 
    global_defs { 
       notification_email { 
         acassen@firewall.loc 
         failover@firewall.loc 
         sysadmin@firewall.loc 
       } 
       notification_email_from Alexandre.Cassen@firewall.loc  
       smtp_server 127.0.0.1 
       smtp_connect_timeout 30 
       router_id NGINX_MASTER
    }
     
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
     
    vrrp_instance VI_1 { 
        state MASTER 
        interface ens33  # change to the actual NIC name
        virtual_router_id 51 # VRRP router ID; must be unique per instance
        priority 100    # priority; set 90 on the backup server
        advert_int 1    # VRRP heartbeat advertisement interval, default 1 second
        authentication { 
            auth_type PASS      
            auth_pass 1111 
        }  
        # Virtual IP
        virtual_ipaddress { 
            192.168.2.199/24
        } 
        track_script {
            check_nginx
        } 
    }
     
    #vrrp_script: script that checks whether nginx is working (its result decides whether to fail over)
    #virtual_ipaddress: the virtual IP (VIP)
     
    [root@k8smaster1 ~]# cat /etc/keepalived/check_nginx.sh 
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
     
    [root@k8smaster1 ~]# chmod +x  /etc/keepalived/check_nginx.sh
     
    The backup keepalived
    [root@k8smaster2 ~]# cat /etc/keepalived/keepalived.conf 
    global_defs { 
       notification_email { 
         acassen@firewall.loc 
         failover@firewall.loc 
         sysadmin@firewall.loc 
       } 
       notification_email_from Alexandre.Cassen@firewall.loc  
       smtp_server 127.0.0.1 
       smtp_connect_timeout 30 
       router_id NGINX_BACKUP
    }
     
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
     
    vrrp_instance VI_1 { 
        state BACKUP 
        interface ens33
        virtual_router_id 51 # VRRP router ID; must be unique per instance
        priority 90
        advert_int 1
        authentication { 
            auth_type PASS      
            auth_pass 1111 
        }  
        virtual_ipaddress { 
            192.168.2.199/24
        } 
        track_script {
            check_nginx
        } 
    }
     
     
    [root@k8smaster2 ~]# cat /etc/keepalived/check_nginx.sh 
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    [root@k8smaster2 ~]# chmod +x /etc/keepalived/check_nginx.sh
    #Note: keepalived decides whether to fail over based on the script's exit status (0 means nginx is healthy, non-zero means it is not).
    # systemctl daemon-reload
    # systemctl start nginx
    # systemctl start keepalived
    # systemctl enable nginx keepalived

    Test whether the VIP is bound successfully

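    For example, on the current MASTER node (ens33 is the interface named in the keepalived configuration above):

    [root@k8smaster1 ~]# ip addr show ens33 | grep 192.168.2.199    #the VIP should be listed here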

    At this point all Worker Node components still connect to k8smaster1; if they are not switched to the VIP behind the load balancer, the master is still a single point of failure.

    So the next step is to change the component configuration files on every Worker Node (the nodes listed by kubectl get node) from the original 192.168.2.180 to 192.168.2.199 (the VIP).

    Run on all Worker Nodes:

    [root@k8snode1 ~]# sed -i 's#192.168.2.180:6443#192.168.2.199:16443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
    [root@k8snode1 ~]# sed -i 's#192.168.2.180:6443#192.168.2.199:16443#' /etc/kubernetes/kubelet.json
    [root@k8snode1 ~]# sed -i 's#192.168.2.180:6443#192.168.2.199:16443#' /etc/kubernetes/kubelet.kubeconfig
    [root@k8snode1 ~]# sed -i 's#192.168.2.180:6443#192.168.2.199:16443#' /etc/kubernetes/kube-proxy.yaml
    [root@k8snode1 ~]# sed -i 's#192.168.2.180:6443#192.168.2.199:16443#' /etc/kubernetes/kube-proxy.kubeconfig
    [root@k8snode1 ~]# systemctl restart kubelet kube-proxy
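
    After the restart, the node's traffic should flow through the VIP; this can be confirmed on the nginx hosts using the access log defined in the nginx configuration above, and the node should remain Ready:

    [root@k8smaster1 ~]# tail /var/log/nginx/k8s-access.log    #kubelet/kube-proxy requests now arrive via the load balancer
    [root@k8smaster1 ~]# kubectl get nodes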

