@[toc]
1. Deploying the master02 node
This section continues from the previous article.
Copy the certificate files, the configuration files for each master component, and the service unit files from the master01 node to the master02 node:

```shell
scp -r /opt/etcd/ root@192.168.19.18:/opt/
scp -r /opt/kubernetes/ root@192.168.19.18:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.19.18:/usr/lib/systemd/system/
```

Edit the kube-apiserver configuration file and change the IP addresses:

```shell
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.19.18:2379,https://192.168.19.11:2379,https://192.168.19.17:2379 \
--bind-address=192.168.19.18 \        # modify
--secure-port=6443 \
--advertise-address=192.168.19.18 \   # modify
......
```

On the master02 node, start each service and enable it at boot:

```shell
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
```

Check the node status:

```shell
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide    # -o wide: print extra columns; for Pods, this includes the Node each Pod runs on
```

Note that at this point the node status seen from master02 is only the information stored in etcd; the nodes have not actually established a communication connection with master02. A VIP is therefore needed to associate the nodes with both master nodes. First, configure the services on master02 exactly as on master01.
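The VIP mentioned above is commonly provided by keepalived running on both masters. Below is a minimal sketch of a VRRP instance; the VIP `192.168.19.100`, the interface name `ens33`, and the password are illustrative assumptions, not values from this article:

```
! /etc/keepalived/keepalived.conf (sketch; values are assumptions)
vrrp_instance VI_1 {
    state MASTER                 ! use BACKUP on master02
    interface ens33              ! assumed NIC name; check with `ip addr`
    virtual_router_id 51
    priority 100                 ! use a lower priority (e.g. 90) on master02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111           ! placeholder password
    }
    virtual_ipaddress {
        192.168.19.100           ! assumed VIP on the masters' subnet
    }
}
```

With this in place, kubelet and kube-proxy on the nodes would point their `--server` addresses at the VIP rather than at a single master's IP, so either master can serve requests.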
==Next: the master configuration files==