
Kubernetes集群部署

K8S

---Author:张思明 ZhangSiming

---Mail:1151004164@cnu.edu.cn

---QQ:1030728296

如果有梦想,就放开的去追;
因为只有奋斗,才能改变命运。


一、官方提供的三种部署方式

Minikube是一个工具,可以在本地快速运行一个单点的Kubernetes,仅用于尝试Kubernetes或日常开发的用户使用。
部署地址:https://kubernetes.io/docs/setup/minikube/

Kubeadm也是一个工具,提供kubeadm init和kubeadm join两个命令,用于快速部署Kubernetes集群(最简用法见下面的示意)。
部署地址:https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
缺点:
1.在本文对应的1.12版本中仍处于测试阶段;
2.kubeadm签发的证书默认有效期只有1年,到期后Master与Node之间会因证书过期而无法通信,需要手动续期;
3.所有内部组件都被自动部署好,不利于深入学习K8S各组件。
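
kubeadm方式的最简用法大致如下(仅作示意,具体参数与版本以官方文档为准,其中的token、hash要替换成init实际输出的值):

    # 在Master上初始化控制平面(示意)
    kubeadm init --pod-network-cidr=10.244.0.0/16
    # 按init结束时输出的提示,在各Node上执行join加入集群
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>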

二进制包方式(推荐,本文采用):从官方下载发行版的二进制包,手动部署每个组件,组成Kubernetes集群。
下载地址:https://github.com/kubernetes/kubernetes/releases

二、Kubernetes平台环境规划

2.1实验环境配置

软件 版本
Linux操作系统 CentOS7.5_x64
Kubernetes 1.12
Docker 18.xx-ce(社区版)
Etcd 3.3.11
Flannel 0.11
角色 IP 组件
Master01 192.168.17.130 Kube-APIServer、Kube-controller-manager、Kube-scheduler、Etcd
Master02 192.168.17.131 Kube-APIServer、Kube-controller-manager、Kube-scheduler、Etcd
Node01 192.168.17.132 Kubelet、Kube-proxy、Docker、flannel、Etcd
Node02 192.168.17.133 Kubelet、Kube-proxy、Docker、flannel
Load Balancer(Master) 192.168.17.134(自身IP)、192.168.17.135(VIP) Nginx L4
Load Balancer(Backup) 192.168.17.136 Nginx L4
Registry 192.168.17.137 Harbor

2.2实验架构图

(图:Kubernetes集群实验架构图)

三、自签SSL证书

3.1什么是SSL证书

SSL证书(SSL Certificates)是HTTP明文协议升级HTTPS加密协议必备的数字证书。它在客户端(浏览器)与服务端(网站服务器)之间搭建一条安全的加密通道,对两者之间交换的信息进行加密,确保传输数据不被泄露或篡改。
HTTP------>HTTPS

什么是自签的SSL证书?

注:自签SSL证书,顾名思义就是不经受信任的CA机构颁发、而是自己签发给自己的证书。这种证书可以随意签发、不受监督,默认也不会被任何浏览器或操作系统信任;但用于K8S集群内部组件之间的加密传输和身份认证是足够的。
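
借助openssl可以直观感受"自签"的含义:自己给自己签发一张证书,任何浏览器默认都不会信任它(仅为概念演示,与下文cfssl的流程无关):

    # 一条命令生成私钥并自签一张有效期365天的证书
    openssl req -x509 -newkey rsa:2048 -nodes -keyout self.key -out self.crt -days 365 -subj "/CN=test.example.com"
    # 查看这张证书,Issuer和Subject相同,即"自己签给自己"
    openssl x509 -in self.crt -noout -issuer -subject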

3.2K8S集群使用自签证书表

组件 使用的证书
etcd ca.pem,server.pem,server-key.pem
flannel ca.pem,server.pem,server-key.pem
kube-apiserver ca.pem,server.pem,server-key.pem
kubelet ca.pem,ca-key.pem
kube-proxy ca.pem,kube-proxy.pem,kube-proxy-key.pem
kubectl ca.pem,admin.pem,admin-key.pem

注:ca是证书发布机构

3.3生成Etcd的自签SSL证书,增强安全性

3.3.1获取自签cfssl证书生成工具

  1. [root@ZhangSiming ~]# mkdir K8S
  2. [root@ZhangSiming ~]# cd K8S
  3. #在K8SMaster01上生成需要的证书
  4. [root@ZhangSiming K8S]# ls
  5. cfssl.sh Etcd-cert.sh
  6. #cfssl.sh是下载证书生成工具的脚本;Etcd-cert.sh是利用cfssl工具生成相应证书的脚本
  7. [root@ZhangSiming K8S]# cat cfssl.sh
  8. #!/bin/bash
  9. #designed by Zhangsiming
  10. curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
  11. curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
  12. #cfssljson用于把cfssl输出的json结果落盘成证书文件
  13. curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
  14. #cfssl-certinfo用于查看证书信息
  15. chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
  16. #执行脚本获取工具
  17. [root@ZhangSiming K8S]# bash cfssl.sh
  18. % Total % Received % Xferd Average Speed Time Time Time Current
  19. Dload Upload Total Spent Left Speed
  20. 100 9.8M 100 9.8M 0 0 1215k 0 0:00:08 0:00:08 --:--:-- 1933k
  21. % Total % Received % Xferd Average Speed Time Time Time Current
  22. Dload Upload Total Spent Left Speed
  23. 100 2224k 100 2224k 0 0 635k 0 0:00:03 0:00:03 --:--:-- 635k
  24. % Total % Received % Xferd Average Speed Time Time Time Current
  25. Dload Upload Total Spent Left Speed
  26. 100 6440k 100 6440k 0 0 959k 0 0:00:06 0:00:06 --:--:-- 1379k
  27. [root@ZhangSiming K8S]# ls /usr/local/bin/ | grep cfssl
  28. cfssl
  29. cfssl-certinfo
  30. cfssljson
  31. #获取成功

3.3.2给Etcd生成证书

  1. ###生成脚本(注意:下面JSON中的#注释仅为讲解用,实际脚本写入.json文件时不要保留,否则cfssl解析会报错)
  2. [root@ZhangSiming K8S]# cat Etcd-cert.sh
  3. #!/bin/bash
  4. #Designed by Zhangsiming
  5. cat > ca-config.json << EOF
  6. {
  7. "signing": {
  8. "default": {
  9. "expiry": "87600h" #过期时间
  10. },
  11. "profiles": {
  12. "www": {
  13. "expiry": "87600h",
  14. "usages": [
  15. "signing",
  16. "key encipherment",
  17. "server auth",
  18. "client auth"
  19. ]
  20. }
  21. }
  22. }
  23. }
  24. EOF
  25. cat > ca-csr.json << EOF
  26. {
  27. "CN": "etcd CA",
  28. "key": {
  29. "algo": "rsa", #加密算法
  30. "size": 2048
  31. },
  32. "names": [
  33. {
  34. "C": "CN",
  35. "L": "Beijing",
  36. "ST": "Beijing"
  37. }
  38. ]
  39. }
  40. EOF
  41. #生成根证书的文件
  42. cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  43. #生成根证书
  44. cat > server-csr.json << EOF
  45. {
  46. "CN": "etcd",
  47. "hosts": [
  48. "192.168.17.130",
  49. "192.168.17.131",
  50. "192.168.17.132" #这三个IP要是Etcd节点的的三个IP
  51. ],
  52. "key": {
  53. "algo": "rsa",
  54. "size": 2048
  55. },
  56. "names": [
  57. {
  58. "C": "CN",
  59. "L": "Beijing",
  60. "ST": "Beijing"
  61. }
  62. ]
  63. }
  64. EOF
  65. #利用根证书颁发子证书
  66. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
  67. #生成所有Etcd需要的证书
  68. #执行脚本
  69. [root@ZhangSiming K8S]# mkdir cert
  70. [root@ZhangSiming K8S]# ls
  71. cert cfssl.sh Etcd-cert.sh
  72. [root@ZhangSiming K8S]# cd cert/
  73. [root@ZhangSiming cert]# sh ~/K8S/Etcd-cert.sh
  74. 2019/01/30 16:36:15 [INFO] generating a new CA key and certificate from CSR
  75. 2019/01/30 16:36:15 [INFO] generate received request
  76. 2019/01/30 16:36:15 [INFO] received CSR
  77. 2019/01/30 16:36:15 [INFO] generating key: rsa-2048
  78. 2019/01/30 16:36:16 [INFO] encoded CSR
  79. 2019/01/30 16:36:16 [INFO] signed certificate with serial number 278108394651810943628140075243673420492920215858
  80. 2019/01/30 16:36:16 [INFO] generate received request
  81. 2019/01/30 16:36:16 [INFO] received CSR
  82. 2019/01/30 16:36:16 [INFO] generating key: rsa-2048
  83. 2019/01/30 16:36:16 [INFO] encoded CSR
  84. 2019/01/30 16:36:16 [INFO] signed certificate with serial number 61614068451067277863698620865442956258180155064
  85. 2019/01/30 16:36:16 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  86. websites. For more information see the Baseline Requirements for the Issuance and Management
  87. of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  88. specifically, section 10.2.3 ("Information Requirements").
  89. [root@ZhangSiming cert]# ls
  90. ca-config.json ca-csr.json ca.pem server-csr.json server.pem
  91. ca.csr ca-key.pem server.csr server-key.pem
  92. #Etcd需要的三个证书,ca.pem、server-key.pem、server.pem已经全部生成好
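
证书生成后,建议用cfssl-certinfo或openssl粗略核对一下有效期和hosts字段是否符合预期(示意):

    # 查看server证书内容,hosts(SAN)应包含三台Etcd节点的IP
    cfssl-certinfo -cert server.pem
    # 查看有效期,并确认server证书确实由刚生成的ca.pem签发
    openssl x509 -in server.pem -noout -dates
    openssl verify -CAfile ca.pem server.pem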

四、Etcd数据库Cluster集群部署

4.1下载Etcd的二进制安装包(从GitHub上)

Etcd二进制包下载地址
https://github.com/etcd-io/etcd/releases

(图:GitHub上etcd的Releases下载页面)

注:请下载3.3.11版本;3.3.10版本用systemctl自启动服务时会报错。

拷贝下载链接:https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz

  1. [root@ZhangSiming cert]# cat /etc/redhat-release
  2. CentOS Linux release 7.5.1804 (Core)
  3. [root@ZhangSiming cert]# uname -r
  4. 3.10.0-862.el7.x86_64
  5. [root@ZhangSiming cert]# systemctl stop firewalld
  6. [root@ZhangSiming cert]# systemctl disable firewalld
  7. [root@ZhangSiming cert]# getenforce #确认SELinux已关闭(若为Enforcing,需setenforce 0并修改/etc/selinux/config)
  8. Disabled
  9. #下载Etcd包
  10. [root@ZhangSiming K8S]# ls
  11. cert cfssl.sh Etcd-cert.sh
  12. [root@ZhangSiming K8S]# which wget
  13. /usr/bin/wget
  14. [root@ZhangSiming K8S]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz
  15. [root@ZhangSiming K8S]# ls
  16. cert cfssl.sh Etcd-cert.sh etcd-v3.3.11-linux-amd64.tar.gz
  17. #下载成功
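
下载后先解压,并按后面脚本约定的WORK_DIR=/opt/etcd准备好目录结构(示意,证书目录沿用前文的~/K8S/cert):

    tar xf etcd-v3.3.11-linux-amd64.tar.gz
    mkdir -p /opt/etcd/{bin,cfg,ssl}
    cp etcd-v3.3.11-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
    cp ~/K8S/cert/* /opt/etcd/ssl/
    /opt/etcd/bin/etcd --version    # 确认版本为3.3.11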

4.2编写一键生成Etcd配置文件及服务自启动systemctl配置文件的脚本

  1. [root@ZhangSiming etcd-v3.3.11-linux-amd64]# cat ~/K8S/Deploy/etcd.sh
  2. #!/bin/bash
  3. #Designed by Zhangsiming
  4. # example: ./etcd.sh etcd01 192.168.17.130 etcd02=https://192.168.17.131:2380,etcd03=https://192.168.17.132:2380
  5. #传入参数范例
  6. ETCD_NAME=$1
  7. ETCD_IP=$2
  8. ETCD_CLUSTER=$3
  9. WORK_DIR=/opt/etcd
  10. #脚本传入的参数及自定义全局变量
  11. cat > $WORK_DIR/cfg/etcd << EOF
  12. #生成etcd的配置文件
  13. #[Member]
  14. ETCD_NAME="${ETCD_NAME}"
  15. #Etcd数据库名称
  16. ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  17. #数据目录
  18. ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
  19. #Etcd-cluster集群通信连接端口2380
  20. ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
  21. #Etcd对外访问开放端口2379
  22. #[Clustering]
  23. ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
  24. #本地Etcd的IP:端口
  25. ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
  26. #本地Etcd对外访问端口
  27. ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
  28. #Etcd集群全部成员的名称及地址(这里用${ETCD_NAME}而不要写死etcd01,脚本在etcd02/etcd03上执行时才不会出错)
  29. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  30. #集群认证token,可以随意指定,但所有cluster成员必须一致
  31. ETCD_INITIAL_CLUSTER_STATE="new"
  32. #new代表新创建的Etcd-cluster
  33. EOF
  34. cat >/usr/lib/systemd/system/etcd.service <<EOF
  35. #生成CentOS7的systemctl服务配置文件
  36. [Unit]
  37. Description=Etcd Server
  38. #服务描述
  39. After=network.target
  40. After=network-online.target
  41. Wants=network-online.target
  42. #服务依赖条件,启动网络之后启动等...
  43. [Service]
  44. Type=notify
  45. #notify模式:etcd启动完成后主动通知systemd,systemd才认为启动成功(期间etcd会去连接cluster中的其他节点)
  46. EnvironmentFile=${WORK_DIR}/cfg/etcd
  47. #使用的Etcd配置文件
  48. ExecStart=${WORK_DIR}/bin/etcd \
  49. --name=\${ETCD_NAME} \
  50. --data-dir=\${ETCD_DATA_DIR} \
  51. --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
  52. --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  53. --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
  54. --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  55. --initial-cluster=\${ETCD_INITIAL_CLUSTER} \
  56. --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
  57. --initial-cluster-state=new \
  58. --cert-file=${WORK_DIR}/ssl/server.pem \
  59. --key-file=${WORK_DIR}/ssl/server-key.pem \
  60. --peer-cert-file=${WORK_DIR}/ssl/server.pem \
  61. --peer-key-file=${WORK_DIR}/ssl/server-key.pem \
  62. --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
  63. --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
  64. #这个是启动Etcd守护进程的命令,其中附带使用的自签证书
  65. Restart=on-failure
  66. #出错时重启
  67. LimitNOFILE=65536
  68. #单个进程可打开的文件描述符上限为65536
  69. [Install]
  70. WantedBy=multi-user.target
  71. #一般这里都是multi-user即多用户服务
  72. EOF
  73. systemctl daemon-reload #这句使编写的Etcd服务生效
  74. systemctl enable etcd #开机自启动Etcd服务(系统服务)
  75. systemctl restart etcd #重启Etcd服务

4.3配置部署Etcd节点

  1. #拷贝文件到指定的/opt/etcd目录下
  2. [root@ZhangSiming etcd-v3.3.11-linux-amd64]# tree /opt/etcd
  3. /opt/etcd
  4. ├── bin
  5. │   ├── etcd
  6. │   └── etcdctl #解压二进制包,把其中的etcd、etcdctl拷贝到bin目录下
  7. ├── cfg #执行脚本生成的etcd配置文件会放在这个目录下
  8. └── ssl
  9.     ├── ca-config.json
  10.     ├── ca.csr
  11.     ├── ca-csr.json
  12.     ├── ca-key.pem
  13.     ├── ca.pem
  14.     ├── server.csr
  15.     ├── server-csr.json
  16.     ├── server-key.pem
  17.     └── server.pem #把自签证书全部拷贝到ssl目录下
  18. 3 directories, 12 files
  19. #执行脚本
  20. [root@ZhangSiming K8S]# ~/K8S/Deploy/etcd.sh etcd01 192.168.17.130 etcd02=https://192.168.17.131:2380,etcd03=https://192.168.17.132:2380
  21. [root@ZhangSiming etcd-v3.3.11-linux-amd64]# systemctl start etcd
  22. Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
  23. #这里会等待很长时间,因为没有配置cluster中的其他两台Etcd节点,所以最后会失败,没有关系,我们接下来就配置

4.4配置部署Etcd-cluster集群

  1. [root@ZhangSiming etcd-v3.3.11-linux-amd64]# scp -r /opt/etcd/ root@192.168.17.131:/opt
  2. #把/opt/etcd目录下所有文件拷贝到其他etcd节点的对应位置
  3. The authenticity of host '192.168.17.131 (192.168.17.131)' can't be established.
  4. ECDSA key fingerprint is SHA256:hcae7bE6sNTeEGHgZaykEfqSiPQCoW2dJBwSZ8DqUTA.
  5. ECDSA key fingerprint is MD5:4e:30:92:c6:66:00:48:d5:69:1a:10:1f:16:ef:2e:de.
  6. Are you sure you want to continue connecting (yes/no)? yes
  7. Warning: Permanently added '192.168.17.131' (ECDSA) to the list of known hosts.
  8. root@192.168.17.131's password:
  9. etcd 100% 18MB 18.3MB/s 00:01
  10. etcdctl 100% 15MB 29.3MB/s 00:00
  11. etcd 100% 516 636.4KB/s 00:00
  12. ca-config.json 100% 269 335.0KB/s 00:00
  13. ca.csr 100% 956 1.5MB/s 00:00
  14. ca-csr.json 100% 194 339.2KB/s 00:00
  15. ca-key.pem 100% 1679 1.8MB/s 00:00
  16. ca.pem 100% 1265 2.0MB/s 00:00
  17. server.csr 100% 1013 72.9KB/s 00:00
  18. server-csr.json 100% 294 237.4KB/s 00:00
  19. server-key.pem 100% 1679 1.5MB/s 00:00
  20. server.pem 100% 1338 1.5MB/s 00:00
  21. #拷贝自启动systemctl文件到其他Etcd节点
  22. [root@ZhangSiming etcd-v3.3.11-linux-amd64]# scp /usr/lib/systemd/system/etcd.service root@192.168.17.131:/usr/lib/systemd/system/etcd.service
  23. root@192.168.17.131's password:
  24. etcd.service 100% 923 27.3KB/s 00:00

启动Etcd-cluster集群。注意:scp过去的/opt/etcd/cfg/etcd仍是Master01的配置,在另外两个节点启动前,要先把其中的ETCD_NAME改成etcd02/etcd03,并把各URL中的IP改成本节点IP。

  1. #在三个Etcd节点都启动etcd服务
  2. [root@ZhangSiming etcd]# systemctl start etcd
  3. [root@ZhangSiming etcd]#
  4. [root@ZhangSiming ssl]# pwd
  5. /opt/etcd/ssl
  6. [root@ZhangSiming ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379" cluster-health
  7. #访问Etcd的三个节点的2379端口,进行Etcd的健康检测
  8. member 3a6f8c78a708ea9 is healthy: got healthy result from https://192.168.17.132:2379
  9. member 5658b2b12e105fd0 is healthy: got healthy result from https://192.168.17.131:2379
  10. member 9d10fc7919c17197 is healthy: got healthy result from https://192.168.17.130:2379
  11. cluster is healthy
  12. #集群是健康的!

在配置Etcd-cluster集群的时候,如果出了问题,首先查看/opt/etcd/cfg/etcd配置文件和systemctl配置文件有无错误,之后再看/var/log/messages进行排查。
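
排查时下面几条命令通常就够用(示意):

    systemctl status etcd -l                 # 查看服务状态和最近的报错
    journalctl -u etcd --no-pager -n 50      # 查看etcd最近50行日志
    grep ETCD_ /opt/etcd/cfg/etcd            # 核对本节点的名称和IP是否已改对
    tail -f /var/log/messages                # 实时观察系统日志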

五、Node安装Docker

5.1K8S提供服务架构层次图

(图:K8S提供服务的架构层次图)

5.2K8SNode节点安装Docker

去docker官方网址:docs.docker.com,根据提示安装docker-ce

  1. #以下操作Node01和Node02一致
  2. #安装docker依赖包
  3. [root@ZhangSiming etcd]# yum install -y yum-utils device-mapper-persistent-data lvm2
  4. #下载dockerrepo
  5. [root@ZhangSiming etcd]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  6. #安装2.9版本以上的container-selinux包
  7. [root@ZhangSiming etcd]# yum -y localinstall container-selinux-2.10-2.el7.noarch.rpm
  8. #yum方式安装docker
  9. [root@ZhangSiming etcd]# yum install docker-ce -y
  10. #查看docker是否安装
  11. [root@ZhangSiming etcd]# systemctl start docker
  12. [root@ZhangSiming etcd]# systemctl enable docker
  13. Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
  14. [root@ZhangSiming etcd]# docker version
  15. Client:
  16. Version: 18.09.1
  17. API version: 1.39
  18. Go version: go1.10.6
  19. Git commit: 4c52b90
  20. Built: Wed Jan 9 19:35:01 2019
  21. OS/Arch: linux/amd64
  22. Experimental: false
  23. Server: Docker Engine - Community
  24. Engine:
  25. Version: 18.09.1
  26. API version: 1.39 (minimum version 1.12)
  27. Go version: go1.10.6
  28. Git commit: 4c52b90
  29. Built: Wed Jan 9 19:06:30 2019
  30. OS/Arch: linux/amd64
  31. Experimental: false

5.3在下载好Docker的Node进行优化配置

  1. [root@ZhangSiming etcd]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
  2. #daocloud提供docker镜像加速服务,这里执行daocloud官网给出的命令为本地docker配置加速器(要求docker version >= 1.12),以下为脚本输出
  3. docker version >= 1.12
  4. {"registry-mirrors": ["http://f1361db2.m.daocloud.io"]}
  5. Success.
  6. You need to restart docker to take effect: sudo systemctl restart docker
  7. [root@ZhangSiming etcd]# systemctl restart docker #安装完加速器重启Docker
  8. [root@ZhangSiming etcd]# docker run -dit nginx #启动一个nginxDocker进程(没有镜像默认先下载镜像再启动容器进程)
  9. Unable to find image 'nginx:latest' locally
  10. latest: Pulling from library/nginx
  11. 5e6ec7f28fb7: Pull complete
  12. ab804f9bbcbe: Pull complete
  13. 052b395f16bc: Pull complete
  14. Digest: sha256:f630aa07be1af4ce6cbe1d5e846bb6bb3b5ba6bfec0ec0edc08ba48d8c1d9b0f
  15. Status: Downloaded newer image for nginx:latest
  16. 04f73c49514fcca4c1d48efe706537cf2b2378a842feb4aa937f06ea1b537f57
  17. #下载nginx镜像并启动完成,因为有加速器,所以非常快
  18. [root@ZhangSiming etcd]# docker ps -a
  19. CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  20. 04f73c49514f nginx "nginx -g 'daemon of…" 28 seconds ago Up 27 seconds 80/tcp eager_panini

六、Flannel容器集群网络部署

6.1基于Flannel实现不同主机容器间通信的工作原理

Overlay Network:覆盖网络,在基础网络上叠加的一种虚拟网络技术模式,该网络中的主机通过虚拟链路连接起来。
VXLAN:将源数据包封装到UDP中,并使用基础网络的IP/MAC作为外层报文头进行封装,然后在以太网上传输,到达目的地后由隧道端点解封装并将数据发送给目标地址。
Flannel:是Overlay网络的一种,也是将源数据包封装在另一种网络包里面进行路由转发和通信,目前已经支持UDP、VXLAN、AWS VPC和GCE路由等数据转发方式。

(图:flannel跨主机容器通信原理图)

1.主机A的container1要和主机B的container2通信时,数据包先发给主机A上的Docker0网桥;
2.Docker0把数据交给flannel.1接口,flanneld服务进行VXLAN封装后,从主机A的ens32网卡发到以太网中;
3.主机B的ens32网卡收到数据包后,由主机B的flanneld解封装,再经主机B的Docker0送达container2。
注意:
flanneld会先在Etcd中登记自己分到的子网,并用这个子网给本机容器分配IP;同时Etcd中保存着子网与宿主机的对应关系(相当于一张路由表),flanneld据此知道目标容器所在的宿主机,从而完成跨主机转发,这就是flannel覆盖网络。
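
等6.2部署完成后,可以直接到Etcd里查看flannel写入的子网租约,验证上面"每台主机一个子网"的说法(示意,证书与endpoints沿用本文约定):

    cd /opt/etcd/ssl
    /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
      --endpoints="https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379" \
      ls /coreos.com/network/subnets
    # 每个Node对应一个形如/coreos.com/network/subnets/172.17.63.0-24的key,
    # get该key可以看到这个子网对应的宿主机IP和VXLAN信息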

6.2部署flannel容器集群网络

6.2.1写入分配给flannel的子网到Etcd,供flanneld使用

  1. [root@ZhangSiming ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379" set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
  2. {"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
  3. #这是在Etcd中写入flanneld分配的子网,并指定封装类型为VXLAN
  4. [root@ZhangSiming ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379" get /coreos.com/network/config
  5. {"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
  6. #etcdctl get 查看一下刚刚设置的

6.2.2去GitHub下载flannel二进制包,并解压到/opt/kubernetes/bin/中

  1. [root@ZhangSiming opt]# mkdir -p kubernetes/{bin,ssl,cfg}
  2. [root@ZhangSiming opt]# cd kubernetes/bin/
  3. [root@ZhangSiming bin]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
  4. [root@ZhangSiming bin]# ls
  5. flannel-v0.11.0-linux-amd64.tar.gz
  6. [root@ZhangSiming bin]# tar xf flannel-v0.11.0-linux-amd64.tar.gz
  7. [root@ZhangSiming bin]# ls
  8. flanneld flannel-v0.11.0-linux-amd64.tar.gz mk-docker-opts.sh README.md

6.2.3编写flannel服务脚本,并配置docker服务脚本,使其使用flannel分配的子网

  1. [root@ZhangSiming kubernetes]# cat flannel.sh
  2. #!/bin/bash
  3. #Designed by Zhangsiming
  4. cat >/opt/kubernetes/cfg/flanneld << EOF
  5. FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379 \
  6. -etcd-cafile=/opt/etcd/ssl/ca.pem \
  7. -etcd-certfile=/opt/etcd/ssl/server.pem \
  8. -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  9. EOF
  10. #这个文件中的变量是flanneld服务配置文件中引用的环境文件,用于启动flanneld服务
  11. cat <<EOF >/usr/lib/systemd/system/flanneld.service
  12. [Unit]
  13. Description=Flanneld overlay address etcd agent
  14. After=network-online.target network.target
  15. Before=docker.service
  16. [Service]
  17. Type=notify
  18. EnvironmentFile=/opt/kubernetes/cfg/flanneld
  19. ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
  20. #引用FLANNEL_OPTIONS变量,不能加{}否则会出错
  21. ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
  22. #启动后(Post)生成一个子网文件/run/flannel/subnet.env供给Docker使用
  23. Restart=on-failure
  24. [Install]
  25. WantedBy=multi-user.target
  26. EOF
  27. #重新修改配置docker服务的配置文件,使其使用flannel分配的子网
  28. cat <<EOF >/usr/lib/systemd/system/docker.service
  29. [Unit]
  30. Description=Docker Application Container Engine
  31. Documentation=https://docs.docker.com
  32. After=network-online.target firewalld.service
  33. Wants=network-online.target
  34. [Service]
  35. Type=notify
  36. EnvironmentFile=/run/flannel/subnet.env
  37. ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
  38. #引用/run/flannel/subnet.env中的变量,其实就是使用flannel覆盖网络的子网来使得Docker间可以通信
  39. ExecReload=/bin/kill -s HUP \$MAINPID
  40. LimitNOFILE=infinity
  41. LimitNPROC=infinity
  42. LimitCORE=infinity
  43. TimeoutStartSec=0
  44. Delegate=yes
  45. KillMode=process
  46. Restart=on-failure
  47. StartLimitBurst=3
  48. StartLimitInterval=60s
  49. [Install]
  50. WantedBy=multi-user.target
  51. EOF
  52. systemctl daemon-reload
  53. #修改完systemctl自启动配置文件要daemon-reload
  54. systemctl enable flanneld
  55. systemctl restart flanneld
  56. systemctl restart docker
  57. #重新启动flannel服务和docker服务
  58. #我们不妨看一下flannel服务生成的/run/flannel/subnet.env文件
  59. [root@ZhangSiming kubernetes]# cat /run/flannel/subnet.env
  60. DOCKER_OPT_BIP="--bip=172.17.63.1/24"
  61. DOCKER_OPT_IPMASQ="--ip-masq=false"
  62. DOCKER_OPT_MTU="--mtu=1450"
  63. DOCKER_NETWORK_OPTIONS=" --bip=172.17.63.1/24 --ip-masq=false --mtu=1450"
  64. #这个变量就是分配的子网,被docker服务配置文件引用的

6.2.4 执行脚本,查看docker是否使用flannel覆盖网络

  1. [root@ZhangSiming kubernetes]# sh flannel.sh
  2. [root@ZhangSiming kubernetes]# ps -elf | grep flanneld
  3. 4 S root 2413 1 0 80 0 - 78971 futex_ 23:31 ? 00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
  4. 0 R root 3684 1195 0 80 0 - 28176 - 23:45 pts/0 00:00:00 grep --color=auto flanneld
  5. #启动flannel服务成功
  6. [root@ZhangSiming kubernetes]# ps -elf | grep dockerd
  7. 4 S root 2485 1 1 80 0 - 166806 do_wai 23:31 ? 00:00:01 /usr/bin/dockerd --bip=172.17.63.1/24 --ip-masq=false --mtu=1450
  8. #docker成功使用flannel分配的子网

Node02节点把相关文件scp过去后,同样执行脚本即可。

  1. [root@ZhangSiming kubernetes]# scp -r /opt/ root@192.168.17.133:/opt/
  2. #scp主要复制两部分内容:一是/opt/kubernetes目录,里面包含flannel二进制工具和生成服务配置的脚本;二是/opt/etcd/ssl下的自签证书,别忘了flanneld配置里指定的证书路径就是/opt/etcd/ssl
  3. root@192.168.17.133's password:
  4. etcd 100% 18MB 28.6MB/s 00:00
  5. etcdctl 100% 15MB 24.0MB/s 00:00
  6. etcd 100% 516 761.0KB/s 00:00
  7. ca-config.json 100% 269 3.6KB/s 00:00
  8. ca.csr 100% 956 260.6KB/s 00:00
  9. ca-csr.json 100% 194 154.4KB/s 00:00
  10. ca-key.pem 100% 1679 1.4MB/s 00:00
  11. ca.pem 100% 1265 2.1MB/s 00:00
  12. server.csr 100% 1013 768.5KB/s 00:00
  13. server-csr.json 100% 294 127.1KB/s 00:00
  14. server-key.pem 100% 1679 2.3MB/s 00:00
  15. server.pem 100% 1338 2.6MB/s 00:00
  16. container-selinux-2.10-2.el7.noarch.rpm 100% 28KB 1.1MB/s 00:00
  17. flannel-v0.11.0-linux-amd64.tar.gz 100% 9342KB 18.2MB/s 00:00
  18. flanneld 100% 34MB 33.6MB/s 00:01
  19. mk-docker-opts.sh 100% 2139 1.6MB/s 00:00
  20. README.md 100% 4300 55.0KB/s 00:00
  21. flanneld 100% 236 199.1KB/s 00:00
  22. .flannel.sh.swp 100% 12KB 11.3MB/s 00:00
  23. .flannel.sh.swn 100% 12KB 4.5MB/s 00:00
  24. flannel.sh 100% 1489 1.5MB/s 00:00
  25. #复制过来直接执行脚本
  26. [root@ZhangSiming kubernetes]# sh flannel.sh
  27. [root@ZhangSiming kubernetes]# ps -elf | grep flanneld
  28. 4 S root 2663 1 0 80 0 - 95884 futex_ 00:04 ? 00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
  29. 0 R root 2908 1770 0 80 0 - 28176 - 00:05 pts/0 00:00:00 grep --color=auto flanneld
  30. [root@ZhangSiming kubernetes]# ps -elf | grep dockerd
  31. 4 S root 2737 1 6 80 0 - 133445 futex_ 00:04 ? 00:00:01 /usr/bin/dockerd --bip=172.17.98.1/24 --ip-masq=false --mtu=1450
  32. 0 R root 2916 1770 0 80 0 - 28176 - 00:05 pts/0 00:00:00 grep --color=auto dockerd
  33. #到此所有节点覆盖flannel网络成功

6.3测试flannel网络下容器间的通信

部署成功之后,ifconfig看一下容器的网卡信息

  1. [root@ZhangSiming kubernetes]# ifconfig | grep -EA 1 flannel
  2. flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
  3. inet 172.17.98.0 netmask 255.255.255.255 broadcast 0.0.0.0
  4. [root@ZhangSiming kubernetes]# ifconfig | grep -EA 1 docker0
  5. docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
  6. inet 172.17.98.1 netmask 255.255.255.0 broadcast 172.17.98.255
  7. #可以看到flannel自己分配了个172.17.98.0网段,docker0和下面的容器都会使用这个网段的IP
  8. [root@ZhangSiming kubernetes]# route -n
  9. Kernel IP routing table
  10. Destination Gateway Genmask Flags Metric Ref Use Iface
  11. 0.0.0.0 192.168.17.2 0.0.0.0 UG 100 0 0 ens32
  12. 172.17.63.0 172.17.63.0 255.255.255.0 UG 0 0 0 flannel.1
  13. 172.17.98.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
  14. 192.168.17.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
  15. #并加入了路由表中通信
  16. [root@ZhangSiming kubernetes]# ifconfig | grep -EA 1 flannel
  17. flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
  18. inet 172.17.98.0 netmask 255.255.255.255 broadcast 0.0.0.0
  19. [root@ZhangSiming kubernetes]# ping 172.17.63.0
  20. PING 172.17.63.0 (172.17.63.0) 56(84) bytes of data.
  21. 64 bytes from 172.17.63.0: icmp_seq=1 ttl=64 time=0.259 ms
  22. 64 bytes from 172.17.63.0: icmp_seq=2 ttl=64 time=0.379 ms
  23. ^C
  24. --- 172.17.63.0 ping statistics ---
  25. 2 packets transmitted, 2 received, 0% packet loss, time 1000ms
  26. rtt min/avg/max/mdev = 0.259/0.319/0.379/0.060 ms
  27. #98网段的主机ping 63网段主机的flannel网关,可以ping通
  28. #在两个节点分别启动两个测试镜像,并记录IP
  29. [root@ZhangSiming kubernetes]# docker run -it busybox
  30. #busybox是测试镜像
  31. Unable to find image 'busybox:latest' locally
  32. latest: Pulling from library/busybox
  33. 57c14dd66db0: Pull complete
  34. Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
  35. Status: Downloaded newer image for busybox:latest
  36. / # ifconfig | grep -A 1 eth0
  37. eth0 Link encap:Ethernet HWaddr 02:42:AC:11:3F:02
  38. inet addr:172.17.63.2 Bcast:172.17.63.255 Mask:255.255.255.0
  39. / # ping 172.17.98.2
  40. #自己是63网段,ping98网段(另一个容器的)
  41. PING 172.17.98.2 (172.17.98.2): 56 data bytes
  42. 64 bytes from 172.17.98.2: seq=0 ttl=62 time=0.459 ms
  43. 64 bytes from 172.17.98.2: seq=1 ttl=62 time=2.668 ms
  44. #容器间也ping通,说明flannel网络已经覆盖了容器,成功
  45. #最后别忘了加入开机启动
  46. [root@ZhangSiming kubernetes]# systemctl enable flanneld

七、部署Master组件

部署Master组件是有一个顺序要求的,先部署apiserver,再部署controller-manager和scheduler。
配置组件流程和上面一致,就是:
1.自签证书准备
2.配置文件
3.systemd管理
4.启动组件
5.验证是否运行成功

7.1生成自签证书

  1. [root@ZhangSiming ~]# mkdir K8S/k8s-cert
  2. [root@ZhangSiming ~]# cd K8S/k8s-cert
  3. #建立一个专门的K8S证书
  4. [root@ZhangSiming k8s-cert]# mv ~/k8s-cert.sh .
  5. [root@ZhangSiming k8s-cert]# ls
  6. k8s-cert.sh
  7. #生成证书脚本,类似生成Etcd证书脚本
  8. [root@ZhangSiming k8s-cert]# cat k8s-cert.sh
  9. cat > ca-config.json <<EOF
  10. {
  11. "signing": {
  12. "default": {
  13. "expiry": "87600h"
  14. },
  15. "profiles": {
  16. "kubernetes": {
  17. "expiry": "87600h",
  18. "usages": [
  19. "signing",
  20. "key encipherment",
  21. "server auth",
  22. "client auth"
  23. ]
  24. }
  25. }
  26. }
  27. }
  28. EOF
  29. cat > ca-csr.json <<EOF
  30. {
  31. "CN": "kubernetes",
  32. "key": {
  33. "algo": "rsa",
  34. "size": 2048
  35. },
  36. "names": [
  37. {
  38. "C": "CN",
  39. "L": "Beijing",
  40. "ST": "Beijing",
  41. "O": "k8s",
  42. "OU": "System"
  43. #O、OU是用户、用户组验证
  44. }
  45. ]
  46. }
  47. EOF
  48. cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  49. #生成ca根证书
  50. #-----------------------
  51. cat > server-csr.json <<EOF
  52. {
  53. "CN": "kubernetes",
  54. "hosts": [
  55. "10.0.0.1",
  56. "127.0.0.1",
  57. "192.168.17.130",
  58. "192.168.17.131",
  59. "192.168.17.134",
  60. "192.168.17.135",
  61. "192.168.17.136",
  62. #上面5个IP依次为Master01、Master02、负载均衡主节点、VIP、负载均衡备节点的地址(同样注意:写入JSON文件时不要保留#注释)
  63. "kubernetes",
  64. "kubernetes.default",
  65. "kubernetes.default.svc",
  66. "kubernetes.default.svc.cluster",
  67. "kubernetes.default.svc.cluster.local"
  68. ],
  69. "key": {
  70. "algo": "rsa",
  71. "size": 2048
  72. },
  73. "names": [
  74. {
  75. "C": "CN",
  76. "L": "BeiJing",
  77. "ST": "BeiJing",
  78. "O": "k8s",
  79. "OU": "System"
  80. #O、OU是用户、用户组验证
  81. }
  82. ]
  83. }
  84. EOF
  85. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  86. #使用ca证书签发apiserver的证书
  87. #-----------------------
  88. cat > admin-csr.json <<EOF
  89. {
  90. "CN": "admin",
  91. "hosts": [],
  92. "key": {
  93. "algo": "rsa",
  94. "size": 2048
  95. },
  96. "names": [
  97. {
  98. "C": "CN",
  99. "L": "BeiJing",
  100. "ST": "BeiJing",
  101. "O": "system:masters",
  102. "OU": "System"
  103. #O、OU是用户、用户组验证
  104. }
  105. ]
  106. }
  107. EOF
  108. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  109. #使用ca证书签发admin证书(供kubectl管理员使用)
  110. #-----------------------
  111. cat > kube-proxy-csr.json <<EOF
  112. {
  113. "CN": "system:kube-proxy",
  114. "hosts": [],
  115. "key": {
  116. "algo": "rsa",
  117. "size": 2048
  118. },
  119. "names": [
  120. {
  121. "C": "CN",
  122. "L": "BeiJing",
  123. "ST": "BeiJing",
  124. "O": "k8s",
  125. "OU": "System"
  126. #O、OU是用户、用户组验证
  127. }
  128. ]
  129. }
  130. EOF
  131. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  132. #使用ca证书签发kube-proxy证书
  133. [root@ZhangSiming k8s-cert]# sh k8s-cert.sh &>/dev/null
  134. [root@ZhangSiming k8s-cert]# ls
  135. admin.csr ca.csr kube-proxy.csr server-csr.json
  136. admin-csr.json ca-csr.json kube-proxy-csr.json server-key.pem
  137. admin-key.pem ca-key.pem kube-proxy-key.pem server.pem
  138. admin.pem ca.pem kube-proxy.pem
  139. ca-config.json k8s-cert.sh server.csr
  140. #全部的证书生成成功

7.2部署Master组件

7.2.1部署apiserver

  1. [root@ZhangSiming K8S]# cd k8smaster-soft/
  2. [root@ZhangSiming k8smaster-soft]# ls
  3. apiserver.sh kubernetes-server-linux-amd64.tar.gz
  4. controller-manager.sh scheduler.sh
  5. [root@ZhangSiming k8smaster-soft]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
  6. [root@ZhangSiming k8smaster-soft]# tar xf kubernetes-server-linux-amd64.tar.gz
  7. #解压下载好的Kubernetes二进制包
  8. [root@ZhangSiming K8S]# ls kubernetes/
  9. addons kubernetes-src.tar.gz LICENSES server
  10. #addons是一些插件
  11. [root@ZhangSiming K8S]# ls kubernetes/server/bin/
  12. apiextensions-apiserver kube-controller-manager.tar
  13. cloud-controller-manager kubectl
  14. cloud-controller-manager.docker_tag kubelet
  15. cloud-controller-manager.tar kube-proxy
  16. hyperkube kube-proxy.docker_tag
  17. kubeadm kube-proxy.tar
  18. kube-apiserver kube-scheduler
  19. kube-apiserver.docker_tag kube-scheduler.docker_tag
  20. kube-apiserver.tar kube-scheduler.tar
  21. kube-controller-manager mounter
  22. kube-controller-manager.docker_tag
  23. #其中kube-apiserver、kube-controller-manager、kube-scheduler、kubectl是Master需要的,kubelet、kube-proxy稍后要分发给Node,这里一并拷贝到刚创建的/opt/kubernetes/bin目录中
  24. [root@ZhangSiming K8S]# cd kubernetes/server/bin/
  25. [root@ZhangSiming bin]# cp kubelet kubectl kube-proxy kube-scheduler kube-controller-manager kube-apiserver /opt/kubernetes/bin/
  26. [root@ZhangSiming bin]# ls /opt/kubernetes/bin/
  27. kube-apiserver kubectl kube-proxy
  28. kube-controller-manager kubelet kube-scheduler
  29. #拷贝命令工具成功
  30. #拷贝证书
  31. [root@ZhangSiming K8S]# cd k8s-cert/
  32. [root@ZhangSiming k8s-cert]# cp * /opt/kubernetes/ssl/
  33. [root@ZhangSiming k8s-cert]# ls /opt/kubernetes/ssl/
  34. admin.csr ca.csr kube-proxy.csr server-csr.json
  35. admin-csr.json ca-csr.json kube-proxy-csr.json server-key.pem
  36. admin-key.pem ca-key.pem kube-proxy-key.pem server.pem
  37. admin.pem ca.pem kube-proxy.pem
  38. ca-config.json k8s-cert.sh server.csr
  39. [root@ZhangSiming k8s-cert]# ls /opt/etcd/ssl
  40. ca-config.json ca-csr.json ca.pem server-csr.json server.pem
  41. ca.csr ca-key.pem server.csr server-key.pem
  42. #Kubernetes组件的证书和etcd的证书都在指定位置
  43. [root@ZhangSiming ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379" cluster-health
  44. member 3a6f8c78a708ea9 is healthy: got healthy result from https://192.168.17.132:2379
  45. member 5658b2b12e105fd0 is healthy: got healthy result from https://192.168.17.131:2379
  46. member 9d10fc7919c17197 is healthy: got healthy result from https://192.168.17.130:2379
  47. cluster is healthy
  48. #Etcd集群健康

下面开始生成apiserver的配置文件和服务配置文件。提醒:下列脚本里夹在续行符之间的中文#注释只是为了讲解,实际编写脚本时不要保留,否则会破坏参数拼接。

  1. [root@ZhangSiming k8smaster-soft]# cat apiserver.sh
  2. #!/bin/bash
  3. MASTER_ADDRESS=$1
  4. ETCD_SERVERS=$2
  5. #脚本要传入两个参数,一个是自身的IP,一个是Etcd集群的IP
  6. cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
  7. KUBE_APISERVER_OPTS="--logtostderr=true \\
  8. #错误日志开启
  9. --v=4 \\
  10. #错误日志级别为4
  11. --etcd-servers=${ETCD_SERVERS} \\
  12. #Etcd集群的IP
  13. --bind-address=${MASTER_ADDRESS} \\
  14. #绑定自身的masterIP
  15. --secure-port=6443 \\
  16. #安全端口6443
  17. --advertise-address=${MASTER_ADDRESS} \\
  18. --allow-privileged=true \\
  19. --service-cluster-ip-range=10.0.0.0/24 \\
  20. #Service虚拟IP(Cluster IP)的分配网段
  21. --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
  22. #支持验证插件
  23. --authorization-mode=RBAC,Node \\
  24. #安全验证模式
  25. --kubelet-https=true \\
  26. --enable-bootstrap-token-auth \\
  27. #token验证方式
  28. --token-auth-file=/opt/kubernetes/cfg/token.csv \\
  29. #这个文件要先生成,给token验证读取
  30. --service-node-port-range=30000-50000 \\
  31. #Service NodePort类型可分配的端口范围
  32. --tls-cert-file=/opt/kubernetes/ssl/server.pem \\
  33. --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
  34. --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
  35. --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  36. --etcd-cafile=/opt/etcd/ssl/ca.pem \\
  37. --etcd-certfile=/opt/etcd/ssl/server.pem \\
  38. --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  39. #下面都是证书
  40. EOF
  41. cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
  42. [Unit]
  43. Description=Kubernetes API Server
  44. Documentation=https://github.com/kubernetes/kubernetes
  45. [Service]
  46. EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
  47. #配置文件位置
  48. ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
  49. #启动apiserver服务引用的参数,就是配置文件里指定的
  50. Restart=on-failure
  51. [Install]
  52. WantedBy=multi-user.target
  53. EOF
  54. systemctl daemon-reload
  55. #system服务生效
  56. systemctl enable kube-apiserver
  57. systemctl restart kube-apiserver
  58. #生成token-auth文件
  59. [root@ZhangSiming k8smaster-soft]# touch /opt/kubernetes/cfg/token.csv
  60. [root@ZhangSiming k8smaster-soft]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  61. 5e479c0f03a3261a0e15bf05ff272831
  62. [root@ZhangSiming k8smaster-soft]# echo "5e479c0f03a3261a0e15bf05ff272831" > /opt/kubernetes/cfg/token.csv
  63. [root@ZhangSiming k8smaster-soft]# vim /opt/kubernetes/cfg/token.csv
  64. [root@ZhangSiming k8smaster-soft]# cat /opt/kubernetes/cfg/token.csv
  65. 5e479c0f03a3261a0e15bf05ff272831,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  66. #这个后面部署Node会用到,暂时不用管。
  67. #执行脚本
  68. [root@ZhangSiming k8smaster-soft]# sh apiserver.sh 192.168.17.130 https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379
  69. #传入两个参数
  70. [root@ZhangSiming k8s-cert]# ps -elf | grep api
  71. 4 S root 2536 1 77 80 0 - 71336 futex_ 20:11 ? 00:00:04 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379 --bind-address=192.168.17.130 --secure-port=6443 --advertise-address=192.168.17.130 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
  72. 0 R root 2544 2324 0 80 0 - 28176 - 20:11 pts/1 00:00:00 grep --color=auto api
  73. #成功启动apiserver服务

7.2.2部署kube-controller-manager

  1. [root@ZhangSiming k8smaster-soft]# netstat -antup | grep 8080
  2. tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 2715/kube-apiserver
  3. #8080是apiserver的非安全端口,controller-manager和scheduler在本机都通过它连接apiserver
  4. #生成controller-manager配置文件和服务配置文件的脚本
  5. [root@ZhangSiming k8smaster-soft]# cat controller-manager.sh
  6. #!/bin/bash
  7. MASTER_ADDRESS=$1
  8. #由于和apiserver都在本地,所以传入127.0.0.1
  9. cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
  10. KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
  11. --v=4 \\
  12. --master=${MASTER_ADDRESS}:8080 \\
  13. #和apiserver通信地址是8080,非安全端口
  14. --leader-elect=true \\
  15. #高可用选举功能(多个controller-manager时)
  16. --address=127.0.0.1 \\
  17. --service-cluster-ip-range=10.0.0.0/24 \\
  18. #虚拟子网,和apiserver一致
  19. --cluster-name=kubernetes \\
  20. --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
  21. --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  22. #证书
  23. --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
  24. --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  25. --experimental-cluster-signing-duration=87600h0m0s"
  26. EOF
  27. cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
  28. [Unit]
  29. Description=Kubernetes Controller Manager
  30. Documentation=https://github.com/kubernetes/kubernetes
  31. [Service]
  32. EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
  33. ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
  34. Restart=on-failure
  35. [Install]
  36. WantedBy=multi-user.target
  37. EOF
  38. #服务配置文件和apiserver类似
  39. systemctl daemon-reload
  40. systemctl enable kube-controller-manager
  41. systemctl restart kube-controller-manager
  42. #执行脚本
  43. [root@ZhangSiming k8smaster-soft]# sh controller-manager.sh 127.0.0.1
  44. #传入127.0.0.1
  45. Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
  46. [root@ZhangSiming k8smaster-soft]# ps -elf | grep controller-manager
  47. 4 S root 2804 1 3 80 0 - 34587 ep_pol 20:25 ? 00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
  48. 0 S root 2811 2324 0 80 0 - 28176 pipe_w 20:26 pts/1 00:00:00 grep --color=auto controller-manager
  49. #controller-manager服务启动成功

7.2.3部署scheduler组件

  1. [root@ZhangSiming k8smaster-soft]# cat scheduler.sh
  2. #!/bin/bash
  3. MASTER_ADDRESS=$1
  4. cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
  5. KUBE_SCHEDULER_OPTS="--logtostderr=true \\
  6. --v=4 \\
  7. #开启错误日志,等级为4
  8. --master=${MASTER_ADDRESS}:8080 \\
  9. #scheduler与apiserver连接也是通过8080端口
  10. --leader-elect"
  11. #高可用选举功能开启
  12. EOF
  13. cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
  14. [Unit]
  15. Description=Kubernetes Scheduler
  16. Documentation=https://github.com/kubernetes/kubernetes
  17. [Service]
  18. EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
  19. ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
  20. Restart=on-failure
  21. [Install]
  22. WantedBy=multi-user.target
  23. EOF
  24. systemctl daemon-reload
  25. systemctl enable kube-scheduler
  26. systemctl restart kube-scheduler
  27. #执行脚本
  28. [root@ZhangSiming k8smaster-soft]# sh scheduler.sh 127.0.0.1
  29. Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
  30. [root@ZhangSiming k8smaster-soft]# ps -elf | grep scheduler
  31. 4 S root 2873 1 0 80 0 - 11075 futex_ 20:31 ? 00:00:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
  32. #scheduler服务启动成功

验证apiserver、controller-manager、scheduler连接状况

  1. [root@ZhangSiming k8smaster-soft]# /opt/kubernetes/bin/kubectl get cs
  2. NAME STATUS MESSAGE ERROR
  3. controller-manager Healthy ok
  4. scheduler Healthy ok
  5. etcd-1 Healthy {"health":"true"}
  6. etcd-2 Healthy {"health":"true"}
  7. etcd-0 Healthy {"health":"true"}
  8. #三者连接成功,如果报错,优先查看日志/var/log/messages
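
除了kubectl get cs,也可以直接访问apiserver的8080非安全端口和查看日志来确认组件状态(示意):

    curl -s http://127.0.0.1:8080/version           # 返回版本JSON说明apiserver正常
    curl -s http://127.0.0.1:8080/healthz           # 返回ok
    journalctl -u kube-apiserver --no-pager -n 20   # 异常时查看最近日志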

八、部署Node组件

部署Node组件流程:(证书刚刚部署Master组件的时候就签发好了)
1.将kubelet-bootstrap用户绑定到系统集群角色
2.创建kubeconfig文件
3.部署kubelet、kube-proxy组件

添加Node节点到K8S集群流程

(图:Node节点加入K8S集群的流程图)

1.kubelet启动时,先确认集群中已存在kubelet-bootstrap这个用户(需提前绑定到系统集群角色);
2.读取并验证bootstrap.kubeconfig配置文件;
3.用其中的apiserver地址、token和CA证书连接apiserver,发起证书签署请求;
4.Master端通过kubectl get csr看到该请求,管理员批准(approve)后由Master签发证书下发给Node,该Node即被放行并加入K8S集群。

8.1将kubelet-bootstrap用户绑定到系统集群角色

  1. #前提:先把kubelet、kube-proxy、kubectl二进制文件和相关证书拷贝到Node服务器
  2. [root@ZhangSiming kubernetes]# cat cfg/token.csv
  3. 5e479c0f03a3261a0e15bf05ff272831,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  4. #token中我们指定了用户kubelet-bootstrap,但是我们需要真正的将它绑定到集群角色才可以
  5. [root@ZhangSiming kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  6. clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
  7. #必须在Master节点也就是K8S集群生成kubelet-bootstrap集群角色,否则后面启动kubelet服务会失败。

8.2创建kubeconfig文件

  1. #以下操作都是在Node节点
  2. [root@ZhangSiming ~]# echo "PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
  3. [root@ZhangSiming ~]# source /etc/profile
  4. [root@ZhangSiming ~]# which kubectl
  5. /opt/kubernetes/bin/kubectl
  6. #创建环境变量
  7. #创建kubeconfig脚本(注意:命令续行中的#注释仅为讲解,实际脚本中要去掉)
  8. [root@ZhangSiming ~]# cat kubeconfig.sh
  9. #!/bin/bash
  10. APISERVER=$1
  11. SSL_DIR=$2
  12. #传入两个参数,一个是apiserver地址,一个是证书目录
  13. export KUBE_APISERVER="https://$APISERVER:6443"
  14. # 设置集群参数
  15. kubectl config set-cluster kubernetes \
  16. --certificate-authority=$SSL_DIR/ca.pem \
  17. --embed-certs=true \
  18. #将证书写入kubeconfig
  19. --server=${KUBE_APISERVER} \
  20. --kubeconfig=bootstrap.kubeconfig
  21. #生成文件名
  22. # 设置客户端认证参数
  23. kubectl config set-credentials kubelet-bootstrap \
  24. --token=5e479c0f03a3261a0e15bf05ff272831 \
  25. #token复制我们刚刚/opt/kubernetes/cfg/token.csv中生成的token值
  26. --kubeconfig=bootstrap.kubeconfig
  27. # 设置上下文参数
  28. kubectl config set-context default \
  29. --cluster=kubernetes \
  30. --user=kubelet-bootstrap \
  31. #添加的用户角色
  32. --kubeconfig=bootstrap.kubeconfig
  33. # 设置默认上下文
  34. kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  35. #----------------------
  36. # 创建kube-proxy kubeconfig文件
  37. kubectl config set-cluster kubernetes \
  38. --certificate-authority=$SSL_DIR/ca.pem \
  39. --embed-certs=true \
  40. --server=${KUBE_APISERVER} \
  41. --kubeconfig=kube-proxy.kubeconfig
  42. kubectl config set-credentials kube-proxy \
  43. --client-certificate=$SSL_DIR/kube-proxy.pem \
  44. --client-key=$SSL_DIR/kube-proxy-key.pem \
  45. --embed-certs=true \
  46. --kubeconfig=kube-proxy.kubeconfig
  47. kubectl config set-context default \
  48. --cluster=kubernetes \
  49. --user=kube-proxy \
  50. --kubeconfig=kube-proxy.kubeconfig
  51. kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  52. #脚本一共生成了两个文件一个是bootstrap.kubeconfig,一个是kube-proxy.kubeconfig
  53. #把脚本移动到/opt/kubernetes/kubeconfig中执行
  54. [root@ZhangSiming kubeconfig]# pwd
  55. /opt/kubernetes/kubeconfig
  56. [root@ZhangSiming kubeconfig]# ls
  57. kubeconfig.sh
  58. [root@ZhangSiming kubeconfig]# sh kubeconfig.sh 192.168.17.130 /opt/kubernetes/ssl/
  59. #这里先做一个单Master的
  60. Cluster "kubernetes" set.
  61. User "kubelet-bootstrap" set.
  62. Context "default" created.
  63. Switched to context "default".
  64. Cluster "kubernetes" set.
  65. User "kube-proxy" set.
  66. Context "default" created.
  67. Switched to context "default".
  68. [root@ZhangSiming kubeconfig]# ls
  69. bootstrap.kubeconfig kubeconfig.sh kube-proxy.kubeconfig
  70. #两个文件都成功生成
  71. [root@ZhangSiming kubeconfig]# mv bootstrap.kubeconfig kube-proxy.kubeconfig ../cfg/
  72. #把kubeconfig移动到cfg下
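
可以用kubectl config view确认生成的kubeconfig内容是否符合预期(示意):

    kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
    # 重点确认server指向https://192.168.17.130:6443、用户为kubelet-bootstrap、token与token.csv一致;
    # 证书内容默认显示为DATA+OMITTED/REDACTED,属于正常现象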

8.3部署kubelet组件

  1. [root@ZhangSiming ~]# cat kubelet.sh
  2. #!/bin/bash
  3. NODE_ADDRESS=$1
  4. DNS_SERVER_IP=${2:-"10.0.0.2"}
  5. #传入Node节点IP和集群DNS虚拟地址(也可以自己搭建一个DNS)
  6. cat <<EOF >/opt/kubernetes/cfg/kubelet
  7. KUBELET_OPTS="--logtostderr=true \\
  8. --v=4 \\
  9. #开启错误日志,级别为4
  10. --address=${NODE_ADDRESS} \\
  11. #Node节点IP
  12. --hostname-override=${NODE_ADDRESS} \\
  13. #K8S可以看到的Node标识,这里也用IP表示了
  14. --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
  15. --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
  16. --config=/opt/kubernetes/cfg/kubelet.config \\
  17. #三个config文件,有两个刚刚生成了,有一个在下面生成
  18. --cert-dir=/opt/kubernetes/ssl \\
  19. --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  20. EOF
  21. cat <<EOF >/opt/kubernetes/cfg/kubelet.config
  22. kind: KubeletConfiguration
  23. apiVersion: kubelet.config.k8s.io/v1beta1
  24. address: ${NODE_ADDRESS}
  25. port: 10250
  26. #节点地址和端口
  27. cgroupDriver: cgroupfs
  28. clusterDNS:
  29. - ${DNS_SERVER_IP}
  30. #使用的DNS
  31. clusterDomain: cluster.local.
  32. failSwapOn: false
  33. EOF
  34. #服务的配置文件
  35. cat <<EOF >/usr/lib/systemd/system/kubelet.service
  36. [Unit]
  37. Description=Kubernetes Kubelet
  38. After=docker.service
  39. Requires=docker.service
  40. [Service]
  41. EnvironmentFile=/opt/kubernetes/cfg/kubelet
  42. ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
  43. Restart=on-failure
  44. KillMode=process
  45. [Install]
  46. WantedBy=multi-user.target
  47. EOF
  48. systemctl daemon-reload
  49. systemctl enable kubelet
  50. systemctl restart kubelet
  51. #执行脚本
  52. [root@ZhangSiming ~]# sh kubelet.sh 192.168.17.132 10.0.0.2
  53. #脚本执行成功
  54. [root@ZhangSiming ~]# systemctl start kubelet
  55. [root@ZhangSiming ~]# ps -elf | grep kubelet
  56. 4 S root 34950 1 1 80 0 - 94480 futex_ 00:46 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.17.132 --hostname-override=192.168.17.132 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
  57. #成功启动kubelet服务
  58. #看一下kubelet配置文件
  59. [root@ZhangSiming cfg]# cat kubelet
  60. KUBELET_OPTS="--logtostderr=true \
  61. --v=4 \
  62. --address=192.168.17.132 \
  63. --hostname-override=192.168.17.132 \
  64. --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  65. --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  66. --config=/opt/kubernetes/cfg/kubelet.config \
  67. --cert-dir=/opt/kubernetes/ssl \
  68. --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  69. #--pod-infra-container-image指定的是pause基础镜像,每个Pod都会先运行它;该镜像地址必须能够拉取,否则Pod会创建失败

8.4部署kube-proxy组件

  1. [root@ZhangSiming ~]# cat proxy.sh
  2. #!/bin/bash
  3. NODE_ADDRESS=$1
  4. #传入当前节点IP
  5. cat <<EOF >/opt/kubernetes/cfg/kube-proxy
  6. KUBE_PROXY_OPTS="--logtostderr=true \\
  7. --v=4 \\
  8. --hostname-override=${NODE_ADDRESS} \\
  9. #这个K8S显示标识要和kubelet对应
  10. --cluster-cidr=10.0.0.0/24 \\
  11. --proxy-mode=ipvs \\
  12. #使用ipvs传递模式,效率更高
  13. --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  14. #这个刚刚已经生成了
  15. EOF
  16. cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
  17. [Unit]
  18. Description=Kubernetes Proxy
  19. After=network.target
  20. [Service]
  21. EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
  22. ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
  23. Restart=on-failure
  24. [Install]
  25. WantedBy=multi-user.target
  26. EOF
  27. systemctl daemon-reload
  28. systemctl enable kube-proxy
  29. systemctl restart kube-proxy
  30. #执行脚本
  31. [root@ZhangSiming ~]# sh proxy.sh 192.168.17.132
  32. Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
  33. [root@ZhangSiming ~]# ps -elf | grep kube-proxy
  34. 4 S root 35710 1 1 80 0 - 10397 futex_ 00:55 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.17.132 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
  35. #成功启动服务

将节点加入K8S集群

  1. #csr检验是否有证书请求
  2. [root@ZhangSiming kubernetes]# kubectl get csr
  3. NAME AGE REQUESTOR CONDITION
  4. node-csr-E9_6_jyQXKprDDIPiYQTE-AmoRKnBZEWfNeDzEwMde8 13m kubelet-bootstrap Pending
  5. [root@ZhangSiming kubernetes]# kubectl certificate approve node-csr-E9_6_jyQXKprDDIPiYQTE-AmoRKnBZEWfNeDzEwMde8
  6. #将节点加入K8S集群
  7. certificatesigningrequest.certificates.k8s.io/node-csr-E9_6_jyQXKprDDIPiYQTE-AmoRKnBZEWfNeDzEwMde8 approved
  8. [root@ZhangSiming kubernetes]# kubectl get node
  9. NAME STATUS ROLES AGE VERSION
  10. 192.168.17.132 Ready <none> 8s v1.12.1
  11. #成功加入
  12. #同样加入另外一个节点
  13. [root@ZhangSiming kubernetes]# kubectl get node
  14. NAME STATUS ROLES AGE VERSION
  15. 192.168.17.132 Ready <none> 10m v1.12.1
  16. 192.168.17.133 Ready <none> 4s v1.12.1
  17. #成功加入两个节点到K8S集群,完成了单Master节点多个Node节点的K8S集群

九、部署一个Nginx测试示例

Pod是K8S的最小部署单元,由一个或多个容器组成,是具体承载业务的地方。
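
为了直观理解Pod的结构,先看一个示意性的最小Pod清单(声明式写法,名称nginx-demo为演示假设;本节实际操作仍用下面的kubectl run):

    # nginx-demo.yaml(示意)
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-demo          # 示意用的Pod名称
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

    # 创建并查看
    kubectl create -f nginx-demo.yaml
    kubectl get pods -o wide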

  1. [root@ZhangSiming ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. 192.168.17.132 Ready <none> 3d9h v1.12.1
  4. 192.168.17.133 Ready <none> 3d9h v1.12.1
  5. #K8S集群两个节点都已经准备好了
  6. [root@ZhangSiming ~]# kubectl get pods
  7. No resources found.
  8. #现在还没有部署pod
  9. [root@ZhangSiming ~]# kubectl run nginx --image=nginx
  10. #启动一个Nginx镜像作为pod
  11. kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
  12. deployment.apps/nginx created
  13. [root@ZhangSiming ~]# kubectl get pods -o wide
  14. #-o wide表示详细地看pods信息
  15. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
  16. nginx-dbddb74b8-6882f 1/1 Running 0 4m56s 172.17.22.2 192.168.17.132 <none>
  17. #可以看到部署了一个pod,调度器把pod分配到了192.168.17.132节点
  18. [root@ZhangSiming ~]# kubectl get all
  19. NAME READY STATUS RESTARTS AGE
  20. pod/nginx-dbddb74b8-6882f 1/1 Running 0 23s
  21. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  22. service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d14h
  23. NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
  24. deployment.apps/nginx 1 1 1 1 23s
  25. #我们刚刚的命令创建了一个deployment、一个replicaset和一个pod(上面的service/kubernetes是集群自带的默认Service,并非本次创建)。
  26. NAME DESIRED CURRENT READY AGE
  27. replicaset.apps/nginx-dbddb74b8 1 1 1 23s
  28. [root@ZhangSiming ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
  29. #将nginx pod的80端口(target-port)暴露为Service的80端口(port),类型为NodePort,外部访问时使用随机分配的节点端口
  30. service/nginx exposed
  31. [root@ZhangSiming ~]# kubectl get svc nginx
  32. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  33. nginx NodePort 10.0.0.211 <none> 80:30792/TCP 10s
  33. #可以看到Service:集群内部(Node上)用Cluster IP 10.0.0.211:80访问;集群外部(PC浏览器)用任一Node的IP:30792访问

访问测试

  1. [root@ZhangSiming ~]# curl -I 10.0.0.211:80
  2. HTTP/1.1 200 OK
  3. Server: nginx/1.15.8
  4. Date: Sun, 24 Feb 2019 03:14:05 GMT
  5. Content-Type: text/html
  6. Content-Length: 612
  7. Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
  8. Connection: keep-alive
  9. ETag: "5c21fedf-264"
  10. Accept-Ranges: bytes
  11. [root@ZhangSiming ~]# curl -I 10.0.0.211:80
  12. HTTP/1.1 200 OK
  13. Server: nginx/1.15.8
  14. Date: Sun, 24 Feb 2019 03:14:05 GMT
  15. Content-Type: text/html
  16. Content-Length: 612
  17. Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
  18. Connection: keep-alive
  19. ETag: "5c21fedf-264"
  20. Accept-Ranges: bytes
  21. #在两个Node节点上分别curl该Cluster IP,均访问成功

(图:浏览器通过Node IP:30792访问Nginx测试页面)

  1. [root@ZhangSiming ~]# tail -3 /opt/kubernetes/cfg/kubelet.config
  2. authentication:
  3. anonymous:
  4. enabled: true
  5. #在Node节点的kubelet.config配置文件末尾加上上述3行,表示kubelet允许匿名用户访问
  6. [root@ZhangSiming ~]# systemctl restart kubelet
  7. [root@ZhangSiming ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
  8. #绑定system:anonymous角色到系统最大权限,用于访问pod的日志
  9. clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
  10. [root@ZhangSiming ~]# kubectl logs nginx-dbddb74b8-6882f
  11. #查看nginxpod的日志
  12. 10.0.0.211 - - [24/Feb/2019:02:55:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
  13. 172.17.52.0 - - [24/Feb/2019:02:55:34 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
  14. 172.17.52.0 - - [24/Feb/2019:02:57:56 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36" "-"
  15. 2019/02/24 02:57:56 [error] 6#6: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.52.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.17.133:30792", referrer: "http://192.168.17.133:30792/"
  16. 172.17.52.0 - - [24/Feb/2019:02:57:56 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.17.133:30792/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36" "-"

测试实例至此部署成功。

十、部署K8S的Web UI(Dashboard)

10.1下载K8S的Web UI部署yaml文件

  1. [root@ZhangSiming dashboard]# pwd
  2. /root/K8S/kubernetes/cluster/addons/dashboard
  3. [root@ZhangSiming dashboard]# ls
  4. dashboard-configmap.yaml dashboard-secret.yaml OWNERS
  5. dashboard-controller.yaml dashboard-service.yaml README.md
  6. dashboard-rbac.yaml MAINTAINERS.md
  7. #这里面都是部署K8S WebUI(Dashboard)用的yaml文件。yaml清单把多个资源的定义集中写在一个文件里,声明式地一次性创建,便于部署与修改
  8. [root@ZhangSiming dashboard]# cat dashboard-controller.yaml
  9. apiVersion: v1
  10. kind: ServiceAccount
  11. metadata:
  12. labels:
  13. k8s-app: kubernetes-dashboard
  14. addonmanager.kubernetes.io/mode: Reconcile
  15. name: kubernetes-dashboard
  16. namespace: kube-system
  17. ---
  18. apiVersion: apps/v1
  19. kind: Deployment
  20. metadata:
  21. name: kubernetes-dashboard
  22. namespace: kube-system
  23. labels:
  24. k8s-app: kubernetes-dashboard
  25. kubernetes.io/cluster-service: "true"
  26. addonmanager.kubernetes.io/mode: Reconcile
  27. spec:
  28. selector:
  29. matchLabels:
  30. k8s-app: kubernetes-dashboard
  31. template:
  32. metadata:
  33. labels:
  34. k8s-app: kubernetes-dashboard
  35. annotations:
  36. scheduler.alpha.kubernetes.io/critical-pod: ''
  37. seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
  38. spec:
  39. priorityClassName: system-cluster-critical
  40. containers:
  41. - name: kubernetes-dashboard
  42. image: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64
  43. #这里替换为上面这个地址,因为默认的镜像地址是国外的,国内不能访问
  44. resources:
  45. limits:
  46. cpu: 100m
  47. memory: 300Mi
  48. requests:
  49. cpu: 50m
  50. memory: 100Mi
  51. ports:
  52. - containerPort: 8443
  53. protocol: TCP
  54. args:
  55. # PLATFORM-SPECIFIC ARGS HERE
  56. - --auto-generate-certificates
  57. volumeMounts:
  58. - name: kubernetes-dashboard-certs
  59. mountPath: /certs
  60. - name: tmp-volume
  61. mountPath: /tmp
  62. livenessProbe:
  63. httpGet:
  64. scheme: HTTPS
  65. path: /
  66. port: 8443
  67. initialDelaySeconds: 30
  68. timeoutSeconds: 30
  69. volumes:
  70. - name: kubernetes-dashboard-certs
  71. secret:
  72. secretName: kubernetes-dashboard-certs
  73. - name: tmp-volume
  74. emptyDir: {}
  75. serviceAccountName: kubernetes-dashboard
  76. tolerations:
  77. - key: "CriticalAddonsOnly"
  78. operator: "Exists"


  1. #create所有的yaml文件,K8SWebUIpod就部署好了
  2. [root@ZhangSiming dashboard]# kubectl create -f dashboard-configmap.yaml
  3. configmap/kubernetes-dashboard-settings created
  4. [root@ZhangSiming dashboard]# kubectl create -f dashboard-controller.yaml
  5. serviceaccount/kubernetes-dashboard created
  6. deployment.apps/kubernetes-dashboard created
  7. [root@ZhangSiming dashboard]# kubectl create -f dashboard-rbac.yaml
  8. role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  9. rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  10. [root@ZhangSiming dashboard]# kubectl create -f dashboard-secret.yaml
  11. secret/kubernetes-dashboard-certs created
  12. secret/kubernetes-dashboard-key-holder created
  13. [root@ZhangSiming dashboard]# kubectl get pods -n kube-system -o wide
  14. #由于是系统组件,所以要用get pods -n kube-system方式查看
  15. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
  16. kubernetes-dashboard-5f5bfdc89f-wf5sw 1/1 Running 0 2m59s 172.17.52.2 192.168.17.133 <none>
  17. #开放外部访问service
  18. [root@ZhangSiming dashboard]# tail -6 dashboard-service.yaml
  19.   type: NodePort
  20.   #NodePort类型,允许通过Node的IP从集群外访问
  21.   selector:
  22.     k8s-app: kubernetes-dashboard
  23.   ports:
  24.   - port: 443
  25.     #Service(Cluster IP)的端口为443
  26.     targetPort: 8443
  27.     #转发到pod容器的8443端口;NodePort还会随机分配一个节点端口(下面为45294)
  28. [root@ZhangSiming dashboard]# kubectl create -f dashboard-service.yaml
  29. service/kubernetes-dashboard created
  30. [root@ZhangSiming dashboard]# kubectl get -n kube-system svc
  31. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  32. kubernetes-dashboard NodePort 10.0.0.161 <none> 443:45294/TCP 5m14s
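With the Service exposed as a NodePort, the dashboard is reachable on any Node IP at the allocated port (45294 in the output above; the port is assigned randomly unless nodePort is pinned in the yaml). A quick reachability check from the shell before switching to the browser:

```bash
# -k is required because the dashboard serves a self-signed certificate;
# the IP is the Node running the pod, the port is the NodePort shown above
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.17.133:45294/
# "200" means the NodePort -> Service -> pod chain is working
```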

image_1d4ere0c91u891rkjg0rbpt2up13.png-58.5kB

Note: open the dashboard with Safari or Firefox; Chrome refuses the dashboard's self-signed certificate, so the page will not load there.

1. #We log in to the dashboard with token authentication
  2. [root@ZhangSiming kubernetes]# cat k8s-admin.yaml
  3. apiVersion: v1
  4. kind: ServiceAccount
5. #Define a ServiceAccount for the Web UI and, below, bind it to the built-in cluster-admin role, the highest level of privilege
  6. metadata:
  7. name: dashboard-admin
  8. namespace: kube-system
  9. ---
  10. kind: ClusterRoleBinding
  11. apiVersion: rbac.authorization.k8s.io/v1beta1
  12. metadata:
  13. name: dashboard-admin
  14. subjects:
  15. - kind: ServiceAccount
  16. name: dashboard-admin
  17. namespace: kube-system
  18. roleRef:
  19. kind: ClusterRole
  20. name: cluster-admin
  21. apiGroup: rbac.authorization.k8s.io
  22. [root@ZhangSiming kubernetes]# kubectl create -f k8s-admin.yaml
  23. serviceaccount/dashboard-admin created
  24. clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
  25. [root@ZhangSiming kubernetes]# kubectl get secret -n kube-system
  26. NAME TYPE DATA AGE
  27. dashboard-admin-token-h4dht kubernetes.io/service-account-token 3 14s
  28. default-token-h6kgb kubernetes.io/service-account-token 3 3d15h
  29. kubernetes-dashboard-certs Opaque 0 31m
  30. kubernetes-dashboard-key-holder Opaque 2 31m
  31. kubernetes-dashboard-token-5mk8p kubernetes.io/service-account-token 3 31m
  32. [root@ZhangSiming kubernetes]# kubectl describe secret -n kube-system dashboard-admin-token-h4dht
33. #Look at the value stored in dashboard-admin-token-h4dht
  34. Name: dashboard-admin-token-h4dht
  35. Namespace: kube-system
  36. Labels: <none>
  37. Annotations: kubernetes.io/service-account.name: dashboard-admin
  38. kubernetes.io/service-account.uid: 4917ffb2-37eb-11e9-9874-000c29192b70
  39. Type: kubernetes.io/service-account-token
  40. Data
  41. ====
  42. ca.crt: 1359 bytes
  43. namespace: 11 bytes
  44. token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taDRkaHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDkxN2ZmYjItMzdlYi0xMWU5LTk4NzQtMDAwYzI5MTkyYjcwIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.AS1DzFtgZhQsy-Ns4U1-e8ffhbEKW20BM0rkwKS1EL_p-ewdX1BFIleIVDdykdNQk1coiY3gzciCUpD1qHQ92Q_p-gbLY9Lq8C4u8DTtC07L_NIf2H178-HdjjQc_LDlIU0oJ1MQ5iT51RV4VZHIe0dtK_kA2162OxB45lepqVpDSBaBDi1SARujRiOtqYkd6KD9GygSFDtvb0zB7GmpZM7lTEy0g9CXSzRwg6K1EUWpAeOjk3qHbJN55AGWemtneSYA2Vi_Z84sSi9B_OFShqAb-Lo0ol9b81NNE15s03zlttkC59BA2QRsY9_aK5iQ2OUBkirwyyzRQShMllZ9rA
45. #Copy the token above, use it on the browser login page, and you are in the K8S Web UI

image_1d4ervtda8ago3i1r4jeh01uml1g.png-59kB

We are now in the Web UI; note that the token must be pasted as one continuous line with no line breaks in the middle.
The K8S Web UI is used mostly for debugging; it sees little use in production.
It lets you browse pods per namespace (a namespace is essentially a virtual cluster): our Web UI pod lives in the kube-system namespace, the nginx pods live in default, and you can switch between them to view logs, open a shell into a container, and so on.
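Because the token has to be re-pasted on every login, a small helper that prints it in one step is convenient. This is just a convenience sketch built from the commands already used above (the secret name prefix dashboard-admin-token- comes from the dashboard-admin ServiceAccount):

```bash
# Print the dashboard-admin login token in one go
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/ {print $1}') \
  | awk '/^token:/ {print $2}'
```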

十一、Deploying a highly available K8S Master architecture

In a K8S cluster it is mainly the Master that needs high availability: the Nodes all share the overlay network, so even if one Node fails the service can still be reached through the surviving Nodes (pods are rescheduled automatically), and there is no single point of failure there.
Master high availability essentially comes down to the apiserver, because kube-scheduler and kube-controller-manager already provide their own HA through leader-election flags.
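The statement that scheduler and controller-manager take care of their own high availability refers to their leader-election flag; with two Masters running, only the elected leader is active at any moment. You can confirm the flag is set (assuming the config files created earlier live under /opt/kubernetes/cfg; adjust the paths if your layout differs):

```bash
# Both components should carry --leader-elect=true so that only one
# instance per component is active across the two Masters
grep -H 'leader-elect' /opt/kubernetes/cfg/kube-controller-manager \
                       /opt/kubernetes/cfg/kube-scheduler
```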

1. #First bring up a second Master node configured exactly like Master01
  2. [root@ZhangSiming cfg]# kubectl get cs
  3. NAME STATUS MESSAGE ERROR
  4. controller-manager Healthy ok
  5. scheduler Healthy ok
  6. etcd-0 Healthy {"health":"true"}
  7. etcd-1 Healthy {"health":"true"}
  8. etcd-2 Healthy {"health":"true"}
  9. [root@ZhangSiming cfg]# kubectl get node
  10. NAME STATUS ROLES AGE VERSION
  11. 192.168.17.132 Ready <none> 3d11h v1.12.1
  12. 192.168.17.133 Ready <none> 3d11h v1.12.1
13. #Master02 is not yet the apiserver the Nodes connect to, but since it talks to the same Etcd cluster it reads the same data and therefore already lists both Nodes
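"A Master node identical to Master01" in practice means copying the binaries, certificates and configs over and fixing the addresses. A minimal sketch of that copy, assuming the /opt/kubernetes layout used throughout this document (the file names and the sed pattern are illustrative; check them against your actual configs):

```bash
# On Master01: copy the kubernetes tree and the systemd unit files to Master02
scp -r /opt/kubernetes root@192.168.17.131:/opt/
scp /usr/lib/systemd/system/kube-*.service root@192.168.17.131:/usr/lib/systemd/system/

# On Master02: make the apiserver bind/advertise its own IP, then start the components
sed -i 's/192.168.17.130/192.168.17.131/g' /opt/kubernetes/cfg/kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler
```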
1. #An Nginx reverse proxy is used as the load balancer in front of the Master apiservers
  2. [root@ZhangSiming ~]# cat nginx.sh
  3. #!/bin/bash
  4. #designed by ZhangSiming
  5. id nginx
  6. if [ $? != 0 ];then
  7. useradd nginx -s /sbin/nologin -M
  8. fi
9. #Create the nginx service user
  10. yum -y install openssl-devel pcre-devel gcc gcc-c++ make
11. #Install build dependencies
  12. cd /tmp
  13. tar xf nginx-1.10.2.tar.gz -C /usr/src
  14. cd /usr/src/nginx-1.10.2/
  15. ./configure --user=nginx --group=nginx --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-stream
16. #Note the --with-stream module enabled at compile time: it turns on Nginx layer-4 (TCP) load balancing (added in Nginx 1.9). Without it we would have to proxy at layer 7 and deal with the apiserver CA certificates, which is messy, so Nginx's layer-4 load balancing is used instead
  17. make
  18. make install
19. #Compile and install Nginx
  20. ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin
  21. rm -rf /tmp/nginx-1.10.2.tar.gz
22. #Now configure Nginx for layer-4 load balancing
  23. [root@ZhangSiming ~]# cd /usr/local/nginx/
  24. [root@ZhangSiming nginx]# vim conf/nginx.conf
  25. [root@ZhangSiming nginx]# sed -n '16,24p' conf/nginx.conf
26. stream {
27. #stream block: layer-4 proxying, at the same level as the http block
28. upstream k8s-apiserver {
29. server 192.168.17.130:6443;
30. server 192.168.17.131:6443;
31. #IP and port of the two Master apiservers; note the apiserver is reached on its secure port 6443 here, while the other Master components talk to the apiserver locally on the insecure port 8080
32. }
33. server {
34. listen 0.0.0.0:6443;
35. #listen on 0.0.0.0 because later the traffic arriving via the VIP must also be accepted here
36. proxy_pass k8s-apiserver;
37. #unlike proxy_pass in the http block, the upstream name is NOT prefixed with http:// in a stream block
38. }
  39. [root@ZhangSiming nginx]# nginx -s reload
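After the reload it is worth confirming that the stream proxy really owns port 6443 on the load balancer; these are routine checks rather than part of the original steps:

```bash
/usr/local/nginx/sbin/nginx -t   # validate the edited nginx.conf
netstat -lntp | grep ':6443'     # the L4 proxy should be listening on 0.0.0.0:6443
```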
1. #Everything that currently connects to the Master must be repointed at the Nginx L4 load balancer
  2. [root@ZhangSiming cfg]# pwd
  3. /opt/kubernetes/cfg
  4. [root@ZhangSiming cfg]# grep 130 *
  5. bootstrap.kubeconfig: server: https://192.168.17.130:6443
  6. flanneld:FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.17.130:2379,https://192.168.17.131:2379,https://192.168.17.132:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  7. kubelet.kubeconfig: server: https://192.168.17.130:6443
  8. kube-proxy.kubeconfig: server: https://192.168.17.130:6443
  9. [root@ZhangSiming cfg]# vim bootstrap.kubeconfig
  10. [root@ZhangSiming cfg]# vim kubelet.kubeconfig
  11. [root@ZhangSiming cfg]# vim kube-proxy.kubeconfig
  12. [root@ZhangSiming cfg]# grep 134 *
  13. bootstrap.kubeconfig: server: https://192.168.17.134:6443
  14. kubelet.kubeconfig: server: https://192.168.17.134:6443
  15. kube-proxy.kubeconfig: server: https://192.168.17.134:6443
16. #Restart after the changes
  17. [root@ZhangSiming cfg]# systemctl restart kubelet
  18. [root@ZhangSiming cfg]# systemctl restart kube-proxy
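The three kubeconfig files above were edited by hand in vim; the same switch from the Master01 address to the load balancer can be scripted, which is less error-prone when there are many Nodes (a sketch using the paths shown above):

```bash
cd /opt/kubernetes/cfg
# Repoint the bootstrap/kubelet/kube-proxy kubeconfigs at the Nginx L4 load balancer
sed -i 's#https://192.168.17.130:6443#https://192.168.17.134:6443#g' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
systemctl restart kubelet kube-proxy
```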
  1. [root@ZhangSiming yum.repos.d]# vim nginx.repo
  2. [root@ZhangSiming yum.repos.d]# cat nginx.repo
  3. [nginx]
  4. name=nginx.repo
  5. baseurl=http://nginx.org/packages/centos/7/$basearch/
  6. gpgcheck=0
  7. enabled=1
  8. [root@ZhangSiming yum.repos.d]# yum -y install nginx
9. #Install nginx from the official yum repository
  10. [root@ZhangSiming yum.repos.d]# nginx -V
  11. nginx version: nginx/1.14.2
12. #Since Nginx 1.11 the stream module supports access logging, which the 1.10.2 built from source on the other LB does not
13. #Edit the configuration file
  14. [root@ZhangSiming yum.repos.d]# sed -n '12,23p' /etc/nginx/nginx.conf
  15. stream {
  16. log_format main "$remote_addr--->$upstream_addr time:$time_local $status";
  17. access_log /var/log/nginx/k8s-access.log main;
  18. upstream k8s-apiserver {
  19. server 192.168.17.130:6443;
  20. server 192.168.17.131:6443;
  21. }
  22. server {
  23. listen 0.0.0.0:6443;
24. #listen on 0.0.0.0 so that traffic arriving via the VIP is also accepted
  25. proxy_pass k8s-apiserver;
  26. }
  27. }
  28. [root@ZhangSiming yum.repos.d]# nginx
1. #Keepalived configuration on the MASTER load balancer
  2. [root@ZhangSiming etc]# cat keepalived/keepalived.conf
  3. ! Configuration File for keepalived
  4. global_defs {
5. # notification recipient addresses
  6. notification_email {
  7. acassen@firewall.loc
  8. failover@firewall.loc
  9. sysadmin@firewall.loc
  10. }
11. # notification sender address
  12. notification_email_from Alexandre.Cassen@firewall.loc
  13. smtp_server 127.0.0.1
  14. smtp_connect_timeout 30
  15. router_id NGINX_MASTER
  16. }
17. #the two mail settings above are not actually used
  18. vrrp_script check_nginx
  19. {
  20. script "/etc/keepalived/nginx.sh"
  21. }
22. #Mind the blank lines and spacing around this block; keepalived is picky about the format and getting it wrong cost me several hours
23. #Script that checks whether nginx is still running
  24. vrrp_instance VI_1 {
  25. state MASTER
  26. interface ens32
27. virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
28. priority 100 # priority; set this to 90 on the backup server
29. advert_int 1 # interval between VRRP advertisement (heartbeat) packets, default 1 second
  30. authentication {
  31. auth_type PASS
  32. auth_pass 1111
  33. }
  34. virtual_ipaddress {
  35. 192.168.17.135/24
  36. }
  37. track_script
  38. {
  39. check_nginx
  40. }
41. #Again, mind the blank lines and spacing here
  42. }
  43. [root@ZhangSiming etc]# cat /etc/keepalived/nginx.sh
  44. #!/bin/bash
  45. if [ `netstat -antup | grep nginx | wc -l ` -eq 0 ];then
  46. systemctl stop keepalived.service
  47. fi
  48. [root@ZhangSiming etc]# systemctl start keepalived.service
  49. [root@ZhangSiming etc]# chmod +x /etc/keepalived/nginx.sh
50. #Keepalived configuration on the BACKUP load balancer
  51. [root@ZhangSiming etc]# cat keepalived/keepalived.conf
  52. ! Configuration File for keepalived
  53. global_defs {
54. # notification recipient addresses
  55. notification_email {
  56. acassen@firewall.loc
  57. failover@firewall.loc
  58. sysadmin@firewall.loc
  59. }
60. # notification sender address
  61. notification_email_from Alexandre.Cassen@firewall.loc
  62. smtp_server 127.0.0.1
  63. smtp_connect_timeout 30
  64. router_id NGINX_MASTER
  65. }
  66. vrrp_script check_nginx
  67. {
  68. script "/etc/keepalived/nginx.sh"
  69. }
  70. vrrp_instance VI_1 {
  71. state BACKUP
  72. interface ens32
73. virtual_router_id 51 # must match the MASTER's router ID, otherwise the pair splits brain and both claim the VIP
74. priority 90 # priority; the backup server uses 90
75. advert_int 1 # interval between VRRP advertisement (heartbeat) packets, default 1 second
  76. authentication {
  77. auth_type PASS
  78. auth_pass 1111
  79. }
  80. virtual_ipaddress {
  81. 192.168.17.135/24
  82. }
  83. track_script
  84. {
  85. check_nginx
  86. }
  87. }
  88. [root@ZhangSiming etc]# ls /etc/keepalived/nginx.sh
  89. /etc/keepalived/nginx.sh
  90. [root@ZhangSiming etc]# systemctl start keepalived.service
  91. [root@ZhangSiming etc]# chmod +x /etc/keepalived/nginx.sh
  92. [root@ZhangSiming keepalived]# ip addr | grep 192.168
  93. inet 192.168.17.134/24 brd 192.168.17.255 scope global noprefixroute ens32
  94. inet 192.168.17.135/24 scope global secondary ens32
95. #192.168.17.135 is our VIP, currently bound on the master load balancer
1. #Simply switch the Nodes from the LB's physical IP to the VIP
  2. [root@ZhangSiming cfg]# grep 134 *
  3. bootstrap.kubeconfig: server: https://192.168.17.134:6443
  4. kubelet.kubeconfig: server: https://192.168.17.134:6443
  5. kube-proxy.kubeconfig: server: https://192.168.17.134:6443
  6. [root@ZhangSiming cfg]# vim bootstrap.kubeconfig
  7. [root@ZhangSiming cfg]# vim kubelet.kubeconfig
  8. [root@ZhangSiming cfg]# vim kube-proxy.kubeconfig
  9. [root@ZhangSiming cfg]# systemctl restart kubelet
  10. [root@ZhangSiming cfg]# systemctl restart kube-proxy
  11. [root@ZhangSiming cfg]# grep 135 *
  12. bootstrap.kubeconfig: server: https://192.168.17.135:6443
  13. kubelet.kubeconfig: server: https://192.168.17.135:6443
  14. kube-proxy.kubeconfig: server: https://192.168.17.135:6443
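After restarting kubelet and kube-proxy you can verify from the Node that the apiserver traffic really goes through the VIP rather than a physical LB address (routine checks; run each command on the host indicated in the comment):

```bash
# On the Node: established connections to port 6443 should now target the VIP 192.168.17.135
ss -tnp | grep ':6443'

# On a Master: the Nodes should still report Ready through the new path
kubectl get node
```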
2. #Stop nginx on the master (primary) LB
  2. [root@ZhangSiming keepalived]# nginx -s stop
  3. [root@ZhangSiming keepalived]# ip addr | grep 192.168
  4. inet 192.168.17.134/24 brd 192.168.17.255 scope global noprefixroute ens32
5. #Run this on the Node
  6. [root@ZhangSiming cfg]# systemctl restart kubelet
7. #Tail the k8s access log on the backup LB in real time
  8. [root@ZhangSiming keepalived]# tail -f /var/log/nginx/k8s-access.log
  9. 192.168.17.132 192.168.17.131:6443 24/Feb/2019:18:08:27 +0800 200
  10. 192.168.17.132 192.168.17.131:6443 24/Feb/2019:18:08:28 +0800 200
  11. 192.168.17.132 192.168.17.131:6443 24/Feb/2019:18:08:28 +0800 200
12. #On a K8S Master, check the Node status
  13. [root@ZhangSiming ~]# kubectl get node
  14. NAME STATUS ROLES AGE VERSION
  15. 192.168.17.132 Ready <none> 3d17h v1.12.1
  16. 192.168.17.133 Ready <none> 3d16h v1.12.1
17. #The Nodes are still Ready: high availability works!

The multi-Master highly available K8S architecture is now complete. The deployment involves many steps, so work carefully and check the logs as soon as anything goes wrong.
