
Kubernetes In-Depth Study (Part 1)

K8S

---Author:张思明 ZhangSiming

---Mail:1151004164@cnu.edu.cn

---QQ:1030728296

If you have a dream, chase it with everything you have;
because only hard work can change your destiny.


1. The kubectl Command-Line Management Tool

1.1 Overview of kubectl Commands

Basic commands:

| Command | Description |
| --- | --- |
| create | Create a resource from a file or stdin, e.g. `-f xxx.yaml`, or create objects such as cluster roles |
| expose | Expose a resource as a new Service |
| run | Run a particular image in the cluster |
| set | Set specific features on objects |
| get | Display one or more resources, e.g. `get pods`, `get svc`, `get node`; supports `-o wide` for details and `-n kube-system` to select a namespace |
| explain | Show documentation for a resource and its fields |
| edit | Edit a resource with the default editor |
| delete | Delete resources by file name, stdin, resource name, or label selector |

Deployment commands:

| Command | Description |
| --- | --- |
| rollout | Manage the rollout of a resource |
| rolling-update | Perform a rolling update of a given replication controller |
| scale | Scale the Pod count of a Deployment, ReplicaSet, RC, or Job |
| autoscale | Create an autoscaler that scales the Pod count up or down automatically |

Cluster management commands:

| Command | Description |
| --- | --- |
| certificate | Modify certificate resources |
| cluster-info | Display cluster information |
| top | Display resource (CPU/Memory/Storage) usage; requires Heapster to be deployed first |
| cordon | Mark a node unschedulable; no new Pods will be scheduled onto it |
| uncordon | Mark a node schedulable again |
| drain | Evict the applications on a node in preparation for maintenance |
| taint | Update the taints on a node |

Troubleshooting and debugging commands:

| Command | Description |
| --- | --- |
| describe | Show detailed information about a resource or group of resources |
| logs | Print the logs of a container in a Pod; the container name is optional if the Pod has only one container |
| attach | Attach to a running container |
| exec | Execute a command in a container |
| port-forward | Forward one or more local ports to a Pod |
| proxy | Run a proxy to the Kubernetes API server |
| cp | Copy files or directories to and from containers |
| auth | Inspect authorization |

Advanced commands:

| Command | Description |
| --- | --- |
| apply | Apply a configuration to a resource from a file or stdin |
| patch | Update fields of a resource using a patch |
| replace | Replace a resource from a file or stdin |
| convert | Convert config files between different API versions |

Settings commands:

| Command | Description |
| --- | --- |
| label | Update the labels on a resource |
| annotate | Update the annotations on a resource |
| completion | Output shell completion code for kubectl |

Other commands:

| Command | Description |
| --- | --- |
| api-versions | Print the supported API versions |
| config | Generate or modify kubeconfig files (used for API access, e.g. configuring credentials) |
| help | Help for any command |
| plugin | Run a command-line plugin |
| version | Print the client and server version information |

1.2 Managing the Application Lifecycle with kubectl

Workflow:

create --> expose --> update --> roll back --> delete

```bash
# Create a deployment named nginx with 3 replicas (for availability),
# using the nginx:1.10 image and container port 80
[root@ZhangSiming ~]# kubectl run nginx --replicas=3 --image=nginx:1.10 --port=80
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
# In K8S a release is rolled out from an image, not a jar/war package;
# after creation the scheduler automatically places the Pods on suitable Nodes
[root@ZhangSiming ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7b67cfbf9f-6t7z9   1/1     Running   0          18s
nginx-7b67cfbf9f-96v9d   1/1     Running   0          18s
nginx-7b67cfbf9f-q5hgl   1/1     Running   0          18s
# In fact `kubectl run` creates not only the Pods but also a Deployment,
# which is the object used to manage updates of the application
[root@ZhangSiming ~]# kubectl get deployments,pods
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx   3         3         3            3           3m29s
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7b67cfbf9f-6t7z9   1/1     Running   0          3m28s
pod/nginx-7b67cfbf9f-96v9d   1/1     Running   0          3m28s
pod/nginx-7b67cfbf9f-q5hgl   1/1     Running   0          3m28s
```
```bash
[root@ZhangSiming ~]# kubectl expose deployment nginx --port=100 --type=NodePort --target-port=80 --name=nginx-service
# Expose the deployment as a Service so it can be reached from outside
# --port is the port the Service listens on inside the cluster
# --type=NodePort makes the Service reachable from outside the cluster
# --target-port is the service port inside the container
service/nginx-service exposed
[root@ZhangSiming ~]# kubectl get service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP         27h
nginx           NodePort    10.0.0.211   <none>        80:30792/TCP    28h
nginx-service   NodePort    10.0.0.55    <none>        100:35373/TCP   3s
# Test access:
[root@ZhangSiming ~]# curl -I 10.0.0.55:100
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Mon, 25 Feb 2019 07:08:54 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes
```

(figure: accessing the exposed service from a browser)

```bash
[root@ZhangSiming ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7b67cfbf9f-6t7z9   1/1     Running   0          20m
nginx-7b67cfbf9f-96v9d   1/1     Running   0          20m
nginx-7b67cfbf9f-q5hgl   1/1     Running   0          20m
[root@ZhangSiming ~]# kubectl describe pods nginx-7b67cfbf9f-q5hgl | grep 1.10
    Image:         nginx:1.10
  Normal  Pulling  20m  kubelet, 192.168.17.132  pulling image "nginx:1.10"
  Normal  Pulled   20m  kubelet, 192.168.17.132  Successfully pulled image "nginx:1.10"
# The Pods currently run the nginx:1.10 image; CI has produced a new Docker
# image and we want to update the service to nginx:1.14
# First attempt: the image name below is missing the ":" tag separator,
# so the resulting Pods cannot pull it
[root@ZhangSiming ~]# kubectl set image deployment/nginx nginx=nginx1.14
deployment.extensions/nginx image updated
# Corrected: update the nginx image of the Deployment to 1.14
[root@ZhangSiming ~]# kubectl set image deployment/nginx nginx=nginx:1.14
deployment.extensions/nginx image updated
[root@ZhangSiming ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-787b58fd95-psw9g   1/1     Running             0          2m7s
nginx-787b58fd95-vxtr4   1/1     Running             0          2m7s
nginx-7b67cfbf9f-m6mp2   0/1     ContainerCreating   0          2s
nginx-d87b556b7-tmbj6    0/1     Terminating         0          64s
# The update is a rolling update: Pods are replaced one at a time,
# so the service keeps working throughout
[root@ZhangSiming ~]# kubectl describe pod nginx-7b67cfbf9f-cksvf | grep 1.14
    Image:         nginx:1.14
  Normal  Pulled  2m17s  kubelet, 192.168.17.132  Container image "nginx:1.14" already present on machine
# Successfully updated nginx to 1.14
```
```bash
# If an update causes problems, roll back promptly
[root@ZhangSiming ~]# kubectl rollout history deployment.apps/nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
4         <none>
5         <none>
[root@ZhangSiming ~]# kubectl rollout undo deployment.apps/nginx --to-revision=1
deployment.apps/nginx
[root@ZhangSiming ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-787b58fd95-7pmnd   1/1     Running             0          2s
nginx-787b58fd95-rcg9d   0/1     ContainerCreating   0          0s
nginx-7b67cfbf9f-m6mp2   1/1     Running             0          8m31s
nginx-7b67cfbf9f-scs5h   1/1     Terminating         0          8m28s
[root@ZhangSiming ~]# kubectl describe pod nginx-787b58fd95-7pmnd | grep 1.10
    Image:         nginx:1.10
  Normal  Pulled  60s  kubelet, 192.168.17.132  Container image "nginx:1.10" already present on machine
# Successfully rolled back to nginx 1.10
```
```bash
# We mainly deployed a Deployment and a Service; delete those two and the Pods go away as well
[root@ZhangSiming ~]# kubectl delete deployment/nginx
deployment.extensions "nginx" deleted
[root@ZhangSiming ~]# kubectl delete svc/nginx-service
service "nginx-service" deleted
[root@ZhangSiming ~]# kubectl get pods
No resources found.
# Note: deleting Pods directly without deleting the Deployment is useless,
# because the Deployment immediately recreates them
```

1.3 Managing the K8S Cluster Remotely with kubectl

So far we have managed the K8S cluster with the local kubectl on the Master node; this works because kubectl connects to the local apiserver by default. But what if we need to manage the cluster remotely?
We have to generate a kubeconfig file for kubectl; only with it can a remote node manage the K8S cluster.

```bash
# On the Node
[root@ZhangSiming ~]# cd /opt/kubernetes/ssl/
# Use kubectl to generate the kubeconfig file for connecting to the K8S cluster
[root@ZhangSiming ssl]# cat kubeconfig.sh
#!/bin/bash
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.17.135:6443 \
  --kubeconfig=config
# the server address is the Master VIP
kubectl config set-credentials cluster-admin \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --kubeconfig=config
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=config
# bind cluster and user together in a context
kubectl config use-context default --kubeconfig=config
[root@ZhangSiming ssl]# sh kubeconfig.sh
Cluster "kubernetes" set.
User "cluster-admin" set.
Context "default" created.
Switched to context "default".
[root@ZhangSiming ssl]# mv config /opt/kubernetes/cfg
[root@ZhangSiming ssl]# kubectl --kubeconfig=/opt/kubernetes/cfg/config get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   28h
# Access succeeded
```
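To avoid passing `--kubeconfig` on every invocation, kubectl also honors the `KUBECONFIG` environment variable and the default `~/.kube/config` path:

```bash
# Point kubectl at the generated config for this shell session
export KUBECONFIG=/opt/kubernetes/cfg/config
kubectl get all

# Or make it the per-user default
mkdir -p ~/.kube && cp /opt/kubernetes/cfg/config ~/.kube/config
```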

2. YAML Files (Resource Orchestration)

Configuring the K8S cluster with kubectl one command at a time is slow, hard to modify, and hard to debug. Instead, all deployment configuration can be written into a YAML file and applied in one step with `kubectl create -f <file>.yaml`, which is also far easier to revise and debug later.

2.1 YAML File Format

YAML is a concise non-markup language ("YAML Ain't Markup Language").

Syntax rules:

  • indentation expresses hierarchy; only spaces are allowed, never tabs
  • key/value pairs are written as `key: value`; list items start with `-`
  • YAML is case-sensitive, and `#` starts a comment
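A minimal illustrative snippet (the field names are arbitrary, chosen only to show the syntax):

```yaml
# Maps use "key: value"; indentation (spaces only, no tabs) expresses nesting
metadata:
  name: demo            # strings usually need no quotes
  labels:
    app: demo
# Lists use a leading "-"
ports:
  - containerPort: 80   # numbers are written unquoted
```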

2.2 Creating Resource Objects from a YAML File

```bash
# Write the YAML files
[root@ZhangSiming ssl]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
# everything above defines the controller
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
# everything under template defines the managed objects (the Pods)
[root@ZhangSiming ssl]# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
# Apply the YAML files to create the project
[root@ZhangSiming ssl]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@ZhangSiming ssl]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-d55b94fd-qd8ds   1/1     Running   0          30s
nginx-deployment-d55b94fd-swmm5   1/1     Running   0          30s
nginx-deployment-d55b94fd-z4npc   1/1     Running   0          30s
[root@ZhangSiming ssl]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@ZhangSiming ssl]# kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP        28h
nginx-service   NodePort    10.0.0.189   <none>        80:30177/TCP   5s
```

The resource kinds and objects to create are all written out in a single file. Compared with raw kubectl commands, YAML files are easier to keep, modify, and debug, instead of typing commands one by one.
When deploying microservices or other complex services, the advantages of the YAML approach really show.

2.3 Exporting YAML Files

In practice there are too many YAML fields to memorize. We can download YAML templates from the internet and adapt them, or export YAML from existing objects.

```bash
# Dry-run an object that does not really exist yet and export its YAML
[root@ZhangSiming ssl]# kubectl run nginx --replicas=3 --image=nginx:1.14 --port=80 --dry-run -o yaml > ~/nginx-deployment.yaml
# -o sets the output format; YAML, JSON, etc. are available
# --dry-run only simulates the command; nothing is actually created
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
[root@ZhangSiming ssl]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
# Export the YAML of an object that already exists
[root@ZhangSiming ssl]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-d55b94fd-qd8ds   1/1     Running   0          23m
nginx-deployment-d55b94fd-swmm5   1/1     Running   0          23m
nginx-deployment-d55b94fd-z4npc   1/1     Running   0          23m
[root@ZhangSiming ssl]# kubectl get deployment.apps/nginx-deployment --export -o yaml > ~/nginx-deployment.yaml
[root@ZhangSiming ssl]# cat ~/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
# The export may contain a lot; unfamiliar fields or empty {} sections can be deleted
# If you forget how to write a field, look it up with explain
[root@ZhangSiming ssl]# kubectl explain deployment
```

3. Understanding the Pod Object

Official documentation: https://kubernetes.io/docs/concepts/

3.1 Pods and Container Classification

3.1.1 What Is a Pod

  • The smallest deployable unit;
  • A collection of containers; containers that must cooperate closely are placed in one Pod;
  • Containers in a Pod share the same network namespace;
  • Pods are ephemeral; a freshly defined Pod is stateless, so it does not matter which Node it is scheduled to.

3.1.2 Container Classes Inside a Pod

Infrastructure container: the base container, pulled automatically from the image repository configured in the kubelet config when a Pod is created; its job is to maintain the network namespace for the whole Pod.
Init containers: run before the business containers and perform initialization work, e.g. controlling what must happen before a given business container starts.
Business containers: all started in parallel; they run the actual workload. A demo of the shared network namespace follows below.
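Because containers in a Pod share one network namespace, they can reach each other over localhost. A minimal sketch (the container names and images here are illustrative, not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.15
  - name: sidecar
    image: busybox:1.28.4
    # reaches the nginx container over the shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1; sleep 5; done"]
```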

3.2 Pod Image Pull Policy

When a resource object is created, pulling its image is governed by a pull policy, controlled by the imagePullPolicy field in the YAML file.
imagePullPolicy takes three values:
1. IfNotPresent: the default; pull only when the image does not already exist on the Node;
2. Always: re-pull the image every time the Pod is created;
3. Never: the Pod never actively pulls the image.

```bash
[root@ZhangSiming ~]# kubectl get deploy/nginx-deployment -o yaml | grep imagePullPolicy
        imagePullPolicy: IfNotPresent
# check the pull policy of an already-running resource
```

With a public image registry, Pods can pull images directly according to the pull policy. A private registry such as Harbor, however, requires authentication, so a pull credential must be added in the YAML file.

```bash
# On a server already logged in to Harbor, grab the credentials
[root@ZhangSiming ~]# docker login -uadmin -paptx65965697 www.yunjisuan2.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@ZhangSiming kubernetes]# cat /root/.docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSJ3d3cueXVuamlzdWFuMi5jb20iOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2WVhCMGVEWTFPVFkxTmprMyIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTguMDkuMSAobGludXgpIgoJfQp9
# copy this single line for later use
# Switch to the Master node
[root@ZhangSiming ~]# cat centos.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-centos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
      - name: app-centos
        image: www.yunjisuan2.com/library/mongo
        # pull the image from our own Harbor
        imagePullPolicy: Always
        # this policy forces a fresh pull on every creation
      imagePullSecrets:
      - name: registry-pull-secret
      # the pull credential, created below
[root@ZhangSiming ~]# cat centos-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJ3d3cueXVuamlzdWFuMi5jb20iOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2WVhCMGVEWTFPVFkxTmprMyIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTguMDkuMSAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
# this Secret is the credential for authenticating to Harbor
[root@ZhangSiming ~]# kubectl create -f centos-secret.yaml
secret/registry-pull-secret created
[root@ZhangSiming ~]# kubectl get secrets
NAME                   TYPE                                  DATA   AGE
default-token-f2kfs    kubernetes.io/service-account-token   3      7d16h
registry-pull-secret   kubernetes.io/dockerconfigjson        1      7s
[root@ZhangSiming ~]# kubectl create -f centos.yaml
deployment.apps/deploy-centos created
# Check the pull result
[root@ZhangSiming ~]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
deploy-centos-6fc88f94c9-xbqtr   1/1     Running   0          55s
# Pull succeeded; make sure the image you reference actually exists in the registry
```
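Instead of hand-crafting the base64 `.dockerconfigjson`, the same Secret can be generated directly with `kubectl create secret docker-registry` (shown here with the credentials used above):

```bash
kubectl create secret docker-registry registry-pull-secret \
  --docker-server=www.yunjisuan2.com \
  --docker-username=admin \
  --docker-password=aptx65965697
```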

3.3 Pod Resource Limits

```bash
[root@ZhangSiming ~]# cat limit.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
# The field is resources: requests is what a Node must have free for the Pod
# to be scheduled there; limits caps the resources the container may use at runtime
[root@ZhangSiming ~]# kubectl create -f limit.yaml
pod/frontend created
[root@ZhangSiming ~]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE             NOMINATED NODE
frontend   2/2     Running   0          5m29s   172.17.3.3   192.168.17.132   <none>
[root@ZhangSiming ~]# kubectl describe nodes 192.168.17.132
# shows how the requests/limits are accounted against the node
```

3.4 Pod Restart Policy

  • Always: always restart the container after it terminates; this is the default;
  • OnFailure: restart the container only when it exits abnormally (non-zero exit code);
  • Never: never restart the container after it terminates.
    For long-running containers, Always is normally all you need; change it only for special requirements (see the sketch below for where the field goes).
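restartPolicy sits at the Pod spec level. A minimal sketch (a hypothetical batch-style Pod that should only be restarted on failure):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure   # Always (default) / OnFailure / Never
  containers:
  - name: task
    image: busybox:1.28.4
    command: ["sh", "-c", "echo done && exit 0"]
```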

3.5 Pod Health Checks (Probes)

There are two probe types:

  • livenessProbe
    If the check fails, the container is killed and handled according to the Pod's restartPolicy (the three policies above);
  • readinessProbe
    If the check fails, Kubernetes removes the Pod from the Service endpoints, so users can no longer reach it.

A probe supports three check methods:

  • httpGet
    Send an HTTP request; a status code in the 200-400 range counts as success;
  • exec
    Run a shell command; exit status 0 counts as success;
  • tcpSocket
    Open a TCP socket to the container; success means the connection can be established.
```bash
[root@ZhangSiming ~]# cat healthy.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
# after 30 seconds the file is removed, the health check starts failing,
# and the Pod gets restarted
[root@ZhangSiming ~]# kubectl create -f healthy.yaml
pod/liveness-exec created
[root@ZhangSiming ~]# kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          7s
[root@ZhangSiming ~]# kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   0          19s
[root@ZhangSiming ~]# kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   1          33s
```
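The exec probe above is one of the three methods. For comparison, a minimal httpGet readinessProbe sketch (assuming a container that serves HTTP on port 80, e.g. nginx):

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```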

3.6 Pod Scheduling Constraints

By default the scheduler places Pods according to each node's resource utilization, which is already quite reasonable. In practice we can additionally constrain scheduling when requirements demand it.

(figure: Pod creation and scheduling workflow)

Pod creation workflow

1. The user creates a Pod through kubectl;
2. The apiserver writes the Pod information into the etcd cluster;
3. The Scheduler learns of the Pod through a watch, picks a suitable node according to its scheduling algorithm, and writes the binding back into etcd via the apiserver;
4. The kubelet on the chosen node detects through its watch that it has been selected, and deploys the Pod locally;
5. The Docker container engine is started to run the service;
6. The kubelet reports the status back through the apiserver into etcd, and the user can then see the Pod's deployment state.

Scheduling constraints

  • nodeName schedules the Pod onto the Node with the given name
  • nodeSelector schedules the Pod onto a Node whose labels match
```bash
[root@ZhangSiming ~]# cat sche.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: 192.168.17.132
  # nodeName bypasses the scheduler and forces placement on that node
  containers:
  - name: nginx
    image: nginx:1.15
[root@ZhangSiming ~]# kubectl create -f sche.yaml
pod/pod-example created
[root@ZhangSiming ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE
pod-example   1/1     Running   0          16s   172.17.3.3   192.168.17.132   <none>
```
```bash
[root@ZhangSiming ~]# kubectl label nodes 192.168.17.132 team=a
node/192.168.17.132 labeled
# give the Node a label
[root@ZhangSiming ~]# kubectl get nodes --show-labels
NAME             STATUS     ROLES    AGE     VERSION   LABELS
192.168.17.132   Ready      <none>   7d16h   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.17.132,team=a
192.168.17.133   NotReady   <none>   7d15h   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.17.133
[root@ZhangSiming ~]# vim sche.yaml
[root@ZhangSiming ~]# cat sche.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeSelector:
    team: a
  # let the scheduler place the Pod on a node labeled team=a
  containers:
  - name: nginx
    image: nginx:1.15
[root@ZhangSiming ~]# kubectl create -f sche.yaml
pod/pod-example created
[root@ZhangSiming ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE
pod-example   1/1     Running   0          25s   172.17.3.3   192.168.17.132   <none>
```

3.7 Pod Troubleshooting

3.7.1 The Five Pod Phases

| Phase | Description |
| --- | --- |
| Pending | The Pod has been submitted to Kubernetes but cannot be created smoothly for some reason, e.g. a slow image download or a scheduling failure |
| Running | The Pod has been bound to a node and all containers have been created; at least one container is running, starting, or restarting |
| Succeeded | All containers in the Pod terminated successfully and will not be restarted |
| Failed | All containers have terminated and at least one terminated in failure, i.e. exited with a non-zero status or was killed by the system |
| Unknown | The apiserver cannot obtain the Pod's state, usually because the Master cannot communicate with the kubelet on the Pod's host |
```bash
[root@ZhangSiming ~]# kubectl describe TYPE/NAME
# inspect events: see where it is stuck, e.g. pulling the image or scheduling
[root@ZhangSiming ~]# kubectl logs TYPE/NAME [-c CONTAINER]
# view the logs of a container
[root@ZhangSiming ~]# kubectl exec POD [-c CONTAINER] -- COMMAND [args...]
# run a command inside the container to inspect the application state
```
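For example, against the liveness Pod created in section 3.5:

```bash
kubectl describe pod liveness-exec          # events: image pulls, probe failures, restarts
kubectl logs liveness-exec                  # container stdout/stderr
kubectl exec liveness-exec -- ls /tmp       # run a command inside the container
```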

4. Understanding Services (Connecting to the Outside World)

1. A Service keeps Pods reachable: it dynamically tracks changes of Pod IPs;
2. It defines an access policy for a group of Pods, binds them together, and load-balances requests evenly across the Pod replicas;
3. There are three Service types: ClusterIP, NodePort, and LoadBalancer;
4. Underneath, a Service is implemented by one of two network modes, iptables or IPVS, which determine how request traffic is forwarded and how load balancing is achieved.

4.1 The Relationship Between Pods and Services

(figure: a Service selecting Pods via labels)

Pods carry labels, and a Service uses a Selector to match those labels; that is how a Service and its Pods are associated.
A Service provides layer-4 (TCP/UDP) load balancing for its Pods; by default it round-robins across the Pod IPs detected by the endpoints controller.

4.2Service定义

  1. [root@ZhangSiming ~]# vim service.yaml
  2. [root@ZhangSiming ~]# cat service.yaml
  3. kind: Service
  4. apiVersion: v1
  5. metadata:
  6. name: my-service
  7. namespace: default
  8. spec:
  9. clusterIP: 10.0.0.123
  10. #Service默认是ClusterIP类型,如果不指定默认分配一个随机的ClusterIP
  11. selector:
  12. app: nginx
  13. #这个selector是为了匹配用的,好让K8S知道这个是哪个Pod的Service
  14. ports:
  15. - name: http
  16. protocol: TCP
  17. port: 80
  18. #Service的80端口
  19. targetPort: 80
  20. #
  21. #创建Service
  22. [root@ZhangSiming ~]# kubectl create -f service.yaml
  23. service/my-service created
  24. [root@ZhangSiming ~]# kubectl get svc
  25. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  26. kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 21d
  27. my-service ClusterIP 10.0.0.123 <none> 80/TCP 6s
  28. [root@ZhangSiming ~]# kubectl get endpoints
  29. NAME ENDPOINTS AGE
  30. kubernetes 192.168.17.130:6443 21d
  31. my-service <none> 66s
  32. #endpoints是可以查看Service关联到哪些Pod的,这里我们没有关联

On the claim that "a Service keeps Pods reachable by dynamically sensing Pod IP changes":
it is actually the endpoints controller that does the sensing. When a Pod restarts and gets a new IP, the endpoints controller detects the new IP and re-associates it with the Service, and only then does the Service become reachable again.
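Above, ENDPOINTS is `<none>` because no running Pod carries the `app: nginx` label. A minimal sketch to verify the selector mechanism, reusing the nginx Deployment from chapter 2 (its Pod template is labeled `app: nginx` and serves on port 80):

```bash
kubectl create -f nginx-deployment.yaml   # creates Pods labeled app: nginx
kubectl get endpoints my-service          # now lists the matching Pod IPs, e.g. 172.17.x.x:80
```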

```bash
# describe shows the Service details
[root@ZhangSiming ~]# kubectl describe svc my-service
Name:              my-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP:                10.0.0.123
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
```

4.3 Service Types

(figure: ClusterIP type Service)

ClusterIP is the default Service type and is used for access between components inside the cluster. The Service only logically groups the Pods; the load balancing underneath is done by iptables or IPVS, and clients only need to know the Service's cluster IP.

YAML format:

```bash
[root@ZhangSiming ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: A
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```

(figure: NodePort type Service)

A NodePort Service not only gets a ClusterIP for in-cluster access, it also opens the same listening port on every Node in the cluster, through which the Pods can be reached from outside.
Node IPs are generally not fixed, so the exposed nodePort is usually what needs to be pinned.

YAML format:

```bash
[root@ZhangSiming ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: A
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30001
    # pin the nodePort
  type: NodePort
[root@ZhangSiming ~]# kubectl apply -f service.yaml
# update the service in place
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/my-service configured
[root@ZhangSiming ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.123   <none>        80:30001/TCP   29m
# 80 is the ClusterIP port; 30001 is the NodePort
# On the Node
[root@ZhangSiming ~]# netstat -antup | grep 30001
tcp6       0      0 :::30001       :::*       LISTEN      884/kube-proxy
```

(figure: LoadBalancer type Service)

The LoadBalancer type generally runs on public clouds and does not apply to a self-built K8S cluster. User traffic first hits the cloud provider's load balancer, which automatically forwards it to the Service IPs behind it.

LoadBalancer plugs into the underlying LB API of a specific cloud provider, such as AWS, Google Cloud, or OpenStack, and wires the load balancer up to the backend IPs automatically.
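For completeness, a LoadBalancer Service differs from the NodePort example above only in its type (only meaningful on a cloud provider that implements the LB integration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: A
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```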

4.4 Service Proxy Modes

The actual traffic forwarding and load balancing behind a Service is performed by kube-proxy running on each K8S Node.

```bash
[root@ZhangSiming kubernetes]# cat cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.17.132 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
# IPVS is currently the proxy mode backing our Services
```
```bash
# Switch from the IPVS proxy mode to the iptables proxy mode
[root@ZhangSiming kubernetes]# vim cfg/kube-proxy
[root@ZhangSiming kubernetes]# cat cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.17.132 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
[root@ZhangSiming kubernetes]# systemctl restart kube-proxy.service
[root@ZhangSiming kubernetes]# ps -elf | grep kube-proxy
4 S root 21842 1 1 80 0 - 10661 futex_ 14:40 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.17.132 --cluster-cidr=10.0.0.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
0 R root 21969 1315 0 80 0 - 28176 - 14:40 pts/0 00:00:00 grep --color=auto kube-proxy
```
```bash
[root@ZhangSiming ~]# kubectl create -f nginx.yaml
deployment.apps/nginx created
[root@ZhangSiming ~]# kubectl create -f service.yaml
service/my-service created
[root@ZhangSiming ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.133   <none>        80:30001/TCP   27s
# note the cluster IP of the Service for the iptables lookup below
```
```bash
[root@ZhangSiming kubernetes]# iptables-save | grep 10.0.0.133
-A KUBE-SERVICES ! -s 10.0.0.0/24 -d 10.0.0.133/32 -p tcp -m comment --comment "default/my-service: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.0.0.133/32 -p tcp -m comment --comment "default/my-service: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KEAUNL7HVWWSEZA6
[root@ZhangSiming kubernetes]# iptables-save | grep KUBE-SVC-KEAUNL7HVWWSEZA6
:KUBE-SVC-KEAUNL7HVWWSEZA6 - [0:0]
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/my-service:" -m tcp --dport 30001 -j KUBE-SVC-KEAUNL7HVWWSEZA6
-A KUBE-SERVICES -d 10.0.0.133/32 -p tcp -m comment --comment "default/my-service: cluster IP" -m tcp --dport 80 -j KUBE-SVC-KEAUNL7HVWWSEZA6
-A KUBE-SVC-KEAUNL7HVWWSEZA6 -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-JQ4W2NCACGVYJ6IM
-A KUBE-SVC-KEAUNL7HVWWSEZA6 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YVQHBEOICAVG6NCJ
-A KUBE-SVC-KEAUNL7HVWWSEZA6 -j KUBE-SEP-H32GYTWB4BTUZQJM
# Effectively round-robin: rules match top-down, the first with probability 1/3,
# then 1/2 of the remainder, then the rest, giving each backend an equal share
```

Drawbacks of the iptables mode:

1. Rule updates are non-incremental (the whole rule set is rewritten), which is expensive;
2. iptables rules are matched linearly from top to bottom, so large rule sets can add latency.

Enter IPVS:

Because of this, the IPVS proxy mode was introduced.
Its advantage: forwarding is handled entirely by the in-kernel IPVS load balancer, which is more efficient and avoids the problems iptables runs into.

```bash
[root@ZhangSiming ~]# kubectl get endpoints
NAME         ENDPOINTS                                      AGE
kubernetes   192.168.17.130:6443                            21d
my-service   172.17.83.4:80,172.17.83.5:80,172.17.83.6:80   38m
[root@ZhangSiming ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.133   <none>        80:30001/TCP   39m
[root@ZhangSiming kubernetes]# ipvsadm -ln | grep -A 3 10.0.0.133
TCP  10.0.0.133:80 rr
  -> 172.17.83.4:80    Masq    1    0    0
  -> 172.17.83.5:80    Masq    1    0    0
  -> 172.17.83.6:80    Masq    1    0    0
# Kernel-level IPVS load balancing performs far better than traversing linear
# iptables rules, so kube-proxy is usually run in IPVS mode
```

To summarize, iptables versus IPVS as the Service proxy mode:

| iptables | IPVS |
| --- | --- |
| Flexible and powerful (can manipulate packets at different stages) | Works in the kernel with better performance |
| Linear rule traversal and full-table updates, so latency grows with scale | Rich scheduling algorithms: rr, wrr, lc, wlc, ip hash... |

The scheduling algorithm and related settings are configured in the kube-proxy configuration file.
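As a sketch, switching back to IPVS and picking a scheduling algorithm is done with kube-proxy flags in the same config file edited above (`--ipvs-scheduler` selects the IPVS algorithm; rr is the default):

```bash
# excerpt from /opt/kubernetes/cfg/kube-proxy (inside KUBE_PROXY_OPTS)
--proxy-mode=ipvs \
--ipvs-scheduler=wrr \
```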

4.5 DNS (CoreDNS)

The DNS service watches the Kubernetes API and creates a DNS record for every Service, so Services can be resolved by name.

```bash
[root@ZhangSiming ~]# cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.17.132
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
# clusterDomain and clusterDNS here must match what the CoreDNS manifest below uses
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@ZhangSiming ~]# vim coredns.yaml.sed
[root@ZhangSiming ~]# cat coredns.yaml.sed
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            # set the CoreDNS domain here (matches clusterDomain in kubelet.config)
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        # image source changed here to a reachable registry
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  # clusterIP changed to match clusterDNS in kubelet.config
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
# Start coredns
[root@ZhangSiming ~]# kubectl create -f coredns.yaml.sed
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@ZhangSiming ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-64479cf49b-6mgpv                0/1     Running   0          17s
kubernetes-dashboard-5f5bfdc89f-zkc9n   1/1     Running   6          24d
# Test DNS resolution
[root@ZhangSiming ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.133   <none>        80:30001/TCP   126m
# coredns resolves Service NAMEs to their CLUSTER-IPs
[root@ZhangSiming ~]# kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup my-service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      my-service
Address 1: 10.0.0.133 my-service.default.svc.cluster.local
# Resolution works
```
```bash
[root@ZhangSiming ~]# kubectl get svc -n default
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.133   <none>        80:30001/TCP   131m
[root@ZhangSiming ~]# kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh -n kube-system
If you don't see a command prompt, try pressing enter.
/ # nslookup my-service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'my-service'
/ # nslookup my-service.default
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      my-service.default
Address 1: 10.0.0.133 my-service.default.svc.cluster.local
# Appending .<namespace> to the Service NAME resolves Services across namespaces
```

5. Ingress

5.1 Ways to Expose Services in K8S

Kubernetes mainly exposes services to the outside via NodePort, LoadBalancer, or Ingress. The previous chapter covered the first two; this chapter covers Ingress, which routes requests by domain name.

5.2 Deploying the Ingress Controller

(figure: Ingress Controller routing domains to Services)

The Ingress Controller associates domain names with services, exposing the services for external access.

The Ingress Controller manifest can be downloaded from:

https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

```bash
[root@ZhangSiming ~]# mkdir ingress
[root@ZhangSiming ~]# cd ingress/
[root@ZhangSiming ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
--2019-03-21 18:44:09-- https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5976 (5.8K) [text/plain]
Saving to: mandatory.yaml
100%[======================================>] 5,976 --.-K/s in 0s
2019-03-21 18:44:10 (38.5 MB/s) - mandatory.yaml saved [5976/5976]
[root@ZhangSiming ingress]# ls
mandatory.yaml
# Inspect the Ingress Controller YAML
[root@ZhangSiming ingress]# cat mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # Use the host network: on whichever Node the controller lands, ports 80
      # and 443 are bound locally, so they must be free there or it will not start
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: lizhenliang/nginx-ingress-controller:0.20.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          # www-data -> 33
          runAsUser: 33
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBERNETES_MASTER
          value: http://192.168.17.130:8080
          # this line was added for this cluster; without it the controller errors out
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
---
# Deploy the Ingress Controller
[root@ZhangSiming ingress]# kubectl create -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
[root@ZhangSiming ingress]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7d8dc989d6-qzcll   1/1     Running   0          4m52s
# Up and running
```

5.3 Writing Ingress Rules

With the Ingress Controller deployed, the cluster is able to serve Ingress traffic. But which domains map to which services is up to us: we have to define Ingress rules ourselves.

```bash
[root@ZhangSiming ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        21d
my-service   NodePort    10.0.0.21    <none>        80:30001/TCP   9s
[root@ZhangSiming ingress]# vim ingress.yaml
[root@ZhangSiming ingress]# cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.foo.com
    # the domain name
    http:
      paths:
      - backend:
          serviceName: my-service
          # bind the NodePort Service
          servicePort: 80
          # use the Service's in-cluster port
[root@ZhangSiming ingress]# kubectl create -f ingress.yaml
ingress.extensions/example-ingress created
[root@ZhangSiming ingress]# kubectl get ingress
NAME              HOSTS             ADDRESS   PORTS   AGE
example-ingress   example.foo.com             80      12s
```

(figures: browser access to example.foo.com routed through the Ingress)
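Since example.foo.com has no public DNS record, testing from a client requires pointing the name at a Node running the controller (the controller uses hostNetwork, so it listens on the Node's port 80). For example, using the Node IP from these notes:

```bash
# map the test domain to the Node in /etc/hosts
echo "192.168.17.132 example.foo.com" >> /etc/hosts
curl -I http://example.foo.com

# or, without touching /etc/hosts, send the Host header directly
curl -I -H "Host: example.foo.com" http://192.168.17.132
```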

5.4 Ingress with HTTP and HTTPS
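The HTTPS walkthrough is missing from the original notes. As a minimal sketch, TLS is enabled by creating a TLS Secret from a certificate/key pair and referencing it from the Ingress; the certificate files and the secret name below are assumptions, not from the original:

```bash
# assumes tls.crt/tls.key were issued for example.foo.com
kubectl create secret tls example-foo-com-tls --cert=tls.crt --key=tls.key
```

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress-tls
spec:
  tls:
  - hosts:
    - example.foo.com
    secretName: example-foo-com-tls   # hypothetical secret created above
  rules:
  - host: example.foo.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
```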
