
k8s practice 2: getting started with RBAC by fixing errors


1. While working with a k8s cluster I kept running into RBAC permission problems. I recorded a few of the errors, shown below:

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope" "message": "pservices is forbidden: User \"kubernetes\" cannot list resource \"pservices\" in API group \"\" at the cluster scope",

Error 2:

[root@k8s-master2 ~]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
[root@k8s-master2 ~]# curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}

Learning the RBAC fundamentals in some depth is well worth the effort.

2. Starting from the error analysis

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

First, the command session that produced this error:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/api/v1/pods --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

What does this error mean? Read literally: the user kubernetes has no permission in the core API group (the empty string "") and therefore cannot list the pods resource at the cluster scope. Fixing this error is our entry point into RBAC.
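Before digging in, a quick way to reproduce the same authorization decision without curl is kubectl auth can-i with user impersonation. This is a sketch: run it with an admin kubeconfig that is allowed to impersonate users, and note that "kubernetes" is simply the user name taken from the error message.

# Ask the apiserver whether user "kubernetes" may list pods cluster-wide
kubectl auth can-i list pods --as kubernetes --all-namespaces
# prints "no" while the permission is missing, "yes" once it is granted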

3. Where did User kubernetes come from? This user was generated when we deployed the apiserver; it is the user the apiserver uses to access etcd. Look up the permissions and group bindings of user kubernetes:

[root@k8s-master1 ~]# kubectl describe clusterrolebindings |grep -B 9 "User  kubernetes "
Name:         discover-base-url
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"discover-base-url","namespace":""},"roleR...
Role:
  Kind:  ClusterRole
  Name:  discover_base_url
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
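As an aside: with client-certificate authentication, the user name the apiserver sees is the Subject CN of the certificate. A quick way to confirm where User kubernetes comes from, assuming the certificate paths used in the curl commands above:

# the CN of this client certificate is the RBAC user name
openssl x509 -in /etc/kubernetes/cert/kubernetes.pem -noout -subject
# expect a subject containing CN=kubernetes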

The permissions:

[root@k8s-master1 ~]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/]                []              [get]
[root@k8s-master1 ~]#

## Note: this rule is the permission we added to the apiserver in the previous post.

[root@k8s-master1 ~]# kubectl describe clusterroles kube-apiserver
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

## One role grants access via Resources; the other via Non-Resource URLs.
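Side by side, the two rule types look like this in a ClusterRole manifest. This is an illustrative sketch only; the role name is hypothetical, while the rule contents mirror the two roles above:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rule-types-demo        # hypothetical name, for illustration only
rules:
# resource rule: matches REST resources in an API group,
# e.g. GET /api/v1/nodes/<name>/proxy
- apiGroups: [""]
  resources: ["nodes/proxy", "nodes/metrics"]
  verbs: ["get", "create"]
# non-resource rule: matches raw URL paths such as GET /;
# only meaningful in a ClusterRole bound via a ClusterRoleBinding
- nonResourceURLs: ["/"]
  verbs: ["get"]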

4. This raises question 1: what is a Non-Resource URL? I googled for a long time and found only scraps, so what follows is my own understanding. Look back at the information the apiserver returned in the previous post:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/ --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/autoscaling/v2beta2",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/coordination.k8s.io",
    "/apis/coordination.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger-ui/",
    "/swagger.json",
    "/swaggerapi",
    "/version"
  ]
}
[root@k8s-master1 ~]#

Is everything from /healthz onward a non-resource URL? Modify the clusterrole and test:

[root@k8s-master1 roles]# cat clusterroles1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
#  - /
  - /healthz/*
  verbs:
  - get
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles1.yaml
clusterrole.rbac.authorization.k8s.io "discover_base_url" configured
[root@k8s-master1 roles]# kubectl apply -f clusterrolebindings1.yaml
clusterrolebinding.rbac.authorization.k8s.io "discover-base-url" configured
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]

## The role now holds get permission on the non-resource URL /healthz/* only.
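The clusterrolebindings1.yaml applied above was not captured; based on the describe output in step 3, it presumably looks like this sketch:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discover-base-url
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discover_base_url
subjects:
# bind the role to the user from the certificate CN
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes

With the role and binding applied, test the other paths: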

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
[root@k8s-master1 roles]#

As you can see, apart from /healthz, which succeeds, everything else still fails. Modify the clusterrole once more and retest:
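The updated clusterroles1.yaml is not captured here; to produce the rule set shown below, it presumably adds /logs, /metrics and /version to the nonResourceURLs list (a sketch):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
# grant get on each non-resource path that failed above
- nonResourceURLs:
  - /healthz/*
  - /logs
  - /metrics
  - /version
  verbs:
  - get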

[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]
             [/logs]            []              [get]
             [/metrics]         []              [get]
             [/version]         []              [get]
[root@k8s-master1 roles]#

Re-running the commands that failed above now succeeds in every case. So non-resource URLs cover paths such as /healthz/*, /logs, /metrics, and so on.
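The successful calls were not captured in the session above; they presumably looked like this sketch, using one of the /healthz subpaths from the paths list (health-check endpoints normally just return ok):

curl https://192.168.32.127:8443/healthz/ping --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
# expected response: ok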

5. Question 2: how do you configure permissions on Resources?

Start with a command that fails:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "nodes \"proxy\" is forbidden: User \"kubernetes\" cannot get resource \"nodes\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 403
}
[root@k8s-master1 roles]#

Strange. Given the permissions we looked up earlier:

--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes

Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

In principle this request should succeed, so why the error? Set that aside for now; let's add some permissions and test.

Get the current definition of the kube-apiserver clusterrole:

[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes/proxy","nodes/metrics"],"verbs":["get","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "35075"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create
[root@k8s-master1 roles]#

Modify it:

[root@k8s-master1 roles]# cat clusterroles2.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-apiserver
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "nodes/metrics"]
  verbs: ["get", "list", "create"]
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles2.yaml
clusterrole.rbac.authorization.k8s.io "kube-apiserver" configured
[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes","nodes/proxy","nodes/metrics"],"verbs":["get","list","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "476880"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

Now query the nodes API again:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1 --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "k8s-master1",
    "selfLink": "/api/v1/nodes/k8s-master1",
    "uid": "46a353d3-3b07-11e9-95a3-000c29383c89",
    "resourceVersion": "477158",
    "creationTimestamp": "2019-02-28T03:16:44Z",
    "labels": {
      "beta.kubernetes.io/arch": "amd64",
      "beta.kubernetes.io/os": "linux",
      "kubernetes.io/hostname": "k8s-master1"
    },
    "annotations": {
      "node.alpha.kubernetes.io/ttl": "0",
      "volumes.kubernetes.io/controller-managed-attach-detach": "true"
    }
  },
  "spec": {
  },
  "status": {
    "capacity": {
      "cpu": "1",
      "ephemeral-storage": "17394Mi",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1867264Ki",
      "pods": "110"
    },
    "allocatable": {
      "cpu": "1",
      "ephemeral-storage": "16415037823",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1764864Ki",
      "pods": "110"
    },
    "conditions": [
      {
        "type": "OutOfDisk",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientDisk",
        "message": "kubelet has sufficient disk space available"
      },
      {
        "type": "MemoryPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientMemory",
        "message": "kubelet has sufficient memory available"
      },
      {
        "type": "DiskPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasNoDiskPressure",
        "message": "kubelet has no disk pressure"
      },
      {
        "type": "PIDPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-02-28T03:16:45Z",
        "reason": "KubeletHasSufficientPID",
        "message": "kubelet has sufficient PID available"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:31Z",
        "reason": "KubeletReady",
        "message": "kubelet is posting ready status"
      }
    ],
    "addresses": [
      {
        "type": "InternalIP",
        "address": "192.168.32.128"
      },
      {
        "type": "Hostname",
        "address": "k8s-master1"
      }
    ],
    "daemonEndpoints": {
      "kubeletEndpoint": {
        "Port": 10250
      }
    },
    "nodeInfo": {
      "machineID": "d1471d605c074c43bf44cd5581364aea",
      "systemUUID": "84F64D56-0428-2BBD-7F9E-26CE9C1D7023",
      "bootID": "c49804b6-0645-49d3-902f-e66b74fed805",
      "kernelVersion": "3.10.0-514.el7.x86_64",
      "osImage": "CentOS Linux 7 (Core)",
      "containerRuntimeVersion": "docker://17.3.1",
      "kubeletVersion": "v1.12.3",
      "kubeProxyVersion": "v1.12.3",
      "operatingSystem": "linux",
      "architecture": "amd64"
    },
    "images": [
      {
        "names": [
          "registry.access.redhat.com/rhel7/pod-infrastructure@sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931",
          "registry.access.redhat.com/rhel7/pod-infrastructure:latest"
        ],
        "sizeBytes": 208612920
      },
      {
        "names": [
          "tutum/dnsutils@sha256:d2244ad47219529f1003bd1513f5c99e71655353a3a63624ea9cb19f8393d5fe",
          "tutum/dnsutils:latest"
        ],
        "sizeBytes": 199896828
      },
      {
        "names": [
          "httpd@sha256:5e7992fcdaa214d5e88c4dfde274befe60d5d5b232717862856012bf5ce31086"
        ],
        "sizeBytes": 131692150
      },
      {
        "names": [
          "httpd@sha256:20ead958907f15b638177071afea60faa61d2b6747c216027b8679b5fa58794b",
          "httpd@sha256:e76e7e1d4d853249e9460577d335154877452937c303ba5abde69785e65723f2",
          "httpd:latest"
        ],
        "sizeBytes": 131679770
      }
    ]
  }
}
[root@k8s-master1 roles]#

The entire node object is returned.

6. Following on from the question above, compare the role before and after the change. Before:

rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create

After:

rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

The only change is adding the nodes resource to resources (plus the list verb). Permissions for other resources such as pods and services can be granted by following the same pattern, as in the sketch below. My understanding: only once you have access to a resource itself can you access its subresources.
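For example, a read-only role for pods and services in the same style; the role name here is hypothetical, and it would be bound to a user with a ClusterRoleBinding like the one shown earlier:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-services     # hypothetical name
rules:
# allow reading and listing pods and services in the core API group
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]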

7. Open questions

I ran into one more error:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "nodes \"proxy\" not found",
  "reason": "NotFound",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 404
}
[root@k8s-master1 roles]#

This looks like a case of the subresource not existing; I'll test it later. Note that the apiserver parses /api/v1/nodes/proxy as a get of a node literally named "proxy": a node's proxy subresource is addressed as /api/v1/nodes/<node-name>/proxy, which would explain both the earlier 403 (a get on the nodes resource, which the role did not yet allow) and this 404 (no node called "proxy" exists).
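A sketch of what that follow-up test might look like, assuming the same certificates and the existing node k8s-master1; anything after /proxy/ is forwarded to that node's kubelet:

# hypothetical follow-up, not from the original session:
# fetch the kubelet's /metrics via the nodes/proxy subresource
curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1/proxy/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem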
