k8s Practice 6: Getting Started with RBAC by Troubleshooting Errors

1.
While using a k8s cluster, you constantly run into all kinds of RBAC permission problems.
Here are a few of the errors I recorded:

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

"message": "pservices is forbidden: User \"kubernetes\" cannot list resource \"pservices\" in API group \"\" at the cluster scope",

Error 2:

[root@k8s-master2 ~]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}
curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}

So it is well worth learning the basics of RBAC in some depth.

2.
Start by analyzing the errors

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

First, the command that produced this error:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/api/v1/pods --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

What does this error mean?
Taken literally, the user kubernetes has no permission in API group "" (the core group) and therefore cannot list the pods resource.
Let's start our RBAC primer by fixing this error.
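
For reference, the direct fix for this particular error would be an RBAC rule that allows listing pods, bound to the user kubernetes. The following is only a minimal sketch of such a fix; the names view-pods and view-pods-binding are hypothetical and do not appear in the original article:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-pods              # hypothetical name, for illustration only
rules:
- apiGroups: [""]              # "" is the core API group named in the error message
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-pods-binding      # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-pods
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes

Applying a manifest like this with kubectl apply -f and re-running the curl above should return the pod list instead of a 403.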

3.
Where does the user kubernetes come from?
It is the user created when we deployed the apiserver (the kubernetes certificate, which the apiserver also uses to access etcd). Our curl requests authenticate with the kubernetes.pem client certificate, and the apiserver takes the username from that certificate's Common Name, CN=kubernetes.
Let's look up the permissions and bindings of the user kubernetes:

[root@k8s-master1 ~]# kubectl describe clusterrolebindings | grep -B 9 "User  kubernetes"
Name:         discover-base-url
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"discover-base-url","namespace":""},"roleR...
Role:
  Kind:  ClusterRole
  Name:  discover_base_url
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes

Its permissions:

[root@k8s-master1 ~]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/]                []              [get]
[root@k8s-master1 ~]#

## Note: this rule is the one added in the previous apiserver article.

[root@k8s-master1 ~]# kubectl describe clusterroles kube-apiserver
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

## One of these roles grants access to Resources.
## The other grants access to Non-Resource URLs.
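
Side by side, the two rule styles look like this in a ClusterRole manifest. This is only an illustrative sketch (the name demo-role is hypothetical): resource rules use apiGroups/resources/verbs, while non-resource rules use nonResourceURLs/verbs and can only appear in a ClusterRole, not in a namespaced Role.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: demo-role              # hypothetical name, for illustration only
rules:
# resource rule: API objects served under /api or /apis
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["get", "list"]
# non-resource rule: raw URL paths such as /healthz, /logs, /metrics
- nonResourceURLs: ["/healthz", "/healthz/*"]
  verbs: ["get"]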

4.
Question 1:
What is a Non-Resource URL?
I googled for quite a while and only found scattered fragments, so the following is my own understanding.
Look back at the information the apiserver returned in the previous article:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/ --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/autoscaling/v2beta2",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/coordination.k8s.io",
    "/apis/coordination.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger-ui/",
    "/swagger.json",
    "/swaggerapi",
    "/version"
  ]
}[root@k8s-master1 ~]#

Is everything from /healthz onward a Non-Resource URL? Modify the clusterrole and test:

[root@k8s-master1 roles]# cat clusterroles1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
#  - /
  - /healthz/*
  verbs:
  - get
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles1.yaml
clusterrole.rbac.authorization.k8s.io "discover_base_url" configured
[root@k8s-master1 roles]# kubectl apply -f clusterrolebindings1.yaml
clusterrolebinding.rbac.authorization.k8s.io "discover-base-url" configured
[root@k8s-master1 roles]#
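
clusterrolebindings1.yaml is not shown in the original post. Judging from the discover-base-url binding listed in section 3, it presumably looks roughly like this (a reconstruction, not the author's exact file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discover-base-url
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discover_base_url
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes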

[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]

## The role now only has get permission on the Non-Resource URLs under /healthz/*.

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#

As you can see, only the /healthz requests succeed; everything else still fails.
Modify the clusterrole again and retest (a sketch of the likely YAML follows, and the resulting role is shown right after):
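
The updated clusterroles1.yaml is not shown in the original post either; based on the describe output below, the rules were presumably extended roughly as follows (a reconstruction, not the author's exact file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
  - /healthz/*
  - /logs
  - /metrics
  - /version
  verbs:
  - get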

[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]
             [/logs]            []              [get]
             [/metrics]         []              [get]
             [/version]         []              [get]
[root@k8s-master1 roles]#

Re-running the commands that failed above now succeeds in every case.
So Non-Resource URLs cover paths such as /healthz/*, /logs, /metrics, and so on.

5.
Question 2:
How are permissions on Resources configured?

First, a command that fails:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" is forbidden: User \"kubernetes\" cannot get resource \"nodes\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 403
}[root@k8s-master1 roles]#

Strange. According to the permissions we looked up above:

--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes

Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

In theory this request should have succeeded, so why the error? Let's set that aside for now, add some permissions, and see what happens.

First dump the kube-apiserver ClusterRole as YAML:

[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes/proxy","nodes/metrics"],"verbs":["get","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "35075"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create
[root@k8s-master1 roles]#

Modify it:

[root@k8s-master1 roles]# cat clusterroles2.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-apiserver
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "nodes/metrics"]
  verbs: ["get", "list", "create"]
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles2.yaml
clusterrole.rbac.authorization.k8s.io "kube-apiserver" configured
[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes","nodes/proxy","nodes/metrics"],"verbs":["get","list","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "476880"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

Now issue a request that was previously forbidden, fetching a specific node:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1 --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "k8s-master1",
    "selfLink": "/api/v1/nodes/k8s-master1",
    "uid": "46a353d3-3b07-11e9-95a3-000c29383c89",
    "resourceVersion": "477158",
    "creationTimestamp": "2019-02-28T03:16:44Z",
    "labels": {
      "beta.kubernetes.io/arch": "amd64",
      "beta.kubernetes.io/os": "linux",
      "kubernetes.io/hostname": "k8s-master1"
    },
    "annotations": {
      "node.alpha.kubernetes.io/ttl": "0",
      "volumes.kubernetes.io/controller-managed-attach-detach": "true"
    }
  },
  "spec": {

  },
  "status": {
    "capacity": {
      "cpu": "1",
      "ephemeral-storage": "17394Mi",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1867264Ki",
      "pods": "110"
    },
    "allocatable": {
      "cpu": "1",
      "ephemeral-storage": "16415037823",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1764864Ki",
      "pods": "110"
    },
    "conditions": [
      {
        "type": "OutOfDisk",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientDisk",
        "message": "kubelet has sufficient disk space available"
      },
      {
        "type": "MemoryPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientMemory",
        "message": "kubelet has sufficient memory available"
      },
      {
        "type": "DiskPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasNoDiskPressure",
        "message": "kubelet has no disk pressure"
      },
      {
        "type": "PIDPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-02-28T03:16:45Z",
        "reason": "KubeletHasSufficientPID",
        "message": "kubelet has sufficient PID available"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:31Z",
        "reason": "KubeletReady",
        "message": "kubelet is posting ready status"
      }
    ],
    "addresses": [
      {
        "type": "InternalIP",
        "address": "192.168.32.128"
      },
      {
        "type": "Hostname",
        "address": "k8s-master1"
      }
    ],
    "daemonEndpoints": {
      "kubeletEndpoint": {
        "Port": 10250
      }
    },
    "nodeInfo": {
      "machineID": "d1471d605c074c43bf44cd5581364aea",
      "systemUUID": "84F64D56-0428-2BBD-7F9E-26CE9C1D7023",
      "bootID": "c49804b6-0645-49d3-902f-e66b74fed805",
      "kernelVersion": "3.10.0-514.el7.x86_64",
      "osImage": "CentOS Linux 7 (Core)",
      "containerRuntimeVersion": "docker://17.3.1",
      "kubeletVersion": "v1.12.3",
      "kubeProxyVersion": "v1.12.3",
      "operatingSystem": "linux",
      "architecture": "amd64"
    },
    "images": [
      {
        "names": [
          "registry.access.redhat.com/rhel7/pod-infrastructure@sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931",
          "registry.access.redhat.com/rhel7/pod-infrastructure:latest"
        ],
        "sizeBytes": 208612920
      },
      {
        "names": [
          "tutum/dnsutils@sha256:d2244ad47219529f1003bd1513f5c99e71655353a3a63624ea9cb19f8393d5fe",
          "tutum/dnsutils:latest"
        ],
        "sizeBytes": 199896828
      },
      {
        "names": [
          "httpd@sha256:5e7992fcdaa214d5e88c4dfde274befe60d5d5b232717862856012bf5ce31086"
        ],
        "sizeBytes": 131692150
      },
      {
        "names": [
          "httpd@sha256:20ead958907f15b638177071afea60faa61d2b6747c216027b8679b5fa58794b",
          "httpd@sha256:e76e7e1d4d853249e9460577d335154877452937c303ba5abde69785e65723f2",
          "httpd:latest"
        ],
        "sizeBytes": 131679770
      }
    ]
  }
}[root@k8s-master1 roles]#

The entire node object is returned.

6.
Following on from the question above, compare the rules before and after the change:
Before:

rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create

After:

rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

The change is that the nodes resource was added under resources (and the list verb under verbs). Permissions for other resources such as pods and services can be granted by following the same pattern; see the sketch after this paragraph.
My understanding is: only once you have permission on the resource itself can you access its subresources.
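
As a concrete sketch of extending the same pattern to other resources (the name view-core-resources is hypothetical, and the resource list is only an example):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-core-resources    # hypothetical name, for illustration only
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services"]
  verbs: ["get", "list"]

As with the nodes example, this role would still need a ClusterRoleBinding to User kubernetes (like the discover-base-url binding above) before the corresponding curl requests succeed.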

7.
An open question

I also ran into this error:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" not found",
  "reason": "NotFound",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 404
}[root@k8s-master1 roles]#

My guess is that the subresource has not been generated; I will test this again later.

Original article: https://blog.51cto.com/goome/2364702
