Preface
controller-manager is a Kubernetes control plane component that rarely runs into trouble; in most cases, watching the generic process-level metrics is good enough. That said, controller-manager does expose plenty of white-box metrics on /metrics, so let's walk through them as well.
Black-Box Testing
Following the approach described in the previous article, "Kubernetes Monitoring Handbook 06 - Monitoring APIServer", let's first test from a black-box perspective whether controller-manager's /metrics endpoint is directly usable.
```
[root@tt-fc-dev01.nj manifests]# ss -tlnp|grep controller
LISTEN 0      128      *:10257      *:*      users:(("kube-controller",pid=2782446,fd=7))
[root@tt-fc-dev01.nj manifests]# curl -s http://localhost:10257/metrics
Client sent an HTTP request to an HTTPS server.
[root@tt-fc-dev01.nj manifests]# curl -k -s https://localhost:10257/metrics
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
```

So this endpoint requires authentication as well. Let's reuse the Token created in the previous article and see whether we can pull the data:
```
[root@tt-fc-dev01.nj yamls]# token=`kubectl get secret categraf-token-6whbs -n flashcat -o jsonpath={.data.token} | base64 -d`
[root@tt-fc-dev01.nj yamls]# curl -s -k -H "Authorization: Bearer $token" https://localhost:10257/metrics > cm.metrics
[root@tt-fc-dev01.nj yamls]# head -n 6 cm.metrics
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
[root@tt-fc-dev01.nj yamls]# cat cm.metrics | wc -l
10070
```

That works: the Token from the previous article can be reused.
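If you do not have that Token at hand, all it takes is a ServiceAccount that is allowed to GET the /metrics non-resource URL. Below is a minimal RBAC sketch; the categraf/flashcat names follow the previous article, while the ClusterRole and binding names are placeholders of my own choosing:

```yaml
# Minimal RBAC sketch: let a ServiceAccount GET non-resource URLs such as /metrics.
# ServiceAccount/namespace names follow the previous article; other names are arbitrary.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: categraf
  namespace: flashcat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: categraf-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: ServiceAccount
    name: categraf
    namespace: flashcat
```

Note that on Kubernetes 1.24+ a ServiceAccount no longer gets a token Secret automatically, so you may need `kubectl create token categraf -n flashcat` or an explicit Secret of type kubernetes.io/service-account-token to obtain a bearer token.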
Configuring Collection
We still use Prometheus agent mode to pull the data, keeping things as vanilla as possible; all we need is to add a controller-manager section. The modified prometheus-agent-configmap.yaml looks like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-agent-conf
  labels:
    name: prometheus-agent-conf
  namespace: flashcat
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'apiserver'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          insecure_skip_verify: true
        authorization:
          credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https
      - job_name: 'controller-manager'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          insecure_skip_verify: true
        authorization:
          credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: kube-system;kube-controller-manager;https
    remote_write:
      - url: 'http://10.206.0.16:19000/prometheus/v1/write'
```

Here I added a new scrape job named controller-manager. Kubernetes service discovery still uses the endpoints role, and there are three matching conditions (implemented with a keep action in relabel_configs):
- __meta_kubernetes_namespace: the endpoint's namespace must be kube-system
- __meta_kubernetes_service_name: the service name must be kube-controller-manager
- __meta_kubernetes_endpoint_port_name: the endpoint's port name must be https
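After changing the ConfigMap, the agent needs to pick up the new configuration before the job takes effect. The commands below are only a sketch: the Deployment name prometheus-agent and the web port 9090 are assumptions based on a typical agent-mode deployment, so substitute the names from your own manifests:

```bash
# Re-apply the ConfigMap and restart the agent so it reloads the new scrape job
# (the Deployment name "prometheus-agent" is an assumption; use whatever your manifest defines)
kubectl apply -f prometheus-agent-configmap.yaml
kubectl -n flashcat rollout restart deployment/prometheus-agent

# Check that the controller-manager target has been discovered and is healthy
kubectl -n flashcat port-forward deployment/prometheus-agent 9090:9090 &
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[] | select(.labels.job=="controller-manager") | {scrapeUrl, health, lastError}'
```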
If the scrape does not succeed, check whether this endpoint exists:
```
[work@tt-fc-dev01.nj yamls]$ kubectl get endpoints -n kube-system
NAME                      ENDPOINTS                                                             AGE
etcd                      10.206.0.16:2381                                                      126d
etcd-service              10.206.0.16:2379                                                      75d
etcd-service2             10.206.10.16:2379                                                     75d
kube-controller-manager   10.206.0.16:10257                                                     74d
kube-dns                  172.16.0.85:53,172.16.1.4:53,172.16.0.85:53 + 3 more...               324d
kube-scheduler            10.206.0.16:10259                                                     131d
kube-state-metrics        172.16.3.198:8081,172.16.3.198:8080                                   75d
kubelet                   10.206.0.11:10250,10.206.0.16:10250,10.206.0.17:10250 + 15 more...    315d
[work@tt-fc-dev01.nj yamls]$ kubectl get endpoints -n kube-system kube-controller-manager -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2022-09-15T09:43:21Z"
  creationTimestamp: "2022-09-15T09:43:21Z"
  labels:
    k8s-app: kube-controller-manager
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "112212043"
  uid: 52cfb383-6d2b-452e-9a1f-95c7a898a1b4
subsets:
- addresses:
  - ip: 10.206.0.16
    nodeName: 10.206.0.16
    targetRef:
      kind: Pod
      name: kube-controller-manager-10.206.0.16
      namespace: kube-system
      resourceVersion: "112211925"
      uid: d9515495-057c-4ea6-ad1f-28341498710f
  ports:
  - name: https
    port: 10257
    protocol: TCP
```

__meta_kubernetes_endpoint_port_name corresponds to the third line from the bottom of the output above. All of this exists in my environment. If your environment does not have the corresponding endpoint, you can create a Service manually; Kong Fei prepared one earlier at https://github.com/flashcatcloud/categraf/blob/main/k8s/controller-service.yaml, so just apply that controller-service.yaml. Also, if controller-manager was installed with kubeadm, remember to edit /etc/kubernetes/manifests/kube-controller-manager.yaml and adjust the controller-manager startup argument: --bind-address=0.0.0.0.
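For reference, such a Service is roughly shaped like the sketch below. It is not necessarily identical to the controller-service.yaml in the categraf repo, and the selector assumes kubeadm's default static-pod label component: kube-controller-manager, so check your pod labels first with `kubectl get pods -n kube-system --show-labels`:

```yaml
# Sketch of a headless Service that exposes controller-manager behind a port named "https".
# The selector assumes kubeadm's default static-pod label; verify it in your own cluster.
apiVersion: v1
kind: Service
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  clusterIP: None
  selector:
    component: kube-controller-manager
  ports:
    - name: https
      port: 10257
      targetPort: 10257
      protocol: TCP
```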
Dashboard
A dashboard for controller-manager is already available at https://github.com/flashcatcloud/categraf/blob/main/k8s/cm-dash.json and can be imported into Nightingale directly. If you think the dashboard could be improved, PRs are welcome.
Metrics
What the key controller-manager metrics mean was compiled by Kong Fei earlier; I have copied it over here:
```
# HELP rest_client_request_duration_seconds [ALPHA] Request latency in seconds. Broken down by verb and URL.
# TYPE rest_client_request_duration_seconds histogram
Latency distribution of requests to the apiserver, broken down by URL and verb

# HELP cronjob_controller_cronjob_job_creation_skew_duration_seconds [ALPHA] Time between when a cronjob is scheduled to be run, and when the corresponding job is created
# TYPE cronjob_controller_cronjob_job_creation_skew_duration_seconds histogram
Distribution of the time between a cronjob's scheduled run time and the creation of the corresponding job

# HELP leader_election_master_status [ALPHA] Gauge of if the reporting system is master of the relevant lease, 0 indicates backup, 1 indicates master. 'name' is the string used to identify the lease. Please make sure to group by name.
# TYPE leader_election_master_status gauge
Leader-election status of the controller: 0 means backup, 1 means master

# HELP node_collector_zone_health [ALPHA] Gauge measuring percentage of healthy nodes per zone.
# TYPE node_collector_zone_health gauge
Percentage of healthy nodes in each zone

# HELP node_collector_zone_size [ALPHA] Gauge measuring number of registered Nodes per zones.
# TYPE node_collector_zone_size gauge
Number of registered nodes in each zone

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
CPU usage (can also be read as CPU utilization)

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
Number of file descriptors opened by the controller

# HELP pv_collector_bound_pv_count [ALPHA] Gauge measuring number of persistent volume currently bound
# TYPE pv_collector_bound_pv_count gauge
Number of PVs currently bound

# HELP pv_collector_unbound_pvc_count [ALPHA] Gauge measuring number of persistent volume claim currently unbound
# TYPE pv_collector_unbound_pvc_count gauge
Number of PVCs currently unbound

# HELP pv_collector_bound_pvc_count [ALPHA] Gauge measuring number of persistent volume claim currently bound
# TYPE pv_collector_bound_pvc_count gauge
Number of PVCs currently bound

# HELP pv_collector_total_pv_count [ALPHA] Gauge measuring total number of persistent volumes
# TYPE pv_collector_total_pv_count gauge
Total number of PVs

# HELP workqueue_adds_total [ALPHA] Total number of adds handled by workqueue
# TYPE workqueue_adds_total counter
Total number of items accepted by each controller's workqueue; similar to the apiserver's workqueue_adds_total metric

# HELP workqueue_depth [ALPHA] Current depth of workqueue
# TYPE workqueue_depth gauge
Queue depth of each controller, i.e. the number of pending items in that controller; similar to the apiserver's workqueue_depth, and the smaller the better

# HELP workqueue_queue_duration_seconds [ALPHA] How long in seconds an item stays in workqueue before being requested.
# TYPE workqueue_queue_duration_seconds histogram
How long items wait in the queue, broken down by controller

# HELP workqueue_work_duration_seconds [ALPHA] How long in seconds processing an item from workqueue takes.
# TYPE workqueue_work_duration_seconds histogram
Time from dequeue to completion of processing, broken down by controller

# HELP workqueue_retries_total [ALPHA] Total number of retries handled by workqueue
# TYPE workqueue_retries_total counter
Number of times items re-entered the queue for retry

# HELP workqueue_longest_running_processor_seconds [ALPHA] How many seconds has the longest running processor for workqueue been running.
# TYPE workqueue_longest_running_processor_seconds gauge
Among the items currently being processed, the processing time of the longest-running one

# HELP endpoint_slice_controller_syncs [ALPHA] Number of EndpointSlice syncs
# TYPE endpoint_slice_controller_syncs counter
Number of EndpointSlice syncs (Kubernetes 1.20+)

# HELP get_token_fail_count [ALPHA] Counter of failed Token() requests to the alternate token source
# TYPE get_token_fail_count counter
Number of failed token requests

# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
Fraction of the controller's CPU time used by GC
```
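To turn these metrics into something actionable, here are a few illustrative recording and alerting rules built on the metrics above. The job="controller-manager" label matches the scrape config in this article, but the thresholds are assumptions to be tuned for your environment; and since the agent only forwards samples via remote_write, rules like these are evaluated on the receiving side (for example as Nightingale alert rules or in a full Prometheus), not by the agent itself:

```yaml
groups:
  - name: controller-manager-sketch
    rules:
      # P99 latency of controller-manager requests to the apiserver, by verb.
      - record: controller_manager:rest_client_request_duration_seconds:p99
        expr: histogram_quantile(0.99, sum(rate(rest_client_request_duration_seconds_bucket{job="controller-manager"}[5m])) by (verb, le))
      # No instance reports itself as leader: the controllers are not doing any work.
      - alert: ControllerManagerNoLeader
        expr: max(leader_election_master_status{job="controller-manager"}) == 0
        for: 5m
      # A workqueue keeps a large backlog: some controller cannot keep up or keeps failing.
      - alert: ControllerManagerWorkqueueBacklog
        expr: workqueue_depth{job="controller-manager"} > 100
        for: 15m
      # Items wait too long before being processed (P99 over 30s, illustrative threshold).
      - alert: ControllerManagerWorkqueueSlow
        expr: histogram_quantile(0.99, sum(rate(workqueue_queue_duration_seconds_bucket{job="controller-manager"}[5m])) by (name, le)) > 30
        for: 15m
```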
Related Articles

- Kubernetes Monitoring Handbook 01 - Overview
- Kubernetes Monitoring Handbook 02 - Host Monitoring Overview
- Kubernetes Monitoring Handbook 03 - Host Monitoring in Practice
- Kubernetes Monitoring Handbook 04 - Monitoring Kube-Proxy
- Kubernetes Monitoring Handbook 05 - Monitoring Kubelet
- Kubernetes Monitoring Handbook 06 - Monitoring APIServer
About the Author
This article was written by Qin Xiaohui, a partner at Flashcat. The content is the accumulated experience of the Flashcat technical team, edited and organized by the author. We will keep publishing articles on monitoring and stability assurance. Reposting is welcome; please credit the source and respect the work of the technical staff.
If you are interested in Nightingale, Categraf, Prometheus, and related technologies, you are welcome to join our WeChat group: contact me (picobyte) to be added, and discuss monitoring with the community.