[Kubernetes] Checking Kubernetes Control Plane Component Logs

왕영주 2021. 5. 6. 15:55

In this post I want to raise the log level of the Kubernetes control plane components to around 4 and see how the components work together when a Pod is created, a Deployment is scaled out, and so on.

Each component is already described at the link below, but the goal here is to get a more hands-on understanding.
https://kubernetes.io/ko/docs/concepts/overview/components/

I will check the simple cases below. The log level of each component was raised only far enough to observe its behavior, so depending on the level some logs may be missing or not shown.
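For context, this test assumes a kubeadm-based cluster, where the control plane components run as static pods. In that layout the verbosity can be raised by adding the --v flag to the manifests under /etc/kubernetes/manifests; the kubelet notices the file change and recreates the pod on its own. A minimal sketch for the scheduler (the same flag works for kube-apiserver and kube-controller-manager):

root@yjwang0-k8s-01:~# vi /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --v=4   # klog verbosity: 0 is quietest; 4 adds the per-request detail shown below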

When Creating a Pod


root@yjwang0-k8s-01:~# kubectl run --image=nginx pod-test-1
pod/pod-test-1 created

The request is first authenticated by the API server, which handles all communication. In its logs you can see entries confirming that the new pod's data has been stored in etcd.

API-Server

{"log":"I0506 06:18:42.296100       1 trace.go:205] Trace[2022985978]: \"GuaranteedUpdate etcd3\" type:*core.Node (06-May-2021 06:18:41.771) (total time: 524ms):\n","stream":"stderr","time":"2021-05-06T06:18:42.296415757Z"}
{"log":"Trace[2022985978]: ---\"Transaction committed\" 520ms (06:18:00.295)\n","stream":"stderr","time":"2021-05-06T06:18:42.296459832Z"}
{"log":"Trace[2022985978]: [524.536436ms] [524.536436ms] END\n","stream":"stderr","time":"2021-05-06T06:18:42.296472466Z"}
{"log":"I0506 06:18:42.298009       1 trace.go:205] Trace[1013134547]: \"Patch\" url:/api/v1/nodes/yjwang0-k8s-01/status,user-agent:kubelet/v1.20.6 (linux/amd64) kubernetes/8a62859,client:127.0.0.1 (06-May-2021 06:18:41.771) (total time: 526ms):\n","stream":"stderr","time":"2021-05-06T06:18:42.298213802Z"}
{"log":"Trace[1013134547]: ---\"Object stored in database\" 521ms (06:18:00.296)\n","stream":"stderr","time":"2021-05-06T06:18:42.298255081Z"}
{"log":"Trace[1013134547]: [526.65917ms] [526.65917ms] END\n","stream":"stderr","time":"2021-05-06T06:18:42.298267248Z"}

Looking next at the scheduler, you can see it pick up the new pod as unscheduled and assign it to a worker node.

Kube-Scheduler

{"log":"I0506 06:18:41.931722       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"142.474µs\" userAgent=\"kube-probe/1.20\" srcIP=\"10.99.98.10:34928\" resp=200\n","stream":"stderr","time":"2021-05-06T06:18:41.932067003Z"}
{"log":"I0506 06:18:43.739651       1 eventhandlers.go:172] add event for unscheduled pod default/pod-test-1\n","stream":"stderr","time":"2021-05-06T06:18:43.749024592Z"}
{"log":"I0506 06:18:43.739885       1 scheduling_queue.go:812] About to try and schedule pod default/pod-test-1\n","stream":"stderr","time":"2021-05-06T06:18:43.749076972Z"}
{"log":"I0506 06:18:43.739948       1 scheduler.go:459] Attempting to schedule pod: default/pod-test-1\n","stream":"stderr","time":"2021-05-06T06:18:43.749090266Z"}
{"log":"I0506 06:18:43.741613       1 default_binder.go:51] Attempting to bind default/pod-test-1 to yjwang0-k8s-02\n","stream":"stderr","time":"2021-05-06T06:18:43.74910175Z"}
{"log":"I0506 06:18:43.763613       1 eventhandlers.go:229] add event for scheduled pod default/pod-test-1 \n","stream":"stderr","time":"2021-05-06T06:18:43.763854709Z"}
{"log":"I0506 06:18:43.763639       1 eventhandlers.go:209] delete event for unscheduled pod default/pod-test-1\n","stream":"stderr","time":"2021-05-06T06:18:43.763920229Z"}
{"log":"I0506 06:18:43.764136       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/pod-test-1\" node=\"yjwang0-k8s-02\" evaluatedNodes=2 feasibleNodes=1\n","stream":"stderr","time":"2021-05-06T06:18:43.764280208Z"}

Even this simple log check hints at which component to inspect when a failure occurs (see the commands after this list):

  • API-Server : kubectl commands get no response, and data is not stored in etcd
  • Kube-Scheduler : a pod is created but never assigned to a worker node
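When those symptoms show up, the pod's events are usually the fastest pointer to the failing stage; two commands that are often enough for a first look:

root@yjwang0-k8s-01:~# kubectl describe pod pod-test-1   # the Events: section shows the scheduling result
root@yjwang0-k8s-01:~# kubectl get events --sort-by=.metadata.creationTimestamp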

When Creating a Deployment


root@yjwang0-k8s-01:~# kubectl create deployment --image=nginx deployment-test-2
deployment.apps/deployment-test-2 created

When a Deployment is created, the controller-manager logs show that a ReplicaSet is created, that a scale-up from 0 to 1 replicas is needed, and that a pod therefore has to be created. The pod is then created:

Controller-Manager

{"log":"I0506 06:34:22.615310       1 event.go:291] \"Event occurred\" object=\"default/deployment-test-2\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-test-2-864fbc8987 to 1\"\n","stream":"stderr","time":"2021-05-06T06:34:22.619313238Z"}
{"log":"I0506 06:34:22.681542       1 event.go:291] \"Event occurred\" object=\"default/deployment-test-2-864fbc8987\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-test-2-864fbc8987-l45c4\"\n","stream":"stderr","time":"2021-05-06T06:34:22.681732667Z"}
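To confirm the Deployment → ReplicaSet → Pod chain the controller-manager just created, the owned objects can be listed together; kubectl create deployment labels them app=<name> by default:

root@yjwang0-k8s-01:~# kubectl get deployment,replicaset,pod -l app=deployment-test-2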

As before, kube-scheduler picks up the unscheduled pod and binds it to a worker node.

kube-scheduler

{"log":"I0506 06:34:22.676099       1 eventhandlers.go:172] add event for unscheduled pod default/deployment-test-2-864fbc8987-l45c4\n","stream":"stderr","time":"2021-05-06T06:34:22.676378128Z"}
{"log":"I0506 06:34:22.676268       1 scheduling_queue.go:812] About to try and schedule pod default/deployment-test-2-864fbc8987-l45c4\n","stream":"stderr","time":"2021-05-06T06:34:22.676839949Z"}
{"log":"I0506 06:34:22.676335       1 scheduler.go:459] Attempting to schedule pod: default/deployment-test-2-864fbc8987-l45c4\n","stream":"stderr","time":"2021-05-06T06:34:22.676864693Z"}
{"log":"I0506 06:34:22.678067       1 default_binder.go:51] Attempting to bind default/deployment-test-2-864fbc8987-l45c4 to yjwang0-k8s-02\n","stream":"stderr","time":"2021-05-06T06:34:22.678203083Z"}
{"log":"I0506 06:34:22.690352       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/deployment-test-2-864fbc8987-l45c4\" node=\"yjwang0-k8s-02\" evaluatedNodes=2 feasibleNodes=1\n","stream":"stderr","time":"2021-05-06T06:34:22.690546346Z"}
{"log":"I0506 06:34:22.695250       1 eventhandlers.go:209] delete event for unscheduled pod default/deployment-test-2-864fbc8987-l45c4\n","stream":"stderr","time":"2021-05-06T06:34:22.695447043Z"}
{"log":"I0506 06:34:22.695759       1 eventhandlers.go:229] add event for scheduled pod default/deployment-test-2-864fbc8987-l45c4 \n","stream":"stderr","time":"2021-05-06T06:34:22.695874134Z"}
{"log":"I0506 06:34:27.364568       1 reflector.go:530] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Watch close - *v1.ConfigMap total 0 items received\n","stream":"stderr","time":"2021-05-06T06:34:27.365425287Z"}
{"log":"I0506 06:34:27.365238       1 reflector.go:530] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Watch close - *v1.ConfigMap total 0 items received\n","stream":"stderr","time":"2021-05-06T06:34:27.365542112Z"}
{"log":"I0506 06:34:27.365777       1 reflector.go:530] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Watch close - *v1.ConfigMap total 0 items received\n","stream":"stderr","time":"2021-05-06T06:34:27.366040244Z"}

  • kube-controller-manager : the Deployment does not scale up

When Scaling Out a Deployment

Scaling out follows exactly the same sequence as above; a quick sketch below.
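The controller-manager detects the changed replica count and creates the extra pods, and kube-scheduler binds each of them just as shown earlier:

root@yjwang0-k8s-01:~# kubectl scale deployment deployment-test-2 --replicas=3
deployment.apps/deployment-test-2 scaled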

When Creating a Service

According to the documentation, the controller-manager should log the creation of the Endpoints that link the Service to its pods.
Let's check it directly.

First, expose the Deployment created above as a Service.

root@yjwang0-k8s-01:~# kubectl expose deployment deployment-test-2 --port=80 --target-port=80 --name test-service2
service/test-service2 exposed

Checking the controller-manager, you can see the Endpoints being created by the endpoints controller.

Controller-Manager

{"log":"I0506 06:49:28.939987       1 endpoints_controller.go:541] Update endpoints for default/test-service2, ready: 1 not ready: 0\n","stream":"stderr","time":"2021-05-06T06:49:28.948593454Z"}
{"log":"I0506 06:49:28.970174       1 endpointslicemirroring_controller.go:273] syncEndpoints(\"default/test-service2\")\n","stream":"stderr","time":"2021-05-06T06:49:28.970685682Z"}
{"log":"I0506 06:49:28.970298       1 endpointslicemirroring_controller.go:270] Finished syncing EndpointSlices for \"default/test-service2\" Endpoints. (124.913µs)\n","stream":"stderr","time":"2021-05-06T06:49:28.970724303Z"}
{"log":"I0506 06:49:28.970597       1 endpoints_controller.go:376] Finished syncing service \"default/test-service2\" endpoints. (30.860342ms)\n","stream":"stderr","time":"2021-05-06T06:49:28.970733503Z"}
{"log":"I0506 06:49:28.977769       1 endpointslice_controller.go:297] Finished syncing service \"default/test-service2\" endpoint slices. (38.018966ms)\n","stream":"stderr","time":"2021-05-06T06:49:28.977933924Z"}

Closing Thoughts


Raising the log level of the Kubernetes components produces so many logs that reading them is not easy.
Even verifying simple operations like the ones above meant changing the log level several times, and picking out the logs I wanted was a chore.

How often does anyone really need to read component logs, you might ask, but if the need ever comes up, it is going to be hard on the eyes..
