Quickly Setting Up EFK in a Kubernetes Environment

Published September 6, 2021
Last updated September 26, 2022
There are many ways to assemble a log pipeline for Kubernetes, from the choice of sidecar to log buffering to distribution and load balancing. Depending on machine specs and business requirements, the possible setups multiply quickly. In this guide we aim for the simplest possible EFK stack deployed on a single server.

System configuration

k3s version v1.21.4+k3s1 (3e250fdb)
go version go1.16.6
helm version version.BuildInfo{Version:"v3.6.3"}

Goal

Quickly set up a minimal Fluentd-Elasticsearch-Kibana stack that lets you browse the log streams of containers managed by Kubernetes.
 

Create the namespace

kubectl create namespace logging

Add the Helm chart repository

helm repo add elastic https://helm.elastic.co
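After adding the repository, refresh the local chart index and, optionally, confirm that the chart is visible:

helm repo update
helm search repo elastic/elasticsearch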

Configure Elasticsearch

# elasticsearch-override.yaml
# Shrink default JVM heap.
esJavaOpts: "-Xmx512m -Xms512m"

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
  limits:
    cpu: "2000m"

# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 1G
 
helm install -n logging --set replicas=1 elasticsearch elastic/elasticsearch -f elasticsearch-override.yaml
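Once the claim is bound and the elasticsearch-master pod is running, a quick reachability check (assuming the chart's default plain-HTTP setup) looks like:

kubectl port-forward -n logging svc/elasticsearch-master 9200:9200 &
curl "http://localhost:9200/_cluster/health?pretty"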
🚧
A matching PV needs to be provisioned for the PVC.
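On a stock K3s install the bundled local-path provisioner usually creates the volume on demand; if it does not, a static PV matching the claim template can be created by hand. A minimal sketch, where the PV name and host path are assumptions:

# elasticsearch-pv.yaml (hypothetical)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-pv          # assumed name
spec:
  capacity:
    storage: 1G                   # must cover the 1G request above
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  hostPath:
    path: /data/elasticsearch     # assumed node-local directory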
 
 

Configure Fluentd

helm repo add bitnami https://charts.bitnami.com/bitnami
Fluentd's input and output behaviour is configured through the ConfigMap manifests below. The output is changed because logs need to be shipped to Elasticsearch. The input is changed because K3s uses containerd as its runtime, so the container log format differs from the Docker runtime's and needs extra parsing configuration.
# fluentd-elasticsearch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-elasticsearch
  namespace: logging
data:
  fluentd.conf: |
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>

    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>

    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>

    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>

    # Send the logs to Elasticsearch
    <match **>
      @type elasticsearch
      include_tag_key true
      host elasticsearch-master
      port 9200
      logstash_format true

      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
# log-parser.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-parser
  namespace: logging
data:
  fluentd.conf: |
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>

    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      port 9880
    </source>

    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>

    # Get the logs from the containers running in the cluster.
    # This block parses the containerd (CRI) log format used by K3s.
    # Update this depending on your application log format.
    <source>
      @type tail
      @id in_tail_container_logs
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}"
      exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
      read_from_head true
      <parse>
        @type regexp
        expression /^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<flags>[^ ]+) (?<message>.*)$/
        time_format %Y-%m-%dT%H:%M:%S.%N%:z
      </parse>
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      kubernetes_url "#{'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
    </filter>

    # Forward all logs to the aggregators
    <match **>
      @type forward
      <server>
        host fluentd-headless
        port 24224
      </server>
      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
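Both ConfigMaps must exist in the logging namespace before the chart references them. Assuming the two manifests above are saved under the file names in their header comments:

kubectl apply -f fluentd-elasticsearch.yaml
kubectl apply -f log-parser.yaml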
 
helm install fluentd bitnami/fluentd -n logging --set aggregator.configMap=fluentd-elasticsearch --set forwarder.configMap=log-parser
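The chart runs the forwarder as a DaemonSet (tailing /var/log/containers on every node) and the aggregator as a StatefulSet. A quick way to confirm everything settled:

kubectl get daemonset,statefulset,pods -n logging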
 

Configure Kibana

This override exposes the service outside the cluster; skip it if you don't need that.
# kibana-nodeport.yaml
service:
  type: NodePort
  nodePort: "30000"
helm install kibana elastic/kibana -n logging -f kibana-nodeport.yaml
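To find the node's address and confirm the exposed port, then open http://<node-ip>:30000 in a browser:

kubectl get svc -n logging
kubectl get nodes -o wide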
 
Opening the page gives you:
[screenshot: Kibana web UI]