k8s ☞ 19 Logging System: EFK


Preface

kubernetes/cluster/addons/fluentd-elasticsearch at master · kubernetes/kubernetes (github.com)

EFK consists of three components:

  1. fluentd, the collector: gathers the logs
  2. elasticsearch, the search engine: stores and processes the logs
  3. kibana, the frontend: displays the logs

Data flow:

Pod logs ☞ fluentd ☞ Elasticsearch ☞ Kibana

Architecture

Cluster mode: logs are collected by a node-level collector, one agent per node.

The architecture diagram is shown below:

(architecture diagram: one fluentd collector per node shipping Pod logs to Elasticsearch, visualized in Kibana)
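
Concretely, fluentd runs as a DaemonSet so that every node gets exactly one collector pod, while Elasticsearch runs as a StatefulSet behind a Service. Once the stack is installed (steps below), a quick sketch of how to confirm that layout, assuming the namespaces and release names used later in this post:

kubectl get daemonset -A | grep -i fluentd
kubectl get pods -A -o wide | grep -Ei 'fluentd|elasticsearch|kibana'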

Installation steps

We install everything through the helm package manager. The charts are available on Artifact Hub.

⚠️ Note that the elasticsearch and kibana versions must be kept in sync.

elasticsearch

https://artifacthub.io/packages/helm/elastic/elasticsearch

Download the chart package locally and extract it:

mkdir kube-efk && cd kube-efk
helm repo add elastic https://helm.elastic.co
helm fetch elastic/elasticsearch
tar xf elasticsearch*.tgz && cd elasticsearch
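
If you only want the default values.yaml rather than the whole chart source, helm can also print it directly (Helm 3 syntax; the version matches the one installed below):

helm show values elastic/elasticsearch --version 7.16.2 > values.yaml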

Edit values.yaml to customize the configuration.

https://artifacthub.io/packages/helm/elastic/elasticsearch#configuration

For a description of the node roles, see: Node | Elasticsearch Guide | Elastic

roles:
  master: "true"
  ingest: "true"
  data: "true"
  remote_cluster_client: "true"
volumeClaimTemplate:
  storageClassName: nfs-client
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi

persistence:
  enabled: true
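
The nfs-client storage class above is specific to my cluster. Whatever class you reference must already exist, otherwise the PVCs will stay Pending; a quick check:

kubectl get storageclass
kubectl get sc nfs-client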

Under normal circumstances you should not let Elasticsearch be scheduled onto the master nodes. If you do need to allow that, add a toleration for the master taint:

tolerations:
  - key: "role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"

The JVM options (esJavaOpts) are empty by default:

esJavaOpts: "" # example: "-Xmx1g -Xms1g"

Resource requests and limits for the Elasticsearch pods:

resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

Create the namespace and install the chart:

kubectl create ns efk
helm install elasticsearch elastic/elasticsearch --version 7.16.2 --namespace efk -f values.yaml
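
It can take a few minutes for the Elasticsearch pods to become ready. A sketch of how to watch progress and check cluster health, assuming the chart's default elasticsearch-master service name:

kubectl get pods -n efk -w
kubectl get svc -n efk

# port-forward and query the health endpoint from your workstation
kubectl -n efk port-forward svc/elasticsearch-master 9200:9200 &
curl -s 'localhost:9200/_cluster/health?pretty'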

fluentd-elasticsearch

Download the chart package locally and extract it:

cd kube-efk
helm repo add kokuwa https://kokuwaio.github.io/helm-charts
helm fetch kokuwa/fluentd-elasticsearch
tar xf fluentd-elasticsearch*.tgz && cd fluentd-elasticsearch

Edit values.yaml to customize the configuration.

Change the container data directory (if yours is not at the default path):

hostLogDir:
  varLog: /var/log
  #dockerContainers: /var/lib/docker/containers
  dockerContainers: /export/docker-data-root/containers
  libSystemdDir: /usr/lib64
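
If you are not sure where your runtime writes container log files, check on a node first. The command below assumes docker as the runtime; on containerd clusters the logs live under /var/log/pods and /var/log/containers instead:

docker info --format '{{ .DockerRootDir }}'
ls /var/log/containers | head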

Set the elasticsearch address:

elasticsearch:
  auth:
    enabled: false
    user: null
    password: null
    existingSecret:
      name: null
      key: null
  includeTagKey: true
  setOutputHostEnvVar: true
  # If setOutputHostEnvVar is false this value is ignored
  #hosts: ["elasticsearch-client:9200"]
  hosts: ["elasticsearch-master.efk.svc.cluster.local:9200"]

elasticsearch-master is the name of the service created once elasticsearch is up; the FQDN above includes the efk namespace it was installed into.
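
Before installing fluentd you can double-check the service name and that the FQDN resolves from inside the cluster (the busybox test pod here is just a throwaway):

kubectl get svc -n efk elasticsearch-master
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup elasticsearch-master.efk.svc.cluster.local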

Add a toleration for the master nodes, so that logs from the masters are collected as well:

tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

Start the installation:

helm install myflu kokuwa/fluentd-elasticsearch -f values.yaml
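
Then confirm that the DaemonSet has a running pod on every node, including the masters. The daemonset name below is an assumption based on the myflu release name and the chart's naming convention:

kubectl get daemonset
kubectl get pods -o wide | grep fluentd
kubectl logs daemonset/myflu-fluentd-elasticsearch --tail=20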

Verify elasticsearch

Confirm that elasticsearch is receiving the data sent by fluentd-elasticsearch.

# start a temporary pod that has curl available
kubectl run cirros-$RANDOM --image=cirros --rm -it -- /bin/sh

# inside the pod, list the indices that fluentd has created
curl elasticsearch-master.efk.svc.cluster.local:9200/_cat/indices
===
green open logstash-2021.07.03             YeQF97-WQZKvg6lieiXxRw 1 1  31045      0   6.7mb  3.4mb
green open logstash-2021.07.02             qQAZg0rnSAW8FWxrNgwWhQ 1 1  42034      0   7.4mb  3.6mb
green open logstash-2021.07.01             lJh5YdtySqukuITYrCmWcQ 1 1  30655      0   6.5mb  3.5mb
green open logstash-2021.06.30             0vw7wc2tT9aNZJfXU2KLtw 1 1  27922      0   5.9mb  3.2mb
green open logstash-2021.06.29             TiUBVT_MTC-mdP_fRbHoTQ 1 1  27914      0   5.9mb    3mb
green open logstash-2021.06.28             54MQznz6Sr27Xn8JjSclsg 1 1  27921      0   5.9mb  3.1mb
green open logstash-2021.06.27             VYcSLdq7QVO-p1rFdoZYRg 1 1  27920      0     6mb  3.1mb
green open logstash-2021.06.26             C4HUoKhoTf67vcZJXW4H1A 1 1  44168      0   9.3mb  4.7mb
green open logstash-2021.06.25             2TLEHI3TSiyEEALSWH3j3g 1 1  24310      0   5.4mb  2.8mb
green open logstash-2021.06.24             Q8kF51zITB-G4xLGjveXEg 1 1  27338      0   6.1mb  2.9mb
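
To see what an individual log document looks like (useful later when picking fields in Kibana), you can pull a single record from the same cirros pod:

curl -s 'elasticsearch-master.efk.svc.cluster.local:9200/logstash-*/_search?size=1&pretty'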

kibana

Download the chart package locally and extract it:

cd kube-efk
helm fetch elastic/kibana
tar xf kibana*.tgz && cd kibana

Edit values.yaml to customize the configuration.

Set the elasticsearch address. In practice there is nothing to change here: this chart and the elasticsearch chart are both published by Elastic, so the default value is already correct.

elasticsearchHosts: "http://elasticsearch-master:9200" 

Change the kibana service type so that it can be reached from outside the cluster:

service:
  type: LoadBalancer
  loadBalancerIP: ""
  port: 5601
  nodePort: ""
  labels: {}
  annotations:
    {}
    # cloud.google.com/load-balancer-type: "Internal"
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    #service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
    # service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"

Adjust this for your environment. My cluster runs the MetalLB component to provide LoadBalancer addresses, so no annotations are needed here.
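
Install the chart into the same efk namespace, keeping the version in sync with elasticsearch as noted at the top (a sketch following the same pattern as the elasticsearch install), then look up the EXTERNAL-IP that MetalLB assigned; kibana listens on port 5601:

helm install kibana elastic/kibana --version 7.16.2 --namespace efk -f values.yaml
kubectl get svc -n efk | grep kibana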

Configure kibana

Add an index pattern

Path: Stack Management -> Index patterns

Index pattern name: logstash*

Time field: @timestamp

Configure a visualization that displays k8s error logs.
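
Before building the visualization it helps to confirm which fields the documents actually carry. With this fluentd chart the container output is typically indexed with a log message field and a stream field (stdout/stderr), but the exact field names depend on your fluentd configuration, so treat the query below as a sketch; it counts stderr lines across the logstash indices:

curl -s 'elasticsearch-master.efk.svc.cluster.local:9200/logstash-*/_count?q=stream:stderr&pretty'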

(screenshots: Kibana visualization of k8s error logs)