Kubernetes Logging: Collecting Logs with ilogtail

Deploying ilogtail + Kafka + ES for log collection.

Motivation

Our development environment runs a Kubernetes 1.28 cluster. I used to rely on log-pilot for log collection, but it has trouble collecting logs under the newer containerd runtime. After some digging I found that Alibaba Cloud has open-sourced ilogtail, which works roughly as well, so I switched to ilogtail + Kafka + ES for log collection.

Deployment

The YAML manifests below are based on the official docs: https://ilogtail.gitbook.io/ilogtail-docs/installation/start-with-k8s

The containerd-related settings can be customized via environment variables. Sensible defaults are provided, but if you have heavily customized your containerd configuration, it is worth double-checking that these values still match.
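For reference, the environment variables in the DaemonSet below correspond to containerd's stock defaults; a typical /etc/containerd/config.toml contains the following (verify against your own file if you have changed anything):

```toml
# Stock containerd defaults; if you have changed these, set
# CONTAINERD_STATE_DIR and CONTAINERD_SOCK_PATH in the DaemonSet to match.
state = "/run/containerd"

[grpc]
  address = "/run/containerd/containerd.sock"
```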

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: logtail-ds
  name: ilogtail-ds
  namespace: ilogtail
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: logtail-ds
  template:
    metadata:
      labels:
        k8s-app: logtail-ds
    spec:
      containers:
        - env:
            - name: ALIYUN_LOG_ENV_TAGS
              value: _node_name_|_node_ip_
            - name: _node_name_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: _node_ip_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: cpu_usage_limit
              value: '1'
            - name: mem_usage_limit
              value: '512'
            - name: USE_CONTAINERD
              value: 'true'
            - name: CONTAINERD_SOCK_PATH
              value: /run/containerd/containerd.sock
            - name: CONTAINERD_STATE_DIR
              value: /run/containerd
          image: >-
            sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:2.0.4            
          imagePullPolicy: IfNotPresent
          name: logtail
          resources:
            limits:
              cpu: '1'
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 384Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/run
              name: run
            - mountPath: /logtail_host
              mountPropagation: HostToContainer
              name: root
              readOnly: true
            - mountPath: /usr/local/ilogtail/checkpoint
              name: checkpoint
            - mountPath: /usr/local/ilogtail/config/local
              name: user-config
              readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      volumes:
        - hostPath:
            path: /var/run
            type: Directory
          name: run
        - hostPath:
            path: /
            type: Directory
          name: root
        - hostPath:
            path: /var/lib/ilogtail-ilogtail-ds/checkpoint
            type: DirectoryOrCreate
          name: checkpoint
        - configMap:
            defaultMode: 420
            name: ilogtail-user-cm
          name: user-config
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate

The collection rule file.

apiVersion: v1
data:
  regex_log.yaml: |
    enable: true
    inputs:
      - Type: input_file
        FilePaths: 
          - /logWork/**/*.log
        MaxDirSearchDepth: 10
        EnableContainerDiscovery: true
        ContainerFilters:
          K8sNamespaceRegex: dev
          K8sPodRegex: ^(logs-.*)$
    flushers:
      - Type: flusher_kafka_v2
        Brokers:
          - 192.168.2.238:9092
          - 192.168.2.239:9092
          - 192.168.2.240:9092
        Topic: your-logs-topic
      - Type: flusher_stdout
        OnlyStdout: true
        Tags: true    
kind: ConfigMap
metadata:
  name: ilogtail-user-cm
  namespace: ilogtail
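The `/logWork/**/*.log` pattern matches .log files at any depth under /logWork, bounded by MaxDirSearchDepth. As a rough illustration of what the `**` wildcard expands to, here is the analogous recursive glob in Python, run against a made-up directory tree:

```python
import glob
import os
import tempfile

# Build a small hypothetical tree mimicking what /logWork/**/*.log
# is expected to match at different depths.
root = tempfile.mkdtemp()
for rel in ["app.log", "svc-a/app.log", "svc-a/2024/app.log"]:
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

# With recursive=True, '**' matches zero or more directory levels,
# similar to ilogtail's wildcard combined with MaxDirSearchDepth.
matches = sorted(glob.glob(os.path.join(root, "**", "*.log"), recursive=True))
print(len(matches))  # all three files, at depths 0, 1 and 2
```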

Simple multiline log collection, with field extraction.

apiVersion: v1
data:
  regex_log.yaml: |
    enable: true
    inputs:
      - Type: input_file
        FilePaths: 
          - /logWork/**/*.log
        Multiline:
          StartPattern: \d+-\d+-\d+\s\d+:\d+:\d+.\d+\s\S+\s.*
        MaxDirSearchDepth: 10
        EnableContainerDiscovery: true
        ContainerFilters:
          K8sNamespaceRegex: dev
          K8sPodRegex: ^(logs-.*)$
    processors:
      - Type: processor_parse_regex_native
        SourceKey: content
        Keys:
          - time
          - level
          - msg
        Regex: (\d+-\d+-\d+\s\d+:\d+:\d+.\d+)\s(\S+)\s(.*)
    flushers:
      - Type: flusher_kafka_v2
        Brokers:
          - 192.168.2.238:9092
          - 192.168.2.239:9092
          - 192.168.2.240:9092
        Topic: your-logs-topic
      - Type: flusher_stdout
        OnlyStdout: true
        Tags: true    
kind: ConfigMap
metadata:
  name: ilogtail-user-cm
  namespace: ilogtail
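Before deploying, it is worth sanity-checking the StartPattern and the extraction Regex against a sample line. The log line below is made up to fit the "time level message" layout the config assumes; substitute your own format:

```python
import re

# The same expressions as in the config above.
start_pattern = re.compile(r"\d+-\d+-\d+\s\d+:\d+:\d+.\d+\s\S+\s.*")
parse_regex = re.compile(r"(\d+-\d+-\d+\s\d+:\d+:\d+.\d+)\s(\S+)\s(.*)")

# Hypothetical log line in the expected "time level message" layout.
line = "2024-05-20 10:15:30.123 INFO service started on port 8080"

assert start_pattern.match(line)  # recognized as the start of a new entry

m = parse_regex.match(line)
time, level, msg = m.groups()
print(time, level, msg, sep=" | ")
```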

A few things to note

  • If you use wildcard matching across multiple directory levels, you must set MaxDirSearchDepth, otherwise nothing deeper gets collected.

  • Use flusher_kafka_v2 to output to Kafka.
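The Elasticsearch side of the pipeline is not shown above. A minimal consumer that moves messages from the Kafka topic into ES might look like the sketch below; note that the payload shape (contents/tags keys), the client libraries (kafka-python, elasticsearch), and the addresses and index name are all assumptions, so check them against the JSON your flusher actually produces:

```python
import json

def to_es_doc(raw: bytes) -> dict:
    """Turn one Kafka message from ilogtail into an ES document.

    Assumes the flusher emits JSON with 'contents' (parsed fields)
    and 'tags' (node name/IP etc.); adjust to your actual payload.
    """
    event = json.loads(raw)
    doc = dict(event.get("contents", {}))
    doc["tags"] = event.get("tags", {})
    return doc

# Hypothetical wiring (requires kafka-python and elasticsearch):
# from kafka import KafkaConsumer
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://your-es-host:9200")
# consumer = KafkaConsumer("your-logs-topic",
#                          bootstrap_servers=["192.168.2.238:9092"])
# for msg in consumer:
#     es.index(index="k8s-logs", document=to_es_doc(msg.value))

sample = b'{"contents":{"level":"INFO","msg":"started"},"tags":{"_node_name_":"node1"}}'
print(to_es_doc(sample))
```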

Wrapping up

What we run today is still just simple single-line log collection; we will roll out multiline collection later as the need arises.