For the background and theory, see https://blog.csdn.net/xiaokangtongxue410/article/details/82838872 — I won't copy it here.

Deployment and configuration:

Installation is omitted. The configuration directory is /etc/filebeat.

$ cd /etc/filebeat
$ ll
-rw-r--r-- 1 root root  938 Oct 14 10:46 filebeat-htjfweb.yml
drwxr-xr-x 2 root root 4096 Nov  8 14:39 prospectors.d

The main config:

$ cat filebeat-htjfweb.yml
path.config: /etc/filebeat
path.logs: /var/log/filebeat-kafka-htjfweb
path.data: /var/lib/filebeat/data-kafka-htjfweb
filebeat.registry_file: /var/lib/filebeat/registry-kafka-htjfweb
filebeat.shutdown_timeout: 0
logging.level: info
logging.metrics.enabled: false
logging.files.rotateeverybytes: 104857600
logging.files.keepfiles: 10
logging.files.permissions: 0600
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
filebeat.config:
  prospectors:
    enabled: true
    path: ${path.config}/prospectors.d/product-htjfweb*.yml
    ## hot-reload the prospector configs
    reload.enabled: true
    reload.period: 10s
output.kafka:
  enabled: true
  hosts: ["10.100.202.177:9092","10.100.202.191:9092","10.100.202.192:9092"]
  topic: logstash-htjfweb
  partition.round_robin:
    # if false, events are distributed round-robin across all partitions,
    # not only the ones currently reachable
    reachable_only: false
  compression: gzip
  max_message_bytes: 10000000
  required_acks: 1

My prospector configs live in the prospectors.d directory:

$ cat prospectors.d/product-htjfweb-account-web.yml
- type: log
  enabled: true
  paths:
    - /data/WEBLOG/product-htjfweb-account-web/*/*.log
  scan_frequency: 10s
  fields_under_root: true
  multiline.pattern: '^\d{4}-\d{1,2}-\d{1,2}'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 5s
  exclude_lines: ['WARN']
  fields:
    env: product
    index: logstash-htjf
    logtype: product-htjfweb-account-web
    topic: logstash-htjf
    k8s_host: node
  tail_files: false
  close_inactive: 2h
  close_eof: false
  close_removed: true
  clean_removed: true
  close_renamed: false

Start filebeat:

$ /usr/local/bin/filebeat -c /etc/filebeat/filebeat-htjfweb.yml

If you run several main config files on the same host, the following settings must point to different paths in each of them, otherwise filebeat will fail to start:

path.config: /etc/filebeat
path.logs: /var/log/filebeat-kafka-htjfweb
path.data: /var/lib/filebeat/data-kafka-htjfweb
filebeat.registry_file: /var/lib/filebeat/registry-kafka-htjfweb

To confirm that collection is working, check whether data is arriving in the kafka topic:

$ cd /usr/local/kafka/bin/
$ ./kafka-console-consumer.sh --bootstrap-server 10.100.202.191:9092 --topic logstash-htjfweb --from-beginning    ## read the collected data from the beginning
$ ./kafka-consumer-groups.sh --group logstash-htjfweb --describe --bootstrap-server 10.100.202.177:9092    ## check the consumer group's progress

GROUP             TOPIC             PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                      HOST            CLIENT-ID
logstash-htjfweb  logstash-htjfweb  0          13965817        13965822        5    logstash-0-e16abafa-1976-4677-a520-ec95c9ce38b8  /10.100.202.47  logstash-0
logstash-htjfweb  logstash-htjfweb  1          13965595        13965599        4    logstash-0-e16abafa-1976-4677-a520-ec95c9ce38b8  /10.100.202.47  logstash-0
logstash-htjfweb  logstash-htjfweb  2          13965756        13965761        5    logstash-0-e16abafa-1976-4677-a520-ec95c9ce38b8  /10.100.202.47  logstash-0
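The consumer shown in the output above (client-id logstash-0) is a downstream Logstash instance reading from the topic; its configuration is not part of this post. Purely as a reference, a minimal sketch of a Logstash kafka input might look like the following — the Elasticsearch address and the index pattern are placeholders I am assuming, not values from this deployment:

input {
  kafka {
    bootstrap_servers => "10.100.202.177:9092,10.100.202.191:9092,10.100.202.192:9092"
    topics            => ["logstash-htjfweb"]
    group_id          => "logstash-htjfweb"
    client_id         => "logstash-0"
    codec             => "json"   # filebeat publishes events to kafka as JSON
    consumer_threads  => 3        # one thread per partition (the topic has 3 partitions)
  }
}
output {
  elasticsearch {
    hosts => ["http://10.0.0.1:9200"]      # placeholder ES address, not from this post
    index => "%{[index]}-%{+YYYY.MM.dd}"   # reuses the custom "index" field set in the prospector config
  }
}

Because fields_under_root: true is set in the prospector config, the custom fields (env, index, logtype, ...) sit at the top level of each event, so they can be referenced directly in the output section as above.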
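Finally, the startup command shown earlier runs filebeat in the foreground, which is fine for testing; for a long-running deployment you would normally wrap it in a service. A minimal systemd sketch, assuming the binary and config paths used in this post (the unit name and restart policy are my own choices):

# /etc/systemd/system/filebeat-htjfweb.service  (hypothetical unit name)
[Unit]
Description=Filebeat (htjfweb -> kafka)
After=network.target

[Service]
ExecStart=/usr/local/bin/filebeat -c /etc/filebeat/filebeat-htjfweb.yml
Restart=always

[Install]
WantedBy=multi-user.target

$ systemctl daemon-reload
$ systemctl enable --now filebeat-htjfweb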