Install Filebeat 6.2
$ cd /usr/local/src
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-linux-x86_64.tar.gz
$ tar xf filebeat-6.2.1-linux-x86_64.tar.gz
$ cd filebeat-6.2.1-linux-x86_64
$ cp filebeat /usr/local/bin/
Create the Filebeat configuration file
$ mkdir /etc/filebeat
$ cd /etc/filebeat/
$ vi filebeat.yml
path.config: /etc/filebeat
path.logs: /var/log/filebeat
path.data: /var/lib/filebeat/data
filebeat.registry_file: /var/lib/filebeat/registry
filebeat.shutdown_timeout: 0
logging.level: info
logging.metrics.enabled: false
logging.files.rotateeverybytes: 104857600
logging.files.keepfiles: 10
logging.files.permissions: 0600
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
filebeat.config:
  prospectors:
    enabled: true
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: true
    reload.period: 10s
output.logstash:    ## configure Logstash in advance and enable load-balancing mode
  hosts: ["10.100.200.47:5044","10.100.200.66:5044"]
  index: filebeat-%{+yyyy.MM.dd}
  loadbalance: true
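None of the directories referenced in filebeat.yml are created by the tarball install, so create them before the first start. A minimal sketch, assuming the paths used in the config above:

```shell
# Create the directories filebeat.yml points at
# (config dir with prospectors.d, log dir, and the data/registry dir).
mkdir -p /etc/filebeat/prospectors.d \
         /var/log/filebeat \
         /var/lib/filebeat/data

# Restrict access to Filebeat's state and logs.
chmod 750 /var/lib/filebeat /var/log/filebeat
```

With the directories in place, `filebeat test config -c /etc/filebeat/filebeat.yml` validates the file, and `filebeat test output -c /etc/filebeat/filebeat.yml` checks connectivity to the Logstash hosts (both subcommands exist in Filebeat 6.x).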
Create a configuration file for each project whose logs should be collected; here they are all kept in prospectors.d.
Example:
$ mkdir prospectors.d/
$ cd prospectors.d/
$ cat preproduct-htjfapp-account-provider.yml
- type: log
  enabled: true
  paths:
    - /data/WEBLOG/preproduct-xx-xx-provider/*/*.log
  scan_frequency: 10s
  fields_under_root: true
  fields:
    env: preproduct
    index: logstash-htjf
    logtype: preproduct-xx-xx-provider
    topic: logstash-htjf
    vm_host: 10.100.12.125
  tail_files: false
  close_inactive: 2h
  close_eof: false
  close_removed: true
  clean_removed: true
  close_renamed: false
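Since the per-project files differ only in a few fields, generating them from a template avoids copy-paste drift. A sketch; the `gen_prospector` helper and its arguments are hypothetical, not part of Filebeat:

```shell
#!/bin/bash
# Hypothetical helper: write a per-project prospector file
# under /etc/filebeat/prospectors.d from three parameters.
# Usage: gen_prospector <env> <project> <vm_host>
gen_prospector() {
    local env="$1" project="$2" vm_host="$3"
    local out="/etc/filebeat/prospectors.d/${env}-${project}.yml"
    mkdir -p "$(dirname "$out")"
    cat > "$out" <<EOF
- type: log
  enabled: true
  paths:
    - /data/WEBLOG/${env}-${project}/*/*.log
  scan_frequency: 10s
  fields_under_root: true
  fields:
    env: ${env}
    index: logstash-htjf
    logtype: ${env}-${project}
    topic: logstash-htjf
    vm_host: ${vm_host}
  tail_files: false
  close_inactive: 2h
EOF
    echo "wrote $out"
}

gen_prospector preproduct xx-xx-provider 10.100.12.125
```

Because `reload.enabled: true` is set in filebeat.yml, Filebeat picks up a newly written file within `reload.period` (10s) without a restart.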
Run Filebeat under the supervisord daemon
Edit the supervisord.conf configuration file
[program:filebeat]
command=/usr/local/bin/filebeat -c /etc/filebeat/filebeat.yml
autostart=true
autorestart=true
startsecs=5
priority=1
user=root
stopasgroup=true
killasgroup=true
Hot-reload supervisord
$ supervisorctl update
$ supervisorctl status
filebeat RUNNING pid 16957, uptime 18:16:15
Logstash configuration file (log parsing and formatting)
$ cat htjf_spring_boot.conf
input {
  beats {
    port => "5044"
    codec => multiline {
      pattern => "^%{YEAR}.*"    ## any line that does not start with a year is merged into the previous line
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    patterns_dir => [ "/etc/logstash/patterns.d" ]
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp}\s+\[%{THREADID:threadId}\]\s+\[%{THREADNAME:traceId}\]\s+%{LOGLEVEL:level}\s+%{JAVACLASS:javaclass}\s+\-\s+%{JAVAMESSAGE:javamessage}",
      "message", "%{TIMESTAMP_ISO8601:timestamp}\s+\[%{THREADID_1:threadId}\]\s+%{LOGLEVEL:level}\s+%{JAVACLASS:javaclass}\s+\-\s+%{JAVAMESSAGE:javamessage}"
    ]
    remove_field => [ "message","beat","topic","hostname","name" ]
  }
  date {
    ## set @timestamp from the timestamp captured by grok, then drop the raw field
    match => [ "timestamp", "ISO8601" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => ["192.168.60.117:9200","192.168.60.118:9200","192.168.60.119:9200"]
    index => "logstash-htjf-filebeat-%{+YYYY-MM}"
    user => elastic
    password => xxx
  }
  stdout { codec => rubydebug }
}
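The grok patterns THREADID, THREADNAME, THREADID_1 and JAVAMESSAGE are not stock Logstash patterns; they have to be defined in the patterns_dir referenced above. A plausible sketch, with the caveat that the regexes below are assumptions and must be adapted to the actual log layout:

```shell
# Write a custom grok pattern file into the patterns_dir used by the filter.
# The regexes are illustrative guesses at a typical Spring Boot log layout.
mkdir -p /etc/logstash/patterns.d
cat > /etc/logstash/patterns.d/java_custom <<'EOF'
THREADID [\w.#-]+
THREADID_1 [\w.#-]+
THREADNAME [\w,.:=-]*
JAVAMESSAGE (?m).*
EOF
```

After changing patterns or the pipeline, `/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/htjf_spring_boot.conf --config.test_and_exit` checks the configuration without starting the pipeline.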
Start Logstash
$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/htjf_spring_boot.conf --path.data=/var/log/htjf_spring_boot.log > /dev/null 2>&1 &
You can combine this with a cron job (crond) that acts as a watchdog and restarts Logstash if it dies:
$ cat /etc/logstash/scripts/logstash_auto.sh
#!/bin/bash
htjf_spring_boot_pid=$(ps -ef | grep java | grep -v grep | grep htjf_spring_boot | awk '{print $2}')
if [ -z "$htjf_spring_boot_pid" ]; then
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/htjf_spring_boot.conf --path.data=/var/log/htjf_spring_boot.log > /dev/null 2>&1 &
fi
$ crontab -l
#logstash auto
*/5 * * * * /bin/sh /etc/logstash/scripts/logstash_auto.sh
$ /etc/init.d/crond restart
Viewing the collected logs in Kibana
Filebeat attaches the custom fields defined above when it collects logs, in this case vm_host. If that field appears on log entries in Kibana, the whole pipeline is working.