Abstract: This article walks through log collection with filebeat for big-data ingestion. Hopefully it helps deepen your understanding of the topic.
"
Architecture 1: filebeat -> logstash1 -> redis -> logstash2 -> elasticsearch (cluster) -> kibana

The installation steps are omitted here since they are straightforward (the software layout can be adjusted to taste):

230: filebeat, logstash1, elasticsearch
232: logstash2, redis, elasticsearch, kibana

Note: filebeat is strict about its config file format (YAML indentation matters).

1. Configure filebeat:

[root@localhost filebeat]# cat /etc/filebeat/filebeat.yml

```yaml
filebeat:
  prospectors:
  # -                                    # one entry per log file
  #   paths:                             # define the path
  #     - /var/www/logs/access.log       # absolute path
  #   input_type: log                    # input type is log
  #   document_type: api4-nginx-accesslog  # must match the name logstash checks in its type conditionals
  - paths:
      - /opt/apps/huhu/logs/ase.log
    input_type: log
    document_type: "ase-ase-log"
    encoding: utf-8
    tail_files: true                 # start reading from the end of the file
    multiline.pattern: '^\['         # pattern marking the start of an event
    multiline.negate: true
    multiline.match: after           # merge non-matching lines into the previous event
    #tags: ["ase-ase"]
  - paths:                           # collect JSON-format logs
      - /var/log/nginx/access.log
    input_type: log
    document_type: "nginx-access-log"
    tail_files: true
    json.keys_under_root: true
    json.overwrite_keys: true
  registry_file: /var/lib/filebeat/registry
output:                              # ship to 230
  logstash:
    hosts: ["192.168.0.230:5044"]
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
```

2. Configure 230: logstash, beats input -> redis output

[root@web1 conf.d]# pwd
/etc/logstash/conf.d
[root@web1 conf.d]# cat nginx-ase-input.conf

```
input {
  beats {
    port  => 5044
    codec => "json"
  }
}
output {
  if [type] == "nginx-access-log" {
    redis {                          # write the nginx log to redis
      data_type => "list"
      key       => "nginx-accesslog"
      host      => "192.168.0.232"
      port      => "6379"
      db        => "4"
      password  => "123456"
    }
  }
  if [type] == "ase-ase-log" {
    redis {                          # write the ase log to redis
      data_type => "list"
      key       => "ase-log"
      host      => "192.168.0.232"
      port      => "6379"
      db        => "4"
      password  => "123456"
    }
  }
}
```

3. Read from redis into elasticsearch. On server 232: logstash, redis input -> elasticsearch output

[root@localhost conf.d]# pwd
/etc/logstash/conf.d
[root@localhost conf.d]# cat nginx-ase-output.conf

```
input {
  redis {
    type      => "nginx-access-log"
    data_type => "list"
    key       => "nginx-accesslog"
    host      => "192.168.0.232"
    port      => "6379"
    db        => "4"
    password  => "123456"
    codec     => "json"
  }
  redis {
    type      => "ase-ase-log"
    data_type => "list"
    key       => "ase-log"
    host      => "192.168.0.232"
    port      => "6379"
    db        => "4"
    password  => "123456"
  }
}
output {
  if [type] == "nginx-access-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "ase-ase-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "ase-log-%{+YYYY.MM.dd}"
    }
  }
}
```

4. On 232, wire elasticsearch to kibana: in kibana, simply select the elasticsearch indexes created above.

Architecture 2: filebeat -> redis -> logstash -> elasticsearch -> kibana

Drawback: filebeat's direct redis output is limited; so far I have not found a way to write to multiple redis outputs.

1. filebeat configuration:

[root@localhost yes_yml]# cat filebeat.yml

```yaml
filebeat:
  prospectors:
  # -                                    # one entry per log file
  #   paths:                             # define the path
  #     - /var/www/logs/access.log       # absolute path
  #   input_type: log                    # input type is log
  #   document_type: api4-nginx-accesslog  # must match the name logstash checks in its type conditionals
  - paths:
      - /opt/apps/qpq/logs/qpq.log
    input_type: log
    document_type: "qpq-qpq-log"
    encoding: utf-8
    tail_files: true
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
    #tags: ["qpq-qpq-log"]
  registry_file: /var/lib/filebeat/registry
output:
  redis:
    host: "192.168.0.232"
    port: 6379
    db: 3
    password: "123456"
    timeout: 5
    reconnect_interval: 1
    index: "qpq-qpq-log"   # becomes the redis list key; must match "key" in the logstash redis input
                           # (the original text had mismatched spellings here, which would break the pipeline)
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
```

2. On 232, redis -> elasticsearch -> kibana:

[root@localhost yes_yml]# cat systemlog.conf

```
input {
  redis {
    type      => "qpq-qpq-log"
    data_type => "list"
    key       => "qpq-qpq-log"
    host      => "192.168.0.232"
    port      => "6379"
    db        => "3"
    password  => "123456"
  }
}
output {
  if [type] == "qpq-qpq-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "qpq-qpq-log-%{+YYYY.MM.dd}"
    }
  }
}
```

3. On 232, point kibana at elasticsearch and select the index above.
This article was compiled and published by Zhizuobiao (职坐标); for more related material, see the Zhizuobiao IT knowledge base.