
Analyzing nginx logs with Elasticsearch and configuring alerts

2023-11-06 15:56

nginx log analysis in practice:
1. Background
2. Approach
3. Practice (ES side / filebeat side / Kibana side / Grafana side / Grafana alerting)
4. Appendix

1. Background

The project's unified entry point is nginx, so to get a clear picture of traffic and response times we decided to analyze the nginx logs.

2. Approach

Use the Elasticsearch ingest node preprocessing feature: an ingest pipeline splits each nginx log line into fields, an index template maps those fields, and Kibana or Grafana is then used to analyze the mapped fields.

3. Practice

ES side

1. Deploy Elasticsearch and Kibana (not covered here).
2. Open Kibana and create the pipeline. To debug the pipeline, first confirm the nginx log format:

log_format main '$remote_addr - $remote_user [$time_iso8601] "$request" '
                '$status $body_bytes_sent "$http_x_forwarded_for" '
                '$upstream_cache_status $request_time';

Confirm the nginx access log output:

172.25.36.1 - - [2021-02-07T14:16:47+08:00] "GET /ehc-portal-web/assets/images/user-exhibition/logo-ecard.png HTTP/1.1" 200 857 "-" HIT 0.000

Debug the pattern with the Kibana Dev Tools Grok Debugger. This yields the following grok pattern and ingest pipeline:

%{IP:clientip} (%{USERNAME:ident}|-) (%{USERNAME:auth}|-) \[%{DATA:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:Http_Status_Code} %{NUMBER:bytes} \"(%{USERNAME:X_Forwarded_For}|-)\" %{NOTSPACE:cache_status} (%{NUMBER:Request_Time}|-)

Create the pipeline:

PUT _ingest/pipeline/nginx_access
{
  "description": "my nginx access log pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IP:clientip} (%{USERNAME:ident}|-) (%{USERNAME:auth}|-) \\[%{DATA:timestamp}\\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:Http_Status_Code} %{NUMBER:bytes} \"(%{USERNAME:X_Forwarded_For}|-)\" %{NOTSPACE:cache_status} (%{NUMBER:Request_Time}|-)"]
      }
    },
    {
      "remove": {
        "field": ["message", "agent", "ecs", "host", "input", "log"]
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
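As a sanity check outside Kibana, the grok pattern can be approximated with a plain regex. The sketch below is a rough Python equivalent (grok's IP and URIPATHPARAM patterns are stricter than these approximations; the group names mirror the grok field names) applied to the sample access-log line from above:

```python
import re

# Rough Python-regex equivalent of the grok pattern above, for offline testing.
ACCESS_RE = re.compile(
    r'(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) '
    r'(?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<Http_Status_Code>\d+) (?P<bytes>\d+) '
    r'"(?P<X_Forwarded_For>[^"]*)" '
    r'(?P<cache_status>\S+) (?P<Request_Time>[\d.]+|-)'
)

line = ('172.25.36.1 - - [2021-02-07T14:16:47+08:00] '
        '"GET /ehc-portal-web/assets/images/user-exhibition/logo-ecard.png HTTP/1.1" '
        '200 857 "-" HIT 0.000')

doc = ACCESS_RE.match(line).groupdict()
print(doc["Http_Status_Code"], doc["cache_status"], doc["Request_Time"])
# prints: 200 HIT 0.000
```

This is only a quick structural check; the authoritative test remains the Grok Debugger or the `_ingest/pipeline/nginx_access/_simulate` API against the real pipeline.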

3. Set up the field mappings (give the fields you want to analyze explicit mappings):

Note: $request_time and the upstream response time are seconds with millisecond precision, so Request_Time and Upstream_Response_Time are mapped as double here; a long mapping would truncate sub-second values to 0.

PUT _template/nginx_access
{
  "index_patterns": "nginx_access*",
  "mappings": {
    "properties": {
      "request": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "agent": {
        "properties": {
          "hostname": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "id": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "ephemeral_id": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "upstreamip": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "type": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
          "version": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }
        }
      },
      "method": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "auth": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "log": {
        "properties": {
          "file": { "properties": { "path": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
          "offset": { "type": "long" }
        }
      },
      "Http_Status_Code": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "ident": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "error": { "properties": { "message": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
      "message": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "User_Agent": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "cache_status": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "Upstream_Response_Time": { "type": "double" },
      "input": { "properties": { "type": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
      "referrer": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "@timestamp": { "type": "date" },
      "ecs": { "properties": { "version": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
      "Request_Time": { "type": "double" },
      "bytes": { "type": "long" },
      "clientip": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "host": { "properties": { "name": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
      "httpversion": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "fields": { "properties": { "type": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } },
      "X_Forwarded_For": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } },
      "timestamp": { "type": "date" },
      "timestamp11": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }
    }
  },
  "aliases": {}
}

filebeat side

Deployment is not covered here (install the rpm and start the service). The key configuration is making filebeat send each input through the matching pipeline:

setup.ilm.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/nginx/logs/443/access.log
  fields:
    type: nginx_access_28_443
- type: log
  enabled: true
  paths:
    - /app/nginx/logs/80/error.log
  fields:
    type: nginx_error_28_443

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

output.elasticsearch:
  hosts: ["172.25.36.36:9200"]
  indices:
    - index: "nginx_access_28_443_%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "nginx_access_28_443"
    - index: "nginx_error_28_80_%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "nginx_error_28_443"
  pipelines:
    - pipeline: "nginx_access"
      when.equals:
        fields.type: "nginx_access_28_443"
    - pipeline: "nginx_error"
      when.equals:
        fields.type: "nginx_error_28_443"

Kibana side

At this point Index Management already shows plenty of index data coming in.

Create a Kibana index pattern, pick timestamp as the time field, and the parsed documents show up in Kibana Discover. From there you can build visualizations and a dashboard (covered in other posts); here we demonstrate Grafana.

Grafana side

Add an Elasticsearch data source in Grafana, then import a dashboard template (message me if you need the template id).
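Under the hood, the imported panels are just Elasticsearch aggregations over the fields mapped earlier. A hypothetical sketch of the kind of query body a response-time panel issues (the one-hour window and one-minute interval are assumptions, not taken from the template):

```python
import json

# Hypothetical panel query: average Request_Time per minute over the
# last hour, aggregated on the @timestamp date field mapped above.
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    "aggs": {
        "per_minute": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"},
            "aggs": {"avg_request_time": {"avg": {"field": "Request_Time"}}},
        }
    },
}
body = json.dumps(query)
print(body)
```

Sending this body to the nginx_access_* indices' _search endpoint returns one bucket per minute with the average response time, which is exactly what a time-series panel plots.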

Configuring alerts in Grafana

Since Prometheus is already deployed in this environment and Grafana can push alerts to Alertmanager, that is the approach chosen here. First set up an Alertmanager notification channel, then add an alert rule on each panel you want to monitor, for example on the nginx request count; when a rule fires, the alert details are pushed to Alertmanager.
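A classic Grafana alert rule such as "WHEN avg() OF query(A, 5m, now) IS ABOVE threshold" boils down to a simple reduction over the panel's datapoints. An illustrative sketch only; the datapoints and the 1000 req/min threshold below are made up:

```python
# Hypothetical per-minute request counts returned by the panel's query.
requests_per_minute = [850, 920, 1340, 1510, 1480]
THRESHOLD = 1000  # assumed alert threshold (requests per minute)

# avg() reducer over the query window, then the IS ABOVE comparison.
avg = sum(requests_per_minute) / len(requests_per_minute)
state = "alerting" if avg > THRESHOLD else "ok"
print(avg, state)
# prints: 1220.0 alerting
```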

4. Appendix

The error-log parsing setup:

PUT _ingest/pipeline/nginx_error
{
  "description": "Pipeline for parsing the Nginx error logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{DATA:nginx.error.time} \\[%{DATA:log.level}\\] %{NUMBER:process.pid:long}#%{NUMBER:process.thread.id:long}: (\\*%{NUMBER:nginx.error.connection_id:long} )?%{GREEDYDATA:message}"
        ],
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "@timestamp",
        "target_field": "event.created"
      }
    },
    {
      "date": {
        "field": "nginx.error.time",
        "target_field": "@timestamp",
        "formats": ["yyyy/MM/dd H:m:s"],
        "ignore_failure": true
      }
    },
    {
      "remove": {
        "field": "nginx.error.time"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
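As with the access pipeline, the error-log grok pattern can be sanity-checked offline with an approximate regex. The error line below is a made-up example in nginx's error-log format, not taken from a real log:

```python
import re
from datetime import datetime

# Rough Python equivalent of the nginx_error grok pattern above.
ERROR_RE = re.compile(
    r'(?P<time>\d{4}/\d{2}/\d{2} \d{1,2}:\d{1,2}:\d{1,2}) '
    r'\[(?P<level>\w+)\] '
    r'(?P<pid>\d+)#(?P<tid>\d+): '
    r'(?:\*(?P<connection_id>\d+) )?'  # connection id is optional
    r'(?P<message>.*)'
)

line = ('2021/02/07 14:16:47 [error] 1234#0: *5678 open() '
        '"/app/nginx/html/missing.png" failed (2: No such file or directory)')

m = ERROR_RE.match(line).groupdict()
# Same conversion the pipeline's date processor does with "yyyy/MM/dd H:m:s".
ts = datetime.strptime(m["time"], "%Y/%m/%d %H:%M:%S")
print(m["level"], m["connection_id"], ts.date())
# prints: error 5678 2021-02-07
```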

