
Beats: Beats processors


Our usual approach is to use an Elasticsearch ingest node or Logstash to clean the data: dropping fields, adding fields, enriching, converting, and so on. However, each of the Beats also comes with its own set of processors that can help us process data. We can visit the official Elastic website to see all of the processors available for Filebeat. In other words, while configuring a beat we can also configure the corresponding processors to process the data. Each processor can modify the events that pass through it.

If you want to understand how an ingest pipeline cleans these events, please read my earlier article "Elastic可观测性 - 运用 pipeline 使数据结构化". In the earlier article "深入理解 Dissect ingest processor", I described how the dissect ingest processor is used. In today's article, I will use the corresponding beat processors to show how the same kind of data formatting can be done.
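
Regardless of which beat you use, a processor definition generally has the same shape: a list under the processors key, where each entry names a processor, its parameters, and an optional when condition. As a rough orientation, here is a minimal illustrative fragment; the include_fields/drop_event combination and the "DEBUG" string are picked purely for illustration and are not part of today's exercise:

processors:
  # keep only the listed fields (Beats always keeps a few core fields such as @timestamp)
  - include_fields:
      fields: ["message"]
  # drop any event whose message contains the string "DEBUG"
  - drop_event:
      when:
        contains:
          message: "DEBUG"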

 

Using Filebeat to process data

In today's exercise we will use the following example. We create a file called sample.log with the following content:

sample.log


  
  1. "321 - App01 - WebServer is starting"
  2. "321 - App01 - WebServer is up and running"
  3. "321 - App01 - WebServer is scaling 2 pods"
  4. "789 - App02 - Database is will be restarted in 5 minutes"
  5. "789 - App02 - Database is up and running"
  6. "789 - App02 - Database is refreshing tables"

Since Filebeat relies on the newline character to recognize each line of data, I also added a newline after the last line of the file to make sure the last line gets ingested.

We create a Filebeat configuration file called filebeat_processors.yml:

filebeat_processors.yml

Its content is as follows:


  
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/liuxg/data/beatsprocessors/sample.log

processors:
  - drop_fields:
      fields: [ "ecs", "agent", "log", "input", "host"]
  - dissect:
      tokenizer: '"%{pid|integer} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: [ "localhost:9200"]
  index: "sample"
  bulk_max_size: 1000

Please note that you need to adjust the path under paths above to match the location of your own sample.log.

Above, we used two processors: drop_fields and dissect. We run Filebeat with the following command:

./filebeat -e -c ~/data/beatsprocessors/filebeat_processors.yml 

Likewise, we need to adjust the path above to match the location of our own configuration file.

After running the command above, we can query the contents of the sample index in Kibana:

GET sample/_search

  
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 6,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "qrBscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.540Z",
          "message" : "\"321 - App01 - WebServer is starting\"",
          "service" : {
            "name" : "App01",
            "status" : "WebServer is starting"
          },
          "pid" : 321
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "q7BscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.541Z",
          "pid" : 321,
          "message" : "\"321 - App01 - WebServer is up and running\"",
          "service" : {
            "name" : "App01",
            "status" : "WebServer is up and running"
          }
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "rLBscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.541Z",
          "message" : "\"321 - App01 - WebServer is scaling 2 pods\"",
          "service" : {
            "name" : "App01",
            "status" : "WebServer is scaling 2 pods"
          },
          "pid" : 321
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "rbBscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.541Z",
          "message" : "\"789 - App02 - Database is will be restarted in 5 minutes\"",
          "pid" : 789,
          "service" : {
            "name" : "App02",
            "status" : "Database is will be restarted in 5 minutes"
          }
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "rrBscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.541Z",
          "service" : {
            "name" : "App02",
            "status" : "Database is up and running"
          },
          "pid" : 789,
          "message" : "\"789 - App02 - Database is up and running\""
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "r7BscHYBpymojx8hDWuV",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:18:16.541Z",
          "service" : {
            "status" : "Database is refreshing tables",
            "name" : "App02"
          },
          "message" : "\"789 - App02 - Database is refreshing tables\"",
          "pid" : 789
        }
      }
    ]
  }
}

Clearly, we end up with a structured index. Note that above we also converted pid from a string into an integer value.
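
That conversion comes from the |integer suffix in the dissect tokenizer. As an alternative, Beats also provides a convert processor that performs type conversion as a separate step; a minimal sketch of that approach (reusing the pid field from this example, and assuming it runs after dissect has extracted pid as a string) could look like this:

processors:
  - dissect:
      tokenizer: '"%{pid} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
  # convert the extracted pid string to an integer in place
  - convert:
      fields:
        - {from: "pid", type: "integer"}
      ignore_missing: true
      fail_on_error: false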

We can even rename a field, for example:

filebeat_processors.yml


  
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/liuxg/data/beatsprocessors/sample.log

processors:
  - drop_fields:
      fields: [ "ecs", "agent", "log", "input", "host"]
  - dissect:
      tokenizer: '"%{pid|integer} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
  - rename:
      fields:
        - from: "pid"
          to: "PID"
      ignore_missing: false
      fail_on_error: true

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: [ "localhost:9200"]
  index: "sample"
  bulk_max_size: 1000

Rerunning Filebeat with the configuration above, we find:


  
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 6,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "UrB5cHYBpymojx8h7oCK",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:33:26.114Z",
          "service" : {
            "status" : "WebServer is starting",
            "name" : "App01"
          },
          "message" : "\"321 - App01 - WebServer is starting\"",
          "PID" : 321
        }
      },
      ...

The previous pid field has now been renamed to PID.

We can also process events with a script, for example:

filebeat_processors.yml


  
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/liuxg/data/beatsprocessors/sample.log

processors:
  - drop_fields:
      fields: [ "ecs", "agent", "log", "input", "host"]
  - dissect:
      tokenizer: '"%{pid|integer} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
  - rename:
      fields:
        - from: "pid"
          to: "PID"
      ignore_missing: false
      fail_on_error: true
  - script:
      lang: javascript
      id: my_filter
      params:
        pid: 789
      source: >
        var params = {pid: 0};
        function register(scriptParams) {
          params = scriptParams;
        }
        function process(event) {
          if (event.Get("PID") == params.pid) {
            event.Cancel();
          }
        }

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: [ "localhost:9200"]
  index: "sample"
  bulk_max_size: 1000

Above, whenever the PID value is 789 we filter out the event. Rerun Filebeat:


  
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 3,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "5bCBcHYBpymojx8hrIup",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:41:53.478Z",
          "PID" : 321,
          "service" : {
            "status" : "WebServer is starting",
            "name" : "App01"
          },
          "message" : "\"321 - App01 - WebServer is starting\""
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "5rCBcHYBpymojx8hrIup",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:41:53.479Z",
          "message" : "\"321 - App01 - WebServer is up and running\"",
          "service" : {
            "status" : "WebServer is up and running",
            "name" : "App01"
          },
          "PID" : 321
        }
      },
      {
        "_index" : "sample",
        "_type" : "_doc",
        "_id" : "57CBcHYBpymojx8hrIup",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2020-12-17T11:41:53.479Z",
          "service" : {
            "status" : "WebServer is scaling 2 pods",
            "name" : "App01"
          },
          "message" : "\"321 - App01 - WebServer is scaling 2 pods\"",
          "PID" : 321
        }
      }
    ]
  }
}

We can see that all events with a PID of 789 have been filtered out.
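
The script processor gives us the full flexibility of JavaScript, but for a simple filter like this one the same result could presumably also be achieved declaratively with the drop_event processor and a when condition, without writing any script. A sketch, assuming it is placed after the dissect and rename steps so that the PID field already exists:

processors:
  # ... drop_fields, dissect and rename as before ...
  # drop every event whose PID equals 789
  - drop_event:
      when:
        equals:
          PID: 789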

We can also use the script processor to add a tag to an event. And since this is JavaScript scripting, we can even add different tags to events depending on certain conditions.

filebeat_processors.yml


  
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/liuxg/data/beatsprocessors/sample.log

processors:
  - drop_fields:
      fields: [ "ecs", "agent", "log", "input", "host"]
  - dissect:
      tokenizer: '"%{pid|integer} - %{service.name} - %{service.status}"'
      field: "message"
      target_prefix: ""
  - rename:
      fields:
        - from: "pid"
          to: "PID"
      ignore_missing: false
      fail_on_error: true
  - script:
      lang: javascript
      id: my_filter
      params:
        pid: 789
      source: >
        var params = {pid: 0};
        function register(scriptParams) {
          params = scriptParams;
        }
        function process(event) {
          if (event.Get("PID") == params.pid) {
            event.Cancel();
          }
          event.Tag("myevent")
        }

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: [ "localhost:9200"]
  index: "sample"
  bulk_max_size: 1000

Above, we added event.Tag("myevent"). Rerunning Filebeat, we can see:


  
  1. "hits" : [
  2. {
  3. "_index" : "sample",
  4. "_type" : "_doc",
  5. "_id" : "C7CScHYBpymojx8hkKVy",
  6. "_score" : 1.0,
  7. "_source" : {
  8. "@timestamp" : "2020-12-17T12:00:20.365Z",
  9. "message" : "\" 321 - App01 - WebServer is starting\ "",
  10. "PID" : 321,
  11. "service" : {
  12. "name" : "App01",
  13. "status" : "WebServer is starting"
  14. },
  15. "tags" : [
  16. "myevent"
  17. ]
  18. }
  19. },

Above, we can see that the tags field contains a value called myevent.
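
If the tag does not depend on any scripted logic, the add_tags processor can attach tags directly, and a when condition can make the tagging conditional. Here is a sketch of what conditional tagging might look like; the tag names and the service.name conditions are made up for this illustration:

processors:
  # tag events from App01 and App02 differently
  - add_tags:
      when:
        equals:
          service.name: "App01"
      tags: ["app01-event"]
      target: "tags"
  - add_tags:
      when:
        equals:
          service.name: "App02"
      tags: ["app02-event"]
      target: "tags"
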
Today's article is only meant to scratch the surface. For more about Filebeat's Beats processors, please refer to https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html#processors

In today's article we introduced a way of processing data that can be implemented in the Beats themselves, without relying on an Elasticsearch ingest node. In practice, you will need to choose the approach that fits your own architecture.


Reposted from: https://blog.csdn.net/UbuntuTouch/article/details/111321105