
Kafka JMX Monitoring with jmxtrans + influxdb + grafana


 

Contents

Result preview

Environment

Install influxdb

Install the influxdb rpm file we just downloaded

View the default configuration

Modify the configuration

Start influxdb

Check the startup status

Basic setup

More influxdb commands

Install jmxtrans

Possible exceptions

Verify that jmxtrans is running

Install Grafana

Install

influxDB and Grafana dashboard templates

 


Result preview

Let's take a look at the finished dashboard first.

Environment

JDK: 1.8

jmxtrans package


  
```
# Our system is CentOS, so we pick the rpm
https://github.com/downloads/jmxtrans/jmxtrans/jmxtrans-20121016.145842.6a28c97fbb-0.noarch.rpm
# Packages for other systems can be found here
https://github.com/jmxtrans/jmxtrans/downloads
# Also download the source code, which we will compile later
https://github.com/jmxtrans/jmxtrans/releases
# I am using version 271 here
https://github.com/jmxtrans/jmxtrans/archive/jmxtrans-parent-271.tar.gz
```

influxdb package


  
```
# Download the package
https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0.x86_64.rpm
# Official download page
https://portal.influxdata.com/downloads/
```

Grafana package

https://dl.grafana.com/oss/release/grafana-6.7.3-1.x86_64.rpm 

Install influxdb

Install the influxdb rpm file we just downloaded

rpm -ivh influxdb-1.8.0.x86_64.rpm

View the default configuration


  
```
> influxd config
Merging with configuration at: /etc/influxdb/influxdb.conf
reporting-disabled = false
bind-address = "127.0.0.1:8088"
[meta]
  dir = "/var/lib/influxdb/meta"
  retention-autocreate = true
  logging-enabled = true
[data]
  dir = "/var/lib/influxdb/data"
  index-version = "inmem"
  wal-dir = "/var/lib/influxdb/wal"
  wal-fsync-delay = "0s"
  validate-keys = false
  query-log-enabled = true
  cache-max-memory-size = 1073741824
  cache-snapshot-memory-size = 26214400
  cache-snapshot-write-cold-duration = "10m0s"
  compact-full-write-cold-duration = "4h0m0s"
  compact-throughput = 50331648
  compact-throughput-burst = 50331648
  max-series-per-database = 1000000
  max-values-per-tag = 100000
  max-concurrent-compactions = 0
  max-index-log-file-size = 1048576
  series-id-set-cache-size = 100
  series-file-max-concurrent-snapshot-compactions = 0
  trace-logging-enabled = false
  tsm-use-madv-willneed = false
[coordinator]
  write-timeout = "10s"
  max-concurrent-queries = 0
  query-timeout = "0s"
  log-queries-after = "0s"
  max-select-point = 0
  max-select-series = 0
  max-select-buckets = 0
[retention]
  enabled = true
  check-interval = "30m0s"
[shard-precreation]
  enabled = true
  check-interval = "10m0s"
  advance-period = "30m0s"
[monitor]
  store-enabled = true
  store-database = "_internal"
  store-interval = "10s"
[subscriber]
  enabled = true
  http-timeout = "30s"
  insecure-skip-verify = false
  ca-certs = ""
  write-concurrency = 40
  write-buffer-size = 1000
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  suppress-write-log = false
  write-tracing = false
  flux-enabled = false
  flux-log-enabled = false
  pprof-enabled = true
  pprof-auth-enabled = false
  debug-pprof-enabled = false
  ping-auth-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
  https-private-key = ""
  max-row-limit = 0
  max-connection-limit = 0
  shared-secret = ""
  realm = "InfluxDB"
  unix-socket-enabled = false
  unix-socket-permissions = "0777"
  bind-socket = "/var/run/influxdb.sock"
  max-body-size = 25000000
  access-log-path = ""
  max-concurrent-write-limit = 0
  max-enqueued-write-limit = 0
  enqueued-write-timeout = 30000000000
[logging]
  format = "auto"
  level = "info"
  suppress-logo = false
[[graphite]]
  enabled = false
  bind-address = ":2003"
  database = "graphite"
  retention-policy = ""
  protocol = "tcp"
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "1s"
  consistency-level = "one"
  separator = "."
  udp-read-buffer = 0
[[collectd]]
  enabled = false
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "10s"
  read-buffer = 0
  typesdb = "/usr/share/collectd/types.db"
  security-level = "none"
  auth-file = "/etc/collectd/auth_file"
  parse-multivalue-plugin = "split"
[[opentsdb]]
  enabled = false
  bind-address = ":4242"
  database = "opentsdb"
  retention-policy = ""
  consistency-level = "one"
  tls-enabled = false
  certificate = "/etc/ssl/influxdb.pem"
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "1s"
  log-point-errors = true
[[udp]]
  enabled = false
  bind-address = ":8089"
  database = "udp"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  read-buffer = 0
  batch-timeout = "1s"
  precision = ""
[continuous_queries]
  log-enabled = true
  enabled = true
  query-stats-enabled = false
  run-interval = "1s"
[tls]
  min-version = ""
  max-version = ""
```

Modify the configuration

By default influxDB uses the following ports:

  • 8086: the HTTP API used for client/server communication
  • 8088: the RPC service used for backup and restore

I edited the configuration file to use port 8087, because 8088 conflicted with another service on my machine.

I also changed the data storage paths.


  
```
> vim /etc/influxdb/influxdb.conf
bind-address = "127.0.0.1:8087"
# metadata storage path
dir = "/root/jast/influxdb/meta"
# data storage path
dir = "/root/jast/influxdb/data"
# write-ahead-log (WAL) storage path
wal-dir = "/root/jast/influxdb/wal"
```

Note: the paths you choose must be writable by the influxdb user, otherwise startup will fail.

Start influxdb

systemctl start influxdb

Check the startup status

systemctl status influxdb

 

influxdb is now up and running.

We can also start it with an explicit configuration file; /etc/influxdb/influxdb.conf is the default path, so this flag can be omitted:

influxd -config /etc/influxdb/influxdb.conf
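A quick way to confirm the HTTP API is answering is InfluxDB's /ping endpoint; a healthy instance returns HTTP 204 No Content. This is a minimal sketch, assuming the default port 8086:

```shell
# Probe InfluxDB's /ping endpoint; a running instance answers 204 No Content.
code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8086/ping || true)
if [ "$code" = "204" ]; then
  echo "influxdb is up"
else
  echo "influxdb not reachable (got: $code)"
fi
```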

Basic setup

On the server, run influx to enter the interactive shell.


  
```
[root@ecs-t-001-0001 influx]# influx
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0
>
```

Create a user

CREATE USER "admin" WITH PASSWORD '123456' WITH ALL PRIVILEGES

Create a database (we will store the metrics here later)

create database "jmxDB"

Check that it was created successfully


  
```
[root@ecs-t-001-0001 jmxtrans]# influx
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0
> show databases
name: databases
name
----
_internal
jmxDB
>
```

More influxdb commands


  
```
# Create a database
create database "db_name"
# List all databases
show databases
# Drop a database
drop database "db_name"
# Switch to a database
use db_name
# List all measurements (tables) in the current database
show measurements
# Create a measurement by simply naming it when inserting data
insert test,host=127.0.0.1,monitor_name=test count=1
# Drop a measurement
drop measurement "measurement_name"
# Exit the shell
quit
```
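The same insert can also be done over InfluxDB's HTTP API, which is what jmxtrans itself talks to. A sketch, assuming the jmxDB database created above and the default port 8086:

```shell
# Write one point into measurement "test" via the line-protocol /write endpoint...
curl -s -XPOST 'http://localhost:8086/write?db=jmxDB' \
  --data-binary 'test,host=127.0.0.1,monitor_name=test count=1' \
  || echo "write failed: is influxdb running?"
# ...and read it back with the /query endpoint.
curl -s -G 'http://localhost:8086/query' \
  --data-urlencode 'db=jmxDB' \
  --data-urlencode 'q=SELECT * FROM "test"' \
  || echo "query failed: is influxdb running?"
```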

 

Install jmxtrans

Install the jmxtrans rpm file we just downloaded

rpm -ivh jmxtrans-20121016.145842.6a28c97fbb-0.noarch.rpm

After installation, the default install directory is:


  
```
[root@ecs-t-001-0001 jmxtrans]# whereis jmxtrans
jmxtrans: /usr/share/jmxtrans
```

Here we start with a simple configuration that monitors Kafka's memory; the other configurations are collected at the end of the article.

Create a json file for jmxtrans to read; name the file according to your own setup.


  
```json
{
  "servers" : [ {
    "port" : "9393",
    "host" : "172.11.0.1",
    "queries" : [ {
      "obj" : "java.lang:type=Memory",
      "attr" : [ "HeapMemoryUsage", "NonHeapMemoryUsage" ],
      "resultAlias": "jvmMemory",
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
        "url" : "http://172.11.0.1:8086/",
        "username" : "admin",
        "password" : "123456",
        "database" : "jmxDB",
        "tags" : { "application" : "kafka_server" }
      } ]
    } ]
  } ]
}
```

A quick explanation of the fields above:

  • port: the Kafka JMX port to monitor
  • host: the Kafka host to monitor
  • resultAlias: a custom measurement name; collected data is stored in this influxdb measurement, which is created automatically
  • outputWriters: the influxdb connection settings
  • @class: does not need to be changed
  • url: the influxdb host and port; the default port is 8086
  • username and password: the influxdb user and password
  • database: the influxdb database (the one we just created)
Before starting, rename any existing .json files in the /usr/share/jmxtrans directory, because jmxtrans reads every json file there by default.
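The renaming step can be sketched as follows (JMXTRANS_DIR is just a variable for illustration; the default directory is /usr/share/jmxtrans):

```shell
# Move every pre-existing .json aside so jmxtrans only loads our new config.
JMXTRANS_DIR=${JMXTRANS_DIR:-/usr/share/jmxtrans}
for f in "$JMXTRANS_DIR"/*.json; do
  [ -e "$f" ] || continue   # glob did not match: nothing to rename
  mv "$f" "$f.bak"
done
```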

Start jmxtrans.sh from the /usr/share/jmxtrans directory:

jmxtrans.sh start

Normally jmxtrans should now start successfully; before verifying, let's go over the exceptions you may run into.

Possible exceptions

Exception 1


  
```
[root@ecs-t-001-0001 jmxtrans]# Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=384m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384m; support was removed in 8.0
MaxTenuringThreshold of 16 is invalid; must be between 0 and 15
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```

MaxTenuringThreshold controls how many minor GCs an object must survive before being promoted to the old generation.

The script sets it to 16, but the valid range is 0-15, so we edit the startup script jmxtrans.sh directly:


  
```
> vim jmxtrans.sh
GC_OPTS=${GC_OPTS:-"-Xms${HEAP_SIZE}M -Xmx${HEAP_SIZE}M -XX:+UseConcMarkSweepGC -XX:NewRatio=${NEW_RATIO} -XX:NewSize=${NEW_SIZE}m -XX:MaxNewSize=${NEW_SIZE}m -XX:MaxTenuringThreshold=15 -XX:GCTimeRatio=9 -XX:PermSize=${PERM_SIZE}m -XX:MaxPermSize=${MAX_PERM_SIZE}m -XX:+UseTLAB -XX:CMSInitiatingOccupancyFraction=${IO_FRACTION} -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=${CPU_CORES} -Dsun.rmi.dgc.server.gcInterval=28800000 -Dsun.rmi.dgc.client.gcInterval=28800000"}
```

Exception 2


  
```
[root@ecs-t-001-0001 jmxtrans]# Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=384m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
Exception in thread "main" com.googlecode.jmxtrans.util.LifecycleException: com.googlecode.jmxtrans.util.LifecycleException: Error parsing json: /var/lib/jmxtrans/kafka.json
    at com.googlecode.jmxtrans.JmxTransformer.start(JmxTransformer.java:146)
    at com.googlecode.jmxtrans.JmxTransformer.doMain(JmxTransformer.java:107)
    at com.googlecode.jmxtrans.JmxTransformer.main(JmxTransformer.java:92)
Caused by: com.googlecode.jmxtrans.util.LifecycleException: Error parsing json: /var/lib/jmxtrans/kafka.json
    at com.googlecode.jmxtrans.JmxTransformer.processFilesIntoServers(JmxTransformer.java:358)
    at com.googlecode.jmxtrans.JmxTransformer.startupSystem(JmxTransformer.java:301)
    at com.googlecode.jmxtrans.JmxTransformer.start(JmxTransformer.java:142)
    ... 2 more
Caused by: java.lang.IllegalArgumentException: Invalid type id 'com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory' (for id type 'Id.class'): no such class found
    at org.codehaus.jackson.map.jsontype.impl.ClassNameIdResolver.typeFromId(ClassNameIdResolver.java:89)
    at org.codehaus.jackson.map.jsontype.impl.TypeDeserializerBase._findDeserializer(TypeDeserializerBase.java:73)
    at org.codehaus.jackson.map.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:65)
    at org.codehaus.jackson.map.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:81)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:118)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:93)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
    at org.codehaus.jackson.map.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:149)
    at org.codehaus.jackson.map.deser.SettableBeanProperty$MethodProperty.deserializeAndSet(SettableBeanProperty.java:237)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:496)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:116)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:93)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
    at org.codehaus.jackson.map.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:149)
    at org.codehaus.jackson.map.deser.SettableBeanProperty$MethodProperty.deserializeAndSet(SettableBeanProperty.java:237)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:496)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:116)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:93)
    at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
    at org.codehaus.jackson.map.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:149)
    at org.codehaus.jackson.map.deser.SettableBeanProperty$MethodProperty.deserializeAndSet(SettableBeanProperty.java:237)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:496)
    at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
    at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:1980)
    at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1225)
    at com.googlecode.jmxtrans.util.JmxUtils.getJmxProcess(JmxUtils.java:494)
    at com.googlecode.jmxtrans.JmxTransformer.processFilesIntoServers(JmxTransformer.java:352)
    ... 4 more
```

The error says parsing the json failed because the InfluxDbWriterFactory class is not in the bundled jar, so we need to compile the jar ourselves from the jmxtrans source downloaded at the beginning of the article.

Build it in the project directory (my compiled jar is also linked at the end of the article):

mvn clean package -Dmaven.test.skip=true -DskipTests=true;

After the build, the jar we need is in the jmxtrans-jmxtrans-parent-271/jmxtrans/target directory.

jmxtrans-271-all.jar is the jar we will use.

Copy the jar to the jmxtrans directory.

Comparing the two, our compiled jar contains the missing class while the bundled one does not:


  
```
[root@ecs-t-001-0001 jmxtrans]# grep 'com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory' ./jmxtrans-271-all.jar
Binary file ./jmxtrans-271-all.jar matches
[root@ecs-t-001-0001 jmxtrans]# grep 'com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory' ./jmxtrans-all.jar
[root@ecs-t-001-0001 jmxtrans]#
```

Replace the jar file name referenced in jmxtrans.sh:


  
```
#JAR_FILE=${JAR_FILE:-"jmxtrans-all.jar"}
JAR_FILE=${JAR_FILE:-"jmxtrans-271-all.jar"}
```

Then start it again.

 

Verify that jmxtrans is running

In influx, enter the jmxDB database we created earlier and list its measurements with show MEASUREMENTS; the jvmMemory measurement has been created automatically:


  
```
[root@ecs-t-001-0001 jmxtrans]# influx
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0
> show databases
name: databases
name
----
_internal
jmxDB
> use jmxDB
Using database jmxDB
> show
ERR: error parsing query: found EOF, expected CONTINUOUS, DATABASES, DIAGNOSTICS, FIELD, GRANTS, MEASUREMENT, MEASUREMENTS, QUERIES, RETENTION, SERIES, SHARD, SHARDS, STATS, SUBSCRIPTIONS, TAG, USERS at line 1, char 6
> show MEASUREMENTS
name: measurements
name
----
jvmMemory
```

Querying the data confirms that points are being written.
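The actual points can be inspected from the influx CLI; a sketch, where the -execute flag runs a single query non-interactively:

```shell
# Print the five most recent heap/non-heap samples jmxtrans has written.
if command -v influx >/dev/null 2>&1; then
  influx -database 'jmxDB' -execute 'SELECT * FROM "jvmMemory" ORDER BY time DESC LIMIT 5'
else
  echo "influx CLI not found on this host"
fi
```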

At this point jmxtrans is successfully monitoring the Kafka JMX port; we are one step closer.

 

Install Grafana

Install

yum install grafana-6.7.3-1.x86_64.rpm

The default configuration file path is /etc/grafana/grafana.ini.

Change the web port:


  
```
> vim /etc/grafana/grafana.ini
# web page port, default 3000
http_port = 9099
```

Start the service and enable it at boot:


  
```
systemctl start grafana-server
systemctl enable grafana-server
```

Check the startup status

systemctl status grafana-server

Open the web page; the first login is admin/admin, and after logging in you will be prompted to set a new password.

Now configure the Grafana dashboard.

Click DataSource.

Choose to add an InfluxDB data source and fill in the basic information.

Fill in the influxDB database information.

Click save.
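For repeatable setups, the same data source can also be created through Grafana's HTTP API instead of the UI. This is only a sketch: the admin credentials, port 9099, and the name KafkaMonitor are the values used in this guide, so adjust them to your own environment:

```shell
# Create an InfluxDB data source named KafkaMonitor via Grafana's /api/datasources.
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://localhost:9099/api/datasources \
  -d '{
        "name": "KafkaMonitor",
        "type": "influxdb",
        "access": "proxy",
        "url": "http://localhost:8086",
        "database": "jmxDB",
        "user": "admin",
        "secureJsonData": { "password": "123456" }
      }' \
  || echo "grafana not reachable"
```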

The InfluxDB data source is now configured; next, let's create a dashboard.

Choose to add a query.

 

Select the KafkaMonitor data source we configured above (since it is the only one, it is also the default).

 

 

Tweak the query slightly, since above we only collected the JVM memory metrics from JMX.

Note: if you split the series by tag above, do not set an alias here, or every series will be labeled as JVM memory usage.

 

With this simple configuration done, let's compare our dashboard against jconsole's monitoring.

 

 

influxDB and Grafana dashboard templates

 

Template download


  
```
Link: https://pan.baidu.com/s/1ld-Yhv7wVutRxslV084GoQ
Extraction code: 0pzr
```

 

 


Reposted from: https://blog.csdn.net/zhangshenghang/article/details/105860540