Fixing the "zookeeper is not a recognized option" error when testing a Kafka consumer


Scenario: using Kafka as a message buffer in front of an ELK logging stack.

After the setup was complete, testing with the Kafka console consumer produced the following error:

    [root@filebeat opt]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.10.1:2181,192.168.10.2:2181,192.168.10.7:2181 --topic message --from-beginning
    zookeeper is not a recognized option
    Option                                   Description
    ------                                   -----------
    --bootstrap-server <String: server to    REQUIRED: The server(s) to connect to.
      connect to>
    --consumer-property <String:             A mechanism to pass user-defined
      consumer_prop>                           properties in the form key=value to
                                               the consumer.
    --consumer.config <String: config file>  Consumer config properties file. Note
                                               that [consumer-property] takes
                                               precedence over this config.
    --enable-systest-events                  Log lifecycle events of the consumer
                                               in addition to logging consumed
                                               messages. (This is specific for
                                               system tests.)
    --formatter <String: class>              The name of a class to use for
                                               formatting kafka messages for
                                               display. (default: kafka.tools.
                                               DefaultMessageFormatter)
    --from-beginning                         If the consumer does not already have
                                               an established offset to consume
                                               from, start with the earliest
                                               message present in the log rather
                                               than the latest message.
    --group <String: consumer group id>      The consumer group id of the consumer.
    --help                                   Print usage information.
    --isolation-level <String>               Set to read_committed in order to
                                               filter out transactional messages
                                               which are not committed. Set to
                                               read_uncommitted to read all
                                               messages. (default: read_uncommitted)
    --key-deserializer <String:
      deserializer for key>
    --max-messages <Integer: num_messages>   The maximum number of messages to
                                               consume before exiting. If not set,
                                               consumption is continual.
    --offset <String: consume offset>        The offset id to consume from (a non-
                                               negative number), or 'earliest'
                                               which means from beginning, or
                                               'latest' which means from end
                                               (default: latest)
    --partition <Integer: partition>         The partition to consume from.
                                               Consumption starts from the end of
                                               the partition unless '--offset' is
                                               specified.
    --property <String: prop>                The properties to initialize the
                                               message formatter. Default
                                               properties include:
                                               print.timestamp=true|false
                                               print.key=true|false
                                               print.value=true|false
                                               key.separator=<key.separator>
                                               line.separator=<line.separator>
                                               key.deserializer=<key.deserializer>
                                               value.deserializer=<value.deserializer>
                                               Users can also pass in customized
                                               properties for their formatter; more
                                               specifically, users can pass in
                                               properties keyed with 'key.deserializer.'
                                               and 'value.deserializer.' prefixes to
                                               configure their deserializers.
    --skip-message-on-error                  If there is an error when processing a
                                               message, skip it instead of halt.
    --timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                               available for consumption for the
                                               specified interval.
    --topic <String: topic>                  The topic id to consume on.
    --value-deserializer <String:
      deserializer for values>
    --version                                Display Kafka version.
    --whitelist <String: whitelist>          Regular expression specifying
                                               whitelist of topics to include for
                                               consumption.
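The telling line in that usage listing is "--bootstrap-server ... REQUIRED": this build only knows how to connect to brokers directly, not through ZooKeeper. If you just want a quick check of which connection flag your build advertises, a simple grep over the usage output (via the --help flag listed above; the path matches this article's install) is enough:

    # Quick check: does this console consumer still list --zookeeper,
    # or only --bootstrap-server?
    /opt/kafka/bin/kafka-console-consumer.sh --help 2>&1 | grep -E -- '--(zookeeper|bootstrap-server)'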

Anyone with a bit of experience can tell at a glance from this output that it is a command-syntax error rather than a connectivity problem.
A little research shows
it is a version issue: older Kafka releases accept the --zookeeper syntax above, while newer releases use the following form:
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <broker ip>:9092 --topic <topic name> --from-beginning

    [root@filebeat opt]# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.7:9092 --topic message --from-beginning
    hello
    hi
    lalalala
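For completeness, the test messages shown above have to be published from somewhere; a minimal sketch of the producer side, assuming they were typed by hand into the console producer against the same broker and topic used in this article, would look roughly like this:

    # Hypothetical producer session used to publish the test messages;
    # --broker-list points the console producer at a broker, much like
    # --bootstrap-server does for the consumer
    /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.10.7:9092 --topic message
    >hello
    >hi
    >lalalala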

As for exactly where the line between "old" and "new" versions falls, I am not entirely sure; from what I can find, the old ZooKeeper-based (Scala) consumer was dropped in the 2.x releases, so recent versions only accept --bootstrap-server. The Kafka package used here is
kafka_2.12-2.5.0.tgz, and only the second form works for testing.
What about your installation?
Just try both and see which one your version supports.
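If you are not sure which release an installation is, the --version flag listed in the help output above prints it directly, for example:

    # Print the Kafka version of this installation
    /opt/kafka/bin/kafka-console-consumer.sh --version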
