Kafka Cluster Setup


Background: Why Kafka

When we make heavy use of distributed databases and distributed compute clusters, do we run into problems like these?

  • We want to analyze user behavior (pageviews) in order to design better ad placements;
  • We want to aggregate users' search keywords to identify current trends;
  • Some data is wasteful to keep in a database, yet reading and writing it directly on disk is inefficient.

This is where a messaging system, and in particular a distributed messaging system, comes in.

Additionally:

  1. In many common big-data scenarios we need both offline and real-time analysis of the same data. Offline analysis relies on Hadoop-ecosystem frameworks (MapReduce, Hive, etc.), while real-time needs can be served by Storm. To unify offline and real-time computation, we can feed a single data source into both pipelines: connect the collector (Flume) directly to a message broker such as Kafka. In the integrated Flume + Kafka setup, Flume acts as the message producer, publishing its message data (log data, business data, etc.) to Kafka, and a Storm topology acts as the message consumer, handling the following two workloads in the Storm cluster (a sketch of such a Flume-to-Kafka configuration follows this list):
  • use the Storm topology directly to analyze and process the data in real time;
  • integrate Storm with HDFS, writing the processed messages to HDFS for offline analysis.
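
As an illustration, a minimal Flume agent configuration publishing collected log lines to a Kafka topic might look like the sketch below. The agent name (a1), topic (app-logs), and log path are hypothetical, and the sink property names follow the KafkaSink that ships with Flume 1.6 (later Flume versions renamed these properties):

    # flume-kafka.conf -- Flume agent acting as the Kafka producer
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # tail an application log file (simple, though not loss-proof)
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/app/app.log
    a1.sources.r1.channels = c1

    # buffer events in memory between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # publish every event to Kafka; Storm topologies consume the topic downstream
    a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
    a1.sinks.k1.topic = app-logs
    a1.sinks.k1.brokerList = 192.168.0.102:19092,192.168.0.103:19092,192.168.0.104:19092
    a1.sinks.k1.channel = c1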

What is Kafka

Kafka is a distributed messaging system written in Scala at LinkedIn, where it serves as the foundation of LinkedIn's activity stream and operational data pipeline. It is horizontally scalable and offers high throughput.

Adoption: Kafka has been used by many companies of different types as a data pipeline and messaging system, for example:

Taobao, Alipay, Baidu, Twitter, etc.

A growing number of open-source distributed processing systems, such as Apache Flume, Apache Storm, Spark, and Elasticsearch, now support integration with Kafka.


The AMQP Protocol (Advanced Message Queuing Protocol)


Consumer: a client application that requests messages from the message queue;
Producer: a client application that publishes messages to a broker;
AMQP server (broker): receives the messages sent by producers and routes them to queues inside the server.

Basic Kafka Architecture


Topic: a category of messages. A topic is analogous to the sections of a news site, such as sports, entertainment, or education; in practice there is usually one topic per business domain.
Partition: the messages of a topic are organized into multiple partitions. The partition is the smallest unit of organization in Kafka's message queue, and each partition can be viewed as a FIFO queue.
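
This structure is visible on disk: each partition of a topic is a directory under the broker's log directory (log.dirs, configured below), holding append-only segment files. A hypothetical listing for a two-partition topic named my-topic:

    $ ls /usr/local/program/kafka/kafkaLogs
    my-topic-0  my-topic-1
    $ ls /usr/local/program/kafka/kafkaLogs/my-topic-0
    00000000000000000000.index  00000000000000000000.log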

Kafka Cluster Setup

ZooKeeper cluster setup (see the earlier chapter)

Kafka Cluster

Environment Preparation

  • 3 servers (192.168.0.102, 192.168.0.103, 192.168.0.104)
  • Kafka version: kafka_2.9.2-0.8.1.1

Configuring the Environment

  1. Go to your installation directory with cd /usr/local/program and create a directory for Kafka's message logs with mkdir kafkaLogs.
  2. Configure the environment variables, e.g. in /etc/profile:

    # set environment
    export JAVA_HOME=/usr/local/program/jdk1.7.0_79
    export ZK_HOME=/usr/local/program/zk/zookeeper-3.4.6
    export KAFKA_HOME=/usr/local/program/kafka/kafka_2.9.2-0.8.1.1
    export PATH=$JAVA_HOME/bin:$ZK_HOME/bin:$KAFKA_HOME/bin:$PATH
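
    After editing the profile, reload it in the current shell and check that the variables took effect (assuming they were added to /etc/profile):

    source /etc/profile
    echo $KAFKA_HOME               # should print /usr/local/program/kafka/kafka_2.9.2-0.8.1.1
    which kafka-server-start.sh    # should resolve inside $KAFKA_HOME/bin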
  3. Configure Kafka: enter cd /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/ and edit server.properties:

    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # see kafka.server.KafkaConfig for additional details and defaults

    ############################# Server Basics #############################

    # The id of the broker. This must be set to a unique integer for each broker.
    # Unique identifier of this machine within the cluster
    broker.id=0

    ############################# Socket Server Settings #############################

    # The port the socket server listens on
    # TCP port the broker serves clients on (default 9092)
    port=19092

    # Hostname the broker will bind to. If not set, the server will bind to all interfaces
    # This machine's IP (default localhost)
    host.name=192.168.0.102

    # Hostname the broker will advertise to producers and consumers. If not set, it uses the
    # value for "host.name" if configured. Otherwise, it will use the value returned from
    # java.net.InetAddress.getCanonicalHostName().
    #advertised.host.name=<hostname routable by clients>

    # The port to publish to ZooKeeper for clients to use. If this is not set,
    # it will publish the same port that the broker binds to.
    #advertised.port=<port accessible by clients>

    # The number of threads handling network requests
    num.network.threads=3

    # The number of threads doing disk I/O
    num.io.threads=8

    # The send buffer (SO_SNDBUF) used by the socket server
    # Raised here to 1 MB (default 102400)
    socket.send.buffer.bytes=1048576

    # The receive buffer (SO_RCVBUF) used by the socket server
    # Raised here to 1 MB (default 102400)
    socket.receive.buffer.bytes=1048576

    # The maximum size of a request that the socket server will accept (protection against OOM)
    # Must not exceed the Java heap size
    socket.request.max.bytes=104857600

    ############################# Log Basics #############################

    # A comma seperated list of directories under which to store log files
    # Kafka message log directories, comma-separated if more than one
    log.dirs=/usr/local/program/kafka/kafkaLogs

    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=2

    ############################# Log Flush Policy #############################

    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    # 1. Durability: Unflushed data may be lost if you are not using replication.
    # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.

    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000

    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000

    ############################# Log Retention Policy #############################

    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.

    # The minimum age of a log file to be eligible for deletion
    # Message retention time: 168 hours, i.e. 7 days
    log.retention.hours=168

    # Maximum size of a single message sent to Kafka, about 5 MB here (default 1 MB)
    message.max.bytes=5048576

    # Default replication factor: number of replicas of each partition (default 1)
    default.replication.factor=2

    # Maximum bytes fetched per replication request; must be >= message.max.bytes
    replica.fetch.max.bytes=5048576

    # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
    # segments don't drop below log.retention.bytes.
    #log.retention.bytes=1073741824

    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    # Messages are appended to segment files; once a segment exceeds this size a new one is rolled
    log.segment.bytes=536870912

    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    # How often to check the log for expired messages and delete them
    log.retention.check.interval.ms=60000

    # By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
    # If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
    log.cleaner.enable=false

    ############################# Zookeeper #############################

    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    # Addresses of the ZooKeeper ensemble
    zookeeper.connect=192.168.0.102:12181,192.168.0.103:12181,192.168.0.104:12181

    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=1000000
  4. Do the same on the other machines, changing broker.id in the config file to 1 and 2 respectively and host.name to each machine's own IP (see the sketch below).
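    A low-effort way to propagate the file is to copy it and patch those two fields, e.g. for the second broker (a sketch; paths follow the layout above):

    # on 192.168.0.102
    scp server.properties 192.168.0.103:/usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/
    # then on 192.168.0.103
    sed -i 's/^broker.id=0/broker.id=1/' server.properties
    sed -i 's/^host.name=.*/host.name=192.168.0.103/' server.properties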
  5. Start Kafka in the background: kafka-server-start.sh /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/server.properties &
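    To keep the broker alive after the shell exits, and to confirm it started, something like the following also works (jps ships with the JDK):

    nohup kafka-server-start.sh /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/server.properties > /tmp/kafka.out 2>&1 &
    jps    # a process named "Kafka" should be listed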
  6. Do the same on the other machines.
  7. Create a topic: kafka-topics.sh --create --zookeeper localhost:12181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
  8. Describe the topic: kafka-topics.sh --describe --zookeeper localhost:12181 --topic my-replicated-topic
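    The output should look roughly like this (the leader assignment and ISR will vary from run to run):

    Topic:my-replicated-topic   PartitionCount:1   ReplicationFactor:3   Configs:
        Topic: my-replicated-topic   Partition: 0   Leader: 1   Replicas: 1,2,0   Isr: 1,2,0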
  9. Start a console producer: kafka-console-producer.sh --broker-list 192.168.0.102:19092 --topic my-replicated-topic
  10. Start a console consumer (on another machine): kafka-console-consumer.sh --zookeeper localhost:12181 --from-beginning --topic my-replicated-topic

    Test: type a message at the producer's command line; it should appear in the consumer's terminal.

  11. Done: the Kafka cluster is now up and running.
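
As a final sanity check, the replication factor of 3 means the topic tolerates a broker failure: stop the partition leader and the consumer should keep receiving messages (a sketch; the PID and the newly elected leader will differ on your machines):

    # on the machine currently hosting the leader
    jps | grep Kafka    # note the broker's PID
    kill <pid>          # <pid> is the PID found above
    # from any machine: a new leader has been elected and the dead broker has left the Isr list
    kafka-topics.sh --describe --zookeeper localhost:12181 --topic my-replicated-topic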


Reference: jikexueyuan
