Kafka Multi-Broker Cluster

Kafka Multi-Broker Cluster – In this tutorial, we shall learn to set up a three-node cluster, where each node is a broker instance. Multiple physical machines are not required to realize this; all three broker instances can run on a single local machine.

Prepare Configuration Files

We need to create one config file for each broker instance.

Since we plan to run three brokers, and a default server.properties file already exists, we shall create two more config files.

To create config files for each broker, follow these steps.

1. Navigate to the Kafka root directory.

2. Open a Terminal from this location.

3. Execute the following copy commands on Ubuntu or any other Linux-based machine.

$ cp config/server.properties config/server-1.properties
$ cp config/server.properties config/server-2.properties

Note: All of the commands mentioned should be run from the Kafka root directory.

For Windows, use the copy command (note the backslashes in the paths).

> copy config\server.properties config\server-1.properties
> copy config\server.properties config\server-2.properties

4. Once you have copied the config files, edit them so that each broker gets a unique id, a unique port and a unique log directory. The values below follow the standard Kafka quickstart convention; adjust them if your setup differs.

config/server-1.properties :

broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

config/server-2.properties :

broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
broker.id is a unique integer that identifies each broker instance in the cluster.

By default, a Kafka broker listens on port 9092. For the other two broker instances, we have provided the ports 9093 and 9094.

A separate log.dirs (log directory) value is provided for each broker, since all our Kafka broker instances run on the same local machine. This keeps the log files of each instance apart.

Once the config files are ready, we need to bring our ZooKeeper and broker instances up.
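Editing the two files by hand, as in step 4, can also be scripted. The short Python sketch below is our own illustration, not part of the tutorial: it shows how the per-broker overrides could be derived from a base properties text. The tiny inline base string stands in for the real config/server.properties, and the ports and paths are the quickstart values used above.

```python
# Sketch: derive per-broker config texts from a base server.properties.
# The overrides (broker.id, listeners, log.dirs) follow the Kafka
# quickstart convention; adjust ports and paths for your own setup.

def make_broker_config(base_text: str, overrides: dict) -> str:
    """Return base_text with the given properties replaced or appended."""
    out, seen = [], set()
    for line in base_text.splitlines():
        key = line.split("=", 1)[0].strip()
        if key in overrides:
            out.append(f"{key}={overrides[key]}")
            seen.add(key)
        else:
            out.append(line)
    for key, value in overrides.items():
        if key not in seen:
            out.append(f"{key}={value}")
    return "\n".join(out) + "\n"

# Tiny stand-in for the real config/server.properties:
base = "broker.id=0\nlisteners=PLAINTEXT://:9092\nlog.dirs=/tmp/kafka-logs\n"

server_1 = make_broker_config(base, {"broker.id": "1",
                                     "listeners": "PLAINTEXT://:9093",
                                     "log.dirs": "/tmp/kafka-logs-1"})
server_2 = make_broker_config(base, {"broker.id": "2",
                                     "listeners": "PLAINTEXT://:9094",
                                     "log.dirs": "/tmp/kafka-logs-2"})
print(server_1)
```

In a real setup you would write these strings out to config/server-1.properties and config/server-2.properties instead of printing them.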


Start Zookeeper

Run the following command

$ bin/zookeeper-server-start.sh config/zookeeper.properties
[2017-09-30 12:21:18,920] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-09-30 12:21:18,960] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-09-30 12:21:18,960] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-09-30 12:21:18,960] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)

Start Kafka Broker Instances

Once ZooKeeper is up and running, open three different terminal windows and run one of the following commands in each of them, to start the Kafka broker instances.

$ bin/kafka-server-start.sh config/server.properties
$ bin/kafka-server-start.sh config/server-1.properties
$ bin/kafka-server-start.sh config/server-2.properties

Create a Topic that Replicates

Create a topic with a replication factor of three, so that we can demonstrate how a topic partition is replicated across the three nodes.

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-topic
Created topic "my-topic".
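As a side note, the way Kafka spreads those three replicas over the three brokers can be pictured with a simplified round-robin model. The sketch below is our own illustration only — real Kafka also randomizes the starting broker and shifts follower placement — but with a start broker of 1 it reproduces the Replicas: 1,2,0 layout that the describe command reports for this topic.

```python
# Simplified model of round-robin replica assignment. Real Kafka
# randomizes the starting broker and offsets the followers; this
# sketch only shows how each partition's replicas land on distinct
# brokers when replication_factor <= number of brokers.

def assign_replicas(brokers, num_partitions, replication_factor, start=0):
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(start + p + i) % len(brokers)]
                         for i in range(replication_factor)]
    return assignment

# Three brokers (ids 0, 1, 2), one partition, replication factor 3,
# starting at broker 1 to mirror this tutorial's cluster.
print(assign_replicas([0, 1, 2], 1, 3, start=1))
```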

Describe Topic

There are three broker instances running. To know which Kafka broker is doing what with the Kafka topic that we created in the earlier step, run the following command.

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
Topic:my-topic    PartitionCount:1 ReplicationFactor:3 Configs:
    Topic: my-topic    Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0

Leader: 1 means broker instance 1 is responsible for all reads and writes for the given partition. Writes are done by Kafka producers and reads by Kafka consumers.

Replicas: 1,2,0 means broker instances 1, 2 and 0 replicate the log for this partition, irrespective of which one is the leader.

Isr: 1,2,0 means broker instances 1, 2 and 0 are in-sync replicas, i.e. the replicas currently caught up with the leader.
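To make the relationship between these fields concrete, here is a small hypothetical parser for one partition line of the --describe output. The regex and function are our own sketch written for the format shown above; they are not a Kafka API.

```python
import re

# Parse one partition line of `kafka-topics.sh --describe` output.
# This is an illustrative sketch, not an official Kafka parser.

def parse_partition_line(line: str) -> dict:
    m = re.search(
        r"Partition:\s*(\d+)\s+Leader:\s*(\d+)\s+"
        r"Replicas:\s*([\d,]+)\s+Isr:\s*([\d,]+)", line)
    if not m:
        raise ValueError("unrecognized describe line")
    to_ids = lambda s: [int(x) for x in s.split(",")]
    return {"partition": int(m.group(1)),
            "leader": int(m.group(2)),
            "replicas": to_ids(m.group(3)),
            "isr": to_ids(m.group(4))}

line = "Topic: my-topic    Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0"
info = parse_partition_line(line)
print(info)
```

Note that the leader always appears in both the replicas list and the ISR; the other entries are followers.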

Now Kafka producers may send messages to the Kafka topic, my-topic, and Kafka consumers may subscribe to it.

Testing Fault-Tolerance of Kafka Multi-Broker Cluster

We know the leader (broker instance 1) for the Kafka topic, my-topic. Let's kill it and see what happens when the leader goes down.

Find the process id (PID) of the broker-1 instance.

kafkauser@tutorialkart:/home/kafkauser/kafka/kafka_2.11-$ ps -aef|grep server-1.properties
tutorialkart 5058 4749 1 14:14 pts/16 00:01:23 /usr/lib/jvm/default-java/jre/bin/java -Xmx1G -Xms1G

The first number after the user name (5058 here) is the process id we need. Kill the broker instance using this id.

$ kill -9 5058

Once the instance is killed, describe the topic again to check what happened to the Kafka topic, my-topic.

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
Topic:my-topic    PartitionCount:1 ReplicationFactor:3 Configs:
    Topic: my-topic    Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0

When broker instance 1 went down, intentionally or unintentionally, a new leader was elected with ZooKeeper's help. In this case broker instance 2 was elected as leader, and broker 1 dropped out of the in-sync replica list.
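The failover we just observed can be modelled in a few lines. In the simplified sketch below (our own illustration — the actual election is performed by the Kafka controller, with ZooKeeper tracking broker liveness), the new leader is the first broker in the replica list that is still in the in-sync replica set.

```python
# Simplified model of Kafka leader failover: when the current leader
# dies, pick the first replica (in assignment order) that is still
# in the ISR. This mirrors the tutorial's outcome; the real controller
# logic inside Kafka is more involved.

def elect_leader(replicas, isr, dead_brokers):
    surviving_isr = [b for b in isr if b not in dead_brokers]
    for broker in replicas:
        if broker in surviving_isr:
            return broker, surviving_isr
    raise RuntimeError("no in-sync replica available")

# Before the failure: Replicas: 1,2,0  Isr: 1,2,0  Leader: 1
leader, isr = elect_leader([1, 2, 0], [1, 2, 0], dead_brokers={1})
print(leader, isr)
```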


In this Apache Kafka tutorial, we have learnt to build a Kafka multi-broker cluster, and seen how ZooKeeper helps with fault tolerance when the leader goes down.