Kafka Multi-Broker Cluster

Kafka Multi-Broker Cluster – In this tutorial, we shall learn to set up a three-node Kafka cluster, where each node is a broker instance. Multiple physical machines are not required for this: we can run all the broker instances on a single local machine.

Prepare Configuration Files

We need to create config files, one for each broker instance.

Since we plan to create three nodes, and a default server.properties file already exists, we shall create two more config files.

To create the config files for each broker :

  1. Navigate to the Kafka root directory
  2. Open a Terminal
  3. Execute the copy commands shown after this list
    For Ubuntu or any other Linux based machine, use the cp command
    Note : All of the commands mentioned in this tutorial should be run from the Kafka root directory.

    For Windows, use the copy command

  4. Once you have copied the config files, paste the content shown after this list into the respective config files.

    broker.id is the unique identifier given to a broker instance.

    By default, a Kafka broker starts at port 9092. For the other two broker instances, we have provided ports 9093 and 9094.

    A separate log.dir (log directory) is provided for each broker, since all of our Kafka broker instances run on the same local machine. This keeps the log files of each instance separate.
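The following is a minimal sketch of the copy commands and the config snippets, assuming the two new files are named server-1.properties and server-2.properties and that the log directories live under /tmp; adjust the names and paths to your setup.

# Ubuntu or any other Linux based machine : copy the default config twice (run from the Kafka root directory)
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties

REM Windows : the same copies using the copy command
copy config\server.properties config\server-1.properties
copy config\server.properties config\server-2.properties

In each file, set broker.id, the listener port, and the log directory as shown below, and leave the rest of each file at its defaults. Setting the port through the listeners property is one common way to do it.

# config/server.properties (broker 0)
broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs-0

# config/server-1.properties (broker 1)
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1

# config/server-2.properties (broker 2)
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2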

Once the config files are ready, we need to bring our Zookeeper and broker instances up.

Start Zookeeper

Run the following command to start Zookeeper.
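A sketch of the start command, assuming the Zookeeper bundled with Kafka and its default config file are used:

bin/zookeeper-server-start.sh config/zookeeper.properties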

 

Start Kafka Broker Instances

Once Zookeeper is up and running, open three different terminal windows and run one of the following commands in each of them, to start the Kafka broker instances.
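A sketch of the three start commands, one per terminal, assuming the config file names used above:

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties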

 

Create a Topic that Replicates

Create a topic with a replication factor of three, so that we can demonstrate how a topic partition is replicated across the three nodes.
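A sketch of the create command, assuming the topic is named my-topic (as used in the rest of this tutorial), a single partition, and Zookeeper listening on localhost:2181; on newer Kafka versions, replace --zookeeper localhost:2181 with --bootstrap-server localhost:9092.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-topic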

 

Describe Topic

There are three broker instances running. To know which Kafka broker is doing what with the Kafka topic we created in the earlier step, run the following command.
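A sketch of the describe command, followed by illustrative output whose values match the discussion below (the exact output format varies with the Kafka version):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic

Topic: my-topic  PartitionCount: 1  ReplicationFactor: 3  Configs:
    Topic: my-topic  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0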

Leader: 1 means broker instance 1 is responsible for all reads and writes for the given partition. Reads and writes are done by Kafka Consumers and Kafka Producers respectively.

Replicas: 1,2,0 means broker instances 1, 2 and 0 replicate the log for this partition, irrespective of which of them is the leader.

Isr: 1,2,0 means broker instances 1, 2 and 0 are the in-sync replicas.

Now Kafka Producers may send messages to the Kafka topic, my-topic, and Kafka Consumers may subscribe to it.
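A quick way to try this is with the console clients shipped with Kafka; a sketch, assuming the broker ports used above (newer producer versions use --bootstrap-server instead of --broker-list):

bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic my-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning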

 

Testing Fault-Tolerance of Kafka Multi-Broker Cluster

We know the leader (broker instance 1) for the Kafka topic, my-topic. Let's kill it and see what Zookeeper does when the leader goes down.

Find the process id of the broker 1 instance.
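On Linux, one way to find it is to grep the process list for the config file the broker was started with (server-1.properties here, assuming the file names used above):

ps aux | grep server-1.properties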

The first number in the output is the process id we need. Kill the broker instance using this id.
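A sketch of the kill command, where <pid> is a placeholder for the process id found above:

kill -9 <pid>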

Once the instance is killed, describe the topic to check what has happened to the Kafka topic, my-topic.
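Running describe again, with illustrative output matching the result described below (broker 2 is now the leader, and broker 1 is no longer in the in-sync replica set):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic

Topic: my-topic  PartitionCount: 1  ReplicationFactor: 3  Configs:
    Topic: my-topic  Partition: 0  Leader: 2  Replicas: 1,2,0  Isr: 2,0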

When broker instance 1 went down, intentionally or unintentionally, Zookeeper elected another node as the leader. In this case, it elected broker instance 2 as the leader.

 

Conclusion

In this Apache Kafka Tutorial, we have learnt how to build a Kafka multi-broker cluster, and how Zookeeper helps with fault-tolerance when the leader goes down.