Starting a Kafka Cluster: Single-Broker and 3-Broker with KRaft
Prerequisites
- Docker Desktop installed and running
- Docker Compose v2 (bundled with Docker Desktop 4.x+)
- Ports 9092–9097 and 8081 free on your machine
All articles in this series assume a running local Kafka cluster. Start with the single-broker setup for articles 1–6, then switch to the 3-broker cluster when we cover replication and fault tolerance.
Single-Broker Cluster (Development)
This is the simplest setup — one Kafka node running in combined mode (broker + controller).
flowchart TB
subgraph DC["Docker Compose"]
K1["kafka\nports: 9092 (client)\n 9093 (controller)"]
SR["schema-registry\nport: 8081\n(used from article 21)"]
K1 <--> SR
end
App["Spring Boot App\n(localhost)"] -->|"bootstrap-servers:\nlocalhost:9092"| K1
docker-compose.yml (single broker):
version: '3.8'

services:
  kafka:
    image: confluentinc/cp-kafka:7.6.0
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      # PLAINTEXT (29092) is advertised to clients inside the Docker network
      # (schema-registry); PLAINTEXT_HOST (9092) is advertised to clients on
      # your machine (the Spring Boot app).
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_LOG_RETENTION_HOURS: 168
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"
    volumes:
      - kafka-data:/var/lib/kafka/data
    healthcheck:
      # Confluent images ship the CLI tools without the .sh suffix
      test: ["CMD", "kafka-broker-api-versions", "--bootstrap-server", "localhost:9092"]
      interval: 10s
      timeout: 10s
      retries: 5

  schema-registry:
    image: confluentinc/cp-schema-registry:7.6.0
    container_name: schema-registry
    depends_on:
      kafka:
        condition: service_healthy
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      # Connect via the internal listener — the host listener advertises
      # localhost, which would resolve to the schema-registry container itself
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

volumes:
  kafka-data:
Start and Verify
# Start the cluster
docker compose up -d
# Wait for health check to pass (usually 15-30 seconds)
docker compose ps
# Confirm Kafka is ready
docker exec kafka kafka-broker-api-versions --bootstrap-server localhost:9092 | head -5
# Expected output:
# localhost:9092 (id: 1 rack: null) -> (
# Produce(0): 0 to 10 [usable: 10],
# ...
View Cluster Metadata
# List brokers in the cluster
docker exec kafka kafka-metadata-quorum \
--bootstrap-server localhost:9092 \
describe --status
# Expected output:
# ClusterId: MkU3OEVBNTcwNTJENDM2Qk
# LeaderId: 1
# LeaderEpoch: 1
# HighWatermark: ...
3-Broker Cluster (Production-Like)
For testing replication, fault tolerance, and producer acknowledgments, use a 3-broker cluster where each broker runs in combined mode.
flowchart TB
subgraph DC["Docker Compose — 3-Broker Cluster"]
direction TB
K1["kafka-1\nNode ID: 1\nbroker: 9092\ncontroller: 9093"]
K2["kafka-2\nNode ID: 2\nbroker: 9094\ncontroller: 9095"]
K3["kafka-3\nNode ID: 3\nbroker: 9096\ncontroller: 9097"]
K1 <-->|Raft| K2
K2 <-->|Raft| K3
K1 <-->|Raft| K3
end
App["Spring Boot App"] -->|"9092,9094,9096"| K1
App -->|"9092,9094,9096"| K2
App -->|"9092,9094,9096"| K3
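Why three nodes rather than two? A KRaft controller quorum needs a strict majority of voters to elect a leader, so three voters tolerate exactly one failure (and two voters tolerate none). The arithmetic, as a quick sketch in plain Java (no Kafka dependency):

```java
// Sketch: KRaft controller quorum sizing.
// A quorum of n voters needs floor(n/2) + 1 votes to elect a leader,
// so it can lose n - (floor(n/2) + 1) nodes and stay available.
public class QuorumMath {
    static int majority(int voters) {
        return voters / 2 + 1;
    }

    static int toleratedFailures(int voters) {
        return voters - majority(voters);
    }

    public static void main(String[] args) {
        System.out.println(majority(3));          // 2 votes needed
        System.out.println(toleratedFailures(3)); // survives 1 failed node
        System.out.println(toleratedFailures(1)); // single-broker: survives 0
    }
}
```

This is also why adding a fourth voter would not help: majority(4) = 3, so four voters still tolerate only one failure.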
docker-compose-cluster.yml (3 brokers):
version: '3.8'

x-kafka-common: &kafka-common
  image: confluentinc/cp-kafka:7.6.0

# Environment shared by all three brokers; per-broker values
# (node ID, listeners) are added in each service below.
x-kafka-env: &kafka-env
  KAFKA_PROCESS_ROLES: broker,controller
  KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
  KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-1:9093,2@kafka-2:9095,3@kafka-3:9097
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
  KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 3
  KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 2
  KAFKA_DEFAULT_REPLICATION_FACTOR: 3
  KAFKA_MIN_INSYNC_REPLICAS: 2
  KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
  KAFKA_LOG_DIRS: /var/lib/kafka/data
  CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"

services:
  kafka-1:
    <<: *kafka-common
    hostname: kafka-1
    container_name: kafka-1
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      <<: *kafka-env
      KAFKA_NODE_ID: 1
      # PLAINTEXT (29092) carries broker-to-broker replication and in-network
      # clients; PLAINTEXT_HOST is the port published to your machine.
      # Advertising only localhost would break inter-broker traffic.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:29092,PLAINTEXT_HOST://localhost:9092
    volumes:
      - kafka-1-data:/var/lib/kafka/data

  kafka-2:
    <<: *kafka-common
    hostname: kafka-2
    container_name: kafka-2
    ports:
      - "9094:9094"
      - "9095:9095"
    environment:
      <<: *kafka-env
      KAFKA_NODE_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9094,CONTROLLER://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:29092,PLAINTEXT_HOST://localhost:9094
    volumes:
      - kafka-2-data:/var/lib/kafka/data

  kafka-3:
    <<: *kafka-common
    hostname: kafka-3
    container_name: kafka-3
    ports:
      - "9096:9096"
      - "9097:9097"
    environment:
      <<: *kafka-env
      KAFKA_NODE_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9096,CONTROLLER://0.0.0.0:9097
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-3:29092,PLAINTEXT_HOST://localhost:9096
    volumes:
      - kafka-3-data:/var/lib/kafka/data

  schema-registry:
    image: confluentinc/cp-schema-registry:7.6.0
    container_name: schema-registry
    depends_on:
      - kafka-1
      - kafka-2
      - kafka-3
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka-1:29092,kafka-2:29092,kafka-3:29092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

volumes:
  kafka-1-data:
  kafka-2-data:
  kafka-3-data:
Start the 3-Broker Cluster
docker compose -f docker-compose-cluster.yml up -d
# Verify all 3 brokers are up
docker exec kafka-1 kafka-metadata-quorum \
--bootstrap-server localhost:9092 \
describe --replication
# Expected: 3 replicas listed, one active leader
Verify Replication
# Create a topic with replication factor 3
docker exec kafka-1 kafka-topics \
--bootstrap-server localhost:9092 \
--create \
--topic orders \
--partitions 3 \
--replication-factor 3
# Describe the topic — note Leader, Replicas, ISR columns
docker exec kafka-1 kafka-topics \
--bootstrap-server localhost:9092 \
--describe \
--topic orders
# Example output:
# Topic: orders Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
# Topic: orders Partition: 1 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
# Topic: orders Partition: 2 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Each partition has a leader on a different broker — Kafka distributes leaders evenly.
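The staggered Replicas lists come from round-robin assignment: each partition's replica list starts one broker further along, and the first replica is the preferred leader. A simplified sketch of the pattern in plain Java (not Kafka's exact algorithm, which also applies a random start offset and rack awareness):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified round-robin replica assignment: partition p's replica list
// starts at broker index (p mod brokerCount) and wraps around.
public class ReplicaAssignment {
    static List<List<Integer>> assign(int partitions, int replicationFactor, int[] brokerIds) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int p = 0; p < partitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                replicas.add(brokerIds[(p + r) % brokerIds.length]);
            }
            assignment.add(replicas); // first entry is the preferred leader
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Matches the --describe output above: P0 -> 1,2,3  P1 -> 2,3,1  P2 -> 3,1,2
        System.out.println(assign(3, 3, new int[]{1, 2, 3}));
        // prints [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
    }
}
```

Because the preferred leader rotates, produce and fetch load for a multi-partition topic spreads across all three brokers instead of piling onto one.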
Spring Boot Connection Configuration
For the single-broker setup:
# application.properties (single broker)
spring.kafka.bootstrap-servers=localhost:9092
For the 3-broker cluster — list all brokers (client discovers the rest automatically from any one):
# application.properties (3-broker cluster)
spring.kafka.bootstrap-servers=localhost:9092,localhost:9094,localhost:9096
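If your project uses YAML configuration instead of properties, the equivalent (assuming the same local ports) is:

```yaml
# application.yml (3-broker cluster)
spring:
  kafka:
    bootstrap-servers: localhost:9092,localhost:9094,localhost:9096
```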
The bootstrap-servers list is only used for the initial connection. Once connected, the client fetches full cluster metadata from any broker and communicates with leaders directly.
Simulating Broker Failure (3-Broker Cluster)
With the 3-broker cluster running, try killing one broker:
sequenceDiagram
participant You
participant K1 as kafka-1\n(leader of P0)
participant K2 as kafka-2
participant K3 as kafka-3\n(new leader of P0)
You->>K1: docker stop kafka-1
Note over K2,K3: Controller detects failure,\nelects K3 as leader for P0
K3->>K3: Become leader of P0
You->>K2: Produce to orders/P0
K2->>K3: Route to new leader
K3-->>You: ProduceResponse (success)
Note over You,K3: No data loss, automatic failover
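The controller's choice of replacement leader can be modeled simply: pick the first replica in the partition's replica list that is both alive and still in the ISR (this sketch ignores unclean leader election and other edge cases in the real implementation):

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Simplified leader election: first replica that is alive and in the ISR.
public class LeaderElection {
    static Optional<Integer> electLeader(List<Integer> replicas, Set<Integer> isr, Set<Integer> alive) {
        return replicas.stream()
                .filter(isr::contains)
                .filter(alive::contains)
                .findFirst();
    }

    public static void main(String[] args) {
        List<Integer> replicas = List.of(1, 3, 2); // P0's replica list; preferred leader is 1
        Set<Integer> isr = Set.of(1, 2, 3);        // all replicas in sync before the failure
        Set<Integer> alive = Set.of(2, 3);         // kafka-1 has been stopped
        System.out.println(electLeader(replicas, isr, alive)); // prints Optional[3]
    }
}
```

Because the replacement leader must come from the ISR, it already holds every committed record, which is why the failover in the diagram loses no data.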
# Stop broker 1
docker stop kafka-1
# Describe topic — see leadership shift
docker exec kafka-2 kafka-topics \
--bootstrap-server localhost:9094 \
--describe \
--topic orders
# All partitions that had kafka-1 as leader now have a new leader
# Restart kafka-1
docker start kafka-1
# Describe again — kafka-1 rejoins as follower, then may reclaim leadership
Useful CLI Commands for Day-to-Day Use
# List all topics
docker exec kafka kafka-topics \
--bootstrap-server localhost:9092 --list
# Describe a specific topic
docker exec kafka kafka-topics \
--bootstrap-server localhost:9092 --describe --topic orders
# List consumer groups
docker exec kafka kafka-consumer-groups \
--bootstrap-server localhost:9092 --list
# Describe a consumer group (shows lag)
docker exec kafka kafka-consumer-groups \
--bootstrap-server localhost:9092 \
--describe --group inventory-service
# Reset offsets for a group to beginning
docker exec kafka kafka-consumer-groups \
--bootstrap-server localhost:9092 \
--group inventory-service \
--topic orders \
--reset-offsets --to-earliest \
--execute
# Delete a topic
docker exec kafka kafka-topics \
--bootstrap-server localhost:9092 \
--delete --topic orders
Stopping and Cleaning Up
# Stop all containers but keep volumes (data persists)
docker compose down
# Stop and delete all data (clean slate)
docker compose down -v
Key Takeaways
- The single-broker Docker Compose setup (combined mode) is used throughout this series for simplicity
- The 3-broker setup mirrors production: replication.factor=3, min.insync.replicas=2
- List all brokers in bootstrap-servers — the client discovers the full cluster from any one broker
- Simulate broker failure by stopping a container — leadership shifts automatically within seconds
- kafka-topics --describe shows Leader, Replicas, and ISR for every partition
- kafka-consumer-groups --describe shows consumer lag per partition
Next: Kafka CLI: Creating Topics, Producing, and Consuming Messages — use the CLI tools to create topics, send messages, and observe consumption before writing any Spring code.