<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Spring-Kafka on Devops Monk</title><link>https://devops-monk.com/tags/spring-kafka/</link><description>Recent content in Spring-Kafka on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/spring-kafka/index.xml" rel="self" type="application/rss+xml"/><item><title>@SendTo and @KafkaHandler: Chaining Consumers and Multi-Type Dispatch</title><link>https://devops-monk.com/tutorials/spring-kafka/sendto-kafkahandler/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/sendto-kafkahandler/</guid><description>@SendTo — Chaining Listeners @SendTo on a @KafkaListener method automatically sends the return value to another Kafka topic. This is how you build event pipelines without manually calling KafkaTemplate.send() in your listener.
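For @SendTo to deliver the return value, the listener container factory needs a reply template. A minimal sketch (the bean wiring is illustrative; the event types follow the examples in this series):
@Bean
public ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderPlacedEvent&amp;gt; kafkaListenerContainerFactory(
        ConsumerFactory&amp;lt;String, OrderPlacedEvent&amp;gt; consumerFactory,
        KafkaTemplate&amp;lt;String, OrderConfirmedEvent&amp;gt; kafkaTemplate) {
    var factory = new ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderPlacedEvent&amp;gt;();
    factory.setConsumerFactory(consumerFactory);
    // Without a reply template, the @SendTo return value has nowhere to go
    factory.setReplyTemplate(kafkaTemplate);
    return factory;
}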
flowchart LR T1["orders\n(OrderPlacedEvent)"] T2["orders-confirmed\n(OrderConfirmedEvent)"] T3["inventory-events\n(StockReservedEvent)"] T1 -->|"@KafkaListener\n@SendTo"| L1["confirmOrder()"] L1 --> T2 T2 -->|"@KafkaListener\n@SendTo"| L2["reserveStock()"] L2 --> T3 Basic @SendTo @KafkaListener(topics = &amp;#34;orders&amp;#34;, groupId = &amp;#34;confirmation-service&amp;#34;) @SendTo(&amp;#34;orders-confirmed&amp;#34;) public OrderConfirmedEvent onOrder(OrderPlacedEvent event) { // Return value is automatically sent to &amp;#34;orders-confirmed&amp;#34; return new OrderConfirmedEvent( event.</description></item><item><title>Avro Serialization with Confluent Schema Registry</title><link>https://devops-monk.com/tutorials/spring-kafka/avro-schema-registry/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/avro-schema-registry/</guid><description>Why Avro and Schema Registry? JSON has no schema enforcement — a producer can change a field name and silently break every consumer. Avro + Schema Registry solves this:
Avro gives you a compact binary format with a schema definition; Schema Registry stores and versions schemas, enforces compatibility rules, and prevents breaking changes from reaching consumers. [Diagram: OrderPlacedEvent, serialized by KafkaAvroSerializer with a Schema Registry register/lookup, becomes [schema_id (4 bytes)] + [avro payload]]</description></item><item><title>Consumer @Bean Configuration: ConcurrentKafkaListenerContainerFactory</title><link>https://devops-monk.com/tutorials/spring-kafka/consumer-bean-configuration/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/consumer-bean-configuration/</guid><description>Why @Bean Configuration? application.properties covers the common cases. But real applications need multiple listener factories — one for orders with manual acknowledgment and concurrency 3, another for analytics events with batch listening and different deserializers. @Bean configuration gives you a factory per use case, full IDE support, and the ability to wire in custom components like error handlers and interceptors.
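A minimal sketch of one such factory (bean and type names are illustrative): manual acknowledgment plus concurrency 3 for the orders listener:
@Bean
public ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderPlacedEvent&amp;gt; ordersFactory(
        ConsumerFactory&amp;lt;String, OrderPlacedEvent&amp;gt; consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderPlacedEvent&amp;gt;();
    factory.setConsumerFactory(consumerFactory);
    factory.setConcurrency(3); // three consumer threads for this listener
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}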
The Factory Relationship [Diagram: ConsumerFactory (connection + deserialization config) feeding ConcurrentKafkaListenerContainerFactory (container behaviour config)]</description></item><item><title>Consumer Groups, Offsets, and the __consumer_offsets Topic</title><link>https://devops-monk.com/tutorials/spring-kafka/consumer-groups-offsets/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/consumer-groups-offsets/</guid><description>What Is a Consumer Group? A consumer group is a set of consumer instances that jointly consume a topic. Kafka assigns each partition to exactly one consumer within the group at a time. This is what enables parallel processing: multiple consumers in the same group read different partitions simultaneously.
[Diagram: topic orders with 4 partitions (Partition 0 to Partition 3) consumed by consumer group inventory-service]</description></item><item><title>Consumer Groups: Parallel Processing and Partition Assignment Strategies</title><link>https://devops-monk.com/tutorials/spring-kafka/consumer-groups/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/consumer-groups/</guid><description>Consumer Group Fundamentals A consumer group is how Kafka distributes work across multiple consumers. Each partition in a topic is assigned to exactly one consumer instance in the group at any given time.
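The assignment strategy is ordinary consumer config. A sketch that opts into the cooperative sticky assignor (the surrounding ConsumerFactory wiring is assumed):
Map&amp;lt;String, Object&amp;gt; props = new HashMap&amp;lt;&amp;gt;();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, &amp;#34;localhost:9092&amp;#34;);
props.put(ConsumerConfig.GROUP_ID_CONFIG, &amp;#34;inventory-service&amp;#34;);
// Cooperative rebalancing: consumers keep the partitions that are not being moved
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        CooperativeStickyAssignor.class.getName());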
flowchart TB subgraph Topic["Topic: orders — 6 partitions"] P0["P0"] P1["P1"] P2["P2"] P3["P3"] P4["P4"] P5["P5"] end subgraph CG1["Group: inventory-service (3 instances)"] I1["Instance 1\nP0, P1"] I2["Instance 2\nP2, P3"] I3["Instance 3\nP4, P5"] end subgraph CG2["Group: notification-service (1 instance)"] N1["Instance 1\nP0,P1,P2,P3,P4,P5"] end P0 &amp; P1 --> I1 P2 &amp; P3 --> I2 P4 &amp; P5 --> I3 P0 &amp; P1 &amp; P2 &amp; P3 &amp; P4 &amp; P5 --> N1 Both groups receive all events — they are independent.</description></item><item><title>Custom Serializers and Deserializers</title><link>https://devops-monk.com/tutorials/spring-kafka/custom-serializers/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/custom-serializers/</guid><description>When to Write a Custom Serializer Spring Kafka ships JSON and Avro support. You need a custom serializer when:
Your team uses Protobuf or MessagePack and wants native support You need a compact binary format for high-throughput topics (pricing ticks, sensor readings) You&amp;rsquo;re integrating with a legacy system that publishes a fixed binary protocol You want deterministic serialization for event deduplication or content-addressed storage The Serializer and Deserializer Interfaces // org.</description></item><item><title>Dead Letter Topics: Routing Failed Messages with DeadLetterPublishingRecoverer</title><link>https://devops-monk.com/tutorials/spring-kafka/dead-letter-topics/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/dead-letter-topics/</guid><description>What Is a Dead Letter Topic? When a record fails processing and retries are exhausted, you have two options: skip it (losing the data) or park it somewhere for inspection and reprocessing. A dead-letter topic (DLT) is that parking lot — a Kafka topic that holds records that could not be processed, enriched with error metadata headers.
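The usual wiring hands a DeadLetterPublishingRecoverer to the error handler. A minimal sketch, assuming a KafkaTemplate&amp;lt;Object, Object&amp;gt; bean is available:
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate&amp;lt;Object, Object&amp;gt; template) {
    // Default destination resolver: same partition on &amp;lt;topic&amp;gt;.DLT
    var recoverer = new DeadLetterPublishingRecoverer(template);
    // Retry 3 times, 1s apart, then publish the record to the DLT
    return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3));
}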
[Diagram: a record at offset 42 with bad data moves from the orders topic to the orders.DLT topic]</description></item><item><title>Dynamic Listener Containers and Programmatic Topic Registration</title><link>https://devops-monk.com/tutorials/spring-kafka/dynamic-listeners/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/dynamic-listeners/</guid><description>Why Dynamic Listeners? @KafkaListener is declared at compile time. Some scenarios require listeners created at runtime:
Multi-tenant SaaS — each tenant onboards to their own topic; you can&amp;rsquo;t redeploy to add @KafkaListener for each new tenant Feature flags — enable or disable a listener without a deployment Plugin systems — modules register their own topic subscriptions when loaded Admin APIs — operators subscribe to new topics via a REST endpoint ConcurrentMessageListenerContainer The core building block is ConcurrentMessageListenerContainer — the same class @KafkaListener uses internally, but constructed and started manually:</description></item><item><title>Error Handling Basics: DefaultErrorHandler and CommonErrorHandler</title><link>https://devops-monk.com/tutorials/spring-kafka/error-handling-basics/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/error-handling-basics/</guid><description>What Happens When a Listener Throws? Without an error handler, an uncaught exception from @KafkaListener causes the container to log the error and retry the same record on the next poll — indefinitely. One bad record can block an entire partition forever.
DefaultErrorHandler fixes this: it retries a configurable number of times with backoff, then calls a ConsumerRecordRecoverer (e.g. send to a dead-letter topic) and moves on.
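A sketch of that configuration (the intervals are illustrative):
@Bean
public DefaultErrorHandler errorHandler() {
    var backOff = new ExponentialBackOffWithMaxRetries(4); // 4 retries after the first failure
    backOff.setInitialInterval(1000L);
    backOff.setMultiplier(2.0);
    // No recoverer given: after the retries, the failure is logged and the record skipped
    return new DefaultErrorHandler(backOff);
}
The container factory picks it up via factory.setCommonErrorHandler(errorHandler).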
DefaultErrorHandler — The Modern API Spring Kafka 2.</description></item><item><title>Filtering Messages with RecordFilterStrategy</title><link>https://devops-monk.com/tutorials/spring-kafka/message-filtering/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/message-filtering/</guid><description>Why Filter at the Container Level? Multiple consumers can share a topic. The inventory service only cares about PLACED orders; the analytics service wants everything. Rather than putting if (event.getStatus() != PLACED) return; at the top of every listener, Spring Kafka lets you filter records before they reach your method — keeping business logic clean and making the filter reusable across listeners.
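A sketch, assuming hypothetical OrderEvent/OrderStatus types; returning true from the strategy discards the record:
@Bean
public ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderEvent&amp;gt; filteringFactory(
        ConsumerFactory&amp;lt;String, OrderEvent&amp;gt; consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory&amp;lt;String, OrderEvent&amp;gt;();
    factory.setConsumerFactory(consumerFactory);
    // true = discard: drop everything that is not a PLACED order
    factory.setRecordFilterStrategy(record -> record.value().getStatus() != OrderStatus.PLACED);
    return factory;
}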
How RecordFilterStrategy Works [Diagram: the broker returns poll() → [r1, r2, r3, r4]; the filter strategy runs on each record before it reaches the listener]</description></item><item><title>Handling Deserialization Errors Gracefully</title><link>https://devops-monk.com/tutorials/spring-kafka/deserialization-errors/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/deserialization-errors/</guid><description>The Problem: Poison Pills at Deserialization Time A malformed byte sequence — truncated JSON, wrong Avro schema, corrupt payload — throws an exception during deserialization, before the listener method is called. Without special handling, this record blocks the partition indefinitely: the consumer fetches it, fails to deserialize, and fetches it again on the next poll.
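The wiring is plain consumer config: ErrorHandlingDeserializer in front, the real deserializer as its delegate. A sketch for the JSON case:
Map&amp;lt;String, Object&amp;gt; props = new HashMap&amp;lt;&amp;gt;();
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
// The delegate does the real work; failures surface as DeserializationException
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);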
sequenceDiagram participant Broker participant Container participant Deserializer loop forever without ErrorHandlingDeserializer Container->>Broker: poll() Broker-->>Container: [good-record, CORRUPT-RECORD, good-record] Container->>Deserializer: deserialize(CORRUPT-RECORD) Deserializer-->>Container: JsonProcessingException 💥 Note over Container: Partition blocked — same record on every poll end ErrorHandlingDeserializer solves this by catching the deserialization exception and wrapping it in a DeserializationException that the listener container can route to the error handler.</description></item><item><title>Idempotent Producers: Eliminating Duplicate Messages</title><link>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</guid><description>The Duplicate Problem With acks=all and retries enabled, a produce request might be acknowledged by the broker, but the acknowledgment is lost in the network before reaching the producer. The producer, seeing no response, retries — sending the same record again. The broker writes it a second time. The consumer sees a duplicate.
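The remedy this article covers is a single producer flag; a minimal sketch using the plain config map:
Map&amp;lt;String, Object&amp;gt; props = new HashMap&amp;lt;&amp;gt;();
// The broker de-duplicates retries using the producer id plus per-partition sequence numbers
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
props.put(ProducerConfig.ACKS_CONFIG, &amp;#34;all&amp;#34;); // idempotence requires acks=all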
sequenceDiagram participant Producer participant Leader as Broker (Leader) Producer->>Leader: ProduceRequest: OrderPlaced (orderId=1001) Leader->>Leader: Write record at offset 42 ✓ Leader--xProducer: ProduceResponse LOST (network failure) Note over Producer: No ack received — retrying Producer->>Leader: ProduceRequest: OrderPlaced (orderId=1001) [RETRY] Leader->>Leader: Write record at offset 43 ✓ (DUPLICATE!</description></item><item><title>JSON Serialization: JsonSerializer, JsonDeserializer, and Type Mapping</title><link>https://devops-monk.com/tutorials/spring-kafka/json-serialization/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/json-serialization/</guid><description>The Serialization Problem Kafka stores bytes. KafkaTemplate&amp;lt;String, OrderPlacedEvent&amp;gt; needs to turn your Java object into bytes for the producer, and @KafkaListener needs to turn those bytes back into the right Java class on the consumer. Spring Kafka ships JsonSerializer and JsonDeserializer built on Jackson to handle this — but they have several sharp edges that break in real multi-service deployments.
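One common hardening step on the consumer side, sketched here (the package name is hypothetical): bind to a fixed target class and restrict which packages may be deserialized:
JsonDeserializer&amp;lt;OrderPlacedEvent&amp;gt; deserializer = new JsonDeserializer&amp;lt;&amp;gt;(OrderPlacedEvent.class);
// Only classes from trusted packages may be instantiated from incoming bytes
deserializer.addTrustedPackages(&amp;#34;com.example.events&amp;#34;);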
How Spring Kafka JSON Serialization Works [Diagram: the serialization path starting from the Order Service producer]</description></item><item><title>Kafka Architecture: Brokers, Topics, Partitions, and Replicas</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-architecture/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-architecture/</guid><description>The Cluster: Brokers and the Controller A Kafka cluster is a group of servers, each called a broker. Brokers store data and serve producer/consumer requests. One broker in the cluster acts as the controller — it manages partition leadership, handles broker joins and departures, and coordinates rebalancing.
In KRaft mode (Kafka 3.3+, the default from Kafka 4.0), the controller is built into Kafka itself — no ZooKeeper needed.
[Diagram: the Kafka cluster]</description></item><item><title>Kafka CLI: Creating Topics, Producing, and Consuming Messages</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-cli/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-cli/</guid><description>Why Learn the CLI First? Before writing any Spring code, understanding the Kafka CLI tools gives you the ability to:
Verify your cluster is working correctly Inspect topics and partitions Debug consumer lag issues Replay messages from specific offsets Reset consumer groups during incident recovery All CLI tools are in Kafka&amp;rsquo;s bin/ directory. In Docker, run them with docker exec:
docker exec kafka &amp;lt;tool&amp;gt; &amp;lt;args&amp;gt; kafka-topics.sh: Managing Topics Create a Topic docker exec kafka kafka-topics.</description></item><item><title>Kafka Consumer in Spring Boot: @KafkaListener Basics</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-consumer-basics/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-consumer-basics/</guid><description>How @KafkaListener Works @KafkaListener is a Spring Kafka annotation that registers a method as a Kafka consumer. Under the hood, Spring Kafka creates a ConcurrentMessageListenerContainer — a managed thread pool that continuously polls the broker and dispatches records to your method.
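A minimal sketch (the event type follows the examples in this series):
@Component
public class OrderListener {
    // One annotated method = one managed consumer container
    @KafkaListener(topics = &amp;#34;orders&amp;#34;, groupId = &amp;#34;inventory-service&amp;#34;)
    public void onOrderPlaced(OrderPlacedEvent event) {
        // business logic here
    }
}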
[Diagram: the Kafka Broker feeds fetched records to three poll threads (one per partition) inside the ConcurrentMessageListenerContainer, which dispatch to the @KafkaListener method onOrderPlaced(...)]</description></item><item><title>Kafka Producer in Spring Boot: KafkaTemplate Basics</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-producer-basics/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-producer-basics/</guid><description>How a Spring Kafka Producer Works KafkaTemplate is the central Spring Kafka class for sending messages. It wraps the native Kafka KafkaProducer, manages serialization, and provides a Spring-friendly API for sending records.
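A send-with-callback sketch (getOrderId() and the log field are illustrative; Spring Kafka 3.x returns a CompletableFuture):
CompletableFuture&amp;lt;SendResult&amp;lt;String, OrderPlacedEvent&amp;gt;&amp;gt; future =
        kafkaTemplate.send(&amp;#34;orders&amp;#34;, event.getOrderId(), event);
future.whenComplete((result, ex) -> {
    if (ex != null) {
        log.error(&amp;#34;send failed&amp;#34;, ex); // e.g. timeout after all retries
    } else {
        log.info(&amp;#34;sent to partition {} at offset {}&amp;#34;,
                result.getRecordMetadata().partition(),
                result.getRecordMetadata().offset());
    }
});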
flowchart LR App["Your Service\n(OrderService)"] KT["KafkaTemplate\n(Spring Kafka)"] Buffer["Producer Buffer\n(RecordAccumulator)"] Sender["Sender Thread\n(NetworkClient)"] Broker["Kafka Broker\n(Leader Partition)"] App -->|"send(topic, key, value)"| KT KT -->|serialize + route| Buffer Buffer -->|batch when full\nor linger.ms elapsed| Sender Sender -->|ProduceRequest| Broker Broker -->|ProduceResponse| Sender Sender -->|callback| App The send is asynchronous by default — KafkaTemplate.</description></item><item><title>Kafka Streams with Spring Boot: Stateless and Stateful Processing</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-streams/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-streams/</guid><description>Kafka Streams vs @KafkaListener @KafkaListener is a consumer — it reads records and processes them one by one or in batches. Kafka Streams is a stream processing library — it builds a topology of transformations that runs continuously, with built-in state stores, windowed aggregations, and join operations.
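A stateless topology sketch, assuming @EnableKafkaStreams, default serdes, and a hypothetical getTotal() field:
@Bean
public KStream&amp;lt;String, OrderPlacedEvent&amp;gt; topology(StreamsBuilder builder) {
    KStream&amp;lt;String, OrderPlacedEvent&amp;gt; orders = builder.stream(&amp;#34;orders&amp;#34;);
    // Stateless transform: keep only high-value orders, write them to a new topic
    orders.filter((key, order) -> order.getTotal() > 100)
          .to(&amp;#34;orders-high-value&amp;#34;);
    return orders;
}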
Aspect comparison, @KafkaListener vs Kafka Streams:
Processing model: consume and process vs topology of operators
Stateful processing: manual (external DB) vs built-in state stores (RocksDB)
Windowed aggregations: manual vs native (time, session, hopping)
Joins: manual vs KStream-KTable, KStream-KStream
Fault tolerance: committed offsets vs changelog topics + offsets
Use when: imperative event handling vs stream transformations and aggregations
Maven Dependency &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.</description></item><item><title>Kafka Transactions and Exactly-Once Semantics</title><link>https://devops-monk.com/tutorials/spring-kafka/transactions-eos/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/transactions-eos/</guid><description>Why Transactions? At-least-once delivery means a record can be processed and produced more than once after a crash. For most applications, idempotent consumers handle this. But when you need a hard guarantee — either the produce happens and the offset commits, or neither does — you need Kafka transactions.
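A producer-side sketch (the event variables are illustrative; requires spring.kafka.producer.transaction-id-prefix to be set):
kafkaTemplate.executeInTransaction(ops -> {
    ops.send(&amp;#34;orders-confirmed&amp;#34;, confirmedEvent);
    ops.send(&amp;#34;inventory-events&amp;#34;, reservedEvent);
    return null; // both records become visible together, or neither does
});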
Common scenarios:
Consume → transform → produce (read from one topic, write to another) where partial completion is unacceptable Exactly-once aggregations in financial or billing systems Atomic multi-topic produce where records to multiple topics must all land or none land How Kafka Transactions Work [Diagram: the producer registers its transactional.id with the broker via initTransactions()]</description></item><item><title>KafkaAdmin and AdminClient: Managing Topics Programmatically</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-admin/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-admin/</guid><description>Why Manage Topics Programmatically? CLI commands work for one-time setup. Production services need:
Startup validation — verify required topics exist before the application starts Auto-provisioning — create topics at deployment time with correct config Dynamic tenant onboarding — create per-tenant topics at runtime Config drift detection — compare actual topic config against expected values Spring Kafka provides KafkaAdmin for declarative topic management and AdminClient for imperative operations.
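A declarative sketch of the first style:
@Bean
public NewTopic ordersTopic() {
    // Created at startup by KafkaAdmin if it does not already exist
    return TopicBuilder.name(&amp;#34;orders&amp;#34;)
            .partitions(6)
            .replicas(3)
            .build();
}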
KafkaAdmin — Declarative Topic Creation Declare NewTopic beans — Spring Kafka creates them at startup if they don&amp;rsquo;t exist:</description></item><item><title>KRaft Mode: Running Kafka Without ZooKeeper</title><link>https://devops-monk.com/tutorials/spring-kafka/kraft-mode/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kraft-mode/</guid><description>Why ZooKeeper Had to Go For the first decade of Kafka&amp;rsquo;s existence, every Kafka cluster required an Apache ZooKeeper cluster to manage metadata: controller election, topic configurations, partition leadership, access control lists, and consumer group state.
This created real problems:
[Diagram: old architecture, a ZooKeeper cluster of 3+ nodes (ZK Node 1 to 3) running alongside the Kafka cluster, where Broker 1 acts as controller]</description></item><item><title>Message Headers: Metadata, Routing, and Custom Header Propagation</title><link>https://devops-monk.com/tutorials/spring-kafka/message-headers/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/message-headers/</guid><description>What Are Kafka Record Headers? Every Kafka record carries a list of Header objects — key-value pairs of String key and byte[] value. They sit outside the message payload and are ideal for:
Trace propagation — carry X-Trace-Id / X-Span-Id across service boundaries Correlation IDs — link a response to a request in async flows Routing metadata — signal which region, tenant, or feature flag applies Schema type hints — __TypeId__ (set automatically by JsonSerializer) Event versioning — indicate schema version without modifying the payload [Diagram: the anatomy of a Kafka record]</description></item><item><title>Monitoring: Consumer Lag, Micrometer Metrics, and Actuator Integration</title><link>https://devops-monk.com/tutorials/spring-kafka/monitoring/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/monitoring/</guid><description>What to Monitor in Kafka Production Kafka applications need visibility into:
Consumer lag — how many records are unprocessed per partition Throughput — records produced and consumed per second Error rates — listener exceptions, DLT records, retry counts Producer latency — time from send() to broker acknowledgment Rebalance frequency — high rebalance rate signals consumer instability Dependencies &amp;lt;!-- Micrometer Prometheus registry --&amp;gt; &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;io.micrometer&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;micrometer-registry-prometheus&amp;lt;/artifactId&amp;gt; &amp;lt;/dependency&amp;gt; &amp;lt;!-- Spring Boot Actuator --&amp;gt; &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.</description></item><item><title>Non-Blocking Retries: @RetryableTopic, BackOff, and the Retry Topic Chain</title><link>https://devops-monk.com/tutorials/spring-kafka/non-blocking-retries/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/non-blocking-retries/</guid><description>The Blocking Retry Problem DefaultErrorHandler retries by seeking back to the failed offset. While retrying, no other records from that partition are consumed — the partition is blocked. For a topic with high throughput, one slow retry can cause significant consumer lag.
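A sketch of the non-blocking alternative this article covers (the delays are illustrative):
// attempts = 4 means 1 original try + 3 retries via delay-suffixed retry topics
@RetryableTopic(attempts = &amp;#34;4&amp;#34;, backoff = @Backoff(delay = 1000, multiplier = 2.0))
@KafkaListener(topics = &amp;#34;orders&amp;#34;, groupId = &amp;#34;confirmation-service&amp;#34;)
public void onOrder(OrderPlacedEvent event) { /* business logic */ }

@DltHandler
public void onDlt(OrderPlacedEvent event) { /* inspect or park the failure */ }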
[Diagram: blocking retry with DefaultErrorHandler: poll() returns [r50, r51, r52, r53]; r50 succeeds, r51 fails and is retried after 10s and 20s waits while the rest of the partition waits]</description></item><item><title>Offset Management: Auto-Commit vs Manual Acknowledgment</title><link>https://devops-monk.com/tutorials/spring-kafka/offset-management/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/offset-management/</guid><description>Why Offset Management Matters The committed offset determines what happens when a consumer restarts. If the offset is committed too early, a crash before processing completes means events are lost. If it is committed too late, a crash after processing but before committing means events are re-processed.
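A manual-acknowledgment sketch (the factory name is illustrative and must point at a factory configured with AckMode.MANUAL):
@KafkaListener(topics = &amp;#34;orders&amp;#34;, groupId = &amp;#34;inventory-service&amp;#34;,
        containerFactory = &amp;#34;manualAckFactory&amp;#34;)
public void onOrder(OrderPlacedEvent event, Acknowledgment ack) {
    process(event);    // do the work first
    ack.acknowledge(); // only then mark the offset as committable
}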
[Diagram: commit too early → data loss: commit offset 43, process record 42, crash; on restart the consumer resumes from 43 and record 42 is never processed]</description></item><item><title>Pausing, Resuming, and Stopping Listener Containers</title><link>https://devops-monk.com/tutorials/spring-kafka/listener-lifecycle/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/listener-lifecycle/</guid><description>Why Control Container Lifecycle? A running listener consumes from Kafka continuously. In production you need to:
Pause consumption when a downstream service is overloaded (back-pressure) Resume once the downstream recovers Stop a container entirely during maintenance or feature flag toggles Restart after a configuration change without redeploying Spring Kafka exposes all of this through KafkaListenerEndpointRegistry and the container&amp;rsquo;s own lifecycle API.
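A pause/resume sketch (the listener id is illustrative and must match @KafkaListener(id = ...)):
@Autowired
private KafkaListenerEndpointRegistry registry;

public void throttle(boolean pause) {
    MessageListenerContainer container = registry.getListenerContainer(&amp;#34;orders-listener&amp;#34;);
    if (pause) {
        container.pause();  // keeps the partition assignment, stops fetching
    } else {
        container.resume();
    }
}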
Container States stateDiagram-v2 [*] --> Running : start() Running --> Paused : pause() Paused --> Running : resume() Running --> Stopped : stop() Stopped --> Running : start() Paused --> Stopped : stop() Running — polling Kafka, dispatching to listener Paused — broker connection maintained, consumer heartbeat sent, no new records fetched Stopped — consumer thread terminated, partitions released back to group Paused is preferable to stopped for temporary throttling: it avoids a rebalance and keeps the consumer&amp;rsquo;s partition assignment intact.</description></item><item><title>Producer @Bean Configuration: Beyond application.properties</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-bean-configuration/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-bean-configuration/</guid><description>Why @Bean Configuration? application.properties is convenient for a single producer, but insufficient when you need:
Multiple producers with different serializers (e.g. one for JSON events, one for Avro) Different settings per environment built at runtime (not just property substitution) Producers sending to different clusters (e.g. primary + DR cluster) Programmatic validation of configuration at startup [Diagram: the application.properties approach (one spring.kafka.producer.* config: simple, but a single producer and no runtime logic) contrasted with the @Bean approach]</description></item><item><title>Producer Acknowledgments: acks, min.insync.replicas, and Data Durability</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-acknowledgments/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-acknowledgments/</guid><description>What Are Acknowledgments? When a producer sends a record to a Kafka broker, it can wait for confirmation that the write was received and replicated before considering the send &amp;ldquo;complete.&amp;rdquo; The acks setting controls how much confirmation the producer requires.
flowchart LR Producer["Producer"] Leader["Partition Leader\n(Broker 1)"] F1["Follower\n(Broker 2)"] F2["Follower\n(Broker 3)"] Producer -->|"ProduceRequest"| Leader Leader -->|"replicate"| F1 Leader -->|"replicate"| F2 Leader -->|"ProduceResponse ✓"| Producer style Producer fill:#3b82f6,color:#fff style Leader fill:#10b981,color:#fff The acknowledgment is the broker&amp;rsquo;s confirmation to the producer.</description></item><item><title>Producer Retries: Backoff, Timeouts, and Retry Strategies</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-retries/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-retries/</guid><description>Why Producers Need Retries Network errors, leader elections, and broker restarts are normal events in a distributed system. Without retries, a transient broker hiccup causes permanent data loss from the producer&amp;rsquo;s perspective. With retries, the producer automatically re-sends failed records until either the broker accepts them or a timeout deadline is reached.
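A sketch of the knobs involved, using the plain producer config map (the values are illustrative):
Map&amp;lt;String, Object&amp;gt; props = new HashMap&amp;lt;&amp;gt;();
// Overall deadline for one send(), covering all retries and backoff
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);
props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // bounded by the deadline above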
sequenceDiagram participant Producer participant Leader as Leader (Broker 1) participant NewLeader as New Leader (Broker 2) Producer->>Leader: ProduceRequest (offset 42) Note over Leader: Broker 1 crashes mid-write Leader--xProducer: No response (timeout) Note over Producer: retry.</description></item><item><title>Request-Reply Pattern with ReplyingKafkaTemplate</title><link>https://devops-monk.com/tutorials/spring-kafka/request-reply/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/request-reply/</guid><description>When Kafka Needs to Be Synchronous Kafka is designed for asynchronous event streaming. But some flows genuinely need a response: a payment validation service that must confirm before the order proceeds, or a pricing engine that must return the current price before checkout completes. ReplyingKafkaTemplate gives you a blocking send-and-receive call over Kafka without leaving the Kafka ecosystem.
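A blocking-call sketch (the topic name and payload are illustrative):
ProducerRecord&amp;lt;String, String&amp;gt; record = new ProducerRecord&amp;lt;&amp;gt;(&amp;#34;payment-requests&amp;#34;, paymentJson);
RequestReplyFuture&amp;lt;String, String, String&amp;gt; future = replyingTemplate.sendAndReceive(record);
// Blocks until the replier writes to the reply topic, or the timeout expires
ConsumerRecord&amp;lt;String, String&amp;gt; reply = future.get(10, TimeUnit.SECONDS);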
How Request-Reply Works [Diagram: the Order Service (ReplyingKafkaTemplate) exchanges request and reply records with a replier service through the broker]</description></item><item><title>Retryable vs Non-Retryable Exceptions: Custom Exception Classification</title><link>https://devops-monk.com/tutorials/spring-kafka/retryable-exceptions/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/retryable-exceptions/</guid><description>Transient vs Permanent Failures Not every exception is worth retrying. Retrying a NullPointerException or a schema validation error wastes time and delays other records. Retrying a database timeout or a downstream HTTP 503 is exactly right — the error is temporary and will likely resolve.
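Classification is a one-liner on the error handler; a sketch (the exception types are examples):
var handler = new DefaultErrorHandler(new FixedBackOff(2000L, 5));
// Permanent failures skip the retries and go straight to the recoverer
handler.addNotRetryableExceptions(IllegalArgumentException.class, ValidationException.class);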
[Diagram: an exception in the listener is classified: transient failures (DB timeout, HTTP 503, network blip) retry with BackOff; everything else goes to the recoverer immediately, with no retries wasted; records still failing after max retries also reach the recoverer]</description></item><item><title>Seeking to Specific Offsets: Replay, Recovery, and Time-Based Seeking</title><link>https://devops-monk.com/tutorials/spring-kafka/seeking-offsets/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/seeking-offsets/</guid><description>Why Seek Instead of Reset? Offset management (auto-commit vs manual acknowledgment) controls when offsets advance during normal processing. Seeking is different: it lets you reposition the consumer to any offset — past or future — programmatically, without touching the committed offset in __consumer_offsets.
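A replay-from-the-beginning sketch using the ConsumerSeekAware callback:
@Component
public class ReplayListener implements ConsumerSeekAware {

    @KafkaListener(topics = &amp;#34;orders&amp;#34;, groupId = &amp;#34;replay-service&amp;#34;)
    public void onOrder(OrderPlacedEvent event) { /* reprocess */ }

    @Override
    public void onPartitionsAssigned(Map&amp;lt;TopicPartition, Long&amp;gt; assignments,
            ConsumerSeekCallback callback) {
        // Rewind every newly assigned partition to offset 0
        callback.seekToBeginning(assignments.keySet());
    }
}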
Common scenarios:
Replay from the beginning — reprocess all historical events after a bug fix Resume from a known-good offset — skip a poison pill that&amp;rsquo;s blocking the consumer Time-based replay — reprocess everything since yesterday 09:00 Startup positioning — always start from the end, ignoring backlog on first launch How Kafka Seeking Works [Diagram: repositioning the consumer within the partition log on the broker]</description></item><item><title>Sending Messages with Keys, Headers, and Custom Partitioning</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</guid><description>Why Partitioning Strategy Matters How you route messages to partitions determines:
Ordering: only messages in the same partition are ordered relative to each other Parallelism: how evenly work is distributed across consumers Hot spots: if one key generates 90% of traffic, one partition (and one consumer) gets 90% of the load [Diagram: the routing decision for a message: an explicit partition wins; otherwise a keyed record goes to hash(key) % numPartitions (deterministic, same partition always), and an unkeyed record uses sticky partitioning (batched to one partition, then round-robin)]</description></item><item><title>Spring Kafka Production Checklist and Best Practices</title><link>https://devops-monk.com/tutorials/spring-kafka/production-best-practices/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/production-best-practices/</guid><description>Before You Ship This is the checklist distilled from everything in this series. Work through it before your first production deployment. Each item links to the article where it&amp;rsquo;s covered in depth.
Producer Checklist Durability # Never lose data on leader failure spring.kafka.producer.acks=all # At least 2 in-sync replicas must acknowledge every write (a broker/topic config, not a producer property) min.insync.replicas=2 # Prevents duplicate writes from producer retries (required for transactions) spring.kafka.producer.properties.enable.idempotence=true Do: Set acks=all and min.insync.replicas=2 for any topic that carries business data.</description></item><item><title>Starting a Kafka Cluster: Single-Broker and 3-Broker with KRaft</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-cluster-setup/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-cluster-setup/</guid><description>Prerequisites Docker Desktop installed and running docker compose v2 (bundled with Docker Desktop 4.x+) Ports 9092, 9093, 9094 free on your machine All articles in this series assume a running local Kafka cluster. Start with the single-broker setup for articles 1–6, then switch to the 3-broker cluster when we cover replication and fault tolerance.
Single-Broker Cluster (Development) This is the simplest setup — one Kafka node running in combined mode (broker + controller).</description></item><item><title>Testing Kafka Applications: EmbeddedKafka and Testcontainers</title><link>https://devops-monk.com/tutorials/spring-kafka/testing-kafka/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/testing-kafka/</guid><description>Two Testing Strategies:
@EmbeddedKafka: fast (~2s startup); an in-process broker that is not 100% identical to a real one; use for unit/integration tests on the CI fast path.
KafkaContainer (Testcontainers): slower (~10s startup); a real Kafka broker in Docker; use for acceptance tests and DLT/transaction validation.
Use both: @EmbeddedKafka for the bulk of tests, KafkaContainer for the smoke suite that validates real-broker behaviour.
Test Dependencies &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.springframework.kafka&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;spring-kafka-test&amp;lt;/artifactId&amp;gt; &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.testcontainers&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;kafka&amp;lt;/artifactId&amp;gt; &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.awaitility&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;awaitility&amp;lt;/artifactId&amp;gt; &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; @EmbeddedKafka — Fast Integration Tests Testing a Producer @SpringBootTest @EmbeddedKafka( partitions = 1, topics = {&amp;#34;orders&amp;#34;}, brokerProperties = {&amp;#34;log.</description></item><item><title>What Is Apache Kafka: Event Streaming From First Principles</title><link>https://devops-monk.com/tutorials/spring-kafka/what-is-apache-kafka/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/what-is-apache-kafka/</guid><description>The Problem Kafka Solves Imagine an e-commerce platform. A customer places an order. What needs to happen next?
Inventory must be reserved Payment must be charged A confirmation email must be sent The warehouse must be notified to pick and pack Analytics must record the sale Fraud detection must evaluate the transaction One request. Six downstream systems. In a traditional REST architecture, the Order Service calls each of those six services directly — synchronously, one after another.</description></item></channel></rss>