<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Producer on Devops Monk</title><link>https://devops-monk.com/tags/producer/</link><description>Recent content in Producer on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/producer/index.xml" rel="self" type="application/rss+xml"/><item><title>Idempotent Producers: Eliminating Duplicate Messages</title><link>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</guid><description>The Duplicate Problem
With acks=all and retries enabled, a produce request might be acknowledged by the broker, but the acknowledgment is lost in the network before reaching the producer. The producer, seeing no response, retries — sending the same record again. The broker writes it a second time. The consumer sees a duplicate.
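For reference, the failure mode described assumes a producer configured for durability with automatic retries but with idempotence disabled; a sketch of that configuration in Spring Boot terms (property names are standard Spring Boot Kafka settings, the retry count is illustrative):

```properties
# Durable writes: wait for all in-sync replicas to acknowledge.
spring.kafka.producer.acks=all
# Automatic re-sends on transient failures -- the source of the duplicate.
spring.kafka.producer.retries=3
# Left disabled here: with this set, the broker deduplicates retried batches.
# spring.kafka.producer.properties.enable.idempotence=true
```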
sequenceDiagram
  participant Producer
  participant Leader as Broker (Leader)
  Producer->>Leader: ProduceRequest: OrderPlaced (orderId=1001)
  Leader->>Leader: Write record at offset 42 ✓
  Leader--xProducer: ProduceResponse LOST (network failure)
  Note over Producer: No ack received — retrying
  Producer->>Leader: ProduceRequest: OrderPlaced (orderId=1001) [RETRY]
  Leader->>Leader: Write record at offset 43 ✓ (DUPLICATE!</description></item><item><title>Kafka Producer in Spring Boot: KafkaTemplate Basics</title><link>https://devops-monk.com/tutorials/spring-kafka/kafka-producer-basics/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/kafka-producer-basics/</guid><description>How a Spring Kafka Producer Works
KafkaTemplate is the central Spring Kafka class for sending messages. It wraps the native Kafka KafkaProducer, manages serialization, and provides a Spring-friendly API for sending records.
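As a reference point, a minimal producer setup in application.properties might look like the following (the localhost address and the JSON value serializer are illustrative choices, not mandated by the article):

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
```

With this in place, Spring Boot auto-configures a KafkaTemplate bean that can be injected and used via kafkaTemplate.send(topic, key, value).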
flowchart LR
  App["Your Service\n(OrderService)"]
  KT["KafkaTemplate\n(Spring Kafka)"]
  Buffer["Producer Buffer\n(RecordAccumulator)"]
  Sender["Sender Thread\n(NetworkClient)"]
  Broker["Kafka Broker\n(Leader Partition)"]
  App -->|"send(topic, key, value)"| KT
  KT -->|serialize + route| Buffer
  Buffer -->|batch when full\nor linger.ms elapsed| Sender
  Sender -->|ProduceRequest| Broker
  Broker -->|ProduceResponse| Sender
  Sender -->|callback| App

The send is asynchronous by default — KafkaTemplate.</description></item><item><title>Producer @Bean Configuration: Beyond application.properties</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-bean-configuration/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-bean-configuration/</guid><description>Why @Bean Configuration?
application.properties is convenient for a single producer, but insufficient when you need:
- Multiple producers with different serializers (e.g. one for JSON events, one for Avro)
- Different settings per environment built at runtime (not just property substitution)
- Producers sending to different clusters (e.g. primary + DR cluster)
- Programmatic validation of configuration at startup

flowchart TB
  subgraph PropertiesApproach["application.properties Approach"]
    P1["Single producer config\nspring.kafka.producer.*\n✓ Simple\n✗ One producer only\n✗ No runtime logic"]
  end
  subgraph BeanApproach["</description></item><item><title>Producer Acknowledgments: acks, min.insync.replicas, and Data Durability</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-acknowledgments/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-acknowledgments/</guid><description>What Are Acknowledgments?
When a producer sends a record to a Kafka broker, it can wait for confirmation that the write was received and replicated before considering the send &amp;ldquo;complete.&amp;rdquo; The acks setting controls how much confirmation the producer requires.
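As a sketch of the setting the article names (the replica counts below are illustrative):

```properties
# Producer side (Spring Boot): acks=0 is fire-and-forget, acks=1 waits for
# the leader only, acks=all waits for the leader plus all in-sync replicas.
spring.kafka.producer.acks=all
```

Note that acks=all only guarantees durability together with the broker/topic setting min.insync.replicas (e.g. min.insync.replicas=2 with a replication factor of 3), which is configured on the topic, not in Spring properties.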
flowchart LR
  Producer["Producer"]
  Leader["Partition Leader\n(Broker 1)"]
  F1["Follower\n(Broker 2)"]
  F2["Follower\n(Broker 3)"]
  Producer -->|"ProduceRequest"| Leader
  Leader -->|"replicate"| F1
  Leader -->|"replicate"| F2
  Leader -->|"ProduceResponse ✓"| Producer
  style Producer fill:#3b82f6,color:#fff
  style Leader fill:#10b981,color:#fff

The acknowledgment is the broker&amp;rsquo;s confirmation to the producer.</description></item><item><title>Producer Retries: Backoff, Timeouts, and Retry Strategies</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-retries/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-retries/</guid><description>Why Producers Need Retries
Network errors, leader elections, and broker restarts are normal events in a distributed system. Without retries, a transient broker hiccup causes permanent data loss from the producer&amp;rsquo;s perspective. With retries, the producer automatically re-sends failed records until either the broker accepts them or a timeout deadline is reached.
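For orientation, the retry-related knobs look like this in Spring Boot form (the values shown are the modern Kafka client defaults):

```properties
# How many times a failed batch may be re-sent (default: Integer.MAX_VALUE,
# effectively bounded by delivery.timeout.ms rather than by count).
spring.kafka.producer.retries=2147483647
# Pause between retry attempts.
spring.kafka.producer.properties.retry.backoff.ms=100
# Hard deadline for a send(), retries included; the send fails once this elapses.
spring.kafka.producer.properties.delivery.timeout.ms=120000
```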
sequenceDiagram
  participant Producer
  participant Leader as Leader (Broker 1)
  participant NewLeader as New Leader (Broker 2)
  Producer->>Leader: ProduceRequest (offset 42)
  Note over Leader: Broker 1 crashes mid-write
  Leader--xProducer: No response (timeout)
  Note over Producer: retry.</description></item><item><title>Sending Messages with Keys, Headers, and Custom Partitioning</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</guid><description>Why Partitioning Strategy Matters
How you route messages to partitions determines:
- Ordering: only messages in the same partition are ordered relative to each other
- Parallelism: how evenly work is distributed across consumers
- Hot spots: if one key generates 90% of traffic, one partition (and one consumer) gets 90% of the load

flowchart TD
  subgraph Routing["Message Routing Decision"]
    Msg["Message"]
    HasKey{Has key?}
    HasPartition{Explicit partition?}
    KeyHash["hash(key) % numPartitions\n→ deterministic, same partition always"]
    RoundRobin["Sticky partitioning\n(batch to same partition,\nthen round-robin)"</description></item></channel></rss>