<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Exactly-Once on Devops Monk</title><link>https://devops-monk.com/tags/exactly-once/</link><description>Recent content in Exactly-Once on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/exactly-once/index.xml" rel="self" type="application/rss+xml"/><item><title>Idempotent Producers: Eliminating Duplicate Messages</title><link>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/idempotent-producer/</guid><description>The Duplicate Problem
With acks=all and retries enabled, a produce request might be acknowledged by the broker while the acknowledgment is lost in the network before reaching the producer. The producer, seeing no response, retries — sending the same record again. The broker writes it a second time. The consumer sees a duplicate.
[Sequence diagram: the producer sends ProduceRequest OrderPlaced (orderId=1001); the leader writes the record at offset 42, but the ProduceResponse is lost to a network failure. Receiving no ack, the producer retries the same request, and the leader writes the record again at offset 43: a duplicate.]</description></item><item><title>Kafka Transactions and Exactly-Once Semantics</title><link>https://devops-monk.com/tutorials/spring-kafka/transactions-eos/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/transactions-eos/</guid><description>Why Transactions?
At-least-once delivery means a record can be processed and produced more than once after a crash. For most applications, idempotent consumers handle this. But when you need a hard guarantee — either the produce happens and the offset commits, or neither does — you need Kafka transactions.
Common scenarios:
- Consume → transform → produce (read from one topic, write to another), where partial completion is unacceptable
- Exactly-once aggregations in financial or billing systems
- Atomic multi-topic produce, where records to multiple topics must all land or none land
How Kafka Transactions Work
[Sequence diagram: participants Producer, Broker, Consumer. Producer->>Broker: initTransactions() [registers transactional.</description></item></channel></rss>