# Dead Letter Topics: Routing Failed Messages with DeadLetterPublishingRecoverer

## What Is a Dead Letter Topic?
When a record fails processing and retries are exhausted, you have two options: skip it (losing the data) or park it somewhere for inspection and reprocessing. A dead-letter topic (DLT) is that parking lot — a Kafka topic that holds records that could not be processed, enriched with error metadata headers.
```mermaid
flowchart LR
    subgraph Main["orders topic"]
        R1["record: offset 42\n(bad data)"]
    end
    subgraph DLT["orders.DLT topic"]
        R2["record: offset 42 + error headers\nexception, cause, original topic,\noriginal partition, original offset"]
    end
    R1 -->|"retries exhausted\n→ DeadLetterPublishingRecoverer"| R2
    R2 -->|"human review\nor automated reprocess"| Fix["Fix + republish\nto orders topic"]
```
## DeadLetterPublishingRecoverer Setup
```java
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> kafkaTemplate) {
    DeadLetterPublishingRecoverer recoverer =
            new DeadLetterPublishingRecoverer(kafkaTemplate);

    // Retry with exponential backoff: 1s, 2s, 4s, ... for at most 30s total
    ExponentialBackOff backOff = new ExponentialBackOff(1000L, 2.0);
    backOff.setMaxElapsedTime(30_000L);

    return new DefaultErrorHandler(recoverer, backOff);
}
```
By default, `DeadLetterPublishingRecoverer` sends the failed record to `{originalTopic}.DLT` on the same partition as the original record. If `orders` is the source topic and partition 2 failed, the record lands in `orders.DLT` partition 2.
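Same-partition routing assumes the DLT has at least as many partitions as the source topic; if it has fewer, publishes to a nonexistent partition fail. One workaround — a sketch, not the only option — is a destination resolver that returns a negative partition, which tells the recoverer to leave partition selection to the producer:

```java
// Sketch: derive the DLT name as before, but let the producer's partitioner
// choose the partition (a negative partition => no explicit partition is set
// on the outgoing record).
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
        kafkaTemplate,
        (record, exception) ->
                new TopicPartition(record.topic() + ".DLT", -1));
```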
## Error Headers Added by DeadLetterPublishingRecoverer
The recoverer adds these headers to every DLT record:
| Header | Value |
|---|---|
| `kafka_dlt-exception-fqcn` | Fully qualified exception class name |
| `kafka_dlt-exception-message` | Exception message |
| `kafka_dlt-exception-cause-fqcn` | Root cause class name |
| `kafka_dlt-original-topic` | Source topic name |
| `kafka_dlt-original-partition` | Source partition |
| `kafka_dlt-original-offset` | Source offset |
| `kafka_dlt-original-timestamp` | Source record timestamp |
| `kafka_dlt-original-timestamp-type` | Timestamp type (`CreateTime` / `LogAppendTime`) |
These headers let DLT consumers know exactly why a record failed and where it came from.
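In recent Spring Kafka versions the numeric headers (partition, offset, timestamp) are written as raw big-endian bytes rather than strings; `@Header`-annotated listener parameters are converted for you, but if you read them directly from `record.headers()` you must decode them yourself. A minimal sketch of that decoding, using simulated header bytes (the class and method names here are illustrative):

```java
import java.nio.ByteBuffer;

public class DltHeaderCodec {

    // kafka_dlt-original-partition is a 4-byte big-endian int
    static int asInt(byte[] headerValue) {
        return ByteBuffer.wrap(headerValue).getInt();
    }

    // kafka_dlt-original-offset and -timestamp are 8-byte big-endian longs
    static long asLong(byte[] headerValue) {
        return ByteBuffer.wrap(headerValue).getLong();
    }

    public static void main(String[] args) {
        // Simulated header bytes, laid out as the recoverer would write them
        byte[] partitionBytes = ByteBuffer.allocate(4).putInt(2).array();
        byte[] offsetBytes = ByteBuffer.allocate(8).putLong(42L).array();

        System.out.println("partition=" + asInt(partitionBytes)); // partition=2
        System.out.println("offset=" + asLong(offsetBytes));      // offset=42
    }
}
```

The exception headers (`kafka_dlt-exception-fqcn`, `-message`, `-cause-fqcn`) are plain UTF-8 strings and can be read with `new String(header.value(), StandardCharsets.UTF_8)`.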
## Custom DLT Naming

Override the destination topic with a `BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition>`:
```java
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> kafkaTemplate) {
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
            kafkaTemplate,
            (record, exception) -> {
                // Route validation errors to a separate topic for manual review
                if (exception instanceof OrderValidationException) {
                    return new TopicPartition("orders.invalid", record.partition());
                }
                // Everything else goes to the standard DLT
                return new TopicPartition(record.topic() + ".DLT", record.partition());
            }
    );
    return new DefaultErrorHandler(recoverer, new ExponentialBackOff(1000L, 2.0));
}
```
## Multiple KafkaTemplates for DLT

If your DLT topic uses different serialization than the main topic (e.g., raw bytes on the DLT), provide a separate `KafkaTemplate`:
```java
@Bean
public DefaultErrorHandler errorHandler(
        KafkaTemplate<String, OrderPlacedEvent> mainTemplate,
        KafkaTemplate<String, byte[]> dltTemplate) {

    DeadLetterPublishingRecoverer recoverer =
            new DeadLetterPublishingRecoverer(
                    Map.of(OrderPlacedEvent.class, mainTemplate,
                           byte[].class, dltTemplate)
            );
    return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3L));
}
```
`DeadLetterPublishingRecoverer` selects the template based on the class of the failed record's value.
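The class-to-template lookup also matches subclasses: entries are checked in iteration order using `isAssignableFrom`, which is why the Spring Kafka docs suggest a `LinkedHashMap` with the most specific classes first. A pure-Java sketch of that resolution rule (the names here are illustrative stand-ins, not the Spring Kafka internals; strings play the role of templates):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateResolver {

    // Stand-in for Map<Class<?>, KafkaOperations<?, ?>>: the first entry
    // whose key class is assignable from the value's class wins.
    static String resolve(Map<Class<?>, String> templates, Object value) {
        Class<?> valueClass = value.getClass();
        for (Map.Entry<Class<?>, String> e : templates.entrySet()) {
            if (e.getKey().isAssignableFrom(valueClass)) {
                return e.getValue();
            }
        }
        throw new IllegalStateException("No template for " + valueClass);
    }

    public static void main(String[] args) {
        Map<Class<?>, String> templates = new LinkedHashMap<>();
        templates.put(byte[].class, "dltTemplate");
        templates.put(Object.class, "mainTemplate"); // catch-all goes last

        System.out.println(resolve(templates, new byte[0])); // dltTemplate
        System.out.println(resolve(templates, "an event"));  // mainTemplate
    }
}
```

With `Map.of` as in the example above the two classes are unrelated, so ordering does not matter there; it starts to matter once one mapped class is a supertype of another.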
## DLT Consumer — Inspecting Failed Records

Consume the DLT with a separate listener to log, alert, or reprocess:
```java
@KafkaListener(
        topics = "orders.DLT",
        groupId = "order-dlt-consumer",
        containerFactory = "dltListenerContainerFactory"
)
public void onDltRecord(
        ConsumerRecord<String, OrderPlacedEvent> record,
        @Header("kafka_dlt-exception-fqcn") String exceptionClass,
        @Header("kafka_dlt-exception-message") String exceptionMessage,
        @Header("kafka_dlt-original-topic") String originalTopic,
        @Header("kafka_dlt-original-partition") int originalPartition,
        @Header("kafka_dlt-original-offset") long originalOffset) {

    log.error(
            "DLT record: originalTopic={} partition={} offset={} exception={}: {}",
            originalTopic, originalPartition, originalOffset,
            exceptionClass, exceptionMessage
    );
    // Alert the ops team, store to a database for review, etc.
    alertService.notifyDltRecord(record, exceptionMessage);
}
```
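The listener above names a `dltListenerContainerFactory` bean that is not shown. A minimal sketch of what it might look like, assuming a `ConsumerFactory` bean already configured to deserialize `OrderPlacedEvent` values (adjust to your setup):

```java
// Hedged sketch of the container factory the DLT listener refers to.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, OrderPlacedEvent>
        dltListenerContainerFactory(ConsumerFactory<String, OrderPlacedEvent> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, OrderPlacedEvent> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // A listener that injects Acknowledgment (as the reprocessor does) needs
    // manual ack mode; enable it if you reuse this factory there:
    // factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}
```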
## Reprocessing DLT Records

After fixing the root cause, reprocess DLT records by republishing them to the original topic:
```java
@KafkaListener(topics = "orders.DLT", groupId = "order-dlt-reprocessor")
public void reprocess(
        ConsumerRecord<String, OrderPlacedEvent> dltRecord,
        @Header("kafka_dlt-original-topic") String originalTopic,
        Acknowledgment ack) {
    try {
        // Attempt to reprocess now that the root cause is fixed
        orderService.process(dltRecord.value());
        ack.acknowledge(); // mark as processed in the DLT
    } catch (Exception e) {
        // Still failing — republish to the original topic for another pass
        kafkaTemplate.send(originalTopic, dltRecord.key(), dltRecord.value());
        ack.acknowledge();
    }
}
```
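Republishing still-failing records back to the original topic can create an infinite loop: fail → DLT → republish → fail → DLT again. A common guard is a retry-count header carried with the record; the header name and limit below are illustrative, not part of Spring Kafka:

```java
import java.nio.charset.StandardCharsets;

public class DltRetryGuard {

    static final int MAX_DLT_RETRIES = 3;               // illustrative limit
    static final String RETRY_HEADER = "x-dlt-retries"; // hypothetical header name

    // Decode the count carried in the header; a missing header means
    // this record has never been republished from the DLT before.
    static int currentRetries(byte[] headerValue) {
        return headerValue == null
                ? 0
                : Integer.parseInt(new String(headerValue, StandardCharsets.UTF_8));
    }

    // Republish while under the limit; otherwise give up (e.g. park the
    // record in a terminal topic or database for manual handling).
    static boolean shouldRepublish(byte[] headerValue) {
        return currentRetries(headerValue) < MAX_DLT_RETRIES;
    }

    public static void main(String[] args) {
        System.out.println(shouldRepublish(null));                                 // true
        System.out.println(shouldRepublish("3".getBytes(StandardCharsets.UTF_8))); // false
    }
}
```

In the reprocessing listener, read the header from `dltRecord.headers().lastHeader(RETRY_HEADER)`, and write the incremented count onto the republished record.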
## Creating the DLT Topic

DLT topics are not auto-created by `DeadLetterPublishingRecoverer` — create them explicitly:
```java
@Bean
public NewTopic ordersDlt() {
    return TopicBuilder.name("orders.DLT")
            .partitions(6)   // same partition count as orders
            .replicas(3)
            .config(TopicConfig.RETENTION_MS_CONFIG,
                    String.valueOf(7 * 24 * 60 * 60 * 1000L)) // 7 days
            .build();
}
```
Set longer retention on DLT topics — failed records need time for human review and reprocessing.
## DLT Flow Diagram

```mermaid
sequenceDiagram
    participant Consumer as "Inventory Service"
    participant ErrorHandler as "DefaultErrorHandler"
    participant Recoverer as "DeadLetterPublishingRecoverer"
    participant DLT as "orders.DLT"
    participant DLTConsumer as "DLT Consumer"
    Consumer->>ErrorHandler: OrderProcessingException (attempt 1/3)
    Note over ErrorHandler: wait 1s
    Consumer->>ErrorHandler: OrderProcessingException (attempt 2/3)
    Note over ErrorHandler: wait 2s
    Consumer->>ErrorHandler: OrderProcessingException (attempt 3/3)
    Note over ErrorHandler: retries exhausted
    ErrorHandler->>Recoverer: recover(record, exception)
    Recoverer->>DLT: send record + error headers
    ErrorHandler->>Consumer: commitOffset (continue processing)
    DLTConsumer->>DLT: poll()
    DLT-->>DLTConsumer: failed record + headers
    DLTConsumer->>DLTConsumer: log, alert, or reprocess
```
## Key Takeaways

- `DeadLetterPublishingRecoverer` sends failed records to `{topic}.DLT` with full error metadata in headers
- The DLT receives the original record unchanged, plus headers identifying the exception, source topic, partition, and offset
- Provide a `BiFunction` to customize the DLT topic name or route different exceptions to different topics
- Create DLT topics explicitly with `TopicBuilder` — set longer retention than production topics
- Always have a DLT consumer that at minimum logs and alerts on DLT records — silent failures are dangerous
- For high-value events (payments, orders), pair the DLT consumer with a reprocessing pipeline
Next: Handling Deserialization Errors Gracefully — prevent one malformed message from blocking an entire partition using `ErrorHandlingDeserializer` and Spring Kafka’s deserialization error handling.