<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Avro on Devops Monk</title><link>https://devops-monk.com/tags/avro/</link><description>Recent content in Avro on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/avro/index.xml" rel="self" type="application/rss+xml"/><item><title>Avro Serialization with Confluent Schema Registry</title><link>https://devops-monk.com/tutorials/spring-kafka/avro-schema-registry/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/avro-schema-registry/</guid><description>Why Avro and Schema Registry? JSON has no schema enforcement — a producer can change a field name and silently break every consumer. Avro + Schema Registry solves this:
Avro gives you a compact binary format with a schema definition Schema Registry stores and versions schemas, enforces compatibility rules, and prevents breaking changes from reaching consumers flowchart LR subgraph Producer["Order Service"] E["OrderPlacedEvent"] -->|"KafkaAvroSerializer"| SR["Schema Registry\n(register/lookup schema)"] SR --> Bytes["[magic byte (1 byte)] + [schema_id (4 bytes)] + [avro payload]"]</description></item></channel></rss>