<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Partitioning on Devops Monk</title><link>https://devops-monk.com/tags/partitioning/</link><description>Recent content in Partitioning on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/partitioning/index.xml" rel="self" type="application/rss+xml"/><item><title>Consumer Groups: Parallel Processing and Partition Assignment Strategies</title><link>https://devops-monk.com/tutorials/spring-kafka/consumer-groups/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/consumer-groups/</guid><description>Consumer Group Fundamentals A consumer group is how Kafka distributes work across multiple consumers. Each partition in a topic is assigned to exactly one consumer instance in the group at any given time.
[Flowchart: topic &quot;orders&quot; with 6 partitions P0–P5. Group inventory-service (3 instances): Instance 1 gets P0, P1; Instance 2 gets P2, P3; Instance 3 gets P4, P5. Group notification-service (1 instance): all of P0–P5.] Both groups receive all events — they are independent.</description></item><item><title>Sending Messages with Keys, Headers, and Custom Partitioning</title><link>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</link><pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-kafka/producer-keys-headers-partitioning/</guid><description>Why Partitioning Strategy Matters How you route messages to partitions determines:
Ordering: only messages in the same partition are ordered relative to each other. Parallelism: how evenly work is distributed across consumers. Hot spots: if one key generates 90% of traffic, one partition (and one consumer) gets 90% of the load. [Flowchart: message routing decision. Explicit partition? Has key? With a key: hash(key) % numPartitions, deterministic, same partition always. Without a key: sticky partitioning (batch to same partition, then round-robin).]</description></item></channel></rss>