Caching with Caffeine and Redis
Caching sits between your application and the database. A cache hit returns data in microseconds; a database query takes milliseconds. For frequently-read, infrequently-changed data, caching is the highest-leverage performance improvement.
Spring Cache Abstraction
Spring’s cache abstraction lets you add caching with annotations — the backing store (Caffeine, Redis, Hazelcast) is swappable:
@Service
@RequiredArgsConstructor
public class ProductService {

    private final ProductRepository repository;

    @Cacheable("products") // cache the result
    public Product findById(UUID id) {
        return repository.findById(id).orElseThrow();
    }

    @CachePut(value = "products", key = "#result.id") // update the cache
    public Product update(UUID id, UpdateProductRequest request) {
        Product product = repository.findById(id).orElseThrow();
        product.applyUpdates(request); // assumed mapping method; adapt to your entity
        return repository.save(product);
    }

    @CacheEvict(value = "products", key = "#id") // remove from cache
    public void delete(UUID id) {
        repository.deleteById(id);
    }
}
Enable caching:
@SpringBootApplication
@EnableCaching
public class OrderServiceApplication { }
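Conceptually, the proxy that `@EnableCaching` installs wraps each `@Cacheable` method in a get-or-compute step. Stripped of all the Spring machinery, the behavior reduces to roughly this (a plain-Java sketch with illustrative names, not Spring's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CacheableSketch {

    // Wraps a loader so repeated calls with the same key return the
    // cached value instead of re-invoking the loader (the "DB query").
    static <K, V> Function<K, V> cacheable(Function<K, V> loader) {
        Map<K, V> cache = new ConcurrentHashMap<>();
        return key -> cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        int[] dbCalls = {0};
        Function<String, String> findById = cacheable(id -> {
            dbCalls[0]++;                  // stands in for the repository call
            return "product-" + id;
        });

        findById.apply("42");              // miss: invokes the loader
        findById.apply("42");              // hit: served from the map
        System.out.println(dbCalls[0]);    // prints 1
    }
}
```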
Caffeine — In-Process Cache
Caffeine is the fastest in-process cache for the JVM. Data lives in the same JVM heap — zero network overhead.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>
spring:
  cache:
    type: caffeine
    caffeine:
      spec: maximumSize=1000,expireAfterWrite=10m
Per-cache configuration:
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        // Default spec for caches not explicitly configured
        manager.setCaffeine(Caffeine.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(Duration.ofMinutes(10))
            .recordStats()); // enable hit/miss stats
        return manager;
    }

    // Or configure individual caches (define this bean *instead of* the one
    // above; two competing CacheManager beans in one context would conflict):
    @Bean
    public CacheManager perCacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.registerCustomCache("products",
            Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterWrite(Duration.ofMinutes(30))
                .recordStats()
                .build());
        manager.registerCustomCache("categories",
            Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterAccess(Duration.ofHours(1))
                .build());
        return manager;
    }
}
Caffeine eviction policies:
- expireAfterWrite(10m) — expire 10 minutes after the entry was written
- expireAfterAccess(10m) — expire 10 minutes after the last read or write
- refreshAfterWrite(5m) — refresh in the background after 5 minutes (stale-while-revalidate; requires a cache built with a CacheLoader)
- maximumSize(1000) — when over capacity, evict the entries least likely to be reused (Caffeine uses Window TinyLFU, not strict LRU)
- maximumWeight(...) — evict by accumulated weight instead of entry count; requires a weigher, e.g. estimated bytes per entry
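To make the write-expiry semantics concrete, here is what `expireAfterWrite` does in effect, sketched with a plain map and an injectable clock (no Caffeine involved; the real implementation is far more sophisticated):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

public class ExpireAfterWriteSketch {

    record Entry<V>(V value, long writtenAtMillis) {}

    private final Map<String, Entry<String>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock; // injectable so tests can control time

    ExpireAfterWriteSketch(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    void put(String key, String value) {
        // expiry countdown starts at write time, regardless of later reads
        map.put(key, new Entry<>(value, clock.getAsLong()));
    }

    String get(String key) {
        Entry<String> e = map.get(key);
        if (e == null) return null;
        if (clock.getAsLong() - e.writtenAtMillis() >= ttlMillis) {
            map.remove(key);                 // expired: evict lazily on read
            return null;
        }
        return e.value();
    }

    public static void main(String[] args) {
        long[] now = {0};
        ExpireAfterWriteSketch cache = new ExpireAfterWriteSketch(10 * 60_000, () -> now[0]);
        cache.put("p1", "Widget");
        now[0] = 9 * 60_000;                 // 9 minutes after the write
        System.out.println(cache.get("p1")); // prints Widget
        now[0] = 11 * 60_000;                // 11 minutes after the write
        System.out.println(cache.get("p1")); // prints null
    }
}
```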
Redis — Distributed Cache
Use Redis when you run multiple application instances. Each instance has its own Caffeine cache, so after a write on one instance the others keep serving stale entries. Redis is a single cache shared by all instances.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
spring:
  cache:
    type: redis
    redis:
      time-to-live: 10m
      cache-null-values: false
  data:
    redis:
      host: localhost
      port: 6379
      password: ${REDIS_PASSWORD}
      lettuce:
        pool:
          max-active: 8
          max-idle: 4
Per-cache TTL with Redis:
@Configuration
@EnableCaching
public class RedisCacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration defaultConfig = RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(10))
            .disableCachingNullValues()
            .serializeValuesWith(RedisSerializationContext.SerializationPair
                .fromSerializer(new GenericJackson2JsonRedisSerializer()));

        return RedisCacheManager.builder(factory)
            .cacheDefaults(defaultConfig)
            .withCacheConfiguration("products",
                defaultConfig.entryTtl(Duration.ofMinutes(30)))
            .withCacheConfiguration("categories",
                defaultConfig.entryTtl(Duration.ofHours(1)))
            .withCacheConfiguration("user-sessions",
                defaultConfig.entryTtl(Duration.ofDays(7)))
            .build();
    }
}
Make cached objects serializable:
// Use Jackson-serializable records or classes
public record ProductResponse(
    UUID id,
    String name,
    BigDecimal price,
    String category
) {} // records are automatically serializable with Jackson
@Cacheable — Detailed Configuration
@Service
public class ProductService {

    // Cache key from method parameters
    @Cacheable(value = "products", key = "#id")
    public ProductResponse findById(UUID id) { ... }

    // Compound key
    @Cacheable(value = "product-search", key = "#query + ':' + #page + ':' + #size")
    public Page<ProductResponse> search(String query, int page, int size) { ... }

    // SpEL key using object fields
    @Cacheable(value = "products", key = "#request.category + ':' + #request.minPrice")
    public List<ProductResponse> findByFilter(ProductFilter request) { ... }

    // Conditional caching — only cache if result is not null
    @Cacheable(value = "products", key = "#id", unless = "#result == null")
    public ProductResponse findByIdOrNull(UUID id) { ... }

    // Conditional caching — only cache expensive lookups
    @Cacheable(value = "expensive", condition = "#items.size() > 100")
    public BigDecimal calculateTotal(List<OrderItem> items) { ... }

    // Sync — prevents cache stampede (only one thread computes, others wait)
    @Cacheable(value = "products", key = "#id", sync = true)
    public ProductResponse findByIdSync(UUID id) { ... }
}
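The stampede problem that `sync = true` addresses: when a hot key expires, many threads miss at once and all hit the database. The locking behavior can be sketched in plain Java with `ConcurrentHashMap.computeIfAbsent`, which guarantees the mapping function runs at most once per key even under concurrent calls (a conceptual stand-in; Spring delegates the actual locking to the cache provider):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class StampedeSketch {

    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbCalls = new AtomicInteger();

    static String findById(String id) {
        // computeIfAbsent blocks concurrent callers for the same key, so the
        // expensive load runs once; every other thread waits for the result.
        return cache.computeIfAbsent(id, key -> {
            dbCalls.incrementAndGet();     // stands in for the slow DB query
            sleep(100);
            return "product-" + key;
        });
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        CompletableFuture<?>[] calls = new CompletableFuture<?>[16];
        for (int i = 0; i < 16; i++) {
            calls[i] = CompletableFuture.runAsync(() -> findById("42"), pool);
        }
        CompletableFuture.allOf(calls).join();
        pool.shutdown();
        System.out.println(dbCalls.get()); // prints 1: one load despite 16 concurrent misses
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```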
Cache Eviction Strategies
@Service
public class ProductService {

    // Evict single entry
    @CacheEvict(value = "products", key = "#id")
    public void deleteProduct(UUID id) {
        productRepository.deleteById(id);
    }

    // Evict all entries in the cache
    @CacheEvict(value = "products", allEntries = true)
    public void importProducts(List<Product> products) {
        productRepository.saveAll(products);
    }

    // Evict before method runs (not after)
    @CacheEvict(value = "products", key = "#id", beforeInvocation = true)
    public void forceRefresh(UUID id) { }

    // Update the cache entry with the new value
    @CachePut(value = "products", key = "#result.id")
    public ProductResponse updateProduct(UUID id, UpdateProductRequest request) {
        // The return value replaces the cached entry
        return productRepository.save(/* ... */).toResponse();
    }

    // Multiple cache operations
    @Caching(
        evict = {
            @CacheEvict(value = "products", key = "#id"),
            @CacheEvict(value = "product-search", allEntries = true)
        }
    )
    public void deleteAndInvalidateSearch(UUID id) {
        productRepository.deleteById(id);
    }
}
Cache-Aside Pattern (Manual)
For complex invalidation logic, manage the cache manually:
@Service
@RequiredArgsConstructor
public class OrderService {

    private final OrderRepository repository;
    private final CacheManager cacheManager;

    public OrderResponse findById(UUID id) {
        Cache cache = cacheManager.getCache("orders");

        // Try cache first
        OrderResponse cached = cache.get(id, OrderResponse.class);
        if (cached != null) return cached;

        // Cache miss — load from DB
        OrderResponse response = repository.findById(id)
            .map(OrderResponse::from)
            .orElseThrow(() -> new OrderNotFoundException(id));
        cache.put(id, response);
        return response;
    }

    public void updateOrder(UUID id, UpdateOrderRequest request) {
        Order updated = /* ... update logic ... */;

        // Invalidate cache after update
        cacheManager.getCache("orders").evict(id);
    }
}
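Stripped of Spring, cache-aside is just a map plus an explicit evict on write. A minimal self-contained sketch, with in-memory maps standing in for both the cache and the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideSketch {

    // Stand-ins for the real cache and the real database.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static int dbReads = 0;

    static String findById(String id) {
        String cached = cache.get(id);
        if (cached != null) return cached;     // hit
        dbReads++;
        String loaded = db.get(id);            // miss: load from "DB"
        if (loaded != null) cache.put(id, loaded);
        return loaded;
    }

    static void update(String id, String newValue) {
        db.put(id, newValue);
        cache.remove(id);                      // invalidate after the write commits
    }

    public static void main(String[] args) {
        db.put("o1", "pending");
        System.out.println(findById("o1"));    // prints pending (DB read #1)
        System.out.println(findById("o1"));    // prints pending (cache hit)
        update("o1", "shipped");               // write + evict
        System.out.println(findById("o1"));    // prints shipped (DB read #2)
        System.out.println(dbReads);           // prints 2
    }
}
```

Evicting after the write (rather than putting the new value into the cache) keeps the invalidation logic simple: the next reader repopulates the cache from the source of truth.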
Two-Level Cache: Caffeine + Redis
Caffeine (L1) is fast but local. Redis (L2) is shared but has network latency. Use both:
@Configuration
@EnableCaching
public class TwoLevelCacheConfig {

    @Bean
    @Primary
    public CacheManager cacheManager(CaffeineCacheManager l1, RedisCacheManager l2) {
        return new TwoLevelCacheManager(l1, l2);
    }

    @Bean
    public CaffeineCacheManager l1CacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.setCaffeine(Caffeine.newBuilder()
            .maximumSize(500)
            .expireAfterWrite(Duration.ofMinutes(1))); // short TTL — L1 is stale tolerant
        return manager;
    }

    @Bean
    public RedisCacheManager l2CacheManager(RedisConnectionFactory factory) {
        return RedisCacheManager.builder(factory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30)))
            .build();
    }
}
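Note that `TwoLevelCacheManager` is a custom class, not something Spring ships. Its essential read path is: check L1, fall back to L2, promote L2 hits into L1, and load from the database only when both miss. Sketched with two plain maps standing in for Caffeine and Redis (illustrative names throughout):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TwoLevelSketch {

    static final Map<String, String> l1 = new ConcurrentHashMap<>(); // local, short TTL
    static final Map<String, String> l2 = new ConcurrentHashMap<>(); // shared, longer TTL

    static String get(String key, Function<String, String> dbLoader) {
        String v = l1.get(key);
        if (v != null) return v;               // L1 hit: no network round-trip
        v = l2.get(key);
        if (v != null) {
            l1.put(key, v);                    // promote L2 hit into L1
            return v;
        }
        v = dbLoader.apply(key);               // both miss: load from DB
        l2.put(key, v);
        l1.put(key, v);
        return v;
    }

    public static void main(String[] args) {
        int[] dbCalls = {0};
        Function<String, String> db = k -> { dbCalls[0]++; return "product-" + k; };

        get("42", db);                         // miss everywhere: DB load fills L1 + L2
        l1.clear();                            // simulate L1 expiry on this instance
        get("42", db);                         // L2 hit, promoted back into L1
        System.out.println(dbCalls[0]);        // prints 1
        System.out.println(l1.get("42"));      // prints product-42
    }
}
```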
Spring's CompositeCacheManager looks like a simpler alternative, but it is not a true two-level cache: it resolves each cache *name* to the first manager that can provide it, so all lookups for that cache go to that single manager and values never fall back from Caffeine to Redis. With a default CaffeineCacheManager, which creates caches on demand for any name, Redis would never be consulted at all:
@Bean
public CacheManager cacheManager(CaffeineCacheManager local, RedisCacheManager distributed) {
    // Routes each cache NAME to the first manager that knows it; it does
    // not fall back to Redis when an entry is missing from Caffeine
    return new CompositeCacheManager(local, distributed);
}
Cache Warming
Pre-populate the cache at startup for latency-sensitive data:
@Component
@RequiredArgsConstructor
@Slf4j
public class CacheWarmer implements ApplicationRunner {

    private final ProductService productService;
    private final CategoryRepository categoryRepository;

    @Override
    public void run(ApplicationArguments args) {
        log.info("Warming product category cache");
        categoryRepository.findAll().forEach(cat ->
            productService.findByCategory(cat.getId()));
        log.info("Cache warmed: {} categories loaded", categoryRepository.count());
    }
}
Testing Cached Methods
@SpringBootTest
class ProductServiceCacheTest {

    @Autowired ProductService productService;
    @MockBean ProductRepository productRepository;

    @Test
    void secondCallUsesCache() {
        UUID id = UUID.randomUUID();
        when(productRepository.findById(id))
            .thenReturn(Optional.of(new Product(id, "Widget", BigDecimal.TEN)));

        productService.findById(id);
        productService.findById(id); // second call

        // Repository should only be called once — second call from cache
        verify(productRepository, times(1)).findById(id);
    }

    @Test
    void evictRemovesFromCache() {
        UUID id = UUID.randomUUID();
        when(productRepository.findById(id))
            .thenReturn(Optional.of(new Product(id, "Widget", BigDecimal.TEN)));

        productService.findById(id);      // populate cache
        productService.deleteProduct(id); // evict
        productService.findById(id);      // should hit DB again

        verify(productRepository, times(2)).findById(id);
    }
}
What You’ve Learned
- Spring Cache abstraction (@Cacheable, @CachePut, @CacheEvict) decouples cache logic from business logic
- Caffeine is the best in-process cache — zero network overhead, sub-microsecond reads
- Redis is the distributed cache — shared across application instances, survives restarts
- Per-cache TTL configuration lets you tune expiry independently for different data
- sync = true on @Cacheable prevents cache stampede under high concurrency
- Two-level cache (Caffeine + Redis) gives you the best of both: local speed and cross-instance consistency
- Test caching by verifying repository call count — cache hits should suppress repository calls
Next: Article 40 — Async Processing with @Async and Virtual Threads — offload work from request threads and handle background tasks.