Logging: SLF4J, Logback, and Structured Logging
Logging done right gives you everything you need to diagnose production issues. Done wrong, it either buries you in noise or leaves you blind. This article covers the full logging stack — from basics to structured production logging.
The Logging Stack
Your Code → SLF4J API → Logback (implementation) → Appenders (console, file, etc.)
SLF4J is the facade — your code always calls LoggerFactory.getLogger() and log.info(). The implementation (Logback) is swappable without changing your code.
Spring Boot auto-configures Logback. No XML needed for basic setup.
Writing Log Statements
@Service
@RequiredArgsConstructor
@Slf4j // Lombok: generates private static final Logger log = LoggerFactory.getLogger(OrderService.class)
public class OrderService {

    private final OrderRepository orderRepository;

    public Order createOrder(CreateOrderRequest request) {
        log.debug("Creating order for customer {}", request.customerId()); // dev only
        Order order = buildOrder(request);
        log.info("Order created: orderId={}, customerId={}, itemCount={}, total={}",
                order.getId(), order.getCustomerId(),
                order.getItems().size(), order.getTotalAmount());
        return order;
    }

    public void cancelOrder(UUID id) {
        log.info("Order cancellation requested: orderId={}", id); // normal business event — INFO
        // ...
        log.warn("Order cancelled after shipment: orderId={}, status={}",
                id, OrderStatus.SHIPPED); // unexpected but handled — WARN
    }

    public Order findById(UUID id) {
        return orderRepository.findById(id).orElseThrow(() -> {
            log.warn("Order not found: orderId={}", id); // expected caller error — WARN, not ERROR
            return new OrderNotFoundException(id);
        });
    }
}
Log levels (lowest → highest): TRACE → DEBUG → INFO → WARN → ERROR
- TRACE — very detailed, line-by-line flow (rarely used in production)
- DEBUG — diagnostic info, enabled in dev
- INFO — business events that always matter (order created, payment processed)
- WARN — something unexpected but handled (retry, degraded mode, deprecated usage)
- ERROR — something failed that shouldn't have (exception, data corruption, dependency failure)
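The hierarchy is cumulative: a logger set to one level accepts that level and everything above it. Since Logback is the default implementation, you can observe this directly, as in the following sketch (assumes logback-classic is on the classpath; the logger name is illustrative):

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    public static void main(String[] args) {
        // Cast to Logback's Logger to set the level programmatically
        Logger logger = (Logger) LoggerFactory.getLogger("com.example.demo");
        logger.setLevel(Level.INFO);

        // Everything below INFO is suppressed...
        System.out.println(logger.isTraceEnabled()); // → false
        System.out.println(logger.isDebugEnabled()); // → false
        // ...INFO and above pass through
        System.out.println(logger.isInfoEnabled());  // → true
        System.out.println(logger.isWarnEnabled());  // → true
        System.out.println(logger.isErrorEnabled()); // → true
    }
}
```

The `isXxxEnabled()` checks are what SLF4J evaluates internally before formatting a message, which is why disabled levels cost almost nothing.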
Rules:
- Log at INFO for business events — what happened, not how
- Log at DEBUG for technical details — SQL parameters, cache hits, external calls
- Log at ERROR only for actual errors — things that need attention
- Never log passwords, tokens, or PII
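The "no PII" rule is easiest to enforce with small masking helpers at the logging boundary, so raw values never reach a log statement. A minimal sketch (the `Masker` class and its masking rules are illustrative, not a library API):

```java
public final class Masker {

    private Masker() {}

    /** Mask an email: keep the first character and the domain, hide the rest. */
    public static String email(String email) {
        int at = email.indexOf('@');
        if (at <= 1) return "***";
        return email.charAt(0) + "***" + email.substring(at);
    }

    /** Mask a card number: keep only the last four digits. */
    public static String card(String number) {
        String digits = number.replaceAll("\\D", "");
        if (digits.length() < 4) return "****";
        return "**** **** **** " + digits.substring(digits.length() - 4);
    }
}

// Usage in a log statement:
// log.info("Payment received: customerEmail={}, card={}",
//         Masker.email(customer.getEmail()), Masker.card(payment.getCardNumber()));
```

Centralizing masking in one place also makes it auditable: grep for `getEmail()` inside log calls and you can verify every occurrence goes through the helper.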
Configuration via application.yml
logging:
  level:
    root: WARN                                # default for everything
    com.devopsmonk: INFO                      # your code
    com.devopsmonk.order.service: DEBUG       # specific package — more verbose
    org.springframework.web: INFO
    org.hibernate.SQL: DEBUG                  # log SQL statements
    org.hibernate.type.descriptor.sql: TRACE  # log SQL parameters
  pattern:
    console: "%d{HH:mm:ss.SSS} %highlight(%-5level) %cyan(%logger{36}) - %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n"
  file:
    name: logs/order-service.log
  logback:
    rollingpolicy:
      max-file-size: 100MB
      max-history: 30        # keep 30 days of archives
      total-size-cap: 3GB
Logback XML Configuration (Full Control)
For production, use a logback-spring.xml file (Spring-aware, supports profiles):
<!-- src/main/resources/logback-spring.xml -->
<configuration>
    <!-- Import Spring Boot defaults -->
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="appName"
                    source="spring.application.name" defaultValue="app"/>

    <!-- Console appender (dev) -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %highlight(%-5level) [%thread] %cyan(%logger{36}) - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- File appender with rolling (all environments) -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${appName}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/${appName}.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxFileSize>100MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- JSON appender (production) -->
    <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeMdcKeyName>traceId</includeMdcKeyName>
            <includeMdcKeyName>spanId</includeMdcKeyName>
            <includeMdcKeyName>requestId</includeMdcKeyName>
            <includeMdcKeyName>userId</includeMdcKeyName>
            <customFields>{"app":"${appName}","env":"${ENVIRONMENT:-local}"}</customFields>
        </encoder>
    </appender>

    <!-- Async wrapper (don't block request threads on I/O) -->
    <appender name="ASYNC_FILE" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>1024</queueSize>
        <discardingThreshold>0</discardingThreshold> <!-- never discard -->
        <appender-ref ref="FILE"/>
    </appender>

    <!-- Environment-specific configuration -->
    <springProfile name="dev,local">
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
        </root>
        <logger name="com.devopsmonk" level="DEBUG"/>
    </springProfile>

    <springProfile name="prod">
        <root level="WARN">
            <appender-ref ref="JSON"/>
            <appender-ref ref="ASYNC_FILE"/>
        </root>
        <logger name="com.devopsmonk" level="INFO"/>
    </springProfile>
</configuration>
Structured JSON Logging for Production
In production, logs go to a log aggregation system (Loki, Elasticsearch, CloudWatch). Plain text logs are hard to query. JSON logs are structured and searchable.
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>
Example JSON log entry:
{
  "timestamp": "2026-05-03T10:15:30.456Z",
  "level": "INFO",
  "logger": "com.devopsmonk.order.service.OrderService",
  "message": "Order created",
  "app": "order-service",
  "env": "prod",
  "traceId": "65f8a2b3c4d5e6f7",
  "spanId": "a1b2c3d4",
  "requestId": "req-550e8400",
  "userId": "user-abc123",
  "thread": "http-nio-8080-exec-5",
  "orderId": "order-xyz789",
  "customerId": "customer-def456",
  "itemCount": 3,
  "totalAmount": 149.99
}
Every field is queryable. You can find all orders above $100 for a specific user in seconds.
Adding Structured Fields
// Positional message arguments — readable, but the values end up inside the message string,
// not as separate JSON fields:
log.info("Order created: id={}, total={}", order.getId(), order.getTotalAmount());

// Use the SLF4J 2.x fluent API with key-value pairs instead:
log.atInfo()
    .addKeyValue("orderId", order.getId())
    .addKeyValue("customerId", order.getCustomerId())
    .addKeyValue("itemCount", order.getItems().size())
    .addKeyValue("totalAmount", order.getTotalAmount())
    .log("Order created");

Or with Logstash StructuredArguments:

import static net.logstash.logback.argument.StructuredArguments.*;

log.info("Order created {}",
        keyValue("orderId", order.getId()),
        keyValue("customerId", order.getCustomerId()),
        keyValue("total", order.getTotalAmount()));
MDC — Mapped Diagnostic Context
MDC attaches key-value pairs to all log statements in the current thread — no need to pass them to every log call:
// No @Component here — the filter is registered via FilterRegistrationBean below;
// annotating it as well would register it twice.
public class LoggingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String requestId = Optional.ofNullable(req.getHeader("X-Request-ID"))
                .orElseGet(() -> UUID.randomUUID().toString());
        try {
            MDC.put("requestId", requestId);
            MDC.put("method", req.getMethod());
            MDC.put("path", req.getRequestURI());
            MDC.put("clientIp", req.getRemoteAddr());

            // If authenticated
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            if (auth != null && auth.isAuthenticated()) {
                MDC.put("userId", auth.getName());
            }
            chain.doFilter(request, response);
        } finally {
            MDC.clear(); // always clear — thread pool reuses threads
        }
    }
}
Now every log statement in the request thread automatically includes requestId, method, path, and userId — with no changes to the logging code itself.
Register it:
@Bean
public FilterRegistrationBean<LoggingFilter> loggingFilter() {
    FilterRegistrationBean<LoggingFilter> registration = new FilterRegistrationBean<>();
    registration.setFilter(new LoggingFilter());
    registration.addUrlPatterns("/*");
    registration.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return registration;
}
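One caveat: MDC is backed by a ThreadLocal, so context does not follow work you hand off to an executor or `@Async` method. A common fix is to snapshot the context and restore it inside the task. A sketch of that pattern, assuming an SLF4J provider with a real MDC adapter such as Logback is on the classpath (the `mdcWrap` helper name is ours, not a library API):

```java
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MdcPropagation {

    /** Wrap a task so it runs with the submitting thread's MDC context. */
    static Runnable mdcWrap(Runnable task) {
        Map<String, String> context = MDC.getCopyOfContextMap(); // snapshot on the caller's thread
        return () -> {
            Map<String, String> previous = MDC.getCopyOfContextMap();
            if (context != null) MDC.setContextMap(context);
            try {
                task.run();
            } finally {
                // restore whatever the pool thread had before (usually nothing)
                if (previous != null) MDC.setContextMap(previous); else MDC.clear();
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        MDC.put("requestId", "req-1234");
        // Without mdcWrap, MDC.get("requestId") would return null on the pool thread
        pool.submit(mdcWrap(() ->
                System.out.println("requestId on pool thread: " + MDC.get("requestId")))).get();
        pool.shutdown();
    }
}
```

Restoring the previous map in `finally` matters for the same reason `MDC.clear()` does in the filter: pool threads are reused, and leaked context would attach one request's `requestId` to another request's logs.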
Request/Response Logging
Log all HTTP traffic (careful — verbose, can expose sensitive data):
@Bean
public CommonsRequestLoggingFilter requestLoggingFilter() {
    CommonsRequestLoggingFilter filter = new CommonsRequestLoggingFilter();
    filter.setIncludeQueryString(true);
    filter.setIncludePayload(true);
    filter.setMaxPayloadLength(10000);
    filter.setIncludeHeaders(false); // don't log the Authorization header!
    filter.setAfterMessagePrefix("REQUEST DATA: ");
    return filter;
}
Enable at DEBUG level:
logging:
  level:
    org.springframework.web.filter.CommonsRequestLoggingFilter: DEBUG
Performance Logging with @Around AOP
Log method execution time for slow operations:
@Aspect
@Component
@Slf4j
public class PerformanceLoggingAspect {

    @Around("@annotation(com.devopsmonk.annotation.LogPerformance)")
    public Object logPerformance(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        String methodName = pjp.getSignature().toShortString();
        try {
            Object result = pjp.proceed();
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed > 1000) {
                log.warn("Slow method: method={}, elapsed={}ms", methodName, elapsed);
            } else {
                log.debug("Method completed: method={}, elapsed={}ms", methodName, elapsed);
            }
            return result;
        } catch (Throwable e) { // Throwable, not Exception — proceed() can also throw Errors
            long elapsed = System.currentTimeMillis() - start;
            log.error("Method failed: method={}, elapsed={}ms, error={}",
                    methodName, elapsed, e.getMessage());
            throw e;
        }
    }
}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface LogPerformance {}

// Usage
@Service
public class OrderService {

    @LogPerformance
    public Order processLargeOrder(CreateOrderRequest request) { ... }
}
Log Aggregation with Loki
Grafana Loki collects logs like Prometheus collects metrics. Add the Loki appender:
<dependency>
    <groupId>com.github.loki4j</groupId>
    <artifactId>loki-logback-appender</artifactId>
    <version>1.5.1</version>
</dependency>

<!-- In logback-spring.xml (prod profile) -->
<appender name="LOKI" class="com.github.loki4j.logback.Loki4jAppender">
    <http>
        <url>http://loki:3100/loki/api/v1/push</url>
    </http>
    <format>
        <label>
            <pattern>app=${appName},env=${ENVIRONMENT},host=${HOSTNAME}</pattern>
        </label>
        <message class="com.github.loki4j.logback.JsonLayout"/>
    </format>
</appender>
Now query logs in Grafana with LogQL:
{app="order-service"} | json | level="ERROR"
{app="order-service"} | json | orderId="abc123"
{app="order-service"} | json | totalAmount > 1000
What You’ve Learned
- SLF4J is the facade; Logback is Spring Boot's default implementation
- Log at INFO for business events, DEBUG for technical details, ERROR for actual failures
- logback-spring.xml with <springProfile> gives environment-specific configuration
- Use LogstashEncoder for structured JSON logs in production — every field is queryable
- MDC attaches context (requestId, userId) to all log statements on a thread — add it in a Filter
- Async appenders prevent logging from blocking request threads
- Ship logs to Loki; query with LogQL in Grafana alongside your Prometheus metrics
Next: Article 36 — Externalized Configuration with @ConfigurationProperties — a deeper dive into typed, validated configuration binding.