Scheduling Batch Jobs: @Scheduled, Quartz, and Clustered Scheduling
Introduction
A batch job that only runs manually is barely useful. Production jobs run on a schedule — nightly, hourly, or after a file arrives. Three common options:
| Option | Persistence | Clustered | Use when |
|---|---|---|---|
| @Scheduled | No | No | Simple cron on a single node |
| Quartz Scheduler | Yes (DB) | Yes | HA scheduling, persistent triggers |
| External scheduler (Kubernetes CronJob, Airflow) | Varies | Yes | Complex pipelines, dependency management |
@Scheduled — Simple Cron Trigger
The simplest approach: annotate a method with @Scheduled and run the job from it.
Enable scheduling
@SpringBootApplication
@EnableScheduling
@EnableBatchProcessing
public class BatchApplication { ... }
Note: on Spring Boot 3 (Spring Batch 5), @EnableBatchProcessing is optional and in fact switches off Boot's batch auto-configuration, so add it only when you customise the batch infrastructure. @EnableScheduling is still required.
Scheduled launcher
@Slf4j
@Component
@RequiredArgsConstructor
public class OrderImportScheduler {

    private final Job importOrdersJob;
    private final JobLauncher jobLauncher;

    // Run every night at 2am
    @Scheduled(cron = "0 0 2 * * *")
    public void runDailyImport() throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("runDate", LocalDate.now().toString(), true)
                .addLong("startedAt", System.currentTimeMillis(), false)
                .toJobParameters();
        JobExecution execution = jobLauncher.run(importOrdersJob, params);
        log.info("Daily import finished with status: {}", execution.getStatus());
    }
}
Marking runDate as an identifying parameter means all runs for the same date belong to the same JobInstance. A failed nightly run can therefore be restarted with the same runDate (manually or via JobOperator.restart) and Spring Batch resumes where it left off; the next scheduled trigger, carrying a new date, starts a fresh instance instead.
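The identity rule can be illustrated without Spring at all. The following stand-alone sketch is a simplified model, not Spring Batch's actual implementation: it derives an instance key from identifying parameters only, so two runs that share a runDate collapse to the same key even when their timestamps differ.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class InstanceKeyDemo {

    // Simplified model of a job parameter: a value plus an identifying flag
    public record Param(String value, boolean identifying) {}

    // Only identifying parameters contribute to the instance key
    public static String instanceKey(String jobName, Map<String, Param> params) {
        return jobName + ":" + params.entrySet().stream()
                .filter(e -> e.getValue().identifying())
                .sorted(Map.Entry.comparingByKey())
                .map(e -> e.getKey() + "=" + e.getValue().value())
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        Map<String, Param> run1 = Map.of(
                "runDate", new Param("2024-05-01", true),
                "startedAt", new Param("1714500000000", false));
        Map<String, Param> run2 = Map.of(
                "runDate", new Param("2024-05-01", true),
                "startedAt", new Param("1714586400000", false));
        // Same runDate, different startedAt: same logical instance
        System.out.println(instanceKey("importOrdersJob", run1)
                .equals(instanceKey("importOrdersJob", run2))); // true
    }
}
```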
Prevent overlapping runs
If a job run takes longer than its schedule interval, the next trigger fires while the previous is still running. Prevent this with a flag:
@Slf4j
@Component
@RequiredArgsConstructor
public class OrderImportScheduler {

    private final Job importOrdersJob;
    private final JobLauncher jobLauncher;

    private final AtomicBoolean running = new AtomicBoolean(false);

    @Scheduled(cron = "0 0 2 * * *")
    public void runDailyImport() throws Exception {
        if (!running.compareAndSet(false, true)) {
            log.warn("Previous import still running — skipping this trigger");
            return;
        }
        try {
            JobParameters params = new JobParametersBuilder()
                    .addString("runDate", LocalDate.now().toString(), true)
                    .toJobParameters();
            jobLauncher.run(importOrdersJob, params);
        } finally {
            running.set(false);
        }
    }
}
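The compareAndSet guard can be exercised in isolation. A minimal stand-alone sketch (plain Java, no Spring) shows why an atomic test-and-set is used rather than a plain boolean read followed by a write, which two overlapping triggers could both pass:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OverlapGuardDemo {

    static final AtomicBoolean running = new AtomicBoolean(false);

    // Atomically flips false -> true; returns whether this caller won the slot
    public static boolean tryAcquire() {
        return running.compareAndSet(false, true);
    }

    public static void main(String[] args) {
        System.out.println(tryAcquire()); // true  - first trigger starts the job
        System.out.println(tryAcquire()); // false - overlapping trigger is skipped
        running.set(false);               // the finally block's reset
        System.out.println(tryAcquire()); // true  - next trigger runs normally
    }
}
```

Note that the flag lives in a single JVM, so this protects one node only; preventing overlap across multiple nodes requires the clustered Quartz setup covered later in this article.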
Asynchronous job launcher
By default, JobLauncher is synchronous — run() blocks until the job completes. For long-running jobs, use an async launcher so @Scheduled returns quickly:
@Bean
public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
    TaskExecutorJobLauncher launcher = new TaskExecutorJobLauncher();
    launcher.setJobRepository(jobRepository);
    launcher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    launcher.afterPropertiesSet();
    return launcher;
}
Caveat: with an async launcher, run() returns as soon as the job has been started, so the AtomicBoolean reset in the finally block above no longer reflects actual completion. For overlap protection with an async launcher, check jobExplorer.findRunningJobExecutions(jobName) before launching instead.
Quartz Scheduler — Persistent Clustered Scheduling
Quartz stores its triggers in the database. In a clustered deployment (multiple pods), only one node fires each trigger: before firing, a node must acquire a row-level lock on Quartz's QRTZ_LOCKS table (a SELECT ... FOR UPDATE on MySQL/InnoDB).
Dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-quartz</artifactId>
</dependency>
Quartz tables
Spring Boot auto-creates Quartz tables if you set:
spring.quartz.jdbc.initialize-schema=always
spring.quartz.job-store-type=jdbc
Or set initialize-schema=never in production and apply the DDL from org/quartz/impl/jdbcjobstore/tables_mysql_innodb.sql (shipped inside the Quartz jar) via Flyway/Liquibase.
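Assuming Flyway is already on the classpath, the wiring might look like this (the migration file name is illustrative; its contents are the tables_mysql_innodb.sql DDL copied out of the Quartz jar):

```properties
# application.properties
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=never
# Quartz DDL lives in src/main/resources/db/migration/V5__quartz_tables.sql
spring.flyway.locations=classpath:db/migration
```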
Define the Quartz Job
public class ImportOrdersQuartzJob implements org.quartz.Job {

    // Quartz instantiates this class itself through its no-arg constructor,
    // so constructor injection (@RequiredArgsConstructor) does not work here.
    // Spring Boot's job factory autowires the fields after instantiation.
    // The Spring Batch Job type is fully qualified to avoid clashing with org.quartz.Job.
    @Autowired
    private org.springframework.batch.core.Job importOrdersJob; // Spring Batch Job

    @Autowired
    private JobLauncher jobLauncher;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            String runDate = context.getMergedJobDataMap().getString("runDate");
            if (runDate == null) runDate = LocalDate.now().toString();
            JobParameters params = new JobParametersBuilder()
                    .addString("runDate", runDate, true)
                    .addLong("firedAt", context.getFireTime().getTime(), false)
                    .toJobParameters();
            jobLauncher.run(importOrdersJob, params);
        } catch (Exception e) {
            throw new JobExecutionException("Spring Batch job failed", e);
        }
    }
}
Register the trigger
@Configuration
public class QuartzSchedulerConfig {

    @Bean
    public JobDetail importOrdersJobDetail() {
        return JobBuilder.newJob(ImportOrdersQuartzJob.class)
                .withIdentity("importOrdersJob", "batch")
                .withDescription("Daily order import")
                .storeDurably()
                .build();
    }

    @Bean
    public Trigger importOrdersDailyTrigger(JobDetail importOrdersJobDetail) {
        return TriggerBuilder.newTrigger()
                .forJob(importOrdersJobDetail)
                .withIdentity("importOrdersDailyTrigger", "batch")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))
                .build();
    }

    // Run every hour for near-real-time imports
    @Bean
    public Trigger importOrdersHourlyTrigger(JobDetail importOrdersJobDetail) {
        return TriggerBuilder.newTrigger()
                .forJob(importOrdersJobDetail)
                .withIdentity("importOrdersHourlyTrigger", "batch")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInHours(1)
                        .repeatForever())
                .build();
    }
}
Quartz properties for clustered mode
# application.properties
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=never
spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval=10000
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.scheduler.instanceName=BatchCluster
spring.quartz.properties.org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
spring.quartz.properties.org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
spring.quartz.properties.org.quartz.threadPool.threadCount=5
With isClustered=true, Quartz acquires a database lock before firing any trigger. Only one node fires the trigger even when 10 pods are running — exactly what you want.
Cron Expression Reference
All expressions use six fields, seconds first. Quartz additionally requires a ? in either the day-of-month or day-of-week position, as in the last row.
| Expression | Meaning |
|---|---|
| 0 0 2 * * * | Every day at 2:00 AM |
| 0 0 * * * * | Every hour on the hour |
| 0 */15 * * * * | Every 15 minutes |
| 0 0 2 * * MON-FRI | 2 AM on weekdays only |
| 0 0 1 1 * * | 1st of every month at 1 AM |
| 0 0 2 ? * SUN | Sundays at 2 AM |
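As a sanity check on the first row, the "daily at 2 AM" schedule can be modelled with plain java.time. This is a simplified illustration, not how Spring or Quartz actually computes fire times: the next fire time is today at 02:00 if that is still ahead, otherwise tomorrow at 02:00.

```java
import java.time.LocalDateTime;

public class NextFireTimeDemo {

    // Next occurrence of 02:00 strictly after 'now' (mirrors "0 0 2 * * *")
    public static LocalDateTime nextDailyAt2am(LocalDateTime now) {
        LocalDateTime todayAt2 = now.toLocalDate().atTime(2, 0);
        return now.isBefore(todayAt2) ? todayAt2 : todayAt2.plusDays(1);
    }

    public static void main(String[] args) {
        // Triggered before 2 AM: fires later the same day
        System.out.println(nextDailyAt2am(LocalDateTime.of(2024, 5, 1, 1, 30)));
        // 2024-05-01T02:00
        // Triggered after 2 AM: fires tomorrow
        System.out.println(nextDailyAt2am(LocalDateTime.of(2024, 5, 1, 14, 0)));
        // 2024-05-02T02:00
    }
}
```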
JobOperator — Manual Trigger via REST
@RestController
@RequestMapping("/api/jobs")
@RequiredArgsConstructor
public class JobTriggerController {

    private final JobOperator jobOperator;

    @PostMapping("/{jobName}/run")
    public ResponseEntity<Long> triggerJob(
            @PathVariable String jobName,
            @RequestBody(required = false) Map<String, String> params) throws Exception {
        // Spring Batch 5: JobOperator.start takes a java.util.Properties
        Properties props = new Properties();
        if (params != null) {
            props.putAll(params);
        }
        // The timestamp guarantees a fresh JobInstance for every manual trigger
        props.setProperty("triggeredAt", String.valueOf(System.currentTimeMillis()));
        Long executionId = jobOperator.start(jobName, props);
        return ResponseEntity.ok(executionId);
    }

    @PostMapping("/{executionId}/stop")
    public ResponseEntity<Void> stopJob(@PathVariable Long executionId) throws Exception {
        jobOperator.stop(executionId);
        return ResponseEntity.accepted().build();
    }

    @GetMapping("/{jobName}/running")
    public ResponseEntity<Set<Long>> runningExecutions(@PathVariable String jobName)
            throws Exception {
        return ResponseEntity.ok(jobOperator.getRunningExecutions(jobName));
    }
}
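The parameter-assembly step in the run endpoint can be sketched stand-alone. Assuming Spring Batch 5, where JobOperator.start accepts a java.util.Properties, merging caller-supplied parameters with a uniqueness timestamp looks like:

```java
import java.util.Map;
import java.util.Properties;

public class JobParamsDemo {

    // Mirrors the controller: merge user-supplied params with a trigger timestamp
    public static Properties toJobProperties(Map<String, String> params, long triggeredAt) {
        Properties props = new Properties();
        if (params != null) {
            props.putAll(params);
        }
        props.setProperty("triggeredAt", Long.toString(triggeredAt));
        return props;
    }

    public static void main(String[] args) {
        Properties p = toJobProperties(Map.of("runDate", "2024-05-01"), 1714500000000L);
        System.out.println(p.getProperty("runDate"));     // 2024-05-01
        System.out.println(p.getProperty("triggeredAt")); // 1714500000000
    }
}
```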
Key Takeaways
- @Scheduled + AtomicBoolean is sufficient for single-node deployments. Use an async JobLauncher if the job runs longer than its schedule interval.
- Quartz with isClustered=true and a JDBC job store ensures exactly one node fires each trigger in a multi-pod deployment.
- Always include a timestamp or date as an identifying parameter — it ensures each scheduled run creates a new JobInstance (preventing “job already complete” errors on subsequent triggers).
- Use JobOperator to expose manual run and stop endpoints for operations teams.
What’s Next
Article 24 covers performance tuning — chunk size optimisation, connection pool configuration, MySQL JDBC settings, and memory management for large batch jobs.