JobParameters, ExecutionContext, and Job Restartability

Introduction

Two mechanisms let you pass information into and through a Spring Batch job:

  • JobParameters — input values provided at launch time (a date, a file path, a run ID). They are immutable and persisted to BATCH_JOB_EXECUTION_PARAMS.
  • ExecutionContext — a key-value map that steps can read and write during execution. It is persisted after each chunk commit, enabling restartability.

Understanding both is essential for building jobs that can be safely re-run, restarted after failure, and parameterised for different data sets.


JobParameters

Defining and launching with parameters

// Programmatic launch
JobParameters params = new JobParametersBuilder()
        .addString("runDate", "2026-05-03", true)   // true = identifying
        .addString("inputFile", "/data/orders.csv", false)  // non-identifying
        .addLong("chunkSize", 500L, false)
        .toJobParameters();

jobLauncher.run(importOrdersJob, params);

The identifying flag determines whether a parameter contributes to the JobInstance identity (the JOB_KEY hash). Two launches with the same identifying parameters hit the same JobInstance — a restart if the first run failed, a duplicate-run error if it already completed.
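Concretely, a second launch behaves differently depending on how the first one ended. A hedged sketch reusing the parameters above:

JobParameters params = new JobParametersBuilder()
        .addString("runDate", "2026-05-03", true)   // identifying: same value = same JobInstance
        .toJobParameters();

jobLauncher.run(importOrdersJob, params);           // first run creates the JobInstance

try {
    jobLauncher.run(importOrdersJob, params);       // same identifying parameters again
} catch (JobInstanceAlreadyCompleteException e) {
    // first run COMPLETED, so there is nothing to redo.
    // Had it FAILED or STOPPED, this second call would restart it instead.
}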

Types with dedicated builder methods in Spring Batch 5: String, Long, Double, LocalDate, LocalTime, LocalDateTime, and Date.
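Each type maps to a builder method (addLocalDate, addDouble, and so on); Spring Batch 5 can also carry other types via addJobParameter. A quick sketch with illustrative names and values:

JobParameters typed = new JobParametersBuilder()
        .addLocalDate("businessDate", LocalDate.of(2026, 5, 3), true)      // identifying
        .addDouble("threshold", 0.75, false)
        .addLocalDateTime("windowStart", LocalDateTime.of(2026, 5, 3, 6, 0), false)
        .toJobParameters();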

Reading parameters in beans

Method 1 — @Value with SpEL (most common)

@Bean
@StepScope   // REQUIRED — see below
public FlatFileItemReader<Order> orderReader(
        @Value("#{jobParameters['inputFile']}") String inputFile) {

    return new FlatFileItemReaderBuilder<Order>()
            .name("orderReader")
            .resource(new FileSystemResource(inputFile))
            .lineMapper(lineMapper())
            .build();
}

Method 2 — inject JobParameters directly

@Component
@StepScope
public class OrderProcessor implements ItemProcessor<Order, Order> {

    private final String runDate;
    private final long chunkSize;

    public OrderProcessor(
            @Value("#{jobParameters['runDate']}") String runDate,
            @Value("#{jobParameters['chunkSize']}") long chunkSize) {
        this.runDate  = runDate;
        this.chunkSize = chunkSize;
    }

    @Override
    public Order process(Order order) { ... }
}

@StepScope and @JobScope — why they are required

Spring beans are singletons by default. @Value("#{jobParameters[...]}") is a SpEL expression that is evaluated at bean creation time. A singleton is created once at application startup — before any job runs — so there is no job execution to resolve jobParameters against, and the injection fails.

@StepScope changes the bean’s scope to step-scoped: the bean is created fresh at the start of each step execution, when jobParameters are available. Similarly, @JobScope creates the bean at job start.

@Bean
@StepScope   // created per step execution — can access jobParameters
public ItemReader<Order> stepScopedReader(@Value("#{jobParameters['file']}") String file) { ... }

@Bean
@JobScope    // created per job execution — can access jobParameters but not stepExecutionContext
public SomeJobScopedBean jobScopedBean(@Value("#{jobParameters['runDate']}") String date) { ... }

Rule: any bean that injects jobParameters, jobExecutionContext, or stepExecutionContext via SpEL must be @StepScope or @JobScope.

Passing a unique parameter on each run

When a job uses preventRestart() or you want a new JobInstance every run (e.g., a report job that should always run fresh), add a unique parameter:

JobParameters params = new JobParametersBuilder()
        .addLocalDateTime("runAt", LocalDateTime.now(), true) // always unique
        .addString("reportDate", "2026-05-03", false)
        .toJobParameters();

Or use Spring Batch’s built-in RunIdIncrementer:

@Bean
public Job reportJob(JobRepository jobRepository, Step reportStep) {
    return new JobBuilder("reportJob", jobRepository)
            .incrementer(new RunIdIncrementer())  // adds run.id=1, 2, 3... automatically
            .start(reportStep)
            .build();
}
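One subtlety: the incrementer only runs when the launch path asks for the "next" parameters. JobOperator.startNextInstance("reportJob") does this, as does Spring Boot's batch runner; a direct jobLauncher.run(job, params) ignores the incrementer. To apply it manually, a sketch (assumes a JobExplorer bean is available):

JobParameters next = new JobParametersBuilder(jobExplorer)
        .getNextJobParameters(reportJob)              // applies RunIdIncrementer: run.id + 1
        .addString("reportDate", "2026-05-03", false)
        .toJobParameters();

jobLauncher.run(reportJob, next);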

ExecutionContext

ExecutionContext is a Map<String, Object> that Spring Batch persists to BATCH_JOB_EXECUTION_CONTEXT and BATCH_STEP_EXECUTION_CONTEXT after each successful chunk commit.

There are two contexts:

  Context                 Scope                              Persisted to
  Job ExecutionContext    Shared across all steps in a job   BATCH_JOB_EXECUTION_CONTEXT
  Step ExecutionContext   Private to one step                BATCH_STEP_EXECUTION_CONTEXT
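Both contexts are reachable from any component that holds the current StepExecution. A minimal sketch using a Tasklet (the key names are illustrative; this writes the value that the listener below reads):

public class PublishPathTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        StepExecution stepExecution = chunkContext.getStepContext().getStepExecution();

        ExecutionContext stepCtx = stepExecution.getExecutionContext();                   // step-private
        ExecutionContext jobCtx  = stepExecution.getJobExecution().getExecutionContext(); // job-wide

        jobCtx.putString("downloadedFilePath", "/data/orders.csv");  // visible to later steps
        return RepeatStatus.FINISHED;
    }
}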

Writing to ExecutionContext from a listener

@Component
public class OrderImportStepListener implements StepExecutionListener {

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Read data computed by a previous step from the job context
        ExecutionContext jobCtx = stepExecution.getJobExecution().getExecutionContext();
        String sourceFile = jobCtx.getString("downloadedFilePath");
        stepExecution.getExecutionContext().putString("inputFile", sourceFile);
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Publish results to the job context for the next step to consume
        ExecutionContext jobCtx = stepExecution.getJobExecution().getExecutionContext();
        jobCtx.putLong("importedOrderCount", stepExecution.getWriteCount());
        jobCtx.putLong("skippedOrderCount",  stepExecution.getSkipCount());
        return null;   // null = leave the step's ExitStatus unchanged
    }
}
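When the producing step writes into its own step context, Spring Batch's built-in ExecutionContextPromotionListener can copy selected keys up to the job context when the step completes, instead of doing it by hand in afterStep. A sketch (the key name matches the example above):

@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
    listener.setKeys(new String[] {"importedOrderCount"});  // step-context keys to promote
    return listener;
}

Register it on the producing step with .listener(promotionListener()).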

Reading ExecutionContext in a bean

@Bean
@StepScope
public FlatFileItemReader<Order> contextAwareReader(
        @Value("#{stepExecutionContext['inputFile']}") String inputFile) {

    return new FlatFileItemReaderBuilder<Order>()
            .name("contextAwareReader")
            .resource(new FileSystemResource(inputFile))
            .lineMapper(lineMapper())
            .build();
}

@Bean
@StepScope
public ItemProcessor<Order, Order> contextAwareProcessor(
        @Value("#{jobExecutionContext['importedOrderCount']}") Long prevCount) {
    // Use data published by a previous step
    return order -> {
        if (prevCount != null && prevCount > 50000) {
            // Large import — apply different rules
        }
        return order;
    };
}

Sharing data between steps without ExecutionContext

An alternative is a shared Spring bean (typically a @JobScope component) that holds inter-step data in memory. This is simpler but not restart-safe: if the JVM crashes, the in-memory state is lost, while ExecutionContext survives in the job repository.

@Component
@JobScope  // one instance per job execution
public class ImportJobState {
    private String downloadedFilePath;
    private long importedCount;
    // getters/setters
}
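Wiring it up, a sketch with illustrative bean names (assumes the lineMapper() helper used earlier):

@Bean
public Tasklet downloadTasklet(ImportJobState state) {
    return (contribution, chunkContext) -> {
        state.setDownloadedFilePath("/data/orders.csv");  // in-memory only; lost on a crash
        return RepeatStatus.FINISHED;
    };
}

@Bean
@StepScope
public FlatFileItemReader<Order> stateAwareReader(ImportJobState state) {
    // state is a @JobScope proxy, resolved against the current job execution
    return new FlatFileItemReaderBuilder<Order>()
            .name("stateAwareReader")
            .resource(new FileSystemResource(state.getDownloadedFilePath()))
            .lineMapper(lineMapper())
            .build();
}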

Use @JobScope beans for non-critical inter-step communication. Use ExecutionContext when restart safety matters.


Restartability in Depth

What makes a job restartable?

  1. The job is not marked preventRestart().
  2. The JobInstance (job name + identifying params) has a FAILED or STOPPED execution.
  3. You relaunch with the exact same identifying parameters.

Spring Batch then:

  • Creates a new JobExecution for the same JobInstance.
  • For each step, checks BATCH_STEP_EXECUTION; COMPLETED steps are skipped.
  • For the failed or interrupted step, restores its ExecutionContext (reader position, item counters).
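The restore in the last bullet works through the ItemStream contract: update() is called before each chunk commit so the reader's position lands in the step ExecutionContext, and open() receives the saved context again on restart. Built-in readers such as FlatFileItemReader implement this already; a custom reader can too. A minimal sketch (the key name and in-memory list are illustrative):

public class RestartableListReader implements ItemReader<Order>, ItemStream {

    private static final String INDEX_KEY = "current.index";

    private final List<Order> orders;
    private int index;

    public RestartableListReader(List<Order> orders) {
        this.orders = orders;
    }

    @Override
    public void open(ExecutionContext ctx) {
        // On restart, resume from the last committed position
        index = ctx.containsKey(INDEX_KEY) ? ctx.getInt(INDEX_KEY) : 0;
    }

    @Override
    public void update(ExecutionContext ctx) {
        ctx.putInt(INDEX_KEY, index);   // persisted with each chunk commit
    }

    @Override
    public void close() { }

    @Override
    public Order read() {
        return index < orders.size() ? orders.get(index++) : null;  // null ends the step
    }
}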

Making a job non-restartable

@Bean
public Job neverRestartJob(JobRepository jobRepository, Step step) {
    return new JobBuilder("neverRestartJob", jobRepository)
            .preventRestart()
            .start(step)
            .build();
}

With preventRestart(), relaunching with the same parameters for a failed run throws JobRestartException.

Limiting restart attempts

// Built-in: limit this step to 3 executions across all runs of the JobInstance
@Bean
public Step limitedRestartStep(JobRepository jobRepository, ...) {
    return new StepBuilder("limitedRestartStep", jobRepository)
            .<Order, Order>chunk(100, tx)
            .reader(reader)
            .writer(writer)
            .startLimit(3)     // at most 3 attempts (initial + 2 restarts); further attempts fail with StartLimitExceededException
            .build();
}

Allowing a completed step to re-run on restart

By default, a COMPLETED step is skipped on restart. Use allowStartIfComplete(true) to re-run it:

@Bean
public Step idempotentCleanupStep(JobRepository jobRepository, ...) {
    return new StepBuilder("idempotentCleanupStep", jobRepository)
            .tasklet(cleanupTasklet(), tx)
            .allowStartIfComplete(true)  // always re-run on restart
            .build();
}

Use this for idempotent steps like file cleanup, temp table truncation, or status flag resets.


Idempotent Job Design

A job is idempotent if running it multiple times with the same parameters produces the same result as running it once. Design for idempotency by default: a crash or network failure mid-run can leave you unsure which writes actually committed, and an idempotent job makes the safe answer "just run it again".

Idempotent insert — MySQL ON DUPLICATE KEY

INSERT INTO orders (order_id, customer_id, amount, status)
VALUES (:orderId, :customerId, :amount, :status)
ON DUPLICATE KEY UPDATE
    amount = VALUES(amount),
    status = VALUES(status)

Idempotent insert — status guard

INSERT INTO processed_orders (order_id, ...)
SELECT :orderId, ...
WHERE NOT EXISTS (
    SELECT 1 FROM processed_orders WHERE order_id = :orderId
)

Idempotent file output — unique file per run

@Bean
@JobScope
public FlatFileItemWriter<Order> reportWriter(
        @Value("#{jobParameters['runDate']}") String runDate) {

    // Unique file per run date — safe to re-run
    return new FlatFileItemWriterBuilder<Order>()
            .name("reportWriter")
            .resource(new FileSystemResource("/reports/orders-" + runDate + ".csv"))
            .lineAggregator(aggregator())
            .build();
}

Complete Example: Parameterised Daily Import Job

@Configuration
@RequiredArgsConstructor
public class DailyImportJobConfig {

    private final JobRepository jobRepository;
    private final PlatformTransactionManager tx;
    private final DataSource dataSource;

    @Bean
    @StepScope
    public FlatFileItemReader<Order> dailyOrderReader(
            @Value("#{jobParameters['inputFile']}") String inputFile) {

        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
        tokenizer.setNames("customerId", "amount", "orderDate", "status");

        DefaultLineMapper<Order> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(tokenizer);
        lineMapper.setFieldSetMapper(new OrderFieldSetMapper());

        return new FlatFileItemReaderBuilder<Order>()
                .name("dailyOrderReader")
                .resource(new FileSystemResource(inputFile))
                .lineMapper(lineMapper)
                .linesToSkip(1)
                .build();
    }

    @Bean
    @StepScope
    public JdbcBatchItemWriter<Order> dailyOrderWriter(
            @Value("#{jobParameters['runDate']}") String runDate) {

        return new JdbcBatchItemWriterBuilder<Order>()
                .dataSource(dataSource)
                .sql("INSERT INTO orders (customer_id, amount, order_date, status, batch_run_date) " +
                     "VALUES (:customerId, :amount, :orderDate, :status, '" + runDate + "') " +
                     "ON DUPLICATE KEY UPDATE status = VALUES(status)")
                .beanMapped()
                .assertUpdates(false)
                .build();
    }

    @Bean
    public Step importDailyOrdersStep() {
        return new StepBuilder("importDailyOrdersStep", jobRepository)
                .<Order, Order>chunk(500, tx)
                .reader(dailyOrderReader(null))  // null — Spring injects via @StepScope
                .writer(dailyOrderWriter(null))
                .build();
    }

    @Bean
    public Job dailyOrderImportJob() {
        return new JobBuilder("dailyOrderImportJob", jobRepository)
                // No incrementer — same runDate = same JobInstance = restart on failure
                .start(importDailyOrdersStep())
                .build();
    }
}

Launch:

jobLauncher.run(dailyOrderImportJob,
    new JobParametersBuilder()
        .addString("runDate", "2026-05-03", true)         // identifying
        .addString("inputFile", "/data/orders-2026-05-03.csv", false)
        .toJobParameters());

If this job fails at, say, chunk 50, re-launch with the same runDate: Spring Batch restores the reader's position from the step ExecutionContext (the last committed chunk boundary, 49 × 500 lines in) and resumes from there.


Key Takeaways

  • JobParameters are immutable inputs defined at launch time. Identifying parameters determine JobInstance identity (same params = same instance = restart if failed).
  • @StepScope / @JobScope are required on any bean that reads jobParameters or executionContext via SpEL — without them the SpEL is evaluated at singleton creation time (before any job runs).
  • Use ExecutionContext to share data between steps. Job context is shared across all steps; step context is private to one step.
  • COMPLETED steps are skipped on restart. Use allowStartIfComplete(true) for idempotent steps that must always re-run.
  • Design writers with ON DUPLICATE KEY UPDATE or existence checks to make jobs naturally idempotent.

What’s Next

Article 15 covers Tasklets — the non-chunk execution model for operations like file cleanup, DDL execution, sending notifications, and any work that does not fit the read-process-write pattern.