Part 17 of 18

Zero-Downtime Deployments: The Expand-Contract Pattern

A RENAME COLUMN statement takes milliseconds. But if your application is still running the old code when the rename executes, every query that uses the old column name fails immediately. Zero-downtime schema changes are not about making DDL faster — they are about sequencing changes so that no single step breaks the running application.

The expand-contract pattern is the standard solution. It breaks dangerous migrations into safe, incremental phases that allow old and new application code to coexist with the same database during deployment.


The Core Principle

Every migration phase must be backward compatible with the currently deployed application code. A migration that passes this test can be deployed independently of application code — before, during, or after the app deployment. A migration that fails this test requires downtime or a coordinated cutover.

Backward compatible = the old application version still works correctly after the migration runs.

The expand-contract pattern enforces this by splitting a breaking change into:

  1. Expand — add new structure without removing old structure (backward compatible)
  2. Transition — migrate data and application code; both structures in use
  3. Contract — remove old structure (safe because no code uses it anymore)

Each phase is a separate deployment. The database migration for each phase deploys independently of application code changes.


Pattern 1: Renaming a Column

Goal: Rename full_name to display_name in the users table.

Naive approach (causes downtime):

ALTER TABLE users RENAME COLUMN full_name TO display_name;

Deployed while old app is running: every query using full_name fails immediately.

Expand-contract approach:

Phase 1: Expand — add the new column

# 20260620-001-add-display-name-column.yaml
databaseChangeLog:
  - changeSet:
      id: "20260620-001"
      author: abhay
      comment: "Phase 1/3: Add display_name column alongside full_name"
      changes:
        - addColumn:
            tableName: users
            columns:
              - column:
                  name: display_name
                  type: VARCHAR(255)
                  constraints:
                    nullable: true
      rollback:
        - dropColumn:
            tableName: users
            columnName: display_name

Deploy this changeset. The old application still uses full_name; display_name exists but is unused and nullable. No downtime.

Phase 2: Transition — sync and deploy new app code

# 20260620-002-sync-display-name.yaml
databaseChangeLog:
  - changeSet:
      id: "20260620-002"
      author: abhay
      comment: "Phase 2/3: Backfill display_name from full_name"
      changes:
        - sql:
            sql: >
              UPDATE users
              SET display_name = full_name
              WHERE display_name IS NULL;              
      rollback:
        - sql:
            sql: UPDATE users SET display_name = NULL;

  - changeSet:
      id: "20260620-003"
      author: abhay
      comment: "Phase 2/3: MySQL trigger to keep display_name in sync with full_name"
      changes:
        - sql:
            splitStatements: false
            sql: >
              CREATE TRIGGER trg_sync_display_name
              BEFORE INSERT ON users
              FOR EACH ROW
              BEGIN
                IF NEW.display_name IS NULL THEN
                  SET NEW.display_name = NEW.full_name;
                END IF;
              END              
        - sql:
            splitStatements: false
            sql: >
              CREATE TRIGGER trg_sync_display_name_update
              BEFORE UPDATE ON users
              FOR EACH ROW
              BEGIN
                IF NOT (NEW.full_name <=> OLD.full_name)
                   AND (NEW.display_name <=> OLD.display_name) THEN
                  SET NEW.display_name = NEW.full_name;
                END IF;
              END              
      rollback:
        - sql:
            sql: DROP TRIGGER IF EXISTS trg_sync_display_name;
        - sql:
            sql: DROP TRIGGER IF EXISTS trg_sync_display_name_update;

After deploying this migration, deploy the new application code that reads and writes display_name. The triggers ensure that any writes via old app code (using full_name) are automatically reflected in display_name. Both columns are in sync.
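During the transition, the new read path should prefer display_name but tolerate rows the backfill has not reached yet. A minimal sketch in Python (the function name and dict-shaped row are illustrative, not part of the migration itself):

```python
def effective_display_name(row: dict) -> str:
    """Transition-phase read: prefer the new column, fall back to
    the old one for rows not yet covered by backfill or triggers."""
    return row.get("display_name") or row.get("full_name") or ""
```

Once Phase 3 lands and full_name is dropped, the fallback branch becomes dead code and is deleted along with the column.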

Phase 3: Contract — remove the old column

Once all application instances are running the new code and no code references full_name:

# 20260627-001-drop-full-name.yaml
databaseChangeLog:
  - changeSet:
      id: "20260627-001"
      author: abhay
      comment: "Phase 3/3: Drop full_name after display_name transition complete"
      preConditions:
        - onFail: HALT
        - onFailMessage: "Verify no application code references full_name before dropping it."
        - sqlCheck:
            expectedResult: "0"
            sql: >
              SELECT COUNT(*) FROM information_schema.TRIGGERS
              WHERE TRIGGER_SCHEMA = DATABASE()
                AND TRIGGER_NAME LIKE 'trg_sync_display_name%'
      changes:
        - dropColumn:
            tableName: users
            columnName: full_name
      rollback:
        - addColumn:
            tableName: users
            columns:
              - column:
                  name: full_name
                  type: VARCHAR(255)
                  constraints:
                    nullable: true
        - sql:
            sql: UPDATE users SET full_name = display_name;

Note the precondition: it verifies the sync triggers are already gone before full_name is dropped. Drop the triggers in a separate changeset that runs before this one:

# 20260625-001-drop-sync-triggers.yaml
databaseChangeLog:
  - changeSet:
      id: "20260625-001"
      author: abhay
      comment: "Remove sync triggers; new app code is deployed, so the sync is no longer needed"
      changes:
        - sql:
            sql: DROP TRIGGER IF EXISTS trg_sync_display_name;
        - sql:
            sql: DROP TRIGGER IF EXISTS trg_sync_display_name_update;
      rollback: empty

Summary timeline:

  • Week 1: Deploy Phase 1 (add display_name, nullable)
  • Week 1: Deploy Phase 2 (backfill, add sync triggers) + deploy new app code
  • Week 2: Deploy trigger cleanup + Phase 3 (drop full_name) after verifying no references

Pattern 2: Adding a NOT NULL Column to a Large Table

Goal: Add email_verified TINYINT(1) NOT NULL DEFAULT 0 to a table with 50M rows.

A direct ALTER TABLE users ADD COLUMN email_verified TINYINT(1) NOT NULL DEFAULT 0 is an instant operation in MySQL 8.0.12+ (the default is stored in table metadata, with no table rewrite), but only when:

  • The column has a literal default value (not a function)
  • The column is added at the end of the table (required before MySQL 8.0.29; later releases can place instant columns anywhere)

If either condition is not met, MySQL performs a full table copy — potentially minutes of exclusive lock.
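If you would rather have MySQL refuse than silently fall back to a table copy, request the algorithm explicitly (MySQL 8.0.12+; the statement errors out when INSTANT is not applicable instead of taking a long lock):

```sql
-- Fail fast: MySQL raises an error if this cannot run as an
-- instant metadata change, instead of copying the table.
ALTER TABLE users
  ADD COLUMN email_verified TINYINT(1) NOT NULL DEFAULT 0,
  ALGORITHM=INSTANT;
```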

The safe pattern regardless of table size:

# Phase 1: Add nullable with default
- changeSet:
    id: "20260621-001"
    author: abhay
    comment: "Phase 1/3: Add email_verified as nullable"
    changes:
      - addColumn:
          tableName: users
          columns:
            - column:
                name: email_verified
                type: TINYINT(1)
                defaultValueNumeric: 0
                constraints:
                  nullable: true
    rollback:
      - dropColumn:
          tableName: users
          columnName: email_verified

# Phase 2, first attempt: UPDATE ... LIMIT only processes a single batch
- changeSet:
    id: "20260621-002"
    author: abhay
    comment: "Phase 2/3: Backfill email_verified = 0 in batches"
    changes:
      - sql:
          sql: >
            UPDATE users SET email_verified = 0
            WHERE email_verified IS NULL
            LIMIT 10000;            
    rollback:
      - sql:
          sql: UPDATE users SET email_verified = NULL WHERE email_verified = 0;

As written, changeset 20260620-002 stops after one batch: UPDATE ... LIMIT 10000 is not a loop, so the remaining rows stay NULL. Dropping the LIMIT is no better: on a 50M row table, a single UPDATE users SET email_verified = 0 WHERE email_verified IS NULL holds its row locks for the duration and produces one enormous transaction in the binary log. Liquibase changesets cannot loop, so wrap the batching in a stored procedure:

- changeSet:
    id: "20260621-003"
    author: abhay
    comment: "Phase 2/3: Batch backfill via procedure"
    changes:
      - sql:
          # MySQL has no CREATE OR REPLACE PROCEDURE; drop first so
          # the changeset can be re-run after a failure.
          sql: DROP PROCEDURE IF EXISTS batch_fill_email_verified;

      - sql:
          splitStatements: false
          sql: >
            CREATE PROCEDURE batch_fill_email_verified()
            BEGIN
              DECLARE done INT DEFAULT 0;
              REPEAT
                UPDATE users SET email_verified = 0
                WHERE email_verified IS NULL
                LIMIT 5000;
                SET done = (ROW_COUNT() = 0);
                DO SLEEP(0.05);
              UNTIL done END REPEAT;
            END

      - sql:
          sql: CALL batch_fill_email_verified();

      - sql:
          sql: DROP PROCEDURE IF EXISTS batch_fill_email_verified;
    rollback:
      - sql:
          sql: UPDATE users SET email_verified = NULL;
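The control flow of the batching procedure can be sketched in Python to show why the loop terminates (an illustration only; the names here are invented):

```python
def batch_backfill(rows, batch_size=5000):
    """Simulate the REPEAT ... UNTIL loop of the stored procedure:
    fill NULLs in fixed-size chunks and stop once an iteration
    touches zero rows (the ROW_COUNT() = 0 condition)."""
    passes = 0
    while True:
        batch = [r for r in rows if r["email_verified"] is None][:batch_size]
        if not batch:       # ROW_COUNT() = 0 -> done
            break
        for r in batch:
            r["email_verified"] = 0
        passes += 1         # one short transaction per pass
    return passes
```

Each pass is a short transaction, so row locks are released between batches and concurrent writes proceed.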

# Phase 3: Apply NOT NULL constraint
- changeSet:
    id: "20260621-004"
    author: abhay
    comment: "Phase 3/3: Apply NOT NULL — all rows backfilled"
    preConditions:
      - onFail: HALT
      - sqlCheck:
          expectedResult: "0"
          sql: SELECT COUNT(*) FROM users WHERE email_verified IS NULL
    changes:
      - addNotNullConstraint:
          tableName: users
          columnName: email_verified
          columnDataType: TINYINT(1)
          defaultNullValue: "0"
    rollback:
      - dropNotNullConstraint:
          tableName: users
          columnName: email_verified
          columnDataType: TINYINT(1)

The precondition on Phase 3 (COUNT(*) WHERE email_verified IS NULL = 0) is the safety gate. If any row escaped the backfill, the constraint application halts with a clear message rather than a cryptic MySQL error.


Pattern 3: Splitting a Column (Type Change)

Goal: Split address (a single VARCHAR field) into street, city, state, postal_code.

Phase 1: Expand — add new columns

- changeSet:
    id: "20260622-001"
    author: abhay
    comment: "Phase 1/3: Add structured address columns"
    changes:
      - addColumn:
          tableName: user_addresses
          columns:
            - column: {name: street, type: VARCHAR(255), constraints: {nullable: true}}
            - column: {name: city, type: VARCHAR(100), constraints: {nullable: true}}
            - column: {name: state, type: VARCHAR(100), constraints: {nullable: true}}
            - column: {name: postal_code, type: VARCHAR(20), constraints: {nullable: true}}
    rollback:
      - dropColumn:
          tableName: user_addresses
          columns:
            - column: {name: street}
            - column: {name: city}
            - column: {name: state}
            - column: {name: postal_code}

Phase 2: Transition — deploy app that writes both old and new

Deploy application code that:

  • Reads from new columns (falling back to parsing address if new columns are null)
  • Writes to both address AND the new columns on every save

Then run the backfill migration:

- changeSet:
    id: "20260622-002"
    author: abhay
    comment: "Phase 2/3: Migrate existing address data via application logic"
    changes:
      - sql:
          # Literal block scalar (|) keeps line breaks; with a folded
          # scalar (>) the SQL comment would swallow the UPDATE.
          sql: |
            -- Mark rows that need migration; app will process these
            UPDATE user_addresses
            SET street = 'PENDING_MIGRATION'
            WHERE address IS NOT NULL AND street IS NULL;
    rollback:
      - sql:
          sql: UPDATE user_addresses SET street = NULL WHERE street = 'PENDING_MIGRATION';

The actual address parsing runs in application code (which knows the address format) rather than in SQL (which doesn’t). The PENDING_MIGRATION marker lets the app identify rows to backfill asynchronously.
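A hedged sketch of what that application-side parsing step might look like, assuming the legacy format is comma-separated (real address data needs a far more careful parser, plus a dead-letter path for rows that do not parse):

```python
def parse_legacy_address(address: str) -> dict:
    """Naively split 'street, city, state, postal_code' into the
    new structured columns. Missing trailing parts become None;
    extra parts beyond four are dropped by zip."""
    parts = [p.strip() for p in address.split(",")]
    keys = ["street", "city", "state", "postal_code"]
    return dict(zip(keys, parts + [None] * (len(keys) - len(parts))))
```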

Phase 3: Contract — drop the old column

After all rows are migrated and no code writes to address:

- changeSet:
    id: "20260622-003"
    author: abhay
    comment: "Phase 3/3: Drop address column after structured migration complete"
    preConditions:
      - onFail: HALT
      - sqlCheck:
          expectedResult: "0"
          sql: >
            SELECT COUNT(*) FROM user_addresses
            WHERE address IS NOT NULL
              AND (street IS NULL OR street = 'PENDING_MIGRATION')
    changes:
      - dropColumn:
          tableName: user_addresses
          columnName: address
    rollback:
      - addColumn:
          tableName: user_addresses
          columns:
            - column:
                name: address
                type: VARCHAR(500)
                constraints:
                  nullable: true

Pattern 4: Table Split (Extracting a New Table)

Goal: Extract shipping_address_* columns from orders into a separate order_addresses table.

Phase 1: Create the new table

- changeSet:
    id: "20260623-001"
    author: abhay
    comment: "Phase 1/3: Create order_addresses table"
    changes:
      - createTable:
          tableName: order_addresses
          columns:
            - column: {name: id, type: BIGINT UNSIGNED, autoIncrement: true, constraints: {primaryKey: true, nullable: false}}
            - column: {name: order_id, type: BIGINT UNSIGNED, constraints: {nullable: false}}
            - column: {name: street, type: VARCHAR(255)}
            - column: {name: city, type: VARCHAR(100)}
            - column: {name: postal_code, type: VARCHAR(20)}
            - column: {name: country, type: CHAR(2)}
    rollback:
      - dropTable:
          tableName: order_addresses

- changeSet:
    id: "20260623-002"
    author: abhay
    comment: "Phase 1/3: Migrate existing shipping addresses"
    changes:
      - sql:
          sql: >
            INSERT INTO order_addresses (order_id, street, city, postal_code, country)
            SELECT id, shipping_street, shipping_city, shipping_postal_code, shipping_country
            FROM orders
            WHERE shipping_street IS NOT NULL;
    rollback:
      - delete:
          tableName: order_addresses

Phase 2: Transition — app reads new table, writes both

Deploy app code that:

  • Reads shipping address from order_addresses
  • Writes to both orders.shipping_* columns AND order_addresses on save
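The dual-write described above can be sketched with in-memory stand-ins for the two tables (illustrative only; a real implementation wraps both writes in a single database transaction):

```python
def save_shipping_address(orders: dict, order_addresses: list,
                          order_id: int, addr: dict) -> None:
    """Transition-phase write path: persist the address to BOTH the
    legacy orders.shipping_* columns and the new order_addresses
    table, so either read path sees consistent data."""
    # Legacy location: denormalized columns on the orders row.
    orders[order_id].update({
        "shipping_street": addr["street"],
        "shipping_city": addr["city"],
        "shipping_postal_code": addr["postal_code"],
        "shipping_country": addr["country"],
    })
    # New location: replace any existing row for this order.
    order_addresses[:] = [a for a in order_addresses
                          if a["order_id"] != order_id]
    order_addresses.append({"order_id": order_id, **addr})
```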

Phase 3: Contract — drop old columns from orders

- changeSet:
    id: "20260623-003"
    author: abhay
    comment: "Phase 3/3: Drop shipping_* columns from orders"
    changes:
      - dropColumn:
          tableName: orders
          columns:
            - column: {name: shipping_street}
            - column: {name: shipping_city}
            - column: {name: shipping_postal_code}
            - column: {name: shipping_country}
    rollback:
      - addColumn:
          tableName: orders
          columns:
            - column: {name: shipping_street, type: VARCHAR(255), constraints: {nullable: true}}
            - column: {name: shipping_city, type: VARCHAR(100), constraints: {nullable: true}}
            - column: {name: shipping_postal_code, type: VARCHAR(20), constraints: {nullable: true}}
            - column: {name: shipping_country, type: CHAR(2), constraints: {nullable: true}}
      - sql:
          sql: >
            UPDATE orders o
            JOIN order_addresses oa ON oa.order_id = o.id
            SET o.shipping_street = oa.street,
                o.shipping_city = oa.city,
                o.shipping_postal_code = oa.postal_code,
                o.shipping_country = oa.country;            

Expand-Contract Quick Reference

Change              | Expand                    | Transition                                 | Contract
--------------------|---------------------------|--------------------------------------------|---------------------------
Rename column       | Add new column (nullable) | Backfill + sync triggers + deploy new code | Drop old column + triggers
Add NOT NULL column | Add nullable              | Backfill in batches                        | Apply NOT NULL constraint
Change column type  | Add new typed column      | Copy + convert data                        | Drop old column
Split column        | Add target columns        | Parse + distribute data                    | Drop source column
Split table         | Create new table          | Copy data + write to both                  | Drop old columns
Rename table        | Create new table + FK     | Copy + sync writes                         | Drop old table

Common Mistakes

Skipping the transition phase: Deploying Phase 1 (expand) and Phase 3 (contract) in the same release forces a coordinated cutover — which is exactly what this pattern is designed to avoid. The transition phase, even if it lasts only one release cycle, is what makes zero-downtime possible.

Dropping old columns the same day they were renamed: Developers eager to clean up remove the old column in the same sprint as the rename. If a cache is serving old queries, an external system hasn’t been updated, or a rollback is needed, the old column is gone. Keep old structure for at least one full release cycle after the transition.

Not using preconditions on the contract phase: The contract changeset that drops the old column should have a precondition that verifies no rows depend on it (e.g., NULL counts, no data in the old column). This prevents accidental data loss if the transition was incomplete.


Best Practices

  • Never execute a breaking change in one step — any rename, drop, or type change that would break running application code must be expand-contracted
  • Deploy migrations independently from application code — Phase 1 goes out before the app deployment; Phase 3 goes out after
  • Use sync triggers for high-write columns during transition — they ensure writes via old app code are reflected in the new column
  • Set a calendar reminder for the contract phase — it is always tempting to leave the old structure “just in case”; set a date and clean it up
  • Precondition on every contract changeset — verify the old structure is truly unused before dropping it

What You’ve Learned

  • Backward compatibility is the test every migration must pass — if old app code still works after the migration, it is safe to deploy independently
  • Expand-contract has three phases: add new structure, transition app and data, remove old structure
  • Column renames require sync triggers during transition to keep old and new columns consistent for concurrent writes
  • Large table backfills use stored procedure loops with SLEEP() to avoid lock contention
  • Preconditions on the contract phase verify the old structure is unused before dropping it
  • The timeline between phases is at least one release cycle — long enough for all instances to update and all caches to expire

Next: Article 18 — Team Collaboration: Naming Conventions, Conflict Prevention, Git Workflow — the final article of Part 3, covering how multi-developer teams manage changelogs without stepping on each other.