
ClustrixDB Xpand performs online schema changes without blocking reads or writes to a table. 

ClustrixDB Xpand uses MVCC (multi-version concurrency control) to avoid locking tables undergoing a schema change. New temporary container(s) are created, and while data is being copied, a temporary transaction log records any writes run against the old containers. Once all the original records have been copied and the logged transactions have been processed, the ALTER transaction completes, the new table becomes available, and the old table is discarded. All of these mechanics are performed automatically when you execute your ALTER SQL.

ClustrixDB Xpand maintains read consistency during the ALTER – read and write queries that run against the table before the ALTER commits see the original table schema. Reads and writes after the ALTER commits see the new schema. From the perspective of any single query or user, the ALTER is instantaneous.
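As a concrete illustration, an online schema change is issued as an ordinary ALTER statement; the table and column names below are hypothetical:

```sql
-- Hypothetical example: add a column to a table that is actively serving
-- reads and writes. The container copy and transaction-log replay described
-- above happen automatically behind this single statement.
sql> ALTER TABLE orders ADD COLUMN discount_pct DECIMAL(5,2) DEFAULT 0;
```

Queries that begin before this statement commits see the old schema; queries that begin after it commits see the new column.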


Best Practices

While ClustrixDB Xpand is designed to easily support online schema changes for tables of any size, the following best practices help minimize unforeseen impacts to your application. Before starting an ALTER, assess:


  • The current amount of space used.
  • The minimum amount of free space desired.
    • Plan to reserve at least 10% free space throughout your schema changes.
    • ClustrixDB Xpand will kill long-running transactions (including the ALTER) if free space falls under 5%.
  • Whether there is sufficient space to store a full copy of the table, including replicas. The size of a table can be estimated from system.table_sizes.
  • The estimated amount of time it will take to complete the ALTER.
  • The rate of new data growth.
  • Undo log size. The undo log for all INSERT, UPDATE, and DELETE operations will accumulate for the duration of the ALTER.
  • Binlog size and growth rate. During the ALTER, binlog trims will still run, but the disk space will not be freed until BigC is unpinned.
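To size the copy, the system.table_sizes table referenced above can be queried directly; filter the output for the table you plan to ALTER (exact column names may vary by version):

```sql
-- List per-table sizes, including replicas, to estimate how much
-- free space the temporary copy will require during the ALTER.
sql> SELECT * FROM system.table_sizes;
```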


In the following example, a column will be added to a 1 TB table that is part of a 10 TB cluster.

| Description of Space Requirement | TB Required |
| --- | --- |
| Our cluster has 10 TB of usable space, and the system reserves 5% for internal operations. If free space falls below this, writes can be blocked and the ALTER will roll back. | 0.5 TB |
| Our users have requested that we leave at least 10% of space free at all times to handle surges in load. | 1.0 TB |
| The new table will consume a bit over 1 TB due to the additional column. We could calculate rows times bytes, but let's assume 1.1 TB. Because this is a copy, we must reserve this space until the ALTER is complete. | 1.1 TB |
| The cluster receives approximately 1.5 TB per day of write transactions (including writes to the binlogs), and we estimate the ALTER will take 25 hours: (1.5 / 24) * 25 ≈ 1.6 TB. | 1.6 TB |
| Total free space required to safely begin the ALTER: | 4.2 TB |

To be conservative, ensure at least 4.5 TB of free space is available on the cluster before starting the ALTER.


To view the status of an in-progress ALTER, use the following SQL:

sql> select * from system.alter_progress;

Replication Concerns

Performing ALTER operations in replicated environments requires additional planning. Whenever possible, it is recommended to allow the ALTER to run over the replication stream.


Tables that are accessed with high concurrency or have more than N*10^7 rows may experience degraded cluster performance immediately after the ALTER completes. ClustrixDB Xpand stores statistics and query plans for each table, and these values are cached. ClustrixDB Xpand's periodic refresh and flush of that information may not occur quickly enough and may consequently impact performance. To avoid this, flush the caches immediately once the ALTER is complete by using the following commands:

sql> ALTER TABLE ...
shell> clx cmd 'mysql system -e "call pdcache_update()"'
shell> clx cmd 'mysql system -e "call qpc_flush()"'

Note that qpc_flush and pdcache_update are done on a per-node basis. Therefore, these should be executed using the clx cmd utility to ensure they are run on all nodes.