If you are currently using MySQL, you can use the following methods to validate your application's compatibility with Clustrix:
To validate write statements, set up Clustrix as a slave using statement-based replication (SBR) to ensure the Clustrix slave can handle your application's write queries.
SBR provides the simplest option for validating compatibility, but for a production deployment, row-based replication (RBR) may yield better performance.
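As a hedged sketch of the master-side setup only (the slave-side configuration follows your usual Clustrix replication procedure and is not shown here), statement-based logging can be enabled on the MySQL master either dynamically or in my.cnf:
-- On the MySQL master; requires the SUPER privilege and affects new sessions only.
SET GLOBAL binlog_format = 'STATEMENT';
-- Or persist the setting in my.cnf under [mysqld]:
--   binlog_format = STATEMENT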
This section describes considerations for testing OLTP performance, where the focus is on finding the maximum throughput of the system while maintaining a reasonable response time (query latency). Given an appropriate test harness and set of queries as described below, you can determine whether the TPS/latency figures of a given configuration are suitable for your application, or directly compare the performance of Clustrix and an alternative database using the same workload at varying concurrency.
Since Clustrix is optimized for high-concurrency OLTP workloads, a test that exhibits this capability must simulate many users accessing the database at once. A single-threaded test (for example, feeding the mysql client a list of queries) will fail to utilize the distributed resources of a Clustrix cluster and will obtain results no better than a single-instance database. A suitable performance test utilizing tens or hundreds of threads, however, will allow Clustrix to leverage its distributed architecture.
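As one hedged illustration (connection details, schema, and query file are placeholders, and mysqlslap is a generic MySQL utility rather than a Clustrix tool), even a quick concurrency sanity check should drive many client threads:
# Placeholders: host, user, schema, and query file; adjust concurrency to your test plan.
mysqlslap --host=<node-or-vip> --user=<user> --password \
  --create-schema=<database> --query=/path/to/queries.sql --delimiter=";" \
  --concurrency=64 --iterations=10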
It is important to have some kind of test harness to run load against the cluster. A suitable test harness for high-throughput transaction testing should have the following capabilities:
As with any performance exercise, it is important to identify and eliminate any bottlenecks. If your test harness is itself the bottleneck, you will not be able to differentiate the performance characteristics of the database under test. Make sure that your test tool is efficient enough that it can drive high load without itself requiring lots of CPU or memory; this is a particularly important consideration if you are building a test with a robust application server framework, where you may find you need to allocate a large number of servers to adequately drive the database backend. To avoid this expense and complexity, you may prefer to use one of the methods of query replay suggested below.
We have worked with customers who have successfully utilized existing load test frameworks such as YCSB and JMeter, both of which are designed to scale load. We have also assisted users in utilizing two tools from Clustrix, stest and aleqload, to build a model of their workload from queries extracted from their current production database.
stest and aleqload are tools from Clustrix for efficiently generating query workload at varying concurrency, such that even a single client thread can saturate a large cluster with query load. stest simply replays queries from a specified query set and allows you to specify the number of threads and either a duration or a number of iterations. The output of stest includes throughput and latency metrics. The query set is typically obtained by extraction from a tcpdump collected on the production database; Clustrix Sales can assist in extracting queries and executing them with stest.
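For reference, a capture on the production database host for later query extraction might look like the following (interface name and output path are assumptions; 3306 is the default MySQL port):
# Interface and output path are placeholders; 3306 is the default MySQL port.
tcpdump -i eth0 -s 0 -w /tmp/mysql_queries.pcap port 3306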
aleqload is a more sophisticated tool, similar to MySQL's randgen utility but with considerably more functionality. Rather than replaying a given set of queries, it takes a configuration file that specifies a grammar for generating queries, including random generation of values. The distribution of each query type can be fine-tuned to match the production workload. The tool can also be capped at a maximum QPS (queries per second) rate rather than always driving maximum load, which is useful for measuring system response to ad hoc queries while the system is under a baseline workload.
For more information on stest or aleqload, contact Clustrix Sales or Support.
As noted above, it is important that your test include sufficient concurrency (multiple threads) to engage all cluster resources and adequately model production load. Further, clients should distribute their thread connections evenly across all the nodes in a cluster. Some test harnesses (including stest and aleqload) allow specification of a pool of hostnames/IPs to facilitate this. Otherwise, a front-end load balancer can be used, as described in Load Balancing Clustrix with HAProxy and Configure EC2 Elastic Load Balancer for ClustrixDB AWS. You can confirm that load is evenly distributed during your test run by checking the output of clx stat, which shows TPS for each node, or with a query such as:
SELECT nodeid, COUNT(1) FROM system.sessions GROUP BY nodeid ORDER BY nodeid;
To optimally balance load across the cluster, consider using a thread count that is an exact multiple of the number of nodes (for example, 96 threads on a 3-node cluster gives 32 connections per node). However, at sufficiently high thread counts (greater than the total number of cores in the cluster), exactly even distribution of threads matters much less.
For an existing application, a dump and restore of the current production data set provides the simplest data set for testing. For very large data sets, it may be desirable to utilize a subset of production data, in which case mysqldump's --where flag can be used to extract a subset of production data, including the neat trick --where "1 LIMIT 1000".
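For example (database, table, and output file names are placeholders), the following dumps roughly the first 1000 rows of each listed table:
# Placeholder database, table, and output file names:
mysqldump --single-transaction --where="1 LIMIT 1000" production_db orders customers > subset.sql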
When creating a data set from scratch, be careful to avoid problems arising from synthetic data. One particular problem to avoid is repeating one or very few values in an indexed column: this leads to queries over the index processing either most rows or none at all, and the imbalanced distribution of values will additionally trigger the Rebalancer's redistribution mechanism during data population. Consider using a data generation tool that is specifically designed to create a meaningful distribution of values, such as spawner or benerator.
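A quick way to check for this kind of skew before running your test (table and column names here are hypothetical) is to look at the most frequent values in each indexed column:
-- Hypothetical table/column; if a single value accounts for most rows,
-- queries over that index will not behave realistically.
SELECT status, COUNT(*) AS cnt FROM orders GROUP BY status ORDER BY cnt DESC LIMIT 10;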
Please see Best Practices for Loading Data Onto Clustrix for guidelines to obtain optimal import speed and data distribution.
When testing OLTP performance with a test harness such as stest or aleqload, it is important to ensure that you are testing only OLTP queries. A common pitfall is to include a non-OLTP query, such as a reporting query, which may take several seconds or even minutes to execute. If queries are randomly selected by threads of the harness, many threads will end up executing this one query, resulting in very low TPS. Clustrix Insight's Current Workload Analysis tool is ideally suited for determining whether your test workload is dominated by a single query. You can also look for such a problem by running SHOW PROCESSLIST or selecting from system.sessions while running your test, to identify any long-running queries.
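For a quick manual check while the harness is running, for example:
-- Repeated sessions running the same statement with a large Time value indicate a dominating query.
SHOW PROCESSLIST;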
Please note that scraping queries from MySQL's slow query log will not produce a set of queries suitable for OLTP testing; see analytic query testing below.
If you discover queries which are dominating your workload, remove them from your test if they are clearly reporting queries which are expected to be expensive, or optimize them as described in Optimizing Query Performance.
For OLAP testing, customers are typically concerned with query response time for complex queries, rather than aggregate throughput. In this case, a simple framework which executes queries one at a time is reasonable, but consider the following guidelines:
Running the query a second time eliminates two confounding factors: caching and query compilation. Depending on your data set size and workload, most production queries may execute from cache, in which case cold-cache performance may not be terribly relevant. Clustrix caches the compiled program of each query, so subsequent executions need not be recompiled; for particularly complex queries (in particular, those with many-way joins), compilation time may comprise a significant portion of total execution time. Running the query a second time will give you a good idea of how it will execute in production, where the compiled query plan will typically be cached.
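As a simple illustration (tables, join, and predicate are placeholders), run the identical statement twice in the same mysql client session and compare the elapsed times the client reports:
-- Hypothetical tables/columns. First execution: includes query compilation and cold caches.
SELECT c.region, SUM(o.total) FROM orders o JOIN customers c ON o.customer_id = c.id GROUP BY c.region;
-- Second execution: the compiled plan is reused; latency is closer to steady state.
SELECT c.region, SUM(o.total) FROM orders o JOIN customers c ON o.customer_id = c.id GROUP BY c.region;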
This goes hand in hand with the comments above regarding use of a data set with a meaningful distribution of values: make sure that your queries match your data. One issue we have encountered is a test that used a dump several months old with queries taken from the current workload; many of the queries included date ranges that matched no data in the data set, so they returned no rows.
The indexes necessary for efficient execution of your queries are in most cases the same as needed for MySQL or other databases. However, given the distributed nature of ClustrixDB, the lack of an index can sometimes impose a more severe penalty than on a single-instance database; this is due to broadcasting among nodes, which is avoided when proper indexes (e.g. on the columns of a JOIN clause) are available. Customers sometimes use queries from their MySQL slow query log to test against ClustrixDB; in some cases we found that these queries were slow on MySQL due to lack of a proper index, and without the index, ClustrixDB was indeed slower than MySQL. With the proper index, MySQL's performance was improved, but ClustrixDB with the index was even faster.
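As a hedged sketch with hypothetical table and column names, adding the missing index on the join column and re-checking the plan with EXPLAIN is usually all that is required:
-- Hypothetical table and column names.
ALTER TABLE orders ADD INDEX idx_orders_customer_id (customer_id);
EXPLAIN SELECT o.id, c.name FROM orders o JOIN customers c ON o.customer_id = c.id WHERE c.region = 'EMEA';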
For more information on identifying and correcting queries which fail to use an index, see the material in Optimizing Query Performance.
When testing performance of your ClustrixDB cluster, it is important to bear in mind the limitations imposed by your platform: CPU, memory, storage I/O, etc. The good news here is that, as a scale-out solution, the limitations of a single server can be overcome by growing the cluster to more nodes. However, to ensure growing your cluster will improve performance of your workload, it is important to identify the limiting resource constraint.
Recognizing Platform Limits provides detailed information on identifying which of these resources is the constraint for your workload.
To validate the scaling capabilities of ClustrixDB, you may wish to repeat your tests, whether OLTP, analytics, or both, with different cluster sizes. When doing such scale testing, particularly with a smaller data set, it is important to take care that your data is evenly distributed for each iteration of cluster size. There are several approaches to this:
One approach is to re-slice tables (ALTER TABLE <table_name> SLICES=<number of nodes>) after growing the cluster.
Which of these strategies is appropriate depends largely on your data set size. Note that if you plan to use the Rebalancer, you can greatly increase the limits on Rebalancer activity to decrease the time it takes to move data to the new node(s), as described in Managing the Rebalancer.