An Xpand instance obtains statistics using statd (/opt/clustrix/bin/statd.py), a cluster-wide statistics aggregation and tracking mechanism. The resulting statistics drive the graphical displays in the XpandGUI Administration UI and may also be queried directly using SQL.
The statd script runs on every node, but only the instance on the node with the highest node ID is active; that instance collects statistics from internal system tables and populates the tables in the clustrix_statd database:
The frequency of statistics gathering is tunable via the statd_config table (see below), but we recommend leaving the settings at their default values.
In addition, the statd_metadata table contains the current and historical statd IDs and names, which can be used to join back to the statd_current and statd_history tables:
sql> SELECT name, value FROM clustrix_statd.statd_current NATURAL JOIN clustrix_statd.statd_metadata WHERE name LIKE '%nodes_in_quorum%' ORDER BY name;
sql> SELECT timestamp, value, name FROM clustrix_statd.statd_history NATURAL JOIN clustrix_statd.statd_metadata WHERE name LIKE 'clustrix.cpu.load_avg%' AND timestamp BETWEEN '2019-01-01 09:30:01' AND '2019-05-15 10:30:01' ORDER BY timestamp, name;
The query below identifies queries with a high average number of rows read between the specified timestamps; queries that read many rows may be candidates for optimization. (The qpc_history columns used here — exec_count, statement, and avg_rows_read — should be verified against your cluster's schema.)
sql> SELECT sum(exec_count) as exec_count, statement, avg(avg_rows_read) AS avg_rows_read FROM clustrix_statd.qpc_history WHERE timestamp BETWEEN '2019-01-01 09:30:01' AND '2019-05-15 10:30:01' GROUP BY statement ORDER BY avg_rows_read DESC LIMIT 10;
The query below looks at the tables and indexes with the most reads over the last 24 hours. (The hotness_history columns used here — reads, database, and relation — should be verified against your cluster's schema.)
sql> SELECT Sum(reads) AS reads, `database`, relation FROM clustrix_statd.hotness_history WHERE timestamp > NOW() - INTERVAL 24 HOUR GROUP BY `database`, relation ORDER BY reads DESC LIMIT 10;
There are numerous statistics collected for your cluster. Use the following query to determine all available statd statistics:
sql> SELECT name FROM clustrix_statd.statd_metadata ORDER BY name;
The following tables highlight useful statistics. Additional stats can be found at statd Metrics.
|The statistics for queries per second (qps) and transactions per second (tps) will be the same if explicit transactions are not used.|
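For example, qps and tps can be compared directly from the history table. (This is a sketch; the exact metric names on your cluster should be confirmed via statd_metadata — the LIKE patterns below simply match names ending in .qps or .tps.)
sql> SELECT timestamp, name, value FROM clustrix_statd.statd_history NATURAL JOIN clustrix_statd.statd_metadata WHERE name LIKE '%.qps' OR name LIKE '%.tps' ORDER BY timestamp, name;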
Indicates the average CPU load across all CPU cores per node, as well as the load of the busiest and least busy CPU cores in the cluster.
Note that core 0 is reserved for certain activities and thus will often show lower CPU utilization relative to other cores, which will in turn affect these averages. To see the other types of CPU stats available:
sql> SELECT name FROM clustrix_statd.statd_metadata WHERE name LIKE 'clustrix.cpu%' ORDER BY name;
|Com_<operation>||Execution counter for each operation.|
|Com_<operation>_us||Total time spent executing each operation, in microseconds.|
Taking the delta of Com_<operation>_us and dividing it by the delta of Com_<operation> over the same interval thus yields the average response time for that operation during that interval. To see the Com stats available for operations:
sql> SELECT name FROM clustrix_statd.statd_metadata WHERE name LIKE '%Com\_%' ORDER BY name;
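As a sketch of that calculation for a single operation: this assumes counters named like Com_select and Com_select_us exist on your cluster and increase monotonically between samples, so MIN and MAX pick out the first and last samples in the window.
sql> SELECT (MAX(CASE WHEN name LIKE '%Com\_select\_us' THEN value END) - MIN(CASE WHEN name LIKE '%Com\_select\_us' THEN value END)) / (MAX(CASE WHEN name LIKE '%Com\_select' THEN value END) - MIN(CASE WHEN name LIKE '%Com\_select' THEN value END)) AS avg_select_us FROM clustrix_statd.statd_history NATURAL JOIN clustrix_statd.statd_metadata WHERE (name LIKE '%Com\_select' OR name LIKE '%Com\_select\_us') AND timestamp BETWEEN '2019-05-15 10:00:00' AND '2019-05-15 10:30:00';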
|clustrix.stats.executing_sessions||Number of sessions currently executing queries.|
|clustrix.replication.slave.relay_log_bytes.<slave name>||Size of the slave relay log in bytes.|
|clustrix.replication.slave.seconds_behind_master.<slave name>||Seconds behind master.|
|clustrix.cluster.nodes_in_quorum||Number of nodes participating in the current group (quorum).|
|clustrix.cluster.total_nodes||All the nodes the cluster knows about, including down/unavailable.|
|Number of Rebalancer actions in progress for a given type.|
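For example, replication lag for a slave can be tracked over time with the history table. (The trailing '%' matches any slave name; substitute a specific slave name to narrow the result.)
sql> SELECT timestamp, name, value FROM clustrix_statd.statd_history NATURAL JOIN clustrix_statd.statd_metadata WHERE name LIKE 'clustrix.replication.slave.seconds_behind_master.%' ORDER BY timestamp, name;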
The following statistics are reported as percentages and are thus in the range of 0-100:
|clustrix.capacity.disks.avg_used_percent||Average percentage of permanent space allocated across the disks in the system.|
|clustrix.capacity.disks.max_used_percent||Highest percentage of permanent space allocated across all the disks in the system.|
|clustrix.capacity.disks.min_used_percent||Lowest percentage of permanent space allocated across all the disks in the system.|
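A quick check of the current values of these capacity statistics (the LIKE pattern matches the used_percent names listed above):
sql> SELECT name, value FROM clustrix_statd.statd_current NATURAL JOIN clustrix_statd.statd_metadata WHERE name LIKE 'clustrix.capacity.disks.%used\_percent' ORDER BY name;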
Increasing the frequency of statd collection can impact the performance of your cluster.
The frequency of statistics collection is configurable within the statd_config table. Contact Xpand Support for assistance in modifying these values.
sql> select * from statd_config order by name;
+---------------------------------------+-------+
| name                                  | value |
+---------------------------------------+-------+
| qpc_current_interval_s                |    60 |
| qpc_history_interval_s                |   300 |
| qpc_history_top_n                     |   100 |
| qpc_rollup_interval_s                 |  3600 |
| repartition_history_interval_s        | 43200 |
| SpaceUsedBreakdown_current_interval_s |   900 |
| stats_current_interval_s              |    30 |
| stats_history_interval_s              |   300 |
+---------------------------------------+-------+
8 rows in set (0.00 sec)
The hotness_history, qpc_history, and statd_history tables are partitioned, and data older than 7 days is dropped. If you need longer data retention, set up a cron job to extract the data periodically and store it elsewhere.
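One possible shape for such a cron job — a sketch only; the schedule, mysql connection options, and archive path below are placeholders, not defaults:
# Daily at 02:30, dump the previous day of statd_history to a dated TSV file
30 2 * * * mysql -B -e "SELECT * FROM clustrix_statd.statd_history WHERE timestamp >= NOW() - INTERVAL 1 DAY" > /path/to/archive/statd_history_$(date +\%F).tsv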
See also information for using: