Congratulations on the growth that requires expanding your ClustrixDB cluster!
Your ClustrixDB is licensed for a maximum number of cores per node as well as a maximum number of nodes for the cluster. If expanding your cluster will exceed your current license capabilities, please contact Clustrix Sales to expand your license agreement. The flex_clone operation below will fail with a "CPU Count Mismatch" error if your license needs expansion.
Before you can add a new node to a ClustrixDB cluster, the node will first need to have the ClustrixDB software installed and configured on it, which can be done with the flex_clone script. Run this script from a bash command line while connected to one of the nodes of your cluster. The IP specified should be that of the node you are adding. You will need to run this process once for each new node being added.
|Run the flex_clone script from any node of your cluster to clone your installation and configuration|
shell> /opt/clustrix/bin/flex_clone.sh 'new_node_ip'
The final message of this process indicates either a successful installation or an error describing what went wrong. Contact Clustrix Support for help resolving any errors you encounter during this process.
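Because flex_clone must be run once per new node, a simple loop can drive it. Below is a minimal dry-run sketch using two hypothetical IPs (10.2.13.101 and 10.2.13.102); it only prints the commands, so remove the echo to execute them.

```shell
# Dry-run: print the flex_clone command for each new node.
# The IPs below are hypothetical placeholders; substitute your own.
NEW_NODES="10.2.13.101 10.2.13.102"
for ip in $NEW_NODES; do
    echo /opt/clustrix/bin/flex_clone.sh "'$ip'"
done
```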
When adding nodes to your cluster, you may need to open ports. For a listing of all ports required by ClustrixDB, including those used for multiport, please see Network Security with ClustrixDB.
Once each new node has been prepared using the step above, you are ready to expand your cluster's capacity. Connect to one of the existing nodes of your cluster and run the following from a SQL prompt, specifying the IP of each new node being added.
|Add the cloned nodes to your cluster|
sql> ALTER CLUSTER ADD 'ip' [, 'ip'] ...;
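When adding several nodes at once, the statement can be assembled from a list of IPs. Here is a sketch using hypothetical addresses; it only prints the statement, which you would then run at a SQL prompt.

```shell
# Assemble an ALTER CLUSTER ADD statement from a space-separated IP list.
# The IPs below are hypothetical placeholders.
NEW_NODES="10.2.13.101 10.2.13.102"
stmt="ALTER CLUSTER ADD"
sep=" "
for ip in $NEW_NODES; do
    stmt="${stmt}${sep}'${ip}'"
    sep=", "
done
echo "$stmt;"
```

This prints `ALTER CLUSTER ADD '10.2.13.101', '10.2.13.102';`, matching the syntax shown above.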
ClustrixDB will interrupt your service to issue a Group Change only when all nodes are ready to join the cluster.
Clustrix recommends running this command during non-peak periods or during a scheduled maintenance window.
There will be a short disruption of service while the node(s) are being added.
You may also notice a slight degradation of performance while the Rebalancer moves data to the new node(s).
There are multiple ways to verify that a node was successfully added.
a) Run this query from a SQL prompt. The new node(s) will be added using the next consecutive unused node number(s), so the new node(s) will appear at the end of the resulting display.
sql> SELECT * FROM system.nodeinfo ORDER BY nodeid;
b) You can also use the CLX Command-Line Administration Tool.
|View your cluster's status by providing the following at a bash prompt|
shell> /opt/clustrix/bin/clx status
You should see that all nodes appear OK on the display. You may also notice that the data distribution among your nodes is not yet balanced. Be patient; the Rebalancer will even it out shortly.
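To script this check, you can capture the nodeinfo listing and search it for the new node's IP. The sketch below runs against a hypothetical captured listing; in practice, capture the output of the SELECT shown above with your SQL client.

```shell
# Hypothetical captured output of: SELECT * FROM system.nodeinfo ORDER BY nodeid;
NODEINFO_OUTPUT="1 10.2.13.64
2 10.2.13.65
3 10.2.13.101"
NEW_IP="10.2.13.101"   # the node you just added (hypothetical)
if printf '%s\n' "$NODEINFO_OUTPUT" | grep -qw "$NEW_IP"; then
    echo "node $NEW_IP joined"
else
    echo "node $NEW_IP missing; see Errors during Flex Up"
fi
```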
If the node you are trying to add does not appear in the list above, see the section below on Errors during Flex Up.
Your new node(s) have been successfully added to your ClustrixDB cluster but they do not yet contain data. The Rebalancer will now automatically work in the background to move data onto the new node(s). To monitor this process, refer to the instructions regarding Managing the Rebalancer. Your cluster is fully functional and able to be used during this process.
As part of adding nodes to your cluster, ClustrixDB performs some checks to ensure the nodes have the same configuration. This section describes errors that can be encountered with ALTER CLUSTER ADD and how to resolve those issues.
The following are errors you may encounter at the SQL prompt:
|List of nodes with pending invitations|
sql> SELECT * FROM system.pending_invites;
The cluster periodically attempts to send invitations to nodes in system.pending_invites. For each invitation that is sent, there will be entries in clustrix.log:
sending invitation response(no error) to "10.2.13.68:24378"
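To confirm invitations are going out, you can filter the log for these lines. The sketch below runs over a hypothetical two-line log excerpt; in practice, point grep at your clustrix.log file.

```shell
# Count invitation lines in a captured log excerpt.
# LOG is a hypothetical sample; use your actual clustrix.log in practice.
LOG='sending invitation response(no error) to "10.2.13.68:24378"
other log line'
printf '%s\n' "$LOG" | grep -c "sending invitation"
```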
Note: If the same node appears in both system.pending_invites and system.problem_nodes, you may want to remove the node from system.pending_invites before resolving the issue with system.problem_nodes. This prevents the node from being automatically added to the cluster, which would cause a premature group change as soon as the problem is resolved. Instead, you may prefer to complete the node addition during off-peak hours.
|Remove a pending node addition from PENDING_INVITES|
sql> DELETE FROM system.pending_invites;
|Query system.problem_nodes to see why a node could not be added|
sql> SELECT * FROM system.problem_nodes;
Here is the list of reasons provided in system.problem_nodes and how to resolve those issues:
|Software binaries differ|Make sure all nodes are running the same version of ClustrixDB.|
|Multiport settings mismatched|See Modifying Startup Configuration Options for instructions on how to disable Multiport. Network Security with ClustrixDB contains information on the ports required when Multiport is enabled.|