Remove nodes from a cluster setup
- Updated: 2026/02/05
As an RPA platform administrator, you can remove nodes from a Control Room cluster setup when you want to replace or update them for improved performance.
Prerequisites
Ensure the following:
- Follow the decommissioning sequence of the nodes
Whenever you remove nodes from the cluster, perform the decommissioning process sequentially, handling one node at a time.
Note: When removing or downgrading nodes within a cluster, ensure that the resulting cluster always retains an odd number of active nodes. This practice helps maintain quorum and cluster stability. For example, in a 7-node cluster, if nodes must be removed or downgraded, remove 2 nodes so that 5 remain active. The cluster then continues to operate with an odd node count, which is essential for maintaining quorum.
- For data to be completely available, at least n/2 + 1 nodes must be up. We recommend at least 3 nodes for High Availability (HA) to avoid a split-brain scenario, which can result in data loss. For a three-node cluster, at least two nodes must be available.
- When removing a node from the cluster, always verify the placement of the primary and replica shards for each index. Avoid removing any node that holds both the primary and replica shards of the same index, because that index would then move into a RED status.
- Verify that all the primary nodes and databases have backups.
- Ensure that you have administrator or root privileges on the Control Room servers.
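The availability and odd-node-count rules above can be sketched as a quick pre-check. This is a minimal illustration only; the function names are ours, not part of the product:

```python
def quorum(node_count: int) -> int:
    """Minimum nodes that must be up for full data availability: floor(n/2) + 1."""
    return node_count // 2 + 1

def removal_is_valid(total_nodes: int, nodes_removed: int) -> bool:
    """Check the doc's guidance: after removal, the cluster should keep an
    odd number of active nodes and at least 3 nodes for HA."""
    remaining = total_nodes - nodes_removed
    return remaining % 2 == 1 and remaining >= 3
```

For instance, `quorum(3)` returns 2, matching the three-node rule above, and `removal_is_valid(7, 2)` confirms the 7-node example (5 nodes remain).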
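The shard-placement check can be automated against the output of an Elasticsearch-compatible `_cat/shards` endpoint (each row lists index, shard, p/r, state, and the hosting node). Whether your deployment exposes this endpoint is an assumption; verify it for your environment. A hedged sketch:

```python
from collections import defaultdict

def indices_at_risk(cat_shards_output: str, node_to_remove: str) -> list:
    """Return indices whose only started shard copies sit on the node being
    removed; deleting that node would turn those indices RED."""
    # Map (index, shard) -> set of nodes holding a started copy of that shard.
    copies = defaultdict(set)
    for line in cat_shards_output.strip().splitlines():
        fields = line.split()
        index, shard, state = fields[0], fields[1], fields[3]
        node = fields[-1]
        if state == "STARTED":
            copies[(index, shard)].add(node)
    at_risk = set()
    for (index, _shard), nodes in copies.items():
        if nodes == {node_to_remove}:
            at_risk.add(index)
    return sorted(at_risk)
```

Any index this function reports must have its shards relocated before the node is decommissioned.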
Perform the following steps to remove three nodes (N1, N2, and N3) from a cluster that has six nodes (N1, N2, N3, N4, N5, and N6).