
Automation Anywhere Automation 360


Remove nodes in cluster setup

  • Updated: 2021/10/13


    As an RPA platform administrator, you can remove one or more existing nodes from a multi-node Control Room cluster when any of the existing nodes needs to be replaced for an upgrade or to improve performance.

    Prerequisites

    Ensure the following:

    • All primary nodes and databases are backed up.
    • You have administrator/root privileges on the Control Room servers.

    You can remove one or more nodes from an existing cluster. This task describes the scenario of removing three existing nodes and replacing them with three new nodes in an existing cluster.

    Procedure

    1. Run the following command in the Linux shell to identify the master node.
      curl -k --user es_client:Automation123 https://172.31.46.2:47599/_cat/nodes
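In the `_cat/nodes` output, the elected master is marked with `*` in the master column (the second-to-last default column). A minimal sketch of picking it out programmatically, using a hypothetical captured output (the IPs and stats below are illustrative, not from a real cluster):

```shell
# Hypothetical `_cat/nodes` output; the column before the node name is the
# "master" column, where "*" marks the elected master and "-" the other nodes.
nodes_output='172.31.46.2 34 92 5 0.10 0.20 0.15 mdi * node-1
172.31.46.3 28 88 3 0.05 0.12 0.10 mdi - node-2
172.31.46.4 31 90 4 0.08 0.15 0.11 mdi - node-3'

# Print the IP of the row whose master column ($(NF-1)) is "*".
master_ip=$(printf '%s\n' "$nodes_output" | awk '$(NF-1) == "*" {print $1}')
echo "master node: $master_ip"
```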
    2. Log in as an administrator to any one of the original nodes (N2 or N3), not the master node (N1), and run the following command to stop all Control Room services.
      sudo systemctl stop controlroom*
      Similarly, shut down all Control Room services on the master node N1.
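The stop order in this step (non-master nodes first, master node N1 last) can be sketched as a loop. This is only a sketch: the node IPs, the `admin` ssh user, and the DRY_RUN guard are assumptions for illustration.

```shell
# Assumed IPs: N2/N3 are the non-master originals, N1 (listed last) is the master.
NODES="172.31.46.3 172.31.46.4 172.31.46.2"
DRY_RUN=1   # unset to actually run the stop command over ssh

for node in $NODES; do
  if [ -n "$DRY_RUN" ]; then
    echo "would run on $node: sudo systemctl stop controlroom*"
  else
    ssh "admin@$node" "sudo systemctl stop 'controlroom*'"
  fi
done
```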
    3. To verify the health of the cluster, run the following command from the command line.
      curl -k --user es_client:<es password> https://172.31.18.37/_cat/nodes
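Because `_cat/nodes` prints one line per node currently in the cluster, a simple line count is a quick health check: nodes whose services were stopped should no longer appear. A sketch against a hypothetical captured output (the IPs below are illustrative):

```shell
# Hypothetical `_cat/nodes` output after the stopped nodes dropped out of the
# cluster; each node still in the cluster prints as one line.
nodes_output='172.31.18.37 30 85 4 0.07 0.11 0.09 mdi * node-4
172.31.18.38 27 80 3 0.06 0.10 0.08 mdi - node-5
172.31.18.39 29 83 3 0.05 0.09 0.07 mdi - node-6'

# Count non-empty lines, one per live node.
remaining=$(printf '%s\n' "$nodes_output" | grep -c .)
echo "nodes in cluster: $remaining"
```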
    4. Edit the cluster.properties file located at: /opt/automationanywhere/automation360/config.
    5. Remove the IP addresses for the original three nodes.
      Note: Perform this action on all the nodes in the cluster. Do not change the order of the IP addresses when removing those of the original three nodes.
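As an illustration of steps 4 and 5, a before/after of the relevant line in cluster.properties might look like the following. The `cluster.nodes` key name and all IP values here are assumptions; use the key and addresses actually present in your file, and keep the remaining IPs in their original order.

```properties
# Before: original nodes (to be removed) followed by the new nodes
cluster.nodes=172.31.46.2,172.31.46.3,172.31.46.4,172.31.18.37,172.31.18.38,172.31.18.39

# After: only the new nodes remain, order unchanged
cluster.nodes=172.31.18.37,172.31.18.38,172.31.18.39
```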
    6. Edit the elasticsearch.yml file at: /opt/automationanywhere/automation360/elasticsearch/config.
    7. Remove the IP addresses for the original three nodes from the discovery.zen.ping.unicast.hosts attribute.
      Note: The discovery.zen.ping.unicast.hosts attribute must contain the IP addresses of the new nodes only, in the same order within the file on each node.
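A before/after of the edit in steps 6 and 7 might look like the following in elasticsearch.yml. The IP addresses are illustrative assumptions; substitute the addresses of your own original and new nodes.

```yaml
# Before: the discovery list still includes the original three nodes
discovery.zen.ping.unicast.hosts: ["172.31.46.2", "172.31.46.3", "172.31.46.4", "172.31.18.37", "172.31.18.38", "172.31.18.39"]

# After: new nodes only, in the same order in the file on every node
discovery.zen.ping.unicast.hosts: ["172.31.18.37", "172.31.18.38", "172.31.18.39"]
```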
    8. Run the following command in the Linux shell to identify the new master node.
      curl -k --user es_client:Automation123 https://172.31.46.2:47599/_cat/nodes
    9. Update the new master IP address in the cluster.initial_master_nodes attribute.
    10. Run the following command to start the services on each node.
      sudo systemctl start controlroom*
      You must start the services on the master node last.
    11. To verify the final cluster health, run the following command from the command line.
      curl -k --user es_client:<es password> https://172.31.18.37/_cat/nodes
      Note: You must wait until the replication status shows green. The replication status turns green when the cluster is fully synchronized.
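The final check can be scripted. This sketch parses the status out of a captured response; one assumption here is the use of Elasticsearch's `_cluster/health` endpoint, which reports the green/yellow/red status directly, whereas the step above uses `_cat/nodes`. The JSON body below is illustrative.

```shell
# Illustrative `_cluster/health` response body.
health_json='{"cluster_name":"automation360","status":"green","number_of_nodes":3}'

# Extract the value of the "status" field.
status=$(printf '%s' "$health_json" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"   # keep polling until this prints "green"
```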