
Convert single-node deployment to multi-node deployment

  • Updated: 2022/01/24
    • Automation 360 v.x
    • Install
    • RPA Workspace

You can convert your single-node deployment into a multi-node deployment by editing the configuration files and restoring the data in the repository for the single-node setup.

Prerequisites

The single-node deployment can run on any of several infrastructures: local machines, private data centers, or cloud providers.

To convert a single-node deployment running on AWS or another supported environment into a multi-node deployment, perform the following steps.

Procedure

  1. In the Task Manager, stop all Automation Anywhere services.
  2. Stop the Control Room instance.
  3. Create an Amazon Machine Image (AMI) from the Control Room instance.

    For information about how to create an AMI in AWS, see Create an AMI from an Amazon EC2 Instance.

  4. Create a new instance using the AMI created in the previous step.
  5. Edit the configuration files for the database server, Ignite cluster, and Elasticsearch so that the nodes form clusters.
    In a standard installation, the files are located in C:\Program Files\Automation Anywhere\Enterprise\config
    1. Edit the database server URL to point to the intended database server in boot.db.properties.
      Do not change the URL if the original server already refers to a non-localhost address.
    2. Edit the following property in the cluster.properties file:
      Append the current server IP to the existing list in ignite.discovery.static.ips=<existing list of ips>, <current server ip>
    3. Edit the following properties in the elasticsearch.yaml file:
      • Add the current server address in node.name: "<local-ip>"
      • Add the current server address in network.host: "<local-ip>"
      • Leave the existing values intact and append the IP of the current server in discovery.zen.ping.unicast.hosts: ["ip1","<local-ip>"]
      • Leave the existing values intact in cluster.initial_master_nodes: ["<master-ip>"]
  6. Restore and mount the repository at its original path, using a snapshot taken at the same time as the selected Control Room snapshot.
  7. Update the configuration tables to ensure TCP visibility between the nodes.
  8. Start the services on the replicated node and wait a few minutes for the cluster to form.
  9. Log in to the Control Room to verify whether the bots are available and the repository structure is intact.
  10. Check Git integration for standard installations.
    If the installation has an external Git repository configured, verify the integration by performing a test check-in and confirming the audit logs.
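
For reference, the cluster-related edits in step 5 might look like the following on a three-node cluster. The IP addresses (10.0.0.1 through 10.0.0.3) are placeholders, not values from this document; substitute the actual addresses of your nodes, and keep any existing entries intact when appending:

```
# cluster.properties — append this node's IP to the Ignite discovery list
ignite.discovery.static.ips=10.0.0.1, 10.0.0.2, 10.0.0.3

# elasticsearch.yaml — identify this node and join the existing cluster
node.name: "10.0.0.3"
network.host: "10.0.0.3"
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
cluster.initial_master_nodes: ["10.0.0.1"]
```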
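Step 7 requires TCP visibility between the nodes. A minimal sketch for probing connectivity from one node to another is shown below; the ports listed are assumptions based on common defaults (47500 for Ignite discovery, 9300 for Elasticsearch transport, 1433 for SQL Server), so verify them against your own configuration and firewall rules:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default ports; adjust to match your deployment.
CLUSTER_PORTS = [47500, 9300, 1433]

def check_node(host: str) -> dict:
    """Probe every cluster port on the given host and report reachability."""
    return {port: check_port(host, port) for port in CLUSTER_PORTS}
```

Run `check_node("<other-node-ip>")` from each node against every other node; any port reported as unreachable indicates a firewall or security-group rule that must be opened before the cluster can form.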