Image-based replication

The replication process is similar across environments and cloud providers. Snapshots are created and stored on a schedule, and the snapshot interval is based on how much potential data loss the customer can tolerate.

Prerequisites

We recommend a replication schedule of at least one snapshot per day.

The following describes the procedure for AWS as an example cloud provider.

Procedure

As the first step in any image-based DR setup, create snapshots at fixed intervals. In the event of a disaster, you revert to the latest good image or snapshot, and the system is back up and fully functional in a short time, although with some data loss and a brief downtime.

  1. Decide on the snapshot interval based on the acceptable potential data loss.
  2. Stop the Automation Anywhere services on the server being imaged.
  3. On AWS, create an AMI using the standard image creation steps (see the sketch after this list).
  4. After the image is created, start the Automation Anywhere services.
  5. Run the repository backup mechanism on the same schedule.
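The exact commands depend on your tooling. As a minimal sketch of steps 2 through 4, assuming the AWS CLI is run from PowerShell on the server, the flow might look like the following; the service display-name pattern, instance ID, and image name are illustrative placeholders, not values from this procedure.

  # Stop the Automation Anywhere services before imaging (the display-name
  # pattern is an assumption; adjust it to match your installation).
  Get-Service -DisplayName "Automation Anywhere*" | Stop-Service

  # Create the AMI. Without --no-reboot, AWS reboots the instance to
  # guarantee a file-system-consistent image. IDs and names are placeholders.
  aws ec2 create-image `
      --instance-id i-0123456789abcdef0 `
      --name "control-room-dr-$(Get-Date -Format yyyy-MM-dd)" `
      --description "Scheduled DR image"

  # Restart the services once image creation has been initiated.
  Get-Service -DisplayName "Automation Anywhere*" | Start-Service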

The subsequent steps describe restoring data from an image.

  1. Spin up a new instance using the previously created AMI (see the sketch after this step).
    If the original setup is spread across availability zones, do the same in all relevant availability zones.
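A minimal sketch of launching the replacement instance with the AWS CLI follows; the AMI ID, instance type, subnet (which determines the availability zone), security group, and tag value are all illustrative placeholders.

  # Launch a replacement instance from the DR AMI. All IDs are placeholders;
  # choose a subnet in the availability zone you are recovering into.
  aws ec2 run-instances `
      --image-id ami-0123456789abcdef0 `
      --instance-type m5.xlarge `
      --subnet-id subnet-0123456789abcdef0 `
      --security-group-ids sg-0123456789abcdef0 `
      --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=control-room-dr}]"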

The following steps apply to each instance being recovered.

  1. In the configuration directory, edit the configuration files for the database server, the Ignite cluster, and Elasticsearch so that the nodes can form their clusters (illustrative excerpts follow this procedure).
    On a standard installation, the files are located in: C:\Program Files\Automation Anywhere\Enterprise\config
    1. In boot.db.properties, edit the database server URL to point to the intended database server.
      Do not change the URL if the original server already refers to a non-localhost address.
    2. Edit the following property in the cluster.properties file:
      Append the current server IP to the existing list in: ignite.discovery.static.ips=<existing list of ips>,<current server ip>
    3. Edit the following properties in the elasticsearch.yaml file:
      • Set the current server address in: node.name: "<local-ip>"
      • Set the current server address in: network.host: "<local-ip>"
      • Leave the existing values intact and append the IP of the current server in: discovery.zen.ping.unicast.hosts: ["ip1","<local-ip>"]
      • Leave the existing values intact in: cluster.initial_master_nodes: ["<master-ip>"]
  2. Optional: If the repository is on a mounted volume, restore it from the snapshot taken at the same time as the selected Control Room snapshot and mount it to the respective path.
  3. Update configuration tables.
  4. Ensure Transmission Control Protocol (TCP) visibility between the nodes (a sample connectivity check follows this procedure).
  5. Start the services on the replicated node and wait a couple of minutes for the clustering to establish.
  6. Verify the following:
    • Log in and check that the bots are listed and visible.
    • If the installation has external Git configured, verify it using functions such as check-in.
    • Verify the audit logs.
  7. If any host names or IPs have changed, update the load balancer tier or DNS with the corresponding current values.
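For reference, here is a minimal sketch of the cluster-related edits from step 1, assuming a recovered node at 10.0.1.12 joining an existing node at 10.0.1.11; all addresses are illustrative, not values from this procedure.

  # cluster.properties: append the recovered node's IP to the existing list
  ignite.discovery.static.ips=10.0.1.11,10.0.1.12

  # elasticsearch.yaml: set the local node identity, keep the existing hosts
  node.name: "10.0.1.12"
  network.host: "10.0.1.12"
  discovery.zen.ping.unicast.hosts: ["10.0.1.11","10.0.1.12"]
  cluster.initial_master_nodes: ["10.0.1.11"]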
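To verify TCP visibility between the nodes (step 4), a quick PowerShell check might look like the following; the peer address is a placeholder, and the ports shown are the common defaults for Ignite discovery (47500) and the Elasticsearch transport (9300), which may differ in your installation.

  # Check TCP reachability of a peer node on the assumed cluster ports.
  foreach ($port in 47500, 9300) {
      Test-NetConnection -ComputerName 10.0.1.11 -Port $port |
          Select-Object ComputerName, RemotePort, TcpTestSucceeded
  }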