We have introduced generative AI (GenAI)-infused features to help our customers realize productivity gains by building better, smarter automations. Customers can harness the power of GenAI with features such as Automation Co-Pilot for Business Users and Document Automation.

Automation Anywhere supports command packages for customers who use their own licenses for large language models (LLMs). Some Automation Anywhere products also contain embedded third-party LLMs. This document answers frequently asked questions about the data and security measures we have put in place for safely using the GenAI-infused features. It is important to understand the two categories of data as we refer to them in this document.

Customer Data
Refers to data submitted by customers through Automation Anywhere-hosted systems such as the Automation Success Platform. This data is required to operate and deliver the services. For example, user text prompts are treated as Customer Data.
  • Inputs: data submitted by Customer in the form of prompts or queries. At Customer's discretion, some Inputs may contain Customer Data. Other Inputs may solely contain instructions to Automation Anywhere services, such as where Customer is providing natural language Inputs to build a bot.
  • Outputs: data returned by the GenAI models in response to Inputs submitted by the Customer.
Usage Data
Refers to data generated from the use of the platform services and features. This is anonymized, aggregated data for metrics and other telemetry, such as standard package names and the sequencing of steps, gathered by Automation Anywhere to improve service and product performance.
How does Automation Anywhere enable customers to automate against their own LLM subscription?
Automation Anywhere supports customers bringing their own licenses for their preferred foundation models when using Automation Anywhere provided command packages, for example, with Automation Co-Pilot for Business Users. Foundation models hosted on hyperscaler platforms, including Microsoft Azure OpenAI, OpenAI, and Google Vertex AI, can be accessed through the Automation 360 native integrations in our command packages.

To confirm whether you can integrate with your chosen LLM, see the product documentation.
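For illustration only, the sketch below shows what the same bring-your-own-license pattern looks like at the API level, using the public openai Python SDK against a customer-owned Azure OpenAI deployment. The endpoint, deployment name, and environment variables are assumptions for this example, not Automation Anywhere configuration; within the product, the equivalent connection is configured in the command package rather than in code.

    # Minimal sketch: calling a customer-licensed Azure OpenAI deployment directly.
    # The endpoint, key variables, and deployment name below are illustrative assumptions.
    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
        api_key=os.environ["AZURE_OPENAI_API_KEY"],          # customer-owned key, never hard-coded
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "Summarize this invoice in two sentences."}],
    )
    print(response.choices[0].message.content)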

Which products use third-party AI models provided by Automation Anywhere?
We use LLMs from third-party providers in the following products:
  • Document Automation
  • Automator AI
What data will be used to train the Automation Anywhere provided models?
Automation Anywhere may use Inputs and Outputs to train or otherwise improve the AI Features, but only if such Inputs and Outputs have been (a) de-identified so that they do not identify Customer, its users, or any other person, and (b) aggregated with data across other customers.
What measures are in place to ensure that Customer Data is not utilized for training the large language model (LLM) libraries?
No Customer Data is used to train the models, nor is Customer Data stored outside customer-tenant production environments within the Automation Success Platform.

We have conducted vendor reviews to ensure that third-party LLMs do not use Customer Data to train their models.

What measures are in place to prevent unauthorized access or data breaches?
We have put in place security measures such as preventing Customer Data from being stored externally, as well as providing guardrails, redaction, or masking in certain products. The Automation Success Platform ensures that Customer Data is always protected by using industry-standard encryption for data at rest and in transit. The systems storing Customer Data are monitored 24x7 and access-controlled to ensure safe operations in compliance with SOC 1, SOC 2, ISO 27001:2022 (Information Security Management Systems, ISMS), ISO 27017:2015 (Information Security Controls for Cloud Services), ISO 27018:2019 (Protection of Personally Identifiable Information (PII) in Cloud Environments), and HITRUST. We have implemented appropriate security measures such as a web application firewall, encryption (AES-256 at rest, TLS in transit), and industry-standard authentication and authorization with role-based access control (RBAC). Our platform designs take into account protection against the threats laid out in the OWASP Top 10 for LLMs.
How does Automation Anywhere protect Customer Data when using GenAI?
Our products using GenAI are on the same platform as our current products and are covered by the same security certifications (SOC 1, SOC 2, ISO, and COBIT) and standards as our other products. A dedicated cloud security team is responsible for ensuring compliance and supporting audits conducted by external, professional auditors for our security certifications. Our security certifications and reports can be found on our Compliance Portal.
What are some best practices customers can leverage to benefit from GenAI products?
Here are some best practices to leverage when using GenAI-infused product features:
Know where your data is and how it is being used
When using your own LLM providers, use only vetted model providers with whom you have clarity on your data and how it is used. Ensure that no sensitive data is used to train shared models, and understand whether and where your data is stored and who has access to it.
Use guardrails on model inputs and outputs
GenAI models are sensitive to variations in the inputs they receive, and because their output is free-form text, that output can occasionally be unpredictable. By designing workflows that use GenAI models for approved tasks, with pre-designed prompts and output validation steps, you can ensure that the model works with a high degree of confidence in production environments. An explicit, controlled prompt for a task such as summarization gives your users higher-quality, more consistent output to use in their workflows. Ensure that users do not submit sensitive information in their prompts.
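As a rough illustration of this pattern (not an Automation Anywhere API), the sketch below wraps a summarization call with a pre-designed prompt template, a simple input redaction step, and an output validation step. The regular expression, the sentence-count check, and the call_model function are illustrative assumptions; real guardrails should be tailored to your data and policies.

    # Illustrative guardrail pattern: fixed prompt, input redaction, output validation.
    import re

    PROMPT_TEMPLATE = (
        "Summarize the following support ticket in no more than three sentences. "
        "Do not include names, email addresses, or account numbers.\n\nTicket:\n{ticket}"
    )

    EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redact(text: str) -> str:
        """Mask obvious sensitive tokens before the text is sent to the model."""
        return EMAIL_PATTERN.sub("[REDACTED_EMAIL]", text)

    def validate(output: str, max_sentences: int = 3) -> bool:
        """Reject free-form output that drifts outside the approved task."""
        sentences = [s for s in re.split(r"[.!?]", output) if s.strip()]
        return 0 < len(sentences) <= max_sentences and "[REDACTED_EMAIL]" not in output

    def summarize(ticket: str, call_model) -> str:
        """call_model is any function that sends a prompt to your approved LLM."""
        prompt = PROMPT_TEMPLATE.format(ticket=redact(ticket))
        output = call_model(prompt)
        if not validate(output):
            raise ValueError("Model output failed validation; route to human review.")
        return output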
Audit and monitor usage of AI within automations
GenAI models must be evaluated for accuracy and safety. Ensure that users access only approved models by leveraging the governance features available in AI Agent Studio. Use these features to securely connect to authorized models and to audit all interactions through the AI Governance logs.
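If you also build automations outside the platform, a basic audit trail for model calls can look like the sketch below. This is an illustrative pattern only, not the AI Governance logs in AI Agent Studio; the file location and the call_model function are assumptions, and the record stores hashes rather than raw prompt or output text.

    # Illustrative audit wrapper: log who called which model and when, without raw text.
    import hashlib
    import json
    import time
    from pathlib import Path

    AUDIT_LOG = Path("genai_audit.jsonl")  # hypothetical log location

    def audited_call(call_model, prompt: str, user: str, model_name: str) -> str:
        """Call the model and append an audit record with hashed content."""
        output = call_model(prompt)
        record = {
            "timestamp": time.time(),
            "user": user,
            "model": model_name,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with AUDIT_LOG.open("a") as f:
            f.write(json.dumps(record) + "\n")
        return output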

Keep a human-in-the-loop for generated content
When generating content such as personalized customer emails or patient summaries, it is critical to include a human validation step in your process before anything is shared externally. GenAI models can occasionally be unpredictable in the output they generate, especially when generating new content. Use notifications to push work requiring review to users in real time and to track overall workflow status.
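As a sketch only, a human approval gate can be modeled as a review queue that holds generated drafts until a person approves them. The ReviewQueue class and notify_reviewer function below are hypothetical examples, not a platform API; in practice you would use your automation platform's review and notification features.

    # Illustrative human-in-the-loop gate: nothing is sent externally until approved.
    from dataclasses import dataclass, field
    from typing import List

    def notify_reviewer(message: str) -> None:
        """Stand-in for a real notification channel (email, chat, task inbox)."""
        print(message)

    @dataclass
    class ReviewQueue:
        """Holds generated drafts until a person approves or rejects them."""
        pending: List[dict] = field(default_factory=list)

        def submit(self, draft: str, recipient: str) -> None:
            self.pending.append({"draft": draft, "recipient": recipient, "status": "pending"})
            notify_reviewer(f"New draft for {recipient} awaiting review.")

        def approve(self, index: int) -> dict:
            item = self.pending[index]
            item["status"] = "approved"
            return item  # only approved items are ever sent externally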