Automation Anywhere provided models
- Updated: 2025/12/03
This topic describes the models provided directly by Automation Anywhere.
Automation Anywhere offers direct access to AI models from within the platform. This approach simplifies the process by providing the models directly, eliminating the need for you to manage your own licenses.
- Model selection
-
- When creating a model connection, you select Automation Anywhere as the vendor.
- Two standard models are currently available: Claude Sonnet 3.5 and GPT-4o.
- You need to select a region to route traffic, with options for the United States and Europe.
Note: While Automation Anywhere provides regional routing options, connections are not restricted from other geographies. It is your responsibility to choose the most appropriate region based on your performance and data processing requirements.
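The selections above (vendor, model, region) can be sketched as a small validation model. This is an illustrative sketch only: the class and field names are assumptions for clarity and do not reflect an actual Automation Anywhere API.

```python
from dataclasses import dataclass

# Assumed constants reflecting the options described above;
# not an official Automation Anywhere enumeration.
SUPPORTED_MODELS = {"Claude Sonnet 3.5", "GPT-4o"}
SUPPORTED_REGIONS = {"United States", "Europe"}

@dataclass(frozen=True)
class ModelConnection:
    """Hypothetical representation of a model connection."""
    vendor: str
    model: str
    region: str

    def __post_init__(self):
        # When creating a model connection, the vendor is Automation Anywhere.
        if self.vendor != "Automation Anywhere":
            raise ValueError("vendor must be 'Automation Anywhere'")
        if self.model not in SUPPORTED_MODELS:
            raise ValueError(f"unsupported model: {self.model}")
        if self.region not in SUPPORTED_REGIONS:
            raise ValueError(f"unsupported region: {self.region}")

# Example: route traffic through the Europe region using GPT-4o.
conn = ModelConnection("Automation Anywhere", "GPT-4o", "Europe")
print(conn.region)
```

The frozen dataclass mirrors the idea that vendor, model, and region are fixed at connection-creation time rather than changed per call.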
- How it works
-
- Automation Anywhere handles the licensing and hosting of the models in the background; there are no additional services or LLM provider accounts for you to sign up for. The result is a simple, unified experience.
- Usage is tracked through a credit system, where each call to the model consumes a certain number of credits.
Note: When Automation AI Credits are exhausted, automations and agents that rely on LLM calls will fail to execute those specific model-based steps until additional credits are available. Automation Anywhere provides a grace policy that allows continued usage for up to 10% additional credits and up to 15 days beyond the purchased limit, preventing interruptions while credits are renewed or topped up. For more information, refer to Using Automation credits.
- This system is designed to provide a simple, direct connection to powerful AI models for your automations.
- The provided LLM capacity is available exclusively within the Automation 360 platform and can only be utilized when building AI Agents or AI Skills. It cannot be accessed or consumed externally.
- LLMs are deployed as stateless instances, meaning input data is processed but never stored.
- Endpoints are deployed by Automation Anywhere IT within Automation Anywhere cloud environments.
- Instances are regional (US and EU), ensuring tenant traffic remains within the chosen geography.
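The credit grace policy described above (up to 10% additional credits and up to 15 days beyond the purchased limit) can be sketched as a simple check. The function and parameter names are assumptions for illustration, not an actual Automation Anywhere interface.

```python
from datetime import date, timedelta
from typing import Optional

GRACE_CREDIT_RATIO = 0.10  # up to 10% additional credits
GRACE_PERIOD_DAYS = 15     # up to 15 days beyond exhaustion

def llm_call_allowed(purchased: int, consumed: int,
                     exhausted_on: Optional[date], today: date) -> bool:
    """Hypothetical check: may a model-based step still execute?"""
    if consumed < purchased:
        return True  # credits remain; no grace needed
    # Credits exhausted: both grace limits must still hold.
    within_credits = consumed < purchased * (1 + GRACE_CREDIT_RATIO)
    within_days = (exhausted_on is not None and
                   today <= exhausted_on + timedelta(days=GRACE_PERIOD_DAYS))
    return within_credits and within_days

# Example: 1,000 purchased credits, exhausted on June 1, 2025.
print(llm_call_allowed(1000, 1050, date(2025, 6, 1), date(2025, 6, 10)))  # True: within both limits
print(llm_call_allowed(1000, 1100, date(2025, 6, 1), date(2025, 6, 10)))  # False: 10% credit cap reached
print(llm_call_allowed(1000, 1050, date(2025, 6, 1), date(2025, 6, 20)))  # False: 15-day window passed
```

Once either grace limit is exceeded, model-based steps fail until additional credits are purchased.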
- Why it matters
-
- Simplified access: You no longer need to navigate the complex process of securing your own licenses, which saves time and effort.
- Streamlined integration: The models are natively integrated, making it easy to test and use them with existing AI Agents, AI Skills, and Task Bots.
- Data control: Regional selection allows you to manage data sovereignty and compliance.
Data & privacy risks evaluated
Automation Anywhere has evaluated key data and privacy risks as part of its deployment of large language models (LLMs). These evaluations enable you to adopt the provided models (Claude Sonnet 3.5 and GPT-4o) with confidence in their compliance and data handling.
- Customer data being sent to LLMs — where is it stored?
- Data is only processed by the LLM; nothing is stored.
- Data residency — where is data stored?
- Data is not stored. LLM endpoints are part of Automation Anywhere cloud subscriptions. Residency is limited to the deployment region.
- Compliance with EU & US data residency guidelines?
- Yes. LLMs are deployed regionally (US/EU). Data in transit remains within the regional boundary.
- Risk of cross-customer data leakage?
- Evaluation and stress tests confirm no mixing or caching of data between tenants.
- Prompt injection risk — can data leak?
- Evaluations against prompt injection payloads found no data leakage. Stateless design ensures nothing is cached or exposed.