Automation Anywhere Enterprise Knowledge On-Premises deployment options
- Updated: 2025/09/03
Contact Automation Anywhere customer support to install and deploy Automation Anywhere Enterprise Knowledge on-premises, providing at least one week's advance notice to schedule your installation.
Deployment options & prerequisites
- Small Deployment (approximately 100 concurrent users):
- Single VM: 16 cores, 64GB RAM, 1 TB SSD
- Medium Deployment (approximately 500 concurrent users):
- Larger machine: 32 cores, 128GB RAM, 2 TB SSD
- Large Deployment (approximately 1000+ concurrent users):
- Deployed across containers for scalability
- Fully distributed architecture
Automation Anywhere (AAI) Enterprise Knowledge is a mature platform that goes beyond the simple Retrieval-Augmented Generation (RAG) implementations found with hyperscalers. The On-Premises offering provides the full platform for deployment and use within a customer's security perimeter, under their control. While Proof-of-Concept (POC) deployments are currently done with Docker, full-scale production deployments can be implemented on EKS, AKS, GKE, or self-managed Kubernetes.
On-Premises footprint
- The server requirements specifications are based on Docker images and PostgreSQL, configured as Supabase for the vector index store.
- On-Premises typically recreates a cloud architecture on a single machine.
- Single-box deployments are suitable for small and medium deployments. Scaling to a fully distributed deployment involves distributing components across multiple pod service machines.
- A base fully distributed deployment starts with 8 machines (supporting approximately 100 concurrent users) and can scale to 40+ machines depending on concurrent usage.
- This architecture delivers a functional cloud solution within a customer-controlled On-Premises environment.
- This information represents a baseline, and a specific scaling assessment will be necessary in consultation with our Enterprise Knowledge SMEs.
Step | Description |
---|---|
1 | Automation Anywhere Enterprise Knowledge connections and AI Skills are designed and stored within the Control Room. |
2 | When prompts are executed, they are funneled to a local execution agent within your infrastructure. |
3 | RAG relevance is checked against known knowledge sources. |
4 | Prompts are sent to the LLM with RAG context and returned directly to the AAI Enterprise Knowledge Platform (this can also be directed to the client's VPC). |
5 | Connection details and additional information are stored within the AI Governance logs in your tenant. |
On-Premises deployment architecture and patterns
Automation Anywhere Enterprise Knowledge On-Premises offers two primary deployment patterns to cater to different needs and scales:
- Single machine deployment: This architecture consolidates all Enterprise
Knowledge components onto a single machine. It is a suitable option for Proof of
Concepts (POCs) and small to medium-sized deployments.
- A single Virtual Machine (VM) can be utilized for POC environments.
- For optimal POC performance, a processor equivalent to an Intel i7 13700 with hyperthreading, along with sufficient resources to host both the database and component Docker containers, is recommended.
- A single VM configured with approximately 16 cores, 64GB of RAM, and a 1 TB SSD can support roughly 100 concurrent users.
- For larger single-machine deployments, a VM with around 32 cores, 128GB of RAM, and a 2 TB SSD can potentially support up to 500 users.
- In single machine deployments, ensure that the Supabase VM and all Docker containers can communicate without any network restrictions.
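As a quick sanity check on that connectivity, a sketch like the following can confirm that the PostgreSQL port on the Supabase VM is reachable from a container host. The host name and environment variables are placeholders, not values from the product; substitute the address of your Supabase VM.

```shell
# Placeholder host name; replace with your Supabase VM's address.
SUPABASE_HOST="${SUPABASE_HOST:-supabase.internal}"
# PostgreSQL's default listening port.
SUPABASE_PORT="${SUPABASE_PORT:-5432}"

if command -v pg_isready >/dev/null 2>&1; then
    # pg_isready ships with the PostgreSQL client tools.
    pg_isready -h "$SUPABASE_HOST" -p "$SUPABASE_PORT" \
        || echo "PostgreSQL not reachable at $SUPABASE_HOST:$SUPABASE_PORT" >&2
elif command -v nc >/dev/null 2>&1; then
    # Fall back to a raw TCP probe with a 3-second timeout.
    if nc -z -w 3 "$SUPABASE_HOST" "$SUPABASE_PORT"; then
        echo "Port $SUPABASE_PORT reachable on $SUPABASE_HOST"
    else
        echo "PostgreSQL not reachable at $SUPABASE_HOST:$SUPABASE_PORT" >&2
    fi
else
    echo "Install postgresql-client (pg_isready) or netcat to run this check" >&2
fi
```

Run the same check from each machine that will host Docker containers; any failure points to a firewall or routing restriction that must be cleared before installation.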
- Fully distributed deployment: For larger deployments requiring higher
concurrent user capacity, a fully distributed model is recommended. This pattern
distributes the Enterprise Knowledge components across multiple machines,
typically organized into pod services.
- A basic fully distributed deployment starts with a minimum of 8 machines, supporting around 100 concurrent users depending on the specific usage patterns, and can scale up to 40 or more machines as concurrency grows.
- This deployment model mirrors the architecture of the cloud offering, effectively creating a functional cloud solution within your organization's controlled environment.
- The architecture involves key components such as PostgreSQL (installed natively), Websocket Server, and Signaling Server. These components can be collocated on the same root machines hosting Docker or deployed on separate dedicated machines.
Key components of On-Premises deployment
The Automation Anywhere Enterprise Knowledge architecture relies on the following key components:
- Operating System (OS): The supported operating systems are Red Hat Enterprise Linux (RHEL) and Ubuntu Linux, with Ubuntu 22 being ideal. Choose your Linux distribution based on your IT support requirements; the deployment will adapt accordingly, but consider how Docker behaves on different distributions. A minimum of Docker v27.1.1 is required, and the Docker service should be configured to restart automatically upon a machine reboot. Note that RHEL does not start Docker on reboot by default and must be configured manually to do so.
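The version requirement and restart behavior can be verified with a short pre-flight script. This is a generic sketch, not part of the product: it compares the installed Docker version against the v27.1.1 minimum using `sort -V`, and the trailing comment shows the standard systemd command for enabling restart-on-boot (needed on RHEL in particular).

```shell
#!/bin/sh
# Sketch: pre-flight check for the Docker version minimum.
required="27.1.1"
# Extract the version number from "Docker version X.Y.Z, build ...".
installed="$(docker --version 2>/dev/null | sed -E 's/^Docker version ([0-9][0-9.]*).*/\1/')"

# sort -V puts the lowest version first, so if $required comes out on
# top, the installed version satisfies the minimum.
if [ -n "$installed" ] && \
   [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
    echo "Docker $installed meets the v$required minimum"
else
    echo "Install or upgrade Docker to v$required or later" >&2
fi

# On systemd hosts (RHEL does not enable Docker on boot by default):
#   sudo systemctl enable --now docker
```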
- Database: The underlying database is PostgreSQL, specifically modified as a vector index store known as Supabase. This database requires a dedicated, free-standing server provisioned as a VM and cannot be replaced by managed cloud database services such as AWS RDS. The initial server in your deployment hosts the native PostgreSQL installation.
- Docker: Docker containers are fundamental to the architecture, providing a standardized and isolated environment for deploying the various Enterprise Knowledge components. The minimum supported Docker version is v27.1.1. Docker Compose files are available to facilitate the assembly and deployment of Docker images.
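A typical bring-up with a vendor-supplied Compose file might look like the sketch below. The file name `docker-compose.yml` is an assumption; substitute the name from your installation bundle.

```shell
# Assumed Compose file name; use the one from your installation bundle.
COMPOSE_FILE="docker-compose.yml"

if command -v docker >/dev/null 2>&1 && [ -f "$COMPOSE_FILE" ]; then
    # Start all services in the background (detached).
    docker compose -f "$COMPOSE_FILE" up -d
    # Confirm that every service reports a running state.
    docker compose -f "$COMPOSE_FILE" ps
else
    echo "Prerequisites missing: need Docker v27.1.1+ and $COMPOSE_FILE" >&2
fi
```

`docker compose logs -f <service>` is then the usual way to follow a service's output while verifying the deployment.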
- Websocket server and signaling server: These servers handle real-time communication within the platform. They can be deployed on dedicated machines or collocated on the same server. The sentry signaling machine plays a crucial role in providing debugging details through logging for support purposes.
- Nginx: Nginx serves as a reverse proxy for all incoming and outgoing traffic to the Enterprise Knowledge platform.
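A minimal reverse-proxy server block might resemble the following sketch. The server name, certificate paths, and upstream port are placeholders rather than product defaults; WebSocket upgrade headers are included because the platform's real-time components rely on web sockets.

```nginx
server {
    listen 443 ssl;
    server_name knowledge.example.com;          # placeholder hostname

    ssl_certificate     /etc/nginx/certs/knowledge.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/knowledge.key;

    location / {
        proxy_pass http://127.0.0.1:3000;       # placeholder upstream port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Allow WebSocket upgrades for the real-time components.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Validate any changes with `nginx -t` before reloading the service.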
- Optional components:
- Sentry signaling machine: This component is optional and its inclusion does not impact the minimum machine count required for deployment.
- Document Editor: This component, which is a function of the console and provides web sockets for real-time document creation, is also optional. It is not required for knowledge bases primarily built from uploaded or crawled data. The inclusion of optional components does not reduce the base machine requirements.