Volume tiers
- Updated: 2024/10/17
Depending on your business requirements or organization size, the scaling requirements are classified into three volume tiers.
- Low volume
- Suitable for small businesses or teams with lower document processing needs.
- Processing Capacity: Up to 2,400 pages per day (100 pages per hour).
- Typically managed by a single Bot Runner.
- Medium volume
- Ideal for mid-sized organizations or departments with moderate document processing requirements.
- Processing Capacity: Up to 10,000 pages per day (400-500 pages per hour).
- Utilizes 2 to 5 Bot Runners to handle the workload.
- High volume
- Best for large enterprises or organizations with heavy document processing needs.
- Processing Capacity: More than 10,000 pages per day (over 500 pages per hour).
- Requires more than 5 Bot Runners for optimal performance.
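The tier boundaries above can be expressed as a simple lookup. This is an illustrative sketch only: the thresholds (2,400 and 10,000 pages per day) come from this page, while the function name and return values are hypothetical.

```python
def classify_volume_tier(pages_per_day: int) -> str:
    """Map a daily page volume to the tiers described above."""
    if pages_per_day <= 2_400:
        return "low"     # typically a single Bot Runner
    if pages_per_day <= 10_000:
        return "medium"  # 2 to 5 Bot Runners
    return "high"        # more than 5 Bot Runners
```

For example, a workload of 6,000 pages per day falls into the medium tier.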
Recommendations per volume tier
- Low volume recommendations
- For low-volume use cases, a single Bot Runner should suffice if you follow the recommended configurations. Since the volume is low, detailed adjustments are generally unnecessary, making it a straightforward setup. However, ensure that the default workflow and configuration align with the baseline to avoid unexpected delays.
- Medium volume recommendations
- To determine the number of Bot Runners required, divide the total processing volume by the baseline performance of 100 pages per hour (2,400 pages per day). For example, if your required volume is 6,000 pages per day, you would need 3 Bot Runners. Key considerations for medium-volume use cases include the following:
- Calibration: Calibrate device performance against the baseline. If performance differs significantly, ensure you are following the recommended configurations.
- Provider differences: If using extraction models other than Automation Anywhere's, such as Microsoft Standard Forms (Document Intelligence) or Google Document AI, expect roughly a 30% performance improvement.
- Multipage documents: Multipage documents generally achieve higher throughput (pages processed per hour), because per-document overhead is spread across more pages.
- Using LLMs: Incorporating LLMs can increase processing time, particularly with higher field counts, larger document sizes, or increased document complexity.
- Queue management: Address workflow bottlenecks using a Task Bot that creates new requests only when the queue size is below 100. This prevents overloading the system and ensures smoother operation.
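The sizing rule and the queue-size gate described above can be sketched as follows. The baseline throughput (2,400 pages per day per Bot Runner) and the queue threshold (100) come from this page; the function names are hypothetical.

```python
import math

BASELINE_PAGES_PER_DAY = 2_400  # baseline throughput of one Bot Runner
MAX_QUEUE_SIZE = 100            # threshold from the queue-management guidance

def required_bot_runners(pages_per_day: int) -> int:
    """Divide required daily volume by the per-runner baseline, rounding up."""
    return math.ceil(pages_per_day / BASELINE_PAGES_PER_DAY)

def should_create_request(current_queue_size: int) -> bool:
    """Only create a new request while the queue is below the threshold."""
    return current_queue_size < MAX_QUEUE_SIZE

required_bot_runners(6_000)  # → 3, matching the example above
```

Note that rounding up matters: 6,000 / 2,400 is 2.5, so two runners would fall short of the required volume.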
- High volume recommendations
- For high-volume scenarios, the considerations outlined for medium volumes apply, with a greater emphasis on testing and infrastructure optimization:
- Preliminary testing: Testing with real-world samples is essential for high volumes, as even a slight difference in processing time per page can significantly impact the number of required Bot Runners.
- Dedicated ingestion bots: Consider dedicating some Bot Runners exclusively for document ingestion to keep queues consistently active and prevent delays in task assignment.
- Common bottlenecks:
- Network congestion: Using a single network share can create delays. Distribute uploads and downloads across multiple Bot Runners using separate folders to ensure parallel processing.
- Database performance: Monitor on-premises databases for CPU/memory usage, I/O operations, and potential deadlocks.
- Workflow optimization: Reduce deployment times by merging post-processing steps and straight-through processing (STP) flows into the extraction step when possible.
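The folder-distribution idea under network congestion can be sketched as a round-robin assignment: each incoming document is routed to a separate per-runner folder so uploads and downloads run in parallel instead of contending on a single share. The folder paths and function name here are hypothetical.

```python
from itertools import cycle
from pathlib import Path

def distribute(documents, runner_folders):
    """Yield (document, target_folder) pairs in round-robin order."""
    folders = cycle(runner_folders)
    for doc in documents:
        yield doc, next(folders)

# Example: spread three documents across two per-runner folders.
assignments = list(distribute(
    ["a.pdf", "b.pdf", "c.pdf"],
    [Path("share/runner1"), Path("share/runner2")],
))
```

Here `a.pdf` and `c.pdf` land in `runner1`'s folder and `b.pdf` in `runner2`'s, so each Bot Runner works from its own folder.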