Parallel queue processing
- Updated: 2025/06/10
Parallel queue processing allows Workload Management (WLM) to handle multiple queues within the same device pool simultaneously. It distributes workloads evenly across devices, resulting in better system performance and shorter wait times for queued tasks.
Without parallel processing, when multiple queues are deployed onto a device pool, only one queue is processed at a time, in either round-robin mode or priority mode. As a result, resources such as licenses and devices might remain idle, leading to inefficient use. With parallel processing, available licenses and devices are actively utilized, ensuring optimal performance.
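The contrast can be sketched in a short, conceptual example. This is not the WLM implementation; it models a device pool as a thread pool (with a hypothetical size of 3 devices and two example queues) to show several queues being drained concurrently instead of one queue finishing before the next starts:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch only (not WLM internals): the device pool is modeled
# as a thread pool, and each queue is drained by whichever device is free.
DEVICES = 3  # hypothetical device-pool size

queues = {
    "invoices": queue.Queue(),
    "orders": queue.Queue(),
}
for name, q in queues.items():
    for i in range(4):
        q.put(f"{name}-{i}")  # stage four work items per queue

processed = []
lock = threading.Lock()

def drain(q):
    """Process every work item in one queue; runs on any free device."""
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append(item)

# Both queues are submitted at once, so devices work on them in parallel
# rather than leaving licenses and devices idle while one queue finishes.
with ThreadPoolExecutor(max_workers=DEVICES) as pool:
    for q in queues.values():
        pool.submit(drain, q)

print(len(processed))  # all 8 work items handled
```

In the single-queue model, the `orders` items would wait until every `invoices` item had finished; here both queues make progress as soon as a device is available.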
Parallel queue processing is advantageous in environments with varying workloads because it adjusts dynamically to changes in demand, optimizing resource allocation in real time. This adaptability improves performance, reduces bottlenecks, and minimizes resource contention by assigning tasks to available resources so that no single resource becomes overwhelmed. Queues can also be configured with priorities, so high-priority tasks receive the resources they need to meet time-sensitive requirements while the system maintains overall balance and efficiency.
Parallel queue processing allows automation leads, administrators, and process owners to handle multiple queues simultaneously across a group of devices. This capability is particularly important in terminal server environments, where one server manages several queues. By processing work items concurrently, user sessions can be initiated on the terminal server according to the queue Service Level Agreements (SLAs), shortening processing times and making SLAs easier to meet.

Key features
The key features of parallel queue processing are:
- Task distribution
- Tasks are distributed across queues that can be processed concurrently. This distribution allows multiple tasks to be executed at the same time, reducing overall processing time.
- Faster SLAs
- By utilizing all available resources, work items are processed more quickly, helping queues meet their service level agreements (SLAs) sooner.
- Simplified deployment
- Deployment is simplified as you only need to select multiple queues and specify run-as users, eliminating the need to choose a device pool.
- Resource utilization
- By leveraging multiple queues and devices, parallel queue processing maximizes resource utilization. This ensures that every available queue and device is used effectively, minimizing idle time and increasing throughput.
- Terminal server use case
- With the growing adoption of terminal servers, a single machine can initiate multiple user sessions, ensuring that queues are processed efficiently within their SLAs.
- Priority management
- Queues can be assigned different priority levels, allowing critical tasks to be processed first. This prioritization ensures that important tasks receive the necessary resources promptly, while less critical tasks are queued for later processing.
- Improved success metrics
- Success metrics include greater cost savings, faster data access, and improved governance, along with a significant reduction in errors.
- Increased WLM customers
- The implementation of parallel queue processing is expected to attract more customers to WLM.
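The priority management feature described above can be illustrated with a minimal sketch. This is not WLM's actual scheduler; it assumes only that each work item carries its queue's priority level and that a dispatcher always hands the highest-priority item to the next free device (lower number = higher priority, with FIFO order preserved within a level):

```python
import heapq

# Illustrative sketch (not WLM's scheduler): a min-heap orders work items
# by (priority, arrival order), so critical queues are served first while
# items within one priority level stay first-in, first-out.
heap = []
counter = 0  # tie-breaker preserving arrival order within a priority

def enqueue(priority, queue_name, item):
    global counter
    heapq.heappush(heap, (priority, counter, queue_name, item))
    counter += 1

enqueue(2, "reports", "r-1")
enqueue(1, "payroll", "p-1")  # hypothetical critical queue, priority 1
enqueue(2, "reports", "r-2")
enqueue(1, "payroll", "p-2")

order = []
while heap:
    _, _, qname, item = heapq.heappop(heap)
    order.append(item)  # dispatch to the next free device

print(order)  # ['p-1', 'p-2', 'r-1', 'r-2']
```

Both `payroll` items are dispatched before any `reports` item, matching the behavior described under Priority management: critical queues get resources first, while lower-priority work is still processed rather than starved.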