AI Agents are powerful digital co-workers that combine generative AI with automation. To get the best results, follow these practical guidelines, drawn from real-world experience and SME guidance.

Keep agents focused
  • Narrow scope: Each agent should solve one clear problem. Avoid general-purpose agents that try to do too much. For instance, in large workflows, use a supervisor agent that delegates tasks to specialized agents (e.g., Claim Supervisor → Claim Details Agent + Fraud Check Agent + Approval Agent); a minimal sketch follows this list.
  • Avoid tool overload: Keep the number of tools per agent low (a maximum of 10 is enforced). Too many tools lead to confusion and higher failure rates.
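The supervisor pattern can be expressed as plain routing code. This is a minimal sketch, assuming a hypothetical `run_agent` helper that stands in for your platform's agent-invocation API:

```python
def run_agent(agent_name: str, payload: dict) -> dict:
    """Hypothetical stand-in for your platform's agent-invocation API."""
    return {"agent": agent_name, **payload}   # replace with a real agent call

def claim_supervisor(claim: dict) -> dict:
    """Delegate one claim to narrowly scoped agents instead of one do-everything agent."""
    details = run_agent("Claim Details Agent", {"claim_id": claim["claim_id"]})
    fraud = run_agent("Fraud Check Agent", details)
    return run_agent("Approval Agent", {**details, **fraud})
```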
Decide between automation and agents
  • Use deterministic automation: Automations, processes, and API Tasks are best for steps that can be reliably scripted and maintained.
  • Use AI Agents: Best suited for tasks that require reasoning, flexibility, or dynamic decision-making. Do not default to agents for everything; most processes will remain standard automations. The sketch below shows the split.
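One way to apply this split: deterministic steps stay as plain automation code, and only the genuinely ambiguous step calls an agent. A minimal sketch with hypothetical helpers (`lookup_policy`, `run_agent`):

```python
def lookup_policy(policy_id: str) -> dict:
    """Hypothetical deterministic API call."""
    return {"terms": "..."}

def run_agent(name: str, payload: dict) -> dict:
    """Hypothetical agent invocation."""
    return {"assessment": "..."}

def process_claim(claim: dict) -> dict:
    # Deterministic steps: reliably scriptable, so keep them in automation.
    policy = lookup_policy(claim["policy_id"])
    total = sum(item["amount"] for item in claim["line_items"])
    # Ambiguous step: free-text damage descriptions need reasoning, so use an agent.
    assessment = run_agent("Damage Assessment Agent",
                           {"description": claim["damage_description"],
                            "policy_terms": policy["terms"]})
    return {"claim_id": claim["claim_id"], "total": total, **assessment}
```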
Avoid batch processing in a single agent
  • No large tables/lists: Do not feed an agent bulk data (e.g., 20 claim IDs at once).
  • Loop outside the agent: Use a process or bot to send one item at a time.
  • Example: A process iterates over rows in a claims table and calls the agent per row, as in the sketch below.
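A minimal sketch of the loop-outside-the-agent pattern; the table shape and the `run_agent` helper are illustrative assumptions:

```python
def run_agent(name: str, payload: dict) -> dict:
    """Hypothetical agent invocation."""
    return {"claim_id": payload["claim_id"], "status": "processed"}

claims_table = [{"claim_id": "C-1001"}, {"claim_id": "C-1002"}, {"claim_id": "C-1003"}]

results = []
for row in claims_table:                  # the loop lives in the process, not the agent
    results.append(run_agent("Claim Details Agent", row))   # one item per call
```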
Use dummy or validation tools
  • If your agent sometimes misses required outputs, add a tracker/validation tool.
  • This tool simply echoes back what the agent provided and flags what’s missing.
  • Agents are more likely to notice gaps and escalate to a human when such a tool is present; a minimal sketch follows.
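A minimal sketch of such an echo/validation tool, assuming a simple dict-based tool interface; the required-field list is illustrative:

```python
REQUIRED_FIELDS = ["claim_id", "policy_id", "decision"]  # illustrative

def validation_tool(outputs: dict) -> dict:
    """Echo the agent's outputs and flag required fields that are missing or empty.

    Registered as a tool, this gives the agent an explicit signal to notice
    gaps and escalate to a human instead of silently omitting fields.
    """
    missing = [f for f in REQUIRED_FIELDS if not outputs.get(f)]
    return {"provided": outputs, "missing": missing, "complete": not missing}
```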

Human in the loop for reliability
  • Always include a way for humans to review or complete missing data.
  • Gradual onboarding strategy (a sampling sketch follows this list):
    • Start with 100% human review of agent outputs.
    • As confidence grows, reduce review (e.g., 50%, then 10%).
    • Eventually, only exception cases are reviewed.
  • This builds trust over time and prevents over-reliance on untested agents.
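A minimal sketch of the sampling logic behind gradual onboarding, assuming exception cases are always reviewed; the rates mirror the list above:

```python
import random

def needs_human_review(result: dict, review_rate: float) -> bool:
    """Decide whether a human reviews this agent output.

    Exceptions (e.g., missing fields) are always reviewed; otherwise a
    configurable fraction is sampled. Start review_rate at 1.0, then
    lower it (0.5, 0.1, ...) as confidence in the agent grows.
    """
    if result.get("missing"):             # exception case: always review
        return True
    return random.random() < review_rate
```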

Write balanced prompts
  • Be clear and concise: Prompts that are too vague cause confusion; overly prescriptive prompts cause rigidity.
  • Avoid long prompts: Essay-style instructions reduce accuracy.
  • Provide context: Give just enough for tool selection and decision-making; the contrast below illustrates the balance.
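An illustrative contrast (the wording is hypothetical, not product guidance):

```python
# Too vague: the agent has no basis for tool selection.
too_vague = "Handle the claim."

# Too prescriptive: rigid step-by-step scripting defeats the point of an agent.
too_prescriptive = (
    "First call get_claim, then always call check_fraud, then if the score is "
    "above 0.7 call reject_claim, otherwise call approve_claim, then..."
)

# Balanced: clear goal, key constraints, enough context to pick tools.
balanced = (
    "You process one insurance claim at a time. Retrieve the claim details, "
    "assess fraud risk, and recommend approve or reject with a short rationale. "
    "Escalate to a human if required data is missing."
)
```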
Structure inputs and outputs for automation
  • Use structured variables: Prefer discrete fields such as `claim_id` and `policy_id` over a single JSON string.
  • Use JSON only when required: Apply it only when directly integrating with APIs.
  • Automation-friendly: Structured variables are easier for downstream processing, as the example below shows.
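A minimal illustration of the difference, with hypothetical field values:

```python
import json

# Structured variables: each field is individually addressable downstream.
claim_id = "C-1001"
policy_id = "P-77"
decision = "approved"

# Single JSON string: every downstream step must parse it before it can branch.
blob = json.dumps({"claim_id": "C-1001", "policy_id": "P-77", "decision": "approved"})
parsed_id = json.loads(blob)["claim_id"]   # extra parsing step for each consumer
```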
Test and refine
  • Diverse testing: Run both happy-path and edge-case scenarios (a test-loop sketch follows this list).
  • Leverage governance logs: Analyze reasoning, tool calls, and data gaps.
  • Iterate quickly: Adjust prompts, outputs, and tools continuously.
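A minimal sketch of diverse-case testing; the cases are illustrative, and `run_agent` is again a hypothetical stand-in:

```python
def run_agent(name: str, payload: dict) -> dict:
    """Hypothetical agent invocation."""
    return {"claim_id": payload.get("claim_id"), "status": "stub"}

TEST_CASES = [
    ({"claim_id": "C-1001"}, "happy path"),
    ({"claim_id": ""}, "edge: empty claim ID"),
    ({"claim_id": "C-9999", "amount": -50}, "edge: negative amount"),
]

for payload, label in TEST_CASES:
    result = run_agent("Claim Details Agent", payload)
    assert result is not None, f"{label}: no output"
    # On failures, check governance logs for reasoning, tool calls, and data gaps.
```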
Build for reliability
  • Validate outputs (no nulls in required fields); a validation sketch follows this list.
  • Add error-handling paths for missing or inconsistent data.
  • Use Data Tracker or dummy tools to improve completeness checks.
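A minimal output-validation sketch, assuming dict-shaped outputs; the field names are illustrative:

```python
REQUIRED = ["claim_id", "policy_id", "decision"]

def validate_output(output: dict) -> dict:
    """Reject nulls/empties in required fields and route bad outputs to error handling."""
    missing = [f for f in REQUIRED if output.get(f) in (None, "")]
    if missing:
        # Error-handling path: queue for human completion instead of passing
        # incomplete data downstream.
        return {"status": "needs_review", "missing": missing, "output": output}
    return {"status": "ok", "output": output}
```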