Refer to this FAQ for general queries about the features and functionality of AI Guardrails, which is designed to protect sensitive data within prompts and model responses when generative AI is used within automations.

What is AI Guardrails?

Using publicly hosted LLMs and generative AI models poses significant privacy and security risks to businesses. AI Guardrails is an intelligent tokenization solution that protects sensitive data shared with or processed by LLMs while maintaining context. The feature is designed to ensure data protection, provide customers with the utmost privacy and security, and monitor toxicity levels in content to prevent unintended outcomes that could damage brand reputation. AI Guardrails empowers you to precisely govern sensitive data within generative AI automations through the following capabilities:

  • Manage AI guardrail policies for sensitive data in automations that integrate with generative AI models.
  • Define rules for managing sensitive data within prompts and model responses.
  • Specify data handling preferences based on data categories such as PII, PHI, and PCI.
  • Enforce policies on automations during execution.
  • Monitor and audit data treatment and toxicity scores for prompts executed within automations.

Does this feature use AI?

Yes, we leverage a well-established Named Entity Recognition (NER) model, a specialized form of AI, to identify sensitive data.

NER, a subfield of Natural Language Processing (NLP), identifies and locates sensitive information within text prompts, such as names, addresses, phone numbers, and dates. This allows us to mask or redact this data, protecting privacy and ensuring compliance with data protection regulations. Essentially, NER acts as the initial step in identifying what needs to be masked within a text dataset.
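To illustrate the idea, here is a minimal sketch of entity detection followed by masking. The product uses a trained NER model; the regex patterns below are toy stand-ins for illustration only, and the placeholder format is hypothetical.

```python
import re

# Toy patterns standing in for an NER model's entity detection.
# A real NER model is statistical; these regexes are illustrative only.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def find_entities(text):
    """Return (start, end, label) spans for detected sensitive values."""
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

def mask(text):
    """Replace each detected entity with a category placeholder."""
    out, last = [], 0
    for start, end, label in find_entities(text):
        out.append(text[last:start])
        out.append(f"<{label}>")
        last = end
    out.append(text[last:])
    return "".join(out)

print(mask("Call 555-123-4567 before 2024-05-01."))
# Call <PHONE> before <DATE>.
```

Detection is kept separate from masking so the same spans could instead drive redaction or reversible tokenization.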
What licenses are required for AI Guardrails?

You must purchase an Enterprise Platform license as well as an AI Guardrails license, which is consumption based. For details about the Enterprise Platform license, see Enterprise Platform.
What permissions are required to manage AI Guardrails?

The Manage AI Guardrails permission is required to configure, edit, or delete AI Guardrails policies. The View AI Guardrails permission allows users to view configured AI Guardrails policies.

Is AI Guardrails available for on-premises deployments?

AI Guardrails is not currently supported for on-premises deployments.

How does AI Guardrails protect my sensitive data?

When enforced on automations, AI Guardrails intercepts each prompt executed using Generative AI commands. The service scans the prompts for sensitive data and replaces it with tokenized values to de-identify it. These tokenized prompts are then sent to the LLMs. Upon receiving responses, the service scans them again and reconstructs them with the original values. This is the default masking behavior. AI Guardrails also supports irreversible anonymization, configurable through the guardrails policy.
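The default mask-then-reconstruct flow can be sketched as reversible tokenization: sensitive values are swapped for tokens before the prompt leaves, the token-to-value mapping is retained, and the response is rewritten with the originals. This is a simplified sketch, not the product's actual implementation; the `TokenVault` class and token format are hypothetical.

```python
import itertools

class TokenVault:
    """Minimal sketch of reversible tokenization, assuming sensitive
    values have already been identified (e.g. by an NER model)."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._mapping = {}  # token -> original value

    def mask(self, text, sensitive_values):
        """Replace each sensitive value with a unique token."""
        for value in sensitive_values:
            token = f"__TOK_{next(self._counter)}__"
            self._mapping[token] = value
            text = text.replace(value, token)
        return text

    def unmask(self, text):
        """Restore original values in the model response."""
        for token, value in self._mapping.items():
            text = text.replace(token, value)
        return text

vault = TokenVault()
prompt = vault.mask("Email alice@example.com about the invoice.",
                    ["alice@example.com"])
# prompt is now: "Email __TOK_1__ about the invoice."
response = "I drafted a reply to __TOK_1__."
print(vault.unmask(response))
# I drafted a reply to alice@example.com.
```

Because the LLM only ever sees tokens, the mapping never has to leave the vault; irreversible anonymization would simply skip storing the mapping.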

Where can I view logs related to sensitive data protection?

Activity traces are available in the AI Governance Prompt logs and Event logs. Use the detailed views in the AI Prompt logs to view prompts and responses with masked content and toxicity scores.

How long is sensitive data mapping stored?

Sensitive data mappings are stored in a secure vault within the Automation 360 cloud, encrypted with a regularly rotated AES-256 key, which is FIPS 197 compliant. Mappings are also stored securely in a database within the customer's production tenant environment for 30 days and then securely purged. New mappings are created for each transaction after 30 days.
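The 30-day retention behavior can be illustrated as a time-to-live purge: each mapping carries a creation timestamp, and anything older than 30 days is removed, so later transactions create fresh mappings. This is a hypothetical sketch of the retention policy described above, not the product's implementation.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

class MappingStore:
    """Sketch of 30-day retention: entries older than RETENTION
    are purged; subsequent transactions get new mappings."""

    def __init__(self):
        self._entries = {}  # token -> (value, created_at)

    def put(self, token, value, now=None):
        now = now or datetime.now(timezone.utc)
        self._entries[token] = (value, now)

    def purge_expired(self, now=None):
        """Remove entries past retention; return how many were purged."""
        now = now or datetime.now(timezone.utc)
        expired = [t for t, (_, created) in self._entries.items()
                   if now - created > RETENTION]
        for t in expired:
            del self._entries[t]
        return len(expired)

store = MappingStore()
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
store.put("__TOK_1__", "alice@example.com", now=t0)
print(store.purge_expired(now=t0 + timedelta(days=31)))
# 1
```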

What security controls are applied to stored sensitive data?

Sensitive data mappings are protected using industry-standard encryption and the following security controls:

  • Network isolation with restricted connectivity.
  • Access limited to authorized users based on the principle of least privilege.
  • Encryption at rest and in transit.

Does AI Guardrails send sensitive data to third-party LLMs or clouds during masking/unmasking?

No, all sensitive data is processed within the Automation 360 Cloud tenant. AI Guardrails does not integrate with any third-party LLMs or external cloud services for masking or unmasking.