AI governance code analysis rule

With the new AI governance code analysis rule, you can ensure responsible and compliant use of AI. Code analysis enhancements let you monitor and enforce your AI-use policy, with notifications that enable immediate remediation.

Note: This feature requires the Enterprise Platform license. For information about supported versions for this feature, see Enterprise Platform.

Overview

Using foundation models that have not been tested can pose significant safety risks to the users and applications that rely on them. Without security controls that govern access to generative AI models, users and applications are exposed to the risks associated with model use. To mitigate these risks, implement governance policies aligned with your organization's responsible AI and safe usage policies. Restrict access to approved and tested models to manage the risk of using models that have not been vetted for response quality, and enforce rules based on regional restrictions.

The new rule enforces model usage during the design and development of automations through code analysis. This easy-to-enforce policy helps ensure compliance before automations are checked in to the Production folder for execution. The policy checks and reports on automations in Production that violate the policy, and provides visibility into the risk when compromised or unapproved models are used, giving you an opportunity to remediate these incidents.

See Code analysis rules and Configure and assign code analysis policy.

Benefits

New code analysis rule for AI governance

Enforce the use of only approved models and publishers through model usage policies applied via code analysis:
  • Restrict model-use to tested and validated models only
  • Monitor violations to the policy
  • Prevent code check-ins with policy violations

Users and permissions

The Automation Admin, Pro Developer, or the Automation Lead would be the primary users of the AI governance code analysis policy. They define the rule for the Bot Creators and Pro Developers to monitor and enforce policy restrictions on model use and token consumption. These personas can define the policy through the Control Room Policy manager and assign it to folders containing automations.

After developing an automation, when the Pro Developer saves and checks it in, the policy checks are triggered and any policy violations in the automation code are reported. The Pro Developer must address and resolve the policy violations to complete the code check-in successfully.

This new policy rule allows the Automation Admin, Pro Developer, or the Automation Lead to define the following:
  • List of supported hyperscaler publishers
  • List of models from the hyperscaler publishers
  • Supported regions of deployment
Note: We recommend setting the severity level to Critical to prevent the Pro Developer from checking in an automation that violates the rule.
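Conceptually, the rule performs an allow-list check: a model reference is compliant only if its publisher, model name, and region all appear together in the policy. The sketch below is illustrative only; the Control Room rule engine is not exposed as code, and names such as `AllowedModel` and `check_model_usage` are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the allow-list check behind the AI governance rule.
# The real Control Room policy is defined in the UI, not in code.

@dataclass(frozen=True)
class AllowedModel:
    publisher: str   # hyperscaler vendor, e.g. "OpenAI"
    model: str       # supported model from that vendor, e.g. "gpt-4"
    region: str      # supported region of deployment

def check_model_usage(publisher: str, model: str, region: str,
                      allowed: set) -> list:
    """Return policy violations for one model reference in an automation."""
    violations = []
    if AllowedModel(publisher, model, region) not in allowed:
        violations.append(
            f"Model '{model}' from '{publisher}' in region '{region}' "
            "is not on the approved-models list"
        )
    return violations

allowed = {AllowedModel("OpenAI", "gpt-4", "us-east-1")}
print(check_model_usage("OpenAI", "gpt-4", "us-east-1", allowed))  # []
print(check_model_usage("OpenAI", "davinci", "eu-west-1", allowed))
```

Because the tuple of publisher, model, and region must match exactly, an approved model deployed in an unapproved region still counts as a violation.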

See Code analysis for permission details.

Availability

This feature requires the Enterprise Platform license. For information about supported versions for this feature, see Enterprise Platform.

Accessing AI governance code analysis policy

Navigate to Administration > Policies and define the AI governance policy in the Control Room.
Note: See Code analysis policy management to enable, create, and assign code analysis policy.
  1. In the Policies screen, click Create policy or the + icon to display the Create policy page with the option to add the Policy details.
  2. Scroll to the AI Governance policy and enable the toggle.
  3. Click Add model and add the Publisher (the hyperscaler vendor), the Model (a supported model from that vendor), and the Region of deployment.
    Note: The model name can be entered manually. We recommend using the following format when entering the model name:
    • Amazon Bedrock: Jurassic-2 Mid, Jurassic-2 Ultra, Claude Instant v1.2, Claude v1.3, Claude v2.1 (other supported versions), Titan Text G1 - Lite, and Titan Text G1 - Express (other supported versions).
    • Google Vertex AI: chat-bison (latest), chat-bison-32k (latest), chat-bison-32k@002, chat-bison@001, chat-bison@002, codechat-bison, codechat-bison-32k, codechat-bison-32k@002, codechat-bison@001, codechat-bison@002 (other supported versions), text-bison (latest), text-bison-32k (latest), text-bison-32k@002, text-bison@001, text-bison@002, text-unicorn@001, code-bison (latest), code-bison-32k@002, code-bison@001, code-bison@002, code-gecko@001, code-gecko@002, code-gecko (other supported versions), and Gemini Pro.
    • OpenAI: gpt-3.5-turbo (default), gpt-3.5-turbo-16k, gpt-4, gpt-4-32k (other supported versions), text-davinci-003, text-davinci-002, davinci, text-curie-001, curie, text-babbage-001, babbage, text-ada-001, and custom models.
  4. Click Add model to add a new row to the Allowed models table.
  5. Set the severity level as per the code analysis policy.
    Note: We recommend setting the severity level to Critical to prevent the Pro Developer from checking in an automation that violates the rule.
  6. Save the changes.
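The effect of the severity level chosen in step 5 can be sketched as follows. This is purely illustrative: the actual gating happens inside the Control Room at check-in time, and `can_check_in` is a hypothetical name.

```python
# Hypothetical sketch of severity-based check-in gating.
# A Critical violation blocks the check-in; lower severities only warn.

SEVERITIES_THAT_BLOCK_CHECKIN = {"Critical"}

def can_check_in(violations: list) -> bool:
    """Allow check-in only if no violation carries a blocking severity."""
    return not any(
        v["severity"] in SEVERITIES_THAT_BLOCK_CHECKIN for v in violations
    )

print(can_check_in([]))  # True: no violations, check-in proceeds
print(can_check_in([{"rule": "AI governance", "severity": "Critical"}]))  # False
```

With a lower severity such as Warning, the automation would still check in, so Critical is the setting that actually enforces the policy.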