Vertex AI: Prompt AI action

The Vertex AI: Prompt AI action uses Google's PaLM API, which provides access to the model's text-generation capability. You can give the model a text prompt in English, and it completes the text.

Prerequisites

  • You must have the Bot creator role to use the Vertex AI: Prompt AI action in a bot.
  • Ensure that you have the necessary credentials to send a request and that you have included the Connect action before calling any Google Cloud actions.

This example shows how to send a natural language prompt using the Vertex AI: Prompt AI action and get an appropriate response.

Procedure

  1. In the Automation Anywhere Control Room, navigate to the Actions pane, select Generative AI > Google, drag Vertex AI: Prompt AI, and place it in the canvas.
  2. Enter or select the following fields:

    Google Vertex Prompt AI action

    1. Enter the Project Number/Name. This is the unique project ID from Google Cloud Platform (GCP). For more information on the project ID, see Google Cloud Project's Project ID.
    2. Enter the Location. For more information on Vertex AI location, see Vertex AI locations.
    3. Click the Publisher drop-down and select Google, or select 3rd Party to enter a third-party publisher.
    4. Select a large language model (LLM) to use for your prompt from the Model drop-down. You can select one of the following models:
      • text-bison (latest)
      • text-bison-32k (latest)
      • text-bison-32k@002
      • text-bison@001
      • text-bison@002
      • text-unicorn@001
      • code-bison (latest)
      • code-bison-32k@002
      • code-bison@001
      • code-bison@002
      • code-gecko@001
      • code-gecko@002
      • code-gecko
      • Other supported version to enter another supported model.
      Note:
      • Bison: Best value in terms of capability and cost.
      • Gecko: Smallest and lowest cost model for simple tasks.
    5. Enter a Prompt to use by the model to generate a response.
    6. Enter the maximum number of tokens (Max tokens) to generate. If you do not enter a value, the maximum is set automatically so that the prompt and the generated response stay within the maximum context length of the selected model.
    7. Enter a Temperature. This value controls the randomness of the response. As the temperature approaches zero, the response becomes more focused and deterministic; the higher the value, the more random the response.
    8. Enter Default as the session name to limit the session to the current session.
    9. To manage the optional parameters, click Show more options and select Yes. If you select Yes, you can add other parameters such as Top K and Top P. For information about these optional parameters, see Learn Models.
    10. Save the response to a variable. In this example, the response is saved to google-vertex_prompt-response.
  3. Click Run to start the bot. You can read the value of the response by printing it in a Message box action. In this example, google-vertex_prompt-response prints the response.
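The fields entered in the procedure above map onto a Vertex AI text-model predict request. As a rough sketch of that mapping (the project, location, prompt, and parameter values below are illustrative examples, not values from this procedure, and this is not the action's actual internal code):

```python
import json

def build_predict_request(project, location, model, prompt,
                          max_tokens=256, temperature=0.7,
                          top_k=None, top_p=None):
    """Assemble the URL and JSON body for a Vertex AI text-model predict call."""
    url = (
        f"https://{location}-aiplatform.googleapis.com/v1"
        f"/projects/{project}/locations/{location}"
        f"/publishers/google/models/{model}:predict"
    )
    parameters = {"temperature": temperature, "maxOutputTokens": max_tokens}
    if top_k is not None:
        parameters["topK"] = top_k   # optional parameter, as under Show more options
    if top_p is not None:
        parameters["topP"] = top_p
    body = {"instances": [{"prompt": prompt}], "parameters": parameters}
    return url, json.dumps(body)

# Example values only (hypothetical project and location):
url, body = build_predict_request(
    project="my-gcp-project", location="us-central1",
    model="text-bison@002", prompt="Write a haiku about automation.",
    max_tokens=128, temperature=0.2, top_k=40, top_p=0.95,
)
```

The Project Number/Name and Location fields select the endpoint, the Model drop-down selects the publisher model, and the Prompt, Max tokens, Temperature, Top K, and Top P fields populate the request body.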
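The Temperature, Top K, and Top P parameters described in the steps above all shape how the model samples its next token. A minimal, self-contained sketch of that sampling logic, using hypothetical token scores (this illustrates the general technique, not the model's actual implementation):

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=40, top_p=0.95, rng=None):
    """Pick one token index from raw model scores (logits).

    Temperature below 1 sharpens the distribution (more deterministic);
    above 1 it flattens it (more random). Top K keeps only the k
    highest-probability tokens; Top P then keeps the smallest set of
    those whose cumulative probability reaches p (nucleus sampling).
    """
    rng = rng or random.Random()
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top K: keep only the k most probable tokens.
    probs.sort(key=lambda item: item[1], reverse=True)
    probs = probs[:top_k]
    # Top P: keep the smallest prefix whose cumulative mass reaches p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the surviving candidates and draw one.
    norm = sum(p for _, p in kept)
    r = rng.random() * norm
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With a very low temperature the highest-scoring token wins almost every time; raising the temperature, Top K, and Top P widens the pool of candidate tokens and makes the response more varied.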