The Vertex AI: Chat AI action uses Google's Vertex AI chat completion API to interact with the models and generates text in a conversational format.

Prerequisites

  • You must have the Bot creator role to use the Vertex Chat AI action in a bot.
  • Ensure that you have the necessary credentials to send a request and have included the Vertex AI: Connect action before calling any Google Cloud actions.

This example shows how to send a natural language message using the Vertex AI: Chat AI action and get an appropriate response.

Procedure

  1. In the Automation Anywhere Control Room, navigate to the Actions pane, select Generative AI > Google, drag Vertex AI: Chat AI, and place it in the canvas.
  2. Enter or select the following fields:

    Vertex Chat AI

    1. Enter the Project Number/Name. This is the unique Project ID from your Google Cloud Platform (GCP) project. For information on Project ID, see Google Cloud Project's Project ID.
    2. Enter the Location. For more information on Vertex AI location, see Vertex AI locations.
    3. Click the Publisher drop-down and select Google, or select 3rd Party to enter a third-party publisher.
    4. Select a large language model (LLM) to use for your prompt from the Model dropdown. You can select the following models:
      • chat-bison (latest)
      • chat-bison-32k (latest)
      • chat-bison-32k@002
      • chat-bison@001
      • chat-bison@002
      • codechat-bison
      • codechat-bison-32k
      • codechat-bison-32k@002
      • codechat-bison@001
      • codechat-bison@002
      • gemini-1.0-pro-001
      • Other supported version: Enter another supported version of the above models. For more information, see Google Vertex AI models.
      Note: Generative AI packages rely on a model's specific input/output schema for correct operation. Because different models often have different schemas, only versions of the same model can be integrated. For information about other supported versions, see the hyperscaler's documentation for supported model versions.
    5. Enter a chat Message for the model to use to generate a response.
      Note: Chat actions retain the result of the previous chat action within the same session. If you call chat actions one after another, the model can understand the subsequent messages and relate them to the previous message. However, the entire chat history is deleted when the session ends.
    6. Enter the maximum number of tokens (Max tokens) to generate. If you do not enter a value, the maximum number of tokens generated is set automatically to stay within the maximum context length of the selected model, taking the length of the generated response into account.
    7. Enter a Temperature. This value controls the randomness of the response. As the temperature approaches zero, the response becomes more focused and deterministic; the higher the value, the more random the response.
    8. Enter Default as the session name to limit the chat to the current session.
    9. To manage the optional parameters, click Show more options and select Yes. If you select Yes, you can add other parameters such as Context, Examples, Top K, and Top P. For information about these optional parameters, see Learn Models.
      Note: Although Vertex AI models such as codechat-bison@002 allow providing Context and Examples to refine prompts, the gemini-1.0-pro-001 model does not currently support these features. These fields still appear in the interface, but you can safely leave them empty.
    10. Save the response to a variable. In this example, the response is saved to VertexChatResponse.
  3. Click Run to start the bot. Use a Message Box action to display the response (for instance, VertexChatResponse). You can add more chat actions to continue the conversation.
    Tip: To manage multiple chats in the same bot, you must create multiple sessions with different names or variables.
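Under the hood, the fields in the procedure above correspond to Google's Vertex AI predict REST endpoint for the PaLM chat models. The sketch below shows one plausible mapping for a chat-bison model; the project name, location, message text, and parameter values are illustrative placeholders, and the exact schema can vary by model version.

```python
# Sketch: how the action's fields might map onto the Vertex AI chat
# predict endpoint for a PaLM chat model (e.g. chat-bison@002).
# All values below are illustrative placeholders, not real resources.

PROJECT = "my-gcp-project"   # Project Number/Name field
LOCATION = "us-central1"     # Location field
MODEL = "chat-bison@002"     # Model field

# Endpoint for a Google-published model (Publisher = Google).
endpoint = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{LOCATION}/publishers/google/models/{MODEL}:predict"
)

payload = {
    "instances": [{
        "context": "You are a helpful travel assistant.",  # optional Context
        "examples": [],                                    # optional Examples
        "messages": [
            {"author": "user",
             "content": "Suggest a weekend trip from Berlin."}  # Message field
        ],
    }],
    "parameters": {
        "temperature": 0.2,      # Temperature field
        "maxOutputTokens": 256,  # Max tokens field
        "topK": 40,              # optional Top K
        "topP": 0.95,            # optional Top P
    },
}
```

A request would then POST this payload to the endpoint with an OAuth 2.0 bearer token, which the Vertex AI: Connect action supplies on your behalf.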
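The session behavior described in the notes above (history retained within a session, cleared when the session ends, separate sessions for separate chats) can be pictured as a per-session message buffer. The class and names below are purely illustrative, not the package's actual implementation.

```python
# Sketch: per-session chat history, mirroring how consecutive chat
# actions in the same session share context until the session ends.
# Illustrative model only; not Automation Anywhere internals.

class ChatSession:
    def __init__(self, name="Default"):
        self.name = name
        self.messages = []  # alternating user/model turns

    def send(self, user_message, model_reply):
        # Each chat action appends the new turn, so the model can
        # relate later messages to earlier ones in the same session.
        self.messages.append({"author": "user", "content": user_message})
        self.messages.append({"author": "bot", "content": model_reply})
        return model_reply

    def end(self):
        # The entire chat history is deleted once the session ends.
        self.messages.clear()

# Managing multiple chats in one bot means using distinct session names.
sessions = {name: ChatSession(name) for name in ("Default", "Support")}
sessions["Default"].send("Hello", "Hi, how can I help?")
sessions["Default"].send("Book a flight", "Sure, where to?")
```

Because "Support" has its own buffer, its history stays independent of "Default" until each session is ended.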