Amazon Titan: Chat AI action

The Amazon Titan: Chat AI action connects automations to Amazon Bedrock Titan chat AI functionality. This action enables automations to engage in natural, informative, and context-aware conversations with users, providing a more personalized and engaging automation experience.

Before you begin

  • You must have the Bot creator role to use the Amazon Titan: Chat AI action in a bot.
  • Ensure that you have the necessary credentials to send a request and have included the Authenticate action before calling any Amazon Bedrock actions.

This example shows how to send a natural language message using the Amazon Titan: Chat AI action and get an appropriate response.

Procedure

  1. In the Automation Anywhere Control Room, navigate to the Actions pane, select Generative AI > Amazon Bedrock, drag Amazon Titan: Chat AI, and place it on the canvas.
  2. Enter or select the following fields:

    Amazon Titan: Chat AI action

    1. Enter the Region. For information on Region, see Amazon Bedrock GA regions.
    2. Click the Model drop-down and select a model with which to communicate.
      • Titan Text G1 - Lite: Amazon's Titan Text G1-Lite is a smaller and more efficient version of the larger Titan Text G1 model, making it more suitable for devices with limited resources.
      • Titan Text G1 - Express: Amazon's Titan Text G1 - Express model is a versatile and cost-effective large language model (LLM) designed for a wide range of text generation tasks.
      • Other supported version: Select this option to enter the ID of another supported model.
    3. Enter a chat Message for the model to use to generate a response.
      Note: Chat actions retain the output of the previous chat action within the same session. If you trigger chat actions consecutively, the model can understand subsequent messages and relate them to the previous message. However, the entire chat history is deleted once the session ends.
    4. Enter the Maximum length.
      If you do not enter a value, the maximum length is automatically set to keep the generated response within the maximum context length of the selected model.
    5. Enter a Temperature. This value refers to the randomness of the response. As the temperature approaches zero, the response becomes more specific. The higher the value, the more random the response.
    6. Enter Default as the session name to limit the chat to the current session.
    7. To manage the optional parameters, click Show more options and select Yes. If you select Yes, you can add other parameters such as Top P, Add Instructions, and Stop Sequences. For information about these optional parameters, see Learn Models.
    8. Save the response to a variable. In this example, the response is saved to str_ChatResponse-1.
  3. Click Run to start the bot.
    You can read the value of the field by printing the response in a Message box action. In this example, the first chat response is stored in str_ChatResponse-1. You can add additional chat requests to get additional responses as shown in the example.
    Tip: To maintain multiple chats in the same bot, you must create multiple sessions with different names or variables.
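For readers calling Amazon Bedrock directly rather than through the Control Room action, the fields in the procedure above map onto the Titan Text model's request body. The sketch below is a minimal, hedged illustration: the helper name `build_titan_chat_request` and its defaults are assumptions, and the history-prepending approach is one simple way to approximate the session behavior described in the note, since the underlying text model is invoked one turn at a time.

```python
import json

# Hypothetical helper mirroring the procedure's fields. The body shape
# (inputText / textGenerationConfig) follows the Amazon Titan Text
# request format; the default values here are illustrative only.
def build_titan_chat_request(message, history=None, max_tokens=512,
                             temperature=0.5, top_p=0.9, stop_sequences=None):
    """Build the JSON request body for one Titan Text chat turn.

    Prior turns from the same session are prepended to the prompt so the
    model can relate the new message to earlier ones.
    """
    turns = (history or []) + [f"User: {message}", "Bot:"]
    body = {
        "inputText": "\n".join(turns),
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,            # "Maximum length" field
            "temperature": temperature,             # "Temperature" field
            "topP": top_p,                          # optional "Top P"
            "stopSequences": stop_sequences or [],  # optional "Stop Sequences"
        },
    }
    return json.dumps(body)

# A second message in the same session carries the first exchange as
# history, matching the session behavior described in the note above.
history = ["User: What is RPA?",
           "Bot: RPA stands for Robotic Process Automation."]
request = build_titan_chat_request("Give me one example.", history=history)
print(request)

# Sending the request would use the AWS SDK (assumes configured credentials):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId="amazon.titan-text-express-v1",
#                                body=request)
```

A separate history list per session name is one way to keep multiple chats apart, echoing the tip above about using different session names or variables.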