Copilot Message Block

Generates text using a language model

The Copilot Message block is a special component for use with the Scout Copilot. It generates text using a language model and sends the response back to the client where the user is interacting with the Copilot.

Configuration

Model
select · Required

Select the underlying model to use for generating text. The default model is gpt-4o. Available models include:

  • claude-3-5-sonnet@20240620
  • deepseek-chat
  • gemini-1.0-pro
  • gemini-1.0-pro-001
  • gemini-1.0-pro-002
  • gemini-1.0-pro-vision-001
  • gemini-1.5-flash-001
  • gemini-1.5-flash-002
  • gemini-1.5-pro-001
  • gemini-1.5-pro-002
  • gpt-3.5-turbo
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo-1106
  • gpt-4
  • gpt-4-0125-preview
  • gpt-4-1106-preview
  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
  • gpt-4o
  • gpt-4o-2024-08-06
  • gpt-4o-mini
  • llama-v3-70b-instruct
  • mixtral-8x7b-instruct

Temperature
float · Required

Controls the randomness of the output: lower values produce more focused, deterministic text, while higher values produce more varied text. The default value is 0.7. The range is 0 to 2 with a step of 0.01.

Maximum Tokens
integer · Required

The maximum number of tokens to generate. The default value is 300. The minimum value is 100.

Response Type
select · Required

The format of the response. The default is text. Options include:

  • text: For plain text outputs.
  • json_object: For structured JSON outputs (see the sketch below).
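
When json_object is selected, models that support OpenAI-style JSON mode generally expect the prompt itself to ask for JSON. A minimal Chat Messages sketch for this mode (the schema described in the system message is illustrative, not something the block requires):

    [
      {
        "role": "system",
        "content": "You are a sentiment classifier. Respond only with a JSON object of the form {\"sentiment\": \"positive\" | \"neutral\" | \"negative\"}."
      },
      {
        "role": "user",
        "content": "The onboarding flow was painless and support replied within minutes."
      }
    ]
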
Chat Messages
list · Required

Messages to be sent to the model. This input supports Jinja templating for dynamic message construction.

Each message is defined with a role (such as system, user, or assistant) and content. Below are examples of how to structure the input:

  • Example 1: Basic Chat Interaction
    This is a simple conversation where the user asks a question.

    [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Can you explain the laws of gravity in simple terms?"
      }
    ]
  • Example 2: Using Jinja Templating for Dynamic Inputs
    This example demonstrates how you can use variables to create dynamic prompts. The model will receive the final rendered text after Jinja processes the template.

    [
      {
        "role": "system",
        "content": "You are an AI tutor specialized in {{ topic }}."
      },
      {
        "role": "user",
        "content": "Explain {{ concept }} with examples."
      }
    ]
  • Example 3: Basic Multi-Step Conversation with a List of Messages
    This example shows multiple back-and-forth messages, which can be passed into List of Messages to carry the context of a conversation forward. Notice how the conversation context builds with each message, which can make subsequent responses from the LLM more relevant and refined:

    [
      {
        "role": "user",
        "content": "I'm having trouble logging into my account."
      },
      {
        "role": "assistant",
        "content": "Can you confirm if you're seeing an error message?"
      },
      {
        "role": "user",
        "content": "Yes, it says 'Incorrect password'."
      },
      {
        "role": "assistant",
        "content": "Please try resetting your password using the 'Forgot Password' link on the login page."
      }
    ]

See Workflow Logic & State > State Management for details on using dynamic variables in this block.
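
Because this input supports Jinja, message content can also be assembled from workflow state. A sketch, assuming hypothetical state variables named facts (a list of strings) and question; substitute whatever variables your workflow actually defines:

    [
      {
        "role": "system",
        "content": "{# 'facts' and 'question' below are hypothetical state variables #}You are a support agent. Known account facts:\n{% for fact in facts %}- {{ fact }}\n{% endfor %}"
      },
      {
        "role": "user",
        "content": "{{ question }}"
      }
    ]

After rendering, the model receives plain text with one line per fact; the Jinja comment disappears entirely.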

Outputs

The block outputs the generated text, or a JSON object when the json_object response format is selected. It also includes metadata such as input and output token counts, the model used, and cost information.
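
The exact shape of this output depends on your workspace, but as a purely illustrative sketch (every field name below is hypothetical, as the leading comment notes), the result might look roughly like:

    // Hypothetical shape for illustration only; check the block's actual output in your workflow.
    {
      "output": "Gravity is the force that pulls objects toward one another...",
      "metadata": {
        "model": "gpt-4o",
        "input_tokens": 42,
        "output_tokens": 128,
        "cost": 0.0021
      }
    }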

Usage Context

Use this block to integrate AI-generated text into your workflow. It is suitable for tasks requiring dynamic text generation based on specified prompts and models.

Best Practices

  • Select the appropriate model based on your text generation needs.
  • Adjust the temperature to manage the creativity of the generated text.
  • Ensure that your chat messages are well-structured to achieve the desired outputs.
  • Consider the cost implications of different models and token usage.