Copilot Message Block

Generates text using a language model

The Copilot Message block is a component of the Scout Copilot, designed to generate a text response using a language model.

Configuration

Model
select (required)

Select the underlying model to use for generating text. The default model is gpt-4o. Available models include:

  • claude-3-5-sonnet@20240620
  • gpt-3.5-turbo
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo-1106
  • gpt-4
  • gpt-4-0125-preview
  • gpt-4-1106-preview
  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
  • gpt-4o
  • gpt-4o-2024-08-06
  • gpt-4o-mini
  • llama-v2-13b-chat
  • llama-v2-70b-chat
  • llama-v2-7b-chat
  • llama-v3-70b-instruct
  • llama-v3p1-405b-instruct
  • llama-v3p1-70b-instruct
  • mixtral-8x7b-instruct

Temperature
float (required)

Controls the randomness of the output: lower values produce more deterministic text, higher values more varied text. The default value is 0.7. The range is from 0 to 2 with a step of 0.01.

Maximum Tokens
integer (required)

The maximum number of tokens to generate. The default value is 300. The minimum value is 100.

Response Type
select (required)

The format of the response. The default is text. Options include:

  • text: For plain text outputs.
  • json_object: For structured JSON outputs (see the sketch after this list).
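
For illustration, a block configured for structured output might look like the sketch below. The field names mirror this page's configuration, but the dictionary shape is an assumption, not Scout's actual internal representation; note also that many providers expect the prompt itself to explicitly ask for JSON when json_object is selected.

```python
import json

# Illustrative sketch only: field names mirror this page's configuration;
# Scout's actual internal representation may differ.
block_config = {
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 300,
    "response_type": "json_object",
    "chat_messages": [
        {"role": "system",
         "content": "Reply with a JSON object with keys 'answer' and 'confidence'."},
        {"role": "user", "content": "Is the store open on Sundays?"},
    ],
}

# With json_object selected, the generated text is a JSON string,
# so downstream steps can parse it directly:
raw_output = '{"answer": "Yes, 10am to 4pm.", "confidence": 0.9}'  # example output
parsed = json.loads(raw_output)
print(parsed["answer"])
```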

Chat Messages
list

Messages to be sent to the model. This input supports Jinja templating for dynamic message construction.
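
As a minimal sketch of how Jinja templating can drive dynamic message content, the snippet below renders a template with the standalone jinja2 library. The template text and the user_name/order_status variables are hypothetical; inside Scout the rendering happens within the block itself, but the same Jinja syntax applies.

```python
from jinja2 import Template

# Hypothetical message template; Scout renders Jinja inside the block,
# but the syntax is the same as with the jinja2 library used here.
message_template = Template(
    "Hello {{ user_name }}! Your order is currently {{ order_status }}."
    "{% if order_status == 'delayed' %} We apologize for the wait.{% endif %}"
)

# Workflow state (hypothetical) supplies the template variables.
state = {"user_name": "Ada", "order_status": "delayed"}

print(message_template.render(**state))
# Hello Ada! Your order is currently delayed. We apologize for the wait.
```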

Outputs

The block outputs the generated text to the Scout Copilot.

Usage Context

Use this block to return AI-generated text responses in the Scout Copilot. The block supports a variety of models, allowing flexibility in choosing the right one for your task. Consider each model's capabilities and constraints when selecting one for your use case.

Best Practices

  • Choose the appropriate model for your use case: Ensure the model selected aligns with the task requirements and desired output.
  • Adjust temperature to control output randomness: Modify the temperature setting to balance creativity and coherence in the generated text.
  • Ensure prompts are clear and well-structured: Craft prompts that effectively guide the model to produce relevant and accurate outputs.
  • Use Jinja templating to dynamically construct message content: Adapt message content to the workflow state, as shown in the example under Chat Messages above.
  • Consider the token limits and adjust Maximum Tokens accordingly: Set the maximum tokens to accommodate the expected length of the output while managing resource usage; see the sizing sketch after this list.
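
As a rough way to size Maximum Tokens, you can estimate the expected output length with a tokenizer. The sketch below uses the tiktoken library as a planning aid run outside of Scout (an assumption; Scout does not expose a token counter), applied to a sample of the output you expect.

```python
import tiktoken

# Planning aid only: tiktoken approximates OpenAI-style tokenization;
# run this outside Scout to pick a sensible Maximum Tokens value.
enc = tiktoken.encoding_for_model("gpt-4o")

sample_output = (
    "Thanks for reaching out! Your order shipped on Monday and should "
    "arrive within 3 to 5 business days. Let us know if you need anything else."
)

n_tokens = len(enc.encode(sample_output))
print(f"Sample output is {n_tokens} tokens")

# Leave headroom above the sample so responses are not truncated,
# while respecting the block's minimum of 100.
suggested_max_tokens = max(100, int(n_tokens * 1.5))
print(f"Suggested Maximum Tokens: {suggested_max_tokens}")
```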