The LLM Block exposes the capabilities of large language models within Scout Workflows. This block allows you to configure and use different foundation models to generate text, so you can pick the model and settings best suited to each task.

Configuration (Required)

Model
select (required)

The Model configuration allows you to select the underlying model used to generate a response. Current options include:

  • claude-3-5-sonnet@20240620
  • gpt-3.5-turbo
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo-1106
  • gpt-4
  • gpt-4-0125-preview
  • gpt-4-1106-preview
  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
  • gpt-4o
  • gpt-4o-2024-08-06
  • gpt-4o-mini
  • o1-preview
  • o1-mini
  • llama-v2-13b-chat
  • llama-v2-70b-chat
  • llama-v2-7b-chat
  • llama-v3-70b-instruct
  • llama-v3p1-405b-instruct
  • llama-v3p1-70b-instruct
  • mixtral-8x7b-instruct

Temperature
float (required)

Temperature controls the randomness of the output. Lower values result in more predictable responses, while higher values can generate more creative and diverse outputs. Adjust this based on your need for creativity or precision.
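Conceptually, temperature divides the model's logits before the softmax that produces the next-token distribution: low values sharpen the distribution toward the most likely token, high values flatten it. A minimal sketch of that effect (illustrative only, not Scout's implementation):

```python
import math

def token_distribution(logits, temperature):
    """Scale logits by temperature, then apply softmax to get probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = token_distribution(logits, 0.2)   # sharply peaked: near-deterministic
high = token_distribution(logits, 2.0)  # flatter: more diverse sampling
```

With temperature 0.2 the top token captures nearly all of the probability mass; at 2.0 the mass spreads across all candidates, which is why higher temperatures produce more varied output.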

Maximum Tokens
int (required)

This setting specifies the maximum number of tokens to be generated in the response. It helps in controlling the length and detail of the output.
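When choosing a value, a common rough heuristic is that one token corresponds to about four characters of English text. This is only an estimate (each model uses its own tokenizer), but it is handy for sizing a budget:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Real models tokenize differently, so treat this as an estimate only.
    return max(1, len(text) // 4)

prompt = 'Tell a 4 sentence story about a lighthouse keeper.'
budget_needed = estimate_tokens(prompt)
```

If the model hits the maximum before finishing, the response is cut off mid-thought, so leave headroom above your estimated output length.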

Response Type
select (required)

The Response Type determines the format of the output. Possible options include:

  • text: For plain text outputs.
  • json_object: For structured JSON outputs, useful for applications needing structured data.
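The practical difference shows up in downstream blocks: a json_object response can be parsed into structured fields, while a text response is consumed as-is. A hypothetical example of handling each (the field names here are made up for illustration):

```python
import json

# response_type: text — consumed directly as a string
text_output = "Once upon a time, a lighthouse keeper found a map."

# response_type: json_object — parsed for structured access
json_output = '{"title": "The Map", "sentence_count": 4}'
data = json.loads(json_output)
title = data["title"]            # individual fields are addressable
count = data["sentence_count"]
```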

Chat Messages (Required)

Single Message

Role
select (required)

The Role in a message defines the context or perspective from which the message is sent. Possible roles include:

  • system: Provides instructions or context for the model.
  • user: Represents the user’s input or queries.
  • assistant: The model’s responses or actions.

When using single messages, create a separate message for each role required in your use case.

Content
string (required)

The content of the message. Example:

Tell a 4 sentence story about "{{inputs.user_message}}" given the following context: {{collection_YOURID.output}}

List of Messages

Messages
array (required)

A list of messages that will be added to the message list. Example:

[
  {"role": "system", "content": "You are a helpful agent..."},
  {"role": "user", "content": "{{inputs.user_message}}"}
]

When using {{dynamic variables}} in a List of Messages input, ensure that the end result is still valid JSON. Substituted values containing quotes, newlines, or backslashes can break the surrounding JSON if they are not escaped.
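One way to see why unescaped substitution breaks, and how escaping fixes it, is a generic sketch (not a Scout feature) that escapes the interpolated value and then validates the result:

```python
import json

# Template with a placeholder where a dynamic variable would be substituted.
template = '[{"role": "system", "content": "You are a helpful agent..."}, {"role": "user", "content": VALUE}]'

user_message = 'He said "hello" and left.'  # embedded quotes would break naive substitution

# json.dumps escapes quotes, newlines, and backslashes, and adds the
# surrounding quotation marks, so the substituted result stays valid JSON.
substituted = template.replace("VALUE", json.dumps(user_message))

messages = json.loads(substituted)  # raises an error if the JSON is malformed
```

Validating with a JSON parser after substitution, as above, catches malformed input before it reaches the model.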

Best Practices

  • Model Selection: Choose the model that best fits your application’s needs in terms of accuracy, style, and response time.
  • Temperature Tuning: Adjust the temperature setting to balance creativity and predictability as required by your task.
  • Token Management: Use the Maximum Tokens setting to control the verbosity and detail of the generated responses.