LLM Block

Generate text using language models

The LLM block integrates large language models into Scout workflows, enabling flexible and creative text generation through configurable foundation models.

Configuration

Model
select · Required

Select the underlying model to use for generating text. The default model is GPT-4o. Available models include:

  • Claude 3.7 Sonnet
  • Claude 3.5 Sonnet
  • Claude 3.5 Haiku
  • Claude 3.5 Sonnet (2024-06-20)
  • DeepSeek Chat
  • GPT-3.5 Turbo
  • GPT-3.5 Turbo (Jan 2025)
  • GPT-3.5 Turbo (Nov 2023)
  • GPT-4
  • GPT-4 (Jan 2025 Preview)
  • GPT-4 (Nov 2023 Preview)
  • GPT-4 Turbo
  • GPT-4 Turbo (Apr 2024)
  • GPT-4o
  • GPT-4o (Aug 2024)
  • GPT-4o Mini
  • GPT-4.1
  • GPT-4.1 Nano
  • GPT-4.1 Mini
  • Llama 3 70B Instruct
  • Llama 3.1 70B Instruct
  • Llama 3.1 405B Instruct
  • Llama 3.3 70B Instruct
  • Mixtral 8x7B Instruct
  • Gemini 1 Pro
  • Gemini 1 Pro (001)
  • Gemini 1 Pro (002)
  • Gemini 1 Pro Vision (001)
  • Gemini 1.5 Pro (001)
  • Gemini 1.5 Pro (002)
  • Gemini 1.5 Flash (001)
  • Gemini 1.5 Flash (002)
  • Gemini 2.0 Flash
  • Gemini 2.0 Flash Lite
Temperature
float · Required

Controls the randomness of the output. The default value is 0.7. The range is from 0 to 2 with a step of 0.01.

Maximum Tokens
integer · Required

The maximum number of tokens to generate. The default value is 300. The minimum value is 100.

Response Type
select · Required

The format of the response. The default is text. Options include:

  • text: For plain text outputs.
  • json_object: For structured JSON outputs.
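When json_object is selected, the model generally still needs explicit instructions about the structure to return. A minimal sketch of a Messages payload for structured output (the schema and field names here are illustrative, not part of Scout):

```json
[
  {
    "role": "system",
    "content": "Respond only with a JSON object of the form {\"sentiment\": \"positive | neutral | negative\", \"confidence\": 0.0}."
  },
  {
    "role": "user",
    "content": "I love this product!"
  }
]
```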
Messages
list · Required

Messages to be sent to the model. This input supports Jinja templating for dynamic message construction.

Each message is defined with a role; the purpose of each role is described below:

  • System - This role supplies the overarching instructions and policies that guide how the assistant should respond, setting the tone, style, and boundaries of the conversation.
  • User - This is the role that represents the end-user’s inputs, questions, or prompts, serving as the catalyst for the conversation.
  • Assistant - This role is responsible for generating responses based on both the system’s guidelines and the user’s input, aiming to be helpful, context-aware, and accurate.
  • Messages - These are the individual dialogue entries exchanged between the system, user, and assistant, collectively forming the conversation’s context and history. Prune long chat histories carefully, as they can increase token counts quickly.

  • System & User - Using Jinja Templating for Dynamic Inputs
    This example demonstrates how you can use variables from other blocks to create dynamic prompts. The model will receive the final rendered text after Jinja processes the template.

    [
      {
        "role": "system",
        "content": "You are an AI tutor. Use the following context to help answer the user question: {{ query.output }}"
      },
      {
        "role": "user",
        "content": "Explain {{ inputs.user_message }} with examples."
      }
    ]
  • Using Messages to share chat history with the LLM
    This example passes multiple back-and-forth messages into Messages to build on the context of a conversation, making subsequent responses from the LLM more relevant and refined:

    List of Messages Example
    [
      {
        "role": "user",
        "content": "I'm having trouble logging into my account."
      },
      {
        "role": "assistant",
        "content": "Can you confirm if you're seeing an error message?"
      },
      {
        "role": "user",
        "content": "Yes, it says 'Incorrect password'."
      },
      {
        "role": "assistant",
        "content": "Please try resetting your password using the 'Forgot Password' link on the login page."
      }
    ]
See Workflow Logic & State > State Management for details on using dynamic variables in this block.

Outputs

The block outputs the generated text or a JSON object based on the selected response format. It includes metadata such as input and output tokens, model used, and cost information.
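When wiring this block into downstream steps, it can help to picture the output shape. A rough sketch, assuming illustrative field names (consult the block's actual output panel for the exact keys):

```json
{
  "output": "Photosynthesis converts light energy into chemical energy...",
  "metadata": {
    "model": "gpt-4o",
    "input_tokens": 152,
    "output_tokens": 87,
    "cost_usd": 0.0031
  }
}
```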

Usage Context

Use this block to integrate AI-generated text into your workflow. It is suitable for tasks requiring dynamic text generation based on specified prompts and models.

Best Practices

  • Select the appropriate model based on your text generation needs.
  • Adjust the temperature to manage the creativity of the generated text.
  • Structure messages carefully to achieve the desired outputs.
  • Consider the cost implications of different models and token usage.