LLM Reasoning Block

Generate text using a reasoning model

The LLM Reasoning block integrates reasoning models into Scout workflows, enabling dynamic text generation with enhanced reasoning capabilities.

Configuration

model
select (required)

Select the reasoning model to use for generating text. The default model is o1. Available models include:

  • deepseek-r1
  • o1
  • o1-mini
  • o1-preview
response_format
select (required)

The format of the response. The default is text. Options include:

  • text: For plain text outputs.
  • json_object: For structured JSON outputs.
reasoning_effort
select (required)

Controls how much internal reasoning the model performs before responding. Reducing reasoning effort can result in faster responses and fewer reasoning tokens used, but may reduce answer quality on harder tasks. The default value is low. Options include:

  • low
  • medium
  • high
max_completion_tokens
integer (required)

An upper bound on the number of tokens generated, including both the visible output tokens and the internal reasoning tokens. The default value is 5000, with a minimum of 1000.
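Taken together, the configuration fields above might be expressed as follows. This is an illustrative sketch using the field names documented on this page; the exact serialization of a Scout block configuration may differ:

```json
{
  "model": "o1",
  "response_format": "text",
  "reasoning_effort": "low",
  "max_completion_tokens": 5000
}
```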

Chat Messages
list (required)

Messages to be sent to the model. This input supports Jinja templating for dynamic message construction.

Each message is defined with a role (such as system, user, or assistant) and content. Below are examples of how to structure the input:

  • Example 1: Basic Chat Interaction
    This is a simple conversation where the user asks a question.

    [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Can you explain the laws of gravity in simple terms?"
      }
    ]
  • Example 2: Using Jinja Templating for Dynamic Inputs
    This example demonstrates how you can use variables to create dynamic prompts. The model will receive the final rendered text after Jinja processes the template.

    [
      {
        "role": "system",
        "content": "You are an AI tutor specialized in {{ topic }}."
      },
      {
        "role": "user",
        "content": "Explain {{ concept }} with examples."
      }
    ]
  • Example 3: Multi-Step Conversation Using a List of Messages
    This example passes several back-and-forth messages into List of Messages so the model sees the full conversation. Each message builds on the previous context, which can make subsequent responses from the LLM more relevant and refined:

    [
      {
        "role": "user",
        "content": "I'm having trouble logging into my account."
      },
      {
        "role": "assistant",
        "content": "Can you confirm if you're seeing an error message?"
      },
      {
        "role": "user",
        "content": "Yes, it says 'Incorrect password'."
      },
      {
        "role": "assistant",
        "content": "Please try resetting your password using the 'Forgot Password' link on the login page."
      }
    ]
See Workflow Logic & State > State Management for details on using dynamic variables in this block.
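The template rendering shown in Example 2 can be sketched in a few lines. This is a minimal stand-in for the Jinja engine, not Scout's actual implementation: it substitutes simple `{{ name }}` placeholders so you can see the text the model ultimately receives. Variable values (`topic`, `concept`) are illustrative:

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace each {{ name }} placeholder with its value.
    # A minimal stand-in for Jinja's Template(...).render(**variables).
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

messages = [
    {"role": "system", "content": "You are an AI tutor specialized in {{ topic }}."},
    {"role": "user", "content": "Explain {{ concept }} with examples."},
]

state = {"topic": "physics", "concept": "inertia"}
rendered = [
    {**m, "content": render(m["content"], state)}
    for m in messages
]
# The model receives the rendered messages, e.g.
# "You are an AI tutor specialized in physics."
```

The real Jinja engine also supports conditionals, loops, and filters; this sketch covers only plain variable substitution.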

Outputs

The block outputs the generated text or a JSON object, depending on the selected response format, along with detailed token usage information.
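For reference, OpenAI-style reasoning models report usage in roughly the following shape, where reasoning tokens are counted separately from visible output tokens. The values below are illustrative, and the exact fields exposed by the block may differ:

```json
{
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 830,
    "total_tokens": 872,
    "completion_tokens_details": {
      "reasoning_tokens": 640
    }
  }
}
```

Note that both visible and reasoning tokens count toward max_completion_tokens.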

Usage Context

Use this block to integrate AI-generated text with reasoning capabilities into your workflow. It is particularly useful for tasks that require complex reasoning and decision-making processes, enhancing the workflow’s ability to handle nuanced and context-sensitive scenarios.

Best Practices

  • Select the appropriate model for your use case to balance accuracy and performance. Different models may offer varying levels of reasoning capabilities and efficiency.
  • Adjust reasoning effort to optimize for speed and token usage. Lower reasoning effort can lead to faster responses but may reduce the depth of reasoning.
  • Ensure that chat messages are well-structured and clear to achieve the desired output. Clear and concise prompts help the model generate more accurate and relevant responses.