LLM Block
Generate text using language models
The LLM block integrates large language models into Scout workflows, enabling flexible and creative text generation through configurable foundation models.
Configuration (Required)
Model
Select the underlying model to use for generating text. The default model is gpt-4o. Available models include:
- claude-3-5-sonnet@20240620
- gpt-3.5-turbo
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-1106
- gpt-4
- gpt-4-0125-preview
- gpt-4-1106-preview
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4o
- gpt-4o-2024-08-06
- gpt-4o-mini
- llama-v2-13b-chat
- llama-v2-70b-chat
- llama-v2-7b-chat
- llama-v3-70b-instruct
- llama-v3p1-405b-instruct
- llama-v3p1-70b-instruct
- mixtral-8x7b-instruct
Temperature
Controls the randomness of the output. The default value is 0.7. The range is from 0 to 2 with a step of 0.01.
Max Tokens
The maximum number of tokens to generate. The default value is 300. The minimum value is 100.
Response Format
The format of the response. The default is text. Options include:
- text: For plain text outputs.
- json_object: For structured JSON outputs.
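As a rough illustration only (the exact keys depend entirely on your prompt), selecting json_object and asking the model to "Extract the customer name and email as JSON" might yield output like:

```json
{
  "name": "Ada Lovelace",
  "email": "ada@example.com"
}
```

Note that most providers expect the prompt itself to explicitly ask for JSON when this mode is used, so state the desired structure in your messages.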
Messages
Messages to be sent to the model. This input supports Jinja templating for dynamic message construction.
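As a minimal sketch of Jinja templating in a message, assuming the workflow exposes variables named customer_name, ticket_summary, and priority (illustrative names, not part of the block's schema):

```jinja
You are a support assistant. Write a friendly reply to {{ customer_name }}.

Ticket summary:
{{ ticket_summary }}

{% if priority == "high" %}Apologize for the delay and offer expedited help.{% endif %}
```

At runtime, the template is rendered with the current workflow values before the message is sent to the model.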
Outputs
The block outputs generated text or a JSON object based on the selected response format.
Usage Context
Use this block to integrate AI-generated text into your workflow. It is suitable for applications requiring dynamic text generation, such as chatbots, content creation, and automated responses.
Best Practices
- Choose the appropriate model for your use case: Ensure the model selected aligns with the task requirements to achieve optimal results.
- Adjust temperature to control output randomness: Modify the temperature setting to balance creativity and coherence in the generated text.
- Ensure prompts are clear and well-structured: Craft prompts that effectively guide the model to produce relevant and accurate outputs.
- Use Jinja templating to dynamically construct message content: Leverage Jinja templating to customize messages based on workflow state and inputs.
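Putting these practices together, a minimal configuration sketch might look like the following. The field names and the way Scout serializes block settings are assumptions for illustration, not the exact schema:

```json
{
  "model": "gpt-4o",
  "temperature": 0.3,
  "max_tokens": 300,
  "response_format": "json_object",
  "messages": [
    {
      "role": "system",
      "content": "You are a classifier. Return a JSON object with a single key \"category\"."
    },
    {
      "role": "user",
      "content": "Classify this request: {{ user_request }}"
    }
  ]
}
```

Here a low temperature keeps the classification consistent, the system message instructs the model to return JSON to match the json_object response format, and the user message is built dynamically with a Jinja variable.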