Core Concepts

Understand how Scout's building blocks work together

Scout provides three core primitives for building AI-powered automations: workflows, agents, and databases. Understanding how they differ, and how they connect, is key to building effectively.

The Three Primitives

Workflows vs Agents

This is the most important distinction in Scout:

| Aspect | Workflows | Agents |
| --- | --- | --- |
| Purpose | Orchestrate data flow | LLM reasoning & tool use |
| Execution | Deterministic, block-by-block | Loop: perceive → reason → act |
| State | Per-run execution state | Session-based conversation history |
| Building blocks | Blocks (LLM, HTTP, conditionals, etc.) | Tools (search, query, API calls) |
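
The Execution row above is the crux of the distinction. As a rough illustration of the agent loop (perceive → reason → act), here is a minimal Python sketch with stand-in functions; a real agent would call an LLM to choose actions and real tools to execute them, and none of these names come from Scout's SDK.

```python
# Minimal model of an agent loop (perceive -> reason -> act).
# All functions are stand-ins; a real agent would call an LLM and real tools.

def perceive(history):
    return history[-1]["content"]            # look at the latest observation

def reason(observation):
    return "search" if "?" in observation else "stop"   # LLM-style decision, faked

def act(action):
    return {"role": "tool", "content": f"ran {action}"}  # tool result

history = [{"role": "user", "content": "Who is on call this week?"}]
while True:
    action = reason(perceive(history))        # the model decides the next step
    if action == "stop":
        break
    history.append(act(action))               # the result feeds the next iteration

print(history)
```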

When to use workflows:

  • Processing data through multiple steps
  • Calling APIs and transforming responses
  • Conditional logic and branching
  • Scheduled or triggered automations

When to use agents:

  • Conversational interfaces (chat)
  • Tasks requiring reasoning and judgment
  • Multi-step tool use where next step depends on previous result
  • When you want the LLM to decide what to do

How They Connect

Workflows and agents can invoke each other (two of these patterns are sketched after this list):

  • Slack triggers can execute workflows OR agents directly
  • Copilot typically triggers a workflow that routes to an agent
  • Workflows can call agents via the Message Agent block
  • Agents can call workflows as tools
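
The snippet below models a workflow step calling an agent and an agent calling a workflow as a tool. It is illustrative only; the class and function names are hypothetical and not part of Scout's SDK.

```python
# Hypothetical model of two connection patterns; not the Scout SDK.
from typing import Callable, Dict

def summarize_workflow(inputs: Dict) -> Dict:
    """Stand-in for a workflow run: deterministic, block-by-block."""
    return {"summary": inputs["text"][:80]}

class Agent:
    """Stand-in for an agent that exposes workflows as tools."""
    def __init__(self, tools: Dict[str, Callable]):
        self.tools = tools

    def run(self, message: str) -> str:
        # A real agent lets the LLM pick the tool; here the choice is hard-coded.
        return self.tools["summarize"]({"text": message})["summary"]

# Pattern: an agent calling a workflow as a tool
agent = Agent(tools={"summarize": summarize_workflow})
print(agent.run("A long meeting transcript..."))

# Pattern: a workflow step calling an agent (the role of a Message Agent block)
def workflow_with_agent_step(inputs: Dict) -> Dict:
    return {"agent_reply": agent.run(inputs["text"])}

print(workflow_with_agent_step({"text": "Another transcript..."}))
```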

Workflow Execution Model

Workflows execute as a directed acyclic graph (DAG):

  1. Blocks with no dependencies run first
  2. Independent blocks run in parallel (not sequentially)
  3. A block only runs when all its predecessors complete
  4. Conditional blocks can skip downstream branches

Take an example workflow where an Input block feeds both an LLM Block and an HTTP Block, which in turn feed a Conditional block with two branches, E and F (sketched after this list). In this example:

  • LLM Block and HTTP Block run in parallel (both depend only on Input)
  • Conditional waits for both to complete before running
  • Only one branch (E or F) executes based on the condition
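
The snippet below is an illustrative model of those rules for this example graph, using asyncio to show the parallelism; it is not Scout's execution engine, and the block names, delays, and condition are made up.

```python
# Illustrative model of DAG execution for: Input -> {LLM, HTTP} -> Conditional -> E | F
# Not Scout's engine; block names and timings are invented for the example.
import asyncio

async def run_block(name: str, delay: float) -> str:
    await asyncio.sleep(delay)                   # stand-in for real work
    print(f"{name} finished")
    return f"{name} output"

async def main() -> None:
    await run_block("Input", 0.1)                # no dependencies, runs first
    llm_out, _http_out = await asyncio.gather(   # independent blocks run in parallel
        run_block("LLM Block", 0.3),
        run_block("HTTP Block", 0.2),
    )
    await run_block("Conditional", 0.1)          # runs only after both predecessors finish
    branch = "E" if "LLM" in llm_out else "F"    # only one branch executes
    await run_block(branch, 0.1)

asyncio.run(main())
```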

State & Data Flow

Workflow State

Each block stores its output in a shared state object. Access data using Jinja templates:

```jinja
{# Access input fields by their ID #}
{{ inputs.transcript }}

{# Access block outputs using block_id.output #}
{{ llm.output }}
{{ http_block.output.data | tojson }}
```
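
To see how those templates resolve, you can render them locally with the jinja2 library against a hand-built stand-in for the shared state object (the field values below are made up):

```python
# Rendering the templates above against a stand-in state object.
from jinja2 import Template

state = {
    "inputs": {"transcript": "Call notes..."},
    "llm": {"output": "A two-line summary."},
    "http_block": {"output": {"data": {"status": "ok"}}},
}

print(Template("{{ inputs.transcript }}").render(**state))                 # Call notes...
print(Template("{{ llm.output }}").render(**state))                        # A two-line summary.
print(Template("{{ http_block.output.data | tojson }}").render(**state))   # {"status": "ok"}
```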

Agent Sessions

Agents maintain conversation history per session (sketched after this list):

  • Each user-agent conversation has a unique session
  • Messages accumulate across turns
  • Context is preserved until session ends
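
As a minimal sketch of that behavior, assuming a simple role/content message shape rather than Scout's actual data model:

```python
# Per-session message history; the message shape here is assumed, not Scout's schema.
from collections import defaultdict

sessions: dict[str, list[dict]] = defaultdict(list)

def send(session_id: str, role: str, content: str) -> list[dict]:
    sessions[session_id].append({"role": role, "content": content})
    return sessions[session_id]          # the full history is available on every turn

send("session-1", "user", "What's our refund policy?")
send("session-1", "assistant", "Refunds are available within 30 days.")
send("session-1", "user", "Does that cover digital goods?")   # earlier turns stay in context

print(len(sessions["session-1"]))        # 3 messages accumulated in this session
```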

Databases & RAG

Databases store documents with vector embeddings for semantic search (the retrieval step is sketched after these steps):

  1. Ingest: Upload documents (text, PDFs, web scrapes)
  2. Embed: Automatic vector embedding generation
  3. Query: Search by meaning, not just keywords
  4. Use: Pass retrieved context to LLM blocks or agents
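
The sketch below walks through steps 2-4 with a toy embed() function standing in for a real embedding model; only the cosine-similarity retrieval and the prompt assembly are meant to carry over.

```python
# Toy retrieval pipeline: embed documents, embed the query, rank by cosine similarity,
# then pass the best match to an LLM prompt. embed() is a fake stand-in model.
import math

def embed(text: str) -> list[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = [
    "Refunds are issued within 30 days of purchase.",
    "Standard shipping takes 5-7 business days.",
]
index = [(doc, embed(doc)) for doc in docs]                 # Ingest + Embed

query_vec = embed("how long do refunds take")               # Query by meaning
best_doc, _ = max(index, key=lambda item: cosine(query_vec, item[1]))

prompt = f"Answer using this context:\n{best_doc}"          # Use: context for an LLM block or agent
print(prompt)
```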

Next Steps