Core Concepts
Scout provides three core primitives for building AI-powered automations. Understanding how they differ—and how they connect—is key to building effectively.
The Three Primitives
- Workflows: orchestration pipelines that process data through blocks. Deterministic, visual, composable.
- Agents: LLM-powered reasoning engines that can use tools. They “think” and make decisions.
- Databases: vector-enabled data stores for RAG. Store documents and query with semantic search.
Workflows vs Agents
This is the most important distinction in Scout:
When to use workflows:
- Processing data through multiple steps
- Calling APIs and transforming responses
- Conditional logic and branching
- Scheduled or triggered automations
When to use agents:
- Conversational interfaces (chat)
- Tasks requiring reasoning and judgment
- Multi-step tool use where next step depends on previous result
- When you want the LLM to decide what to do
How They Connect
Workflows and agents can invoke each other:
- Slack triggers can execute workflows OR agents directly
- Copilot typically triggers a workflow that routes to an agent
- Workflows can call agents via the Message Agent block
- Agents can call workflows as tools
Workflow Execution Model
Workflows execute as a directed acyclic graph (DAG):
- Blocks with no dependencies run first
- Independent blocks run in parallel (not sequentially)
- A block only runs when all its predecessors complete
- Conditional blocks can skip downstream branches
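The example the points below walk through can be sketched as a graph (the branch names E and F come from the description; the layout is a plausible reconstruction):

```
        ┌─► LLM Block ──┐
Input ──┤               ├─► Conditional ─┬─► E
        └─► HTTP Block ─┘                └─► F
```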
In this example:
- LLM Block and HTTP Block run in parallel (both depend only on Input)
- Conditional waits for both to complete before running
- Only one branch (E or F) executes based on the condition
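These execution rules can be sketched in plain Python. This is an illustrative scheduler, not Scout's implementation; the graph and block names are invented for the example, and conditional branch-skipping is omitted for brevity:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workflow DAG: block name -> set of predecessor blocks.
deps = {
    "input": set(),
    "llm": {"input"},
    "http": {"input"},
    "conditional": {"llm", "http"},
}

def run_block(name, state):
    # Stand-in for real block logic; writes output into shared state.
    state[name] = f"output of {name}"

def execute(deps):
    state, done = {}, set()
    with ThreadPoolExecutor() as pool:
        while len(done) < len(deps):
            # A block is ready once all of its predecessors have completed.
            ready = [b for b in deps if b not in done and deps[b] <= done]
            # Independent ready blocks run in parallel, not sequentially.
            for future in [pool.submit(run_block, b, state) for b in ready]:
                future.result()
            done.update(ready)
    return state

state = execute(deps)
```

Here `input` runs first, `llm` and `http` run in the same parallel round, and `conditional` only runs after both complete.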
State & Data Flow
Workflow State
Each block stores its output in a shared state object. Access data using Jinja templates:
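For example, a downstream block might reference an upstream block's output like this (the block name `http_block` and the output shape are hypothetical; check your workflow's actual state keys):

```jinja
{{ http_block.output }}
```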
Agent Sessions
Agents maintain conversation history per session:
- Each user-agent conversation has a unique session
- Messages accumulate across turns
- Context is preserved until session ends
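A toy model of per-session history (purely illustrative; Scout manages session state for you):

```python
from collections import defaultdict

# Each session id maps to its own accumulating message list.
sessions = defaultdict(list)

def send(session_id, role, text):
    # Messages accumulate across turns within a single session.
    sessions[session_id].append({"role": role, "content": text})
    return sessions[session_id]

send("s1", "user", "Hi")
send("s1", "assistant", "Hello!")
send("s2", "user", "Different conversation")  # separate session, separate history
```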
Databases & RAG
Databases store documents with vector embeddings for semantic search:
- Ingest: Upload documents (text, PDFs, web scrapes)
- Embed: Automatic vector embedding generation
- Query: Search by meaning, not just keywords
- Use: Pass retrieved context to LLM blocks or agents
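The ingest → embed → query → use pipeline can be sketched with toy bag-of-words "embeddings" and cosine similarity (real systems, Scout included, use learned embedding models; the documents and query here are invented):

```python
import math
from collections import Counter

docs = [
    "Refund requests must be filed within 30 days.",
    "Our office is closed on public holidays.",
]

def embed(text):
    # Toy embedding: a word-count vector. Real systems use learned models.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Ingest + Embed: index each document alongside its vector.
index = [(doc, embed(doc)) for doc in docs]

def query(text, k=1):
    # Query: rank documents by similarity to the question.
    qv = embed(text)
    return sorted(index, key=lambda d: cosine(qv, d[1]), reverse=True)[:k]

# Use: pass the retrieved documents as context to an LLM block or agent.
top = query("when can I file a refund")
```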