ALPHA RELEASE (v0.1) — Aixgo is in active development. Not all features are complete. Production release planned for late 2025.

Multi-Agent Orchestration

Learn how to coordinate multiple agents with supervisor patterns and message routing.

Multi-agent systems unlock powerful capabilities: dividing complex tasks across specialized agents, processing work in parallel, and composing sophisticated workflows. Aixgo’s supervisor pattern makes orchestrating these systems straightforward.

The Supervisor Pattern

The supervisor is the orchestration layer that coordinates agent lifecycle, routes messages, and enforces execution constraints. Think of it as the conductor of an orchestra—each agent plays its part, and the supervisor ensures they work in harmony.

Supervisor Configuration

supervisor:
  name: coordinator # Supervisor identifier
  model: gpt-4-turbo # LLM for supervisor reasoning (optional)
  max_rounds: 10 # Maximum workflow iterations
  timeout: 300s # Global timeout (optional)

What the Supervisor Does

  1. Lifecycle Management

    • Starts agents in dependency order
    • Monitors agent health
    • Handles graceful shutdown
  2. Message Routing

    • Routes messages based on configured inputs/outputs
    • Maintains message ordering guarantees
    • Handles backpressure
  3. Execution Control

    • Enforces max_rounds limits
    • Applies timeout constraints
    • Manages error propagation
  4. Observability

    • Provides distributed tracing hooks
    • Emits performance metrics
    • Enables debugging workflows
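
The execution-control and error-handling responsibilities above map directly onto supervisor configuration keys, each covered later in this guide. A minimal sketch combining them:

supervisor:
  name: coordinator
  max_rounds: 10 # Execution control: cap total workflow iterations
  timeout: 300s # Execution control: global deadline for the workflow
  failure_mode: continue # Error propagation: keep running when an agent fails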

Message Routing Patterns

Linear Pipeline

The simplest pattern: data flows through agents sequentially.

supervisor:
  name: pipeline-coordinator
  max_rounds: 5

agents:
  - name: ingest
    role: producer
    interval: 1s
    outputs:
      - target: process

  - name: process
    role: react
    model: gpt-4-turbo
    prompt: 'Transform the data'
    inputs:
      - source: ingest
    outputs:
      - target: store

  - name: store
    role: logger
    inputs:
      - source: process

Flow: ingest → process → store

Use cases:

  • ETL pipelines
  • Data transformation workflows
  • Simple processing chains

Fan-Out Pattern

One agent sends messages to multiple downstream agents.

supervisor:
  name: fanout-coordinator

agents:
  - name: source
    role: producer
    outputs:
      - target: analyzer-1
      - target: analyzer-2
      - target: analyzer-3

  - name: analyzer-1
    role: react
    model: gpt-4-turbo
    prompt: 'Analyze for sentiment'
    inputs:
      - source: source

  - name: analyzer-2
    role: react
    model: gpt-4-turbo
    prompt: 'Extract entities'
    inputs:
      - source: source

  - name: analyzer-3
    role: react
    model: gpt-4-turbo
    prompt: 'Classify topic'
    inputs:
      - source: source

Flow: source → [analyzer-1, analyzer-2, analyzer-3]

Use cases:

  • Parallel analysis (sentiment, entities, classification)
  • Multi-perspective evaluation
  • Distributed processing

Fan-In Pattern

Multiple agents send messages to a single aggregator.

supervisor:
  name: fanin-coordinator

agents:
  - name: source-1
    role: producer
    outputs:
      - target: aggregator

  - name: source-2
    role: producer
    outputs:
      - target: aggregator

  - name: source-3
    role: producer
    outputs:
      - target: aggregator

  - name: aggregator
    role: react
    model: gpt-4-turbo
    prompt: 'Combine and summarize inputs'
    inputs:
      - source: source-1
      - source: source-2
      - source: source-3

Flow: [source-1, source-2, source-3] → aggregator

Use cases:

  • Data aggregation from multiple sources
  • Consensus mechanisms
  • Multi-source enrichment

Complex DAG (Directed Acyclic Graph)

Combine patterns for sophisticated workflows.

supervisor:
  name: dag-coordinator
  max_rounds: 20

agents:
  # Data sources
  - name: api-poller
    role: producer
    outputs:
      - target: enricher

  - name: database-reader
    role: producer
    outputs:
      - target: enricher

  # Enrichment layer
  - name: enricher
    role: react
    model: gpt-4-turbo
    prompt: 'Enrich with context'
    inputs:
      - source: api-poller
      - source: database-reader
    outputs:
      - target: classifier
      - target: sentiment-analyzer

  # Analysis layer
  - name: classifier
    role: react
    model: gpt-4-turbo
    prompt: 'Classify content'
    inputs:
      - source: enricher
    outputs:
      - target: decision-maker

  - name: sentiment-analyzer
    role: react
    model: gpt-4-turbo
    prompt: 'Analyze sentiment'
    inputs:
      - source: enricher
    outputs:
      - target: decision-maker

  # Decision layer
  - name: decision-maker
    role: react
    model: gpt-4-turbo
    prompt: 'Make final decision based on classification and sentiment'
    inputs:
      - source: classifier
      - source: sentiment-analyzer
    outputs:
      - target: action-executor

  # Action layer
  - name: action-executor
    role: logger
    inputs:
      - source: decision-maker

Flow:

[api-poller, database-reader] → enricher → [classifier, sentiment-analyzer] → decision-maker → action-executor

Dependency-Aware Startup

The supervisor automatically determines the correct startup order by analyzing the dependency graph.

# The supervisor starts agents in this order:
# 1. api-poller, database-reader (no dependencies)
# 2. enricher (depends on sources)
# 3. classifier, sentiment-analyzer (depend on enricher)
# 4. decision-maker (depends on analyzers)
# 5. action-executor (depends on decision-maker)

You never manually specify startup order—the supervisor figures it out from inputs/outputs.

Execution Constraints

Max Rounds

Limits the total number of workflow iterations:

supervisor:
  max_rounds: 10 # Stop after 10 iterations

This prevents runaway workflows and ensures predictable resource usage.

Agent-Level Timeouts

Set timeouts per agent for long-running operations:

agents:
  - name: slow-processor
    role: react
    model: gpt-4-turbo
    timeout: 30s # Fail if processing takes >30s
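
Agent-level timeouts compose with the supervisor's global timeout: the per-agent limit bounds a single agent's work, while the global limit bounds the workflow as a whole. A sketch combining both:

supervisor:
  name: coordinator
  timeout: 300s # Entire workflow must finish within 5 minutes

agents:
  - name: slow-processor
    role: react
    model: gpt-4-turbo
    timeout: 30s # Each run of this agent must finish within 30 seconds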

Error Handling

The supervisor provides built-in error handling:

Automatic Retry

Configure retry behavior for transient failures:

agents:
  - name: api-caller
    role: react
    retry:
      max_attempts: 3
      backoff: exponential
      initial_interval: 1s

Graceful Degradation

Agents can fail without crashing the entire system:

supervisor:
  failure_mode: continue # or 'stop' to halt on first error

Best Practices

1. Keep Workflows Acyclic

Avoid circular dependencies—messages should flow in one direction:

Bad:

# Creates a cycle: A → B → C → A
agents:
  - name: A
    inputs: [{ source: C }] # Circular!
    outputs: [{ target: B }]
  - name: B
    inputs: [{ source: A }]
    outputs: [{ target: C }]
  - name: C
    inputs: [{ source: B }]
    outputs: [{ target: A }]

Good:

# Acyclic: A → B → C
agents:
  - name: A
    outputs: [{ target: B }]
  - name: B
    inputs: [{ source: A }]
    outputs: [{ target: C }]
  - name: C
    inputs: [{ source: B }]

2. Use Descriptive Names

Agent names should describe their role:

agents:
  - name: customer-data-enricher # Clear
  - name: agent-1 # Unclear

3. Set Reasonable max_rounds

Set it too high and a runaway workflow wastes resources; too low and workflows terminate before finishing:

supervisor:
  max_rounds: 10 # Typical: 5-20 rounds

4. Monitor Performance

Use observability to track:

  • Message latency between agents
  • Agent processing time
  • Bottlenecks in the workflow

Real-World Example: Content Moderation Pipeline

supervisor:
  name: content-moderation
  max_rounds: 100

agents:
  # Ingest content submissions
  - name: content-ingestion
    role: producer
    interval: 100ms
    outputs:
      - target: content-classifier
      - target: text-analyzer
      - target: image-analyzer

  # Classify content type
  - name: content-classifier
    role: classifier
    model: gpt-4-turbo
    prompt: 'Classify content into categories'
    strategy: multi-label
    categories:
      - user-generated
      - commercial
      - news
      - entertainment
    inputs:
      - source: content-ingestion
    outputs:
      - target: decision-aggregator

  # Parallel analysis
  - name: text-analyzer
    role: react
    model: gpt-4-turbo
    prompt: 'Analyze text for policy violations'
    inputs:
      - source: content-ingestion
    outputs:
      - target: decision-aggregator

  - name: image-analyzer
    role: react
    model: gpt-4-turbo
    prompt: 'Analyze images for inappropriate content'
    inputs:
      - source: content-ingestion
    outputs:
      - target: decision-aggregator

  # Aggregate results and decide
  - name: decision-aggregator
    role: aggregator
    model: gpt-4-turbo
    prompt: 'Combine classification, text, and image analysis using consensus strategy'
    strategy: consensus
    inputs:
      - source: content-classifier
      - source: text-analyzer
      - source: image-analyzer
    outputs:
      - target: action-planner

  # Plan actions based on aggregated decision
  - name: action-planner
    role: planner
    model: gpt-4-turbo
    prompt: 'Plan appropriate actions: approve, reject, or flag for human review'
    strategy: chain-of-thought
    inputs:
      - source: decision-aggregator
    outputs:
      - target: action-handler

  # Execute decision
  - name: action-handler
    role: logger
    inputs:
      - source: action-planner

This pipeline:

  1. Ingests content at 10 submissions/second
  2. Classifies content type with multi-label strategy
  3. Analyzes text and images in parallel
  4. Aggregates results using consensus strategy
  5. Plans actions with chain-of-thought reasoning
  6. Executes appropriate action

Next Steps