ALPHA RELEASE (v0.1) — Aixgo is in active development. Not all features are complete. Production release planned for late 2025.

Agent Types Guide

Comprehensive guide to all six Aixgo agent types, including the Classifier and Aggregator, with examples and best practices.

Aixgo provides specialized agent types for building production-grade multi-agent systems. This guide covers all available agent types, when to use each, and how to configure them for optimal performance.

Overview

Aixgo offers six core agent types, each designed for specific roles in your multi-agent architecture:

  • Producer: Generate periodic messages for downstream processing
  • ReAct: LLM-powered reasoning and tool execution
  • Logger: Message consumption and persistence
  • Classifier: Intelligent content classification with confidence scoring
  • Aggregator: Multi-agent output synthesis and consensus building
  • Planner: Task decomposition and workflow orchestration

Producer Agent

Producer agents generate messages at configured intervals, providing the data input for your agent workflows.

When to Use

  • Polling external APIs or data sources
  • Generating synthetic test data
  • Periodic health checks or monitoring
  • Time-based event triggers
  • ETL pipeline data ingestion

Configuration

agents:
  - name: event-generator
    role: producer
    interval: 500ms
    outputs:
      - target: processor

Best Practices

  • Set appropriate intervals based on your data source refresh rate
  • Use exponential backoff for failed polling attempts
  • Consider rate limits when polling external APIs
  • Implement circuit breakers for unreliable sources

Learn more: Producer examples

ReAct Agent

ReAct (Reasoning + Acting) agents combine LLM reasoning with tool execution capabilities for complex decision-making workflows.

When to Use

  • Data analysis requiring intelligent reasoning
  • Decision-making workflows with business logic
  • Natural language processing tasks
  • Complex multi-step operations
  • Tool-assisted problem solving

Configuration

agents:
  - name: analyst
    role: react
    model: gpt-4-turbo
    prompt: 'You are an expert data analyst.'
    tools:
      - name: query_database
        description: 'Query the database'
        input_schema:
          type: object
          properties:
            query: { type: string }
          required: [query]
    inputs:
      - source: event-generator
    outputs:
      - target: logger

Best Practices

  • Provide clear, specific system prompts
  • Define precise tool schemas with validation
  • Use appropriate temperature settings (0.2-0.4 for deterministic, 0.7-1.0 for creative)
  • Implement timeout handling for long-running operations
  • Monitor token usage and optimize prompts

Learn more: ReAct examples

Logger Agent

Logger agents consume and persist messages, providing observability and audit capabilities for your workflows.

When to Use

  • Audit logging and compliance
  • Debugging multi-agent workflows
  • Data persistence and archival
  • Monitoring and alerting
  • Performance metric collection

Configuration

agents:
  - name: audit-log
    role: logger
    inputs:
      - source: analyst

Best Practices

  • Use structured logging formats (JSON)
  • Implement log rotation and retention policies
  • Set up log aggregation for distributed systems
  • Create alerts for error patterns

Learn more: Logger examples

Classifier Agent

Classifier agents use LLM-powered semantic understanding to categorize content with confidence scoring, few-shot learning, and structured outputs.

When to Use

  • Customer support ticket routing and prioritization
  • Content moderation and categorization
  • Document classification and tagging
  • Intent detection in conversational AI
  • Sentiment analysis with custom categories
  • Multi-label content tagging

Key Features

  • Structured JSON Outputs: Schema-validated responses for reliable parsing
  • Confidence Scoring: Automatic quality assessment (0-1 scale)
  • Few-Shot Learning: Improve accuracy with example-based training
  • Multi-Label Support: Assign multiple categories simultaneously
  • Alternative Classifications: Secondary suggestions for low-confidence results
  • Semantic Understanding: Context-aware classification beyond keywords

Configuration

agents:
  - name: ticket-classifier
    role: classifier
    model: gpt-4-turbo
    inputs:
      - source: support-tickets
    outputs:
      - target: classified-tickets
    classifier_config:
      categories:
        - name: technical_issue
          description: "Issues requiring technical troubleshooting or product support"
          keywords: ["error", "bug", "not working", "crash"]
          examples:
            - "The app crashes when I click submit"
            - "Error code 500 appears on checkout"

        - name: billing_inquiry
          description: "Questions about payments, invoices, or pricing"
          keywords: ["payment", "invoice", "charge", "refund"]
          examples:
            - "I was charged twice this month"
            - "Can I get a refund?"

      # Minimum confidence for automatic classification
      confidence_threshold: 0.7

      # Allow multiple categories per input
      multi_label: false

      # Few-shot examples for improved accuracy
      few_shot_examples:
        - input: "My account won't let me log in"
          category: technical_issue
          reason: "Authentication system issue"

      # LLM parameters
      temperature: 0.3      # Low for consistent classification
      max_tokens: 500       # Sufficient for reasoning

Category Definition Best Practices

Each category should include:

  • name: Unique identifier (use snake_case)
  • description: Clear explanation of category boundaries
  • keywords: Terms strongly associated with this category
  • examples: 2-3 representative samples

Confidence Threshold Guidelines

  • 0.5-0.6: Exploratory use, may have incorrect classifications
  • 0.7-0.8: Production baseline, good accuracy/coverage balance
  • 0.85+: High-stakes scenarios, may reject ambiguous inputs
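The threshold acts as a routing gate: results at or above it flow downstream automatically, the rest are flagged. A minimal sketch, with assumed names (`Classification`, `route`, the `needs_review` label):

```go
package main

import "fmt"

// Classification mirrors the classifier output fields used for routing.
type Classification struct {
	Category   string
	Confidence float64
}

// route forwards high-confidence results and flags the rest for
// human review — a sketch of threshold-based routing.
func route(c Classification, threshold float64) string {
	if c.Confidence >= threshold {
		return c.Category
	}
	return "needs_review"
}

func main() {
	fmt.Println(route(Classification{"technical_issue", 0.92}, 0.7)) // technical_issue
	fmt.Println(route(Classification{"billing_inquiry", 0.55}, 0.7)) // needs_review
}
```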

Example Output

{
  "category": "technical_issue",
  "confidence": 0.92,
  "reasoning": "User describes specific product issue requiring technical assistance",
  "alternatives": [
    {"category": "billing_inquiry", "confidence": 0.15}
  ],
  "tokens_used": 234
}

Learn more: Classifier examples

Aggregator Agent

Aggregator agents synthesize outputs from multiple agents using AI-powered strategies including consensus building, weighted synthesis, semantic clustering, hierarchical summarization, and RAG-based aggregation.

When to Use

  • Multi-agent research synthesis
  • Combining outputs from specialized expert agents
  • Consensus building in distributed AI systems
  • Ensemble learning for improved accuracy
  • Cross-validation of agent outputs
  • RAG systems with multiple retrievers
  • Conflict resolution between diverse perspectives

Key Features

  • Multiple Aggregation Strategies: Consensus, weighted, semantic, hierarchical, RAG-based
  • Conflict Resolution: Automatic detection and LLM-mediated resolution
  • Semantic Clustering: Group similar outputs using text similarity
  • Consensus Scoring: Quantify agreement levels (0-1 scale)
  • Performance Tracking: Built-in observability and metrics

Aggregation Strategies

Consensus Strategy

Finds common ground and resolves disagreements through LLM analysis.

Best for: Fact verification, balanced synthesis, transparent conflict resolution

aggregator_config:
  aggregation_strategy: consensus
  consensus_threshold: 0.7
  conflict_resolution: llm_mediated
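A toy version of consensus scoring: the share of sources whose answer matches the most common answer. The real strategy works on free-text outputs via LLM analysis; `consensusLevel` is purely illustrative:

```go
package main

import "fmt"

// consensusLevel returns the fraction of sources agreeing with the
// majority answer, on the 0-1 consensus scale.
func consensusLevel(answers []string) float64 {
	if len(answers) == 0 {
		return 0
	}
	counts := map[string]int{}
	best := 0
	for _, a := range answers {
		counts[a]++
		if counts[a] > best {
			best = counts[a]
		}
	}
	return float64(best) / float64(len(answers))
}

func main() {
	// Two of three sources agree, so consensus is 2/3 — below a 0.7
	// threshold, which would trigger conflict resolution.
	fmt.Println(consensusLevel([]string{"approve", "approve", "reject"}))
}
```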

Weighted Strategy

Applies importance weights to prioritize certain agent outputs.

Best for: Expert prioritization, confidence-based mixing, known reliability differences

aggregator_config:
  aggregation_strategy: weighted
  source_weights:
    expert_agent: 1.0
    general_agent_1: 0.6
    general_agent_2: 0.4
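The arithmetic behind weight application can be sketched as a normalized weighted average of per-source scores; `weightedScore` and the score inputs are assumptions for illustration:

```go
package main

import "fmt"

// weightedScore combines per-source scores using the configured
// source weights, normalized so the weights sum to 1.
func weightedScore(scores, weights map[string]float64) float64 {
	var total, sum float64
	for src, w := range weights {
		if s, ok := scores[src]; ok {
			sum += w * s
			total += w
		}
	}
	if total == 0 {
		return 0
	}
	return sum / total
}

func main() {
	scores := map[string]float64{"expert_agent": 0.9, "general_agent_1": 0.6, "general_agent_2": 0.3}
	weights := map[string]float64{"expert_agent": 1.0, "general_agent_1": 0.6, "general_agent_2": 0.4}
	// (1.0*0.9 + 0.6*0.6 + 0.4*0.3) / 2.0 = 0.69
	fmt.Printf("%.2f\n", weightedScore(scores, weights))
}
```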

Semantic Strategy

Groups inputs by semantic similarity before aggregation.

Best for: Large agent counts (5+), deduplication, perspective identification

aggregator_config:
  aggregation_strategy: semantic
  semantic_similarity_threshold: 0.85
  deduplication_method: semantic
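As a rough intuition for the similarity threshold, here is a crude stand-in for embedding-based similarity: token-set overlap (Jaccard) on a 0-1 scale. The real strategy uses semantic text similarity; `jaccard` is illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// jaccard measures token-set overlap between two strings: the size of
// the intersection divided by the size of the union, on a 0-1 scale.
func jaccard(a, b string) float64 {
	set := func(s string) map[string]bool {
		m := map[string]bool{}
		for _, w := range strings.Fields(strings.ToLower(s)) {
			m[w] = true
		}
		return m
	}
	sa, sb := set(a), set(b)
	inter := 0
	for w := range sa {
		if sb[w] {
			inter++
		}
	}
	union := len(sa) + len(sb) - inter
	if union == 0 {
		return 0
	}
	return float64(inter) / float64(union)
}

func main() {
	sim := jaccard("use a message queue for decoupling",
		"use a message queue to decouple services")
	// Pairs whose similarity clears the threshold land in one cluster.
	fmt.Printf("%.2f\n", sim)
}
```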

Hierarchical Strategy

Multi-level aggregation for scalability with large agent counts.

Best for: 10+ agents, token efficiency, structured summarization

aggregator_config:
  aggregation_strategy: hierarchical
  max_input_sources: 20
  summarization_enabled: true

RAG-Based Strategy

Treats agent outputs as retrieved context for generation.

Best for: Question answering, multi-source research, citation preservation

aggregator_config:
  aggregation_strategy: rag_based
  max_input_sources: 10

Full Configuration Example

agents:
  - name: research-synthesizer
    role: aggregator
    model: gpt-4-turbo
    inputs:
      - source: expert-1
      - source: expert-2
      - source: expert-3
    outputs:
      - target: final-report
    aggregator_config:
      # Strategy selection
      aggregation_strategy: consensus

      # Conflict handling
      conflict_resolution: llm_mediated

      # Deduplication
      deduplication_method: semantic

      # Enable summarization
      summarization_enabled: true

      # Maximum agents to aggregate
      max_input_sources: 10

      # Timeout for collecting inputs (ms)
      timeout_ms: 5000

      # Semantic clustering threshold
      semantic_similarity_threshold: 0.85

      # Source weights (for weighted strategy)
      source_weights:
        expert-1: 1.0
        expert-2: 0.7
        expert-3: 0.5

      # Consensus threshold
      consensus_threshold: 0.7

      # LLM parameters
      temperature: 0.5
      max_tokens: 1500

Example Output

{
  "strategy": "consensus",
  "consensus_level": 0.87,
  "aggregated_content": "After analyzing all expert inputs, the following synthesis emerges...",
  "conflicts_resolved": [
    {
      "topic": "implementation_approach",
      "conflicting_sources": ["expert-1", "expert-2"],
      "resolution": "Hybrid approach combining both perspectives",
      "reasoning": "Expert 1's architectural concerns addressed by Expert 2's practical constraints"
    }
  ],
  "semantic_clusters": [
    {
      "cluster_id": "cluster_0",
      "members": ["expert-1", "expert-3"],
      "core_concept": "technical_implementation",
      "avg_similarity": 0.89
    }
  ],
  "tokens_used": 1250
}

Best Practices

Strategy Selection

  • Consensus: Use when you need balanced synthesis with conflict transparency
  • Weighted: Use when certain agents have more expertise or authority
  • Semantic: Use for deduplication and thematic organization (5+ agents)
  • Hierarchical: Use for scalability with many agents (10+)
  • RAG-based: Use for question answering with source attribution

Timeout Configuration

Set based on expected agent response times:

  • Fast agents (1-2s): timeout_ms: 3000
  • Standard agents (3-5s): timeout_ms: 5000
  • Complex agents (5-10s): timeout_ms: 10000

Token Management

Typical token usage:

  • 2-3 agents: 500-1000 tokens
  • 4-6 agents: 1000-1500 tokens
  • 7-10 agents: 1500-2500 tokens
  • 10+ agents: Use hierarchical strategy

Learn more: Aggregator examples

Planner Agent

Planner agents decompose complex tasks into executable steps and orchestrate their execution across multiple agents.

When to Use

  • Complex multi-step workflows requiring coordination
  • Dynamic task decomposition based on context
  • Adaptive workflow execution
  • Resource allocation and scheduling
  • Goal-oriented planning with dependencies

Configuration

agents:
  - name: task-planner
    role: planner
    model: gpt-4-turbo
    prompt: 'You are a task planning expert.'
    inputs:
      - source: user-requests
    outputs:
      - target: execution-queue
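Once a goal is decomposed into steps with dependencies, the planner must schedule them so every step runs after its prerequisites. A minimal sketch of that ordering using a depth-first topological sort (assumes the plan is acyclic; `topoOrder` is an illustrative helper, not the Aixgo API):

```go
package main

import (
	"fmt"
	"sort"
)

// topoOrder returns an execution order for plan steps given their
// dependency lists, visiting each step's prerequisites first.
func topoOrder(deps map[string][]string) []string {
	var order []string
	state := map[string]int{} // 0 unvisited, 1 visiting, 2 done
	var visit func(n string)
	visit = func(n string) {
		if state[n] != 0 {
			return
		}
		state[n] = 1
		for _, d := range deps[n] {
			visit(d)
		}
		state[n] = 2
		order = append(order, n)
	}
	// Iterate in sorted key order for deterministic output.
	keys := make([]string, 0, len(deps))
	for k := range deps {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		visit(k)
	}
	return order
}

func main() {
	deps := map[string][]string{
		"build":  {},
		"test":   {"build"},
		"deploy": {"build", "test"},
	}
	fmt.Println(topoOrder(deps)) // [build test deploy]
}
```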

Learn more: Planner examples

Integration Patterns

Parallel Classification + Aggregation

Combine multiple classifiers with an aggregator for comprehensive analysis:

agents:
  # Input producer
  - name: content-source
    role: producer
    outputs:
      - target: content

  # Parallel classifiers
  - name: sentiment-classifier
    role: classifier
    inputs:
      - source: content
    outputs:
      - target: classifications
    classifier_config:
      categories:
        - name: positive
          description: "Positive sentiment"
        - name: negative
          description: "Negative sentiment"

  - name: topic-classifier
    role: classifier
    inputs:
      - source: content
    outputs:
      - target: classifications
    classifier_config:
      categories:
        - name: technology
          description: "Technology-related content"
        - name: business
          description: "Business-related content"

  # Aggregator combines classifications
  - name: final-classifier
    role: aggregator
    inputs:
      - source: classifications
    outputs:
      - target: final-output
    aggregator_config:
      aggregation_strategy: consensus

Multi-Expert Research Pipeline

Deploy specialized experts with weighted aggregation:

agents:
  # Expert agents
  - name: technical-expert
    role: react
    model: gpt-4-turbo
    prompt: "You are a technical architecture expert"
    outputs:
      - target: expert-analyses

  - name: security-expert
    role: react
    model: gpt-4-turbo
    prompt: "You are a security expert"
    outputs:
      - target: expert-analyses

  - name: business-expert
    role: react
    model: gpt-4-turbo
    prompt: "You are a business analyst"
    outputs:
      - target: expert-analyses

  # Weighted aggregator
  - name: research-synthesis
    role: aggregator
    model: gpt-4-turbo
    inputs:
      - source: expert-analyses
    outputs:
      - target: final-report
    aggregator_config:
      aggregation_strategy: weighted
      source_weights:
        technical-expert: 0.9
        security-expert: 0.95
        business-expert: 0.7

Performance Considerations

Token Usage Optimization

  • Producer: No LLM calls, zero token usage
  • ReAct: 200-2000 tokens per message (depends on complexity)
  • Logger: No LLM calls, zero token usage
  • Classifier: 200-500 tokens per classification (add 150-300 for few-shot)
  • Aggregator: 500-2500 tokens (scales with agent count)
  • Planner: 300-1000 tokens per planning operation

Latency Guidelines

  • Producer: <10ms (local generation)
  • ReAct: 500ms-5s (LLM-dependent)
  • Logger: <50ms (I/O-dependent)
  • Classifier: 500ms-2s (LLM-dependent)
  • Aggregator: 1s-5s (scales with agent count)
  • Planner: 1s-3s (LLM-dependent)

Cost Management

Choose appropriate models for your use case:

# Production traffic - balance cost and quality
model: gpt-4o-mini

# Critical decisions - maximum accuracy
model: gpt-4-turbo

# High volume, simple tasks - lowest cost
model: gpt-3.5-turbo
