Features
Explore Aixgo's complete feature set across AI agents, LLM providers, security, observability, and infrastructure. Track development status and roadmap.
Alpha Release
Aixgo is in active development. We’re building in public and shipping early to gather community feedback.
Core AI Capabilities
LLM Providers
✅ OpenAI ⓘ
Full support for GPT-4, GPT-3.5, and other OpenAI models with streaming, function calling, and vision capabilities.
✅ Anthropic (Claude) ⓘ
Native integration with Claude 3 family (Opus, Sonnet, Haiku) featuring extended context windows and advanced reasoning.
✅ Google Gemini ⓘ
Access to Gemini Pro and Ultra models with multi-modal capabilities and large context support.
✅ xAI (Grok) ⓘ
Integration with Grok models for real-time information and alternative AI perspectives.
✅ Vertex AI ⓘ
Enterprise-grade access to Google's AI models through Vertex AI platform with enhanced security and compliance.
✅ HuggingFace (free) ⓘ
Support for Ollama (local models) and the free Inference API; streaming is simulated due to API limitations.
❌ HuggingFace (TGI) ⓘ
Roadmap: native streaming support for paid Text Generation Inference endpoints.
Agent System
✅ ReAct Agent ⓘ
Reasoning and Acting agent that iteratively plans, executes tools, and observes results to solve complex tasks.
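The plan–act–observe loop behind a ReAct agent can be sketched in plain Go. This is an illustrative skeleton only: the `Tool` type, `runReAct` function, and the hard-coded "reasoning" step are hypothetical stand-ins, not Aixgo's actual API.

```go
package main

import "fmt"

// Tool is a callable the agent can invoke (hypothetical type, not Aixgo's API).
type Tool func(input string) string

// runReAct loops: decide on an action, execute the tool, observe the result,
// and stop once an observation satisfies the goal or maxSteps is reached.
func runReAct(goal string, tools map[string]Tool, maxSteps int) string {
	observation := ""
	for step := 0; step < maxSteps; step++ {
		if observation != "" {
			return "answer based on: " + observation // goal satisfied
		}
		// "Reasoning" stub: a real agent would ask the LLM to pick the tool.
		action, input := "search", goal
		observation = tools[action](input)
		fmt.Printf("step %d: %s(%q) -> %q\n", step, action, input, observation)
	}
	return "gave up after max steps"
}

func main() {
	tools := map[string]Tool{
		"search": func(q string) string { return "docs about " + q },
	}
	fmt.Println(runReAct("vector databases", tools, 3))
}
```

A real ReAct agent replaces the stub with an LLM call that chooses the next action from the tool descriptions and the observation history.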
✅ Supervisor Orchestration ⓘ
Coordinates multiple specialized agents, managing task delegation and result aggregation for complex workflows.
✅ Classifier Agent ⓘ
Routes queries to appropriate specialized agents based on intent classification and context analysis.
✅ Aggregator Agent ⓘ
Combines outputs from multiple agents into coherent responses, handling result synthesis and deduplication.
✅ Planner Agent ⓘ
Breaks down complex tasks into executable steps, creating dynamic workflows based on goal analysis.
✅ Producer Agent ⓘ
Generates content and artifacts based on task requirements, supporting various output formats and templates.
✅ Logger Agent ⓘ
Captures and structures agent interactions, decisions, and outputs for debugging and audit trails.
Orchestration Patterns
✅ Parallel Pattern ⓘ
Execute multiple independent tasks concurrently, reducing latency by leveraging parallel processing.
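In Go, the parallel pattern maps naturally onto goroutines. A minimal fan-out sketch (illustrative, not Aixgo's orchestration API) that runs independent tasks concurrently and collects results in input order:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs each task in its own goroutine and gathers results in the
// original order, mirroring the parallel orchestration pattern.
func fanOut(tasks []func() string) []string {
	results := make([]string, len(tasks))
	var wg sync.WaitGroup
	for i, task := range tasks {
		wg.Add(1)
		go func(i int, task func() string) {
			defer wg.Done()
			results[i] = task() // each goroutine writes its own slot: no data race
		}(i, task)
	}
	wg.Wait()
	return results
}

func main() {
	out := fanOut([]func() string{
		func() string { return "summary" },
		func() string { return "translation" },
	})
	fmt.Println(out)
}
```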
✅ Sequential Pattern ⓘ
Chain tasks in order where each step depends on previous results, ensuring proper execution flow.
✅ Reflection Pattern ⓘ
Self-critique and iterative improvement where agents evaluate and refine their own outputs.
✅ MapReduce Pattern ⓘ
Distribute work across multiple agents and combine results, ideal for processing large datasets.
✅ Planning Pattern ⓘ
Generate and execute multi-step plans dynamically, adapting to intermediate results and conditions.
✅ Classification Pattern ⓘ
Route requests to specialized handlers based on classification, enabling intent-based processing.
Tools & MCP
✅ Function Calling ⓘ
Enable LLMs to invoke external functions and APIs, extending capabilities beyond text generation.
✅ Tool Registration ⓘ
Dynamic registration and discovery of tools, allowing agents to utilize custom functions at runtime.
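The shape of runtime tool registration can be sketched as a simple name-to-handler registry. The `Registry` type and its methods below are hypothetical, shown only to illustrate the pattern, not Aixgo's actual registry API:

```go
package main

import (
	"errors"
	"fmt"
)

// Registry maps tool names to handlers; a minimal sketch of dynamic
// tool registration and lookup.
type Registry struct {
	tools map[string]func(args string) (string, error)
}

func NewRegistry() *Registry {
	return &Registry{tools: make(map[string]func(string) (string, error))}
}

// Register adds (or replaces) a tool at runtime.
func (r *Registry) Register(name string, fn func(string) (string, error)) {
	r.tools[name] = fn
}

// Invoke resolves a tool by name at call time, so agents can use
// functions registered after startup.
func (r *Registry) Invoke(name, args string) (string, error) {
	fn, ok := r.tools[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return fn(args)
}

func main() {
	reg := NewRegistry()
	reg.Register("echo", func(s string) (string, error) { return s, nil })
	out, _ := reg.Invoke("echo", "hello")
	fmt.Println(out)
}
```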
✅ Local Transport ⓘ
In-process tool communication for low-latency function calls without network overhead.
✅ gRPC Transport ⓘ
Distributed tool execution via gRPC, enabling remote service integration and microservices architecture.
✅ Service Discovery ⓘ
Automatic discovery and registration of available tools and services across the infrastructure.
Data Infrastructure
Vector Databases
✅ Firestore ⓘ
Google Cloud Firestore integration for vector storage with built-in scaling and real-time synchronization.
✅ In-Memory Store ⓘ
High-performance in-memory vector storage for development, testing, and low-latency use cases.
❌ Qdrant ⓘ
High-performance vector search engine with advanced filtering and hybrid search capabilities. Roadmap: Full integration with metadata filtering and batch operations.
❌ pgvector ⓘ
PostgreSQL extension for vector similarity search, combining relational and vector data. Roadmap: Complete CRUD operations and index optimization support.
Memory & Context
✅ Conversation History ⓘ
Persistent storage and retrieval of conversation threads, maintaining context across sessions.
✅ RAG Systems ⓘ
Retrieval Augmented Generation for grounding LLM responses in your knowledge base and documents.
✅ Semantic Search ⓘ
Vector-based similarity search for finding relevant information based on meaning rather than keywords.
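The ranking operation underneath semantic search is usually cosine similarity over embedding vectors. A self-contained sketch of that core computation:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors:
// 1 for identical directions, 0 for orthogonal ones. Semantic search
// ranks stored embeddings by this score against the query embedding.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0 // define similarity with a zero vector as 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	fmt.Println(cosine([]float64{1, 0}, []float64{1, 0})) // same direction -> 1
	fmt.Println(cosine([]float64{1, 0}, []float64{0, 1})) // orthogonal -> 0
}
```

In practice the vector database performs this (or an approximate-nearest-neighbor variant) over millions of stored embeddings rather than a linear scan.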
❌ Long-term Memory ⓘ
Cross-session knowledge retention and personalization based on historical interactions. Roadmap: Automatic memory consolidation and fact extraction from conversations.
Embeddings
✅ OpenAI ⓘ
OpenAI's text-embedding models (ada-002, text-embedding-3) for high-quality vector representations.
✅ HuggingFace API ⓘ
Access to HuggingFace's hosted embedding models via their Inference API.
✅ HuggingFace TEI ⓘ
Text Embeddings Inference server integration for self-hosted, optimized embedding generation.
Security & Observability
Security
✅ Auth Framework ⓘ
Pluggable authentication system supporting multiple providers and custom auth strategies.
✅ RBAC Authorization ⓘ
Role-Based Access Control for fine-grained permissions on agents, tools, and data resources.
✅ Rate Limiting ⓘ
Configurable request throttling to prevent abuse and control API costs per user or endpoint.
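A common implementation of configurable throttling is a token bucket. The sketch below (not Aixgo's actual limiter) takes an explicit timestamp so the refill logic is deterministic and easy to test:

```go
package main

import "fmt"

// bucket is a token-bucket limiter sketch: it holds up to capacity tokens,
// refills at ratePerSec, and each allowed request consumes one token.
type bucket struct {
	capacity   float64
	ratePerSec float64
	tokens     float64
	last       float64 // timestamp of the previous call, in seconds
}

// Allow reports whether a request at time now (seconds) may proceed.
func (b *bucket) Allow(now float64) bool {
	b.tokens += (now - b.last) * b.ratePerSec
	if b.tokens > b.capacity {
		b.tokens = b.capacity // never exceed the burst capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{capacity: 2, ratePerSec: 1, tokens: 2}
	fmt.Println(b.Allow(0), b.Allow(0), b.Allow(0)) // true true false: burst spent
	fmt.Println(b.Allow(1))                         // true: one token refilled after 1s
}
```

Per-user or per-endpoint limiting keeps one bucket per key; a production limiter would use a monotonic clock and synchronize access.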
✅ Injection Protection ⓘ
Detection and mitigation of prompt injection attacks to prevent unauthorized LLM behavior manipulation.
✅ TLS/mTLS Support ⓘ
Transport Layer Security with mutual TLS for encrypted and authenticated service-to-service communication.
✅ Audit Logging ⓘ
Comprehensive logging of security events, access attempts, and data operations for compliance.
✅ JWT Verification ⓘ
JSON Web Token validation for stateless authentication and secure API access.
✅ Input Validation ⓘ
Schema-based validation of inputs to prevent injection attacks and ensure data integrity.
Observability
✅ OpenTelemetry ⓘ
Industry-standard distributed tracing, metrics, and logging for comprehensive system observability.
✅ Langfuse Integration ⓘ
LLM-specific observability platform integration for tracking prompts, completions, costs, and quality metrics.
✅ Prometheus Metrics ⓘ
Expose operational metrics in Prometheus format for monitoring, alerting, and performance analysis.
✅ Health Checks ⓘ
Liveness and readiness endpoints for orchestration platforms and load balancer integration.
✅ Distributed Tracing ⓘ
End-to-end request tracing across services to diagnose latency issues and understand system behavior.
Multi-Modal
❌ Vision/Images ⓘ
Image understanding and analysis capabilities for visual question answering and OCR tasks. Roadmap: Integration with vision-enabled models like GPT-4 Vision and Claude 3.
❌ Audio Processing ⓘ
Speech-to-text transcription and audio analysis for voice-driven applications. Roadmap: Support for Whisper and other audio AI models.
❌ Document Parsing ⓘ
Extract structured data from PDFs, images, and complex document formats. Roadmap: Integration with document AI services and layout analysis models.
Infrastructure & Operations
Configuration
✅ YAML Workflows ⓘ
Declarative workflow definitions using YAML for version-controlled, code-free agent orchestration.
✅ Go SDK ⓘ
Comprehensive Go library for programmatic agent creation, customization, and integration.
✅ 29+ Example Configs ⓘ
Production-ready reference implementations covering common patterns and use cases.
✅ Complete Use Cases ⓘ
End-to-end examples demonstrating real-world applications from setup to deployment.
Deployment
✅ Docker ⓘ
Containerized deployment with optimized images for consistent runtime environments.
✅ Docker Compose ⓘ
Multi-container orchestration for local development and simple production deployments.
✅ Cloud Run ⓘ
Serverless deployment on Google Cloud Run with automatic scaling and zero-ops infrastructure.
✅ Kubernetes Manifests ⓘ
Production-ready Kubernetes configurations including deployments, services, and ingress rules.
❌ Kubernetes Operator ⓘ
Custom controller for automated agent lifecycle management on Kubernetes. Roadmap: CRDs for declarative agent provisioning and GitOps workflows.
❌ Terraform IaC ⓘ
Infrastructure as Code modules for automated cloud resource provisioning. Roadmap: Modules for GCP, AWS, and Azure deployments with best practices.
Production Reliability
✅ Circuit Breakers ⓘ
Automatic failure detection and circuit breaking to prevent cascade failures during service outages.
✅ Retry with Backoff ⓘ
Exponential backoff retry logic for handling transient failures and rate limit errors gracefully.
✅ State Persistence ⓘ
Durable storage of agent state and conversation context for resuming workflows after interruptions.
❌ Crash Recovery ⓘ
Automatic detection and recovery from process crashes with workflow continuation. Roadmap: Checkpoint-based recovery and state reconstruction mechanisms.
❌ Multi-Region ⓘ
Deploy agents across multiple geographic regions for low latency and high availability. Roadmap: Cross-region state replication and request routing strategies.
What Alpha Means
- API stability: Expect breaking changes before v1.0
- Production use: Possible, but at your own risk; no guarantees yet
- Documentation: Features marked ✅ are implemented, 🚧 are in progress, and ❌ are roadmap items
- Timeline: Production-ready release targeted for Q4 2025
The Path to v1.0
We’re committed to making Aixgo production-ready with strong stability guarantees. When we reach v1.0 (targeted for Q4 2025), you’ll get:
- API stability guarantee: Your YAML workflows, Go SDK code, and gRPC/MCP integrations won’t break across minor versions
- Backward compatibility: Documented migration paths for any breaking changes
- Production support: Clear SLAs, security patches, and long-term maintenance commitments
Read our v1.0 Compatibility Guarantee to understand exactly what stability means for your projects.
Why this matters: You can start building with Aixgo today knowing there’s a clear migration path to production stability.