
Aixgo v0.2.0: Production-Grade VertexAI, Enhanced Security, and Stability

Aixgo v0.2.0 brings major improvements to the VertexAI provider with Google Gen AI SDK migration, production hardening fixes, enhanced security with SSRF protection, and streamlined CI/CD workflows.

This release focuses on production readiness, security hardening, and developer experience: a major upgrade to the VertexAI provider, critical stability fixes, and enhanced security features.

VertexAI Provider: Migration to Google Gen AI SDK

The biggest change is a complete migration from manual HTTP API calls to the official Google Gen AI SDK (v0.5.0).

Key Benefits

Simplified Authentication

Previously, authentication required manual OAuth2 token management. Now, the provider uses Application Default Credentials (ADC), automatically discovering credentials from:

  • Service account key files via GOOGLE_APPLICATION_CREDENTIALS
  • Google Cloud SDK credentials
  • Compute Engine/Cloud Run metadata service
// Automatic ADC-based authentication
provider, err := provider.NewVertexAIProvider("gemini-1.5-pro", "your-project-id")
if err != nil {
    log.Fatal(err)
}
defer provider.Close() // New: Graceful cleanup

Better SDK Support

  • Automatic handling of API versioning
  • Built-in retry logic and error handling
  • Support for new features as Google releases them
  • Reduced maintenance burden

Configuration Example

agents:
  - name: analyzer
    role: react
    model: gemini-1.5-pro
    prompt: |
      Analyze the provided data and identify trends.
    tools:
      - name: query_database
        description: Query the analytics database
# Set your GCP project ID
export GOOGLE_CLOUD_PROJECT=your-project-id

# Configure ADC
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json

# Run your agent
go run main.go

Production Hardening

Fixed Goroutine Leak in Streaming

The problem: streaming goroutines could outlive abandoned requests, causing unbounded memory growth. Solution: context-aware cleanup now ensures goroutines terminate properly when:

  • Client cancels the request
  • Stream encounters an error
  • Normal completion occurs
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

stream, err := provider.CompleteStream(ctx, request)
if err != nil {
    log.Fatal(err)
}

for chunk := range stream {
    fmt.Print(chunk.Content)
}
// Goroutine automatically cleaned up

Fixed Streaming Error Race Condition

Implemented proper channel lifecycle management with sync primitives to prevent race conditions.

Improved Retry Logic

The provider now includes robust retry logic:

  • Up to 5 retries for transient failures (rate limits, temporary errors)
  • Exponential backoff with jitter (±30%) to prevent thundering herd
  • Max backoff capped at 32 seconds
  • 30-second timeout for client creation

Deterministic Outputs Fixed

Temperature is now correctly passed to the SDK, so setting temperature: 0 produces deterministic outputs as intended.

agents:
  - name: classifier
    role: classifier
    model: gemini-1.5-flash
    temperature: 0 # Now works correctly for deterministic outputs

Security Enhancements

SSRF Protection for Ollama

Configured Ollama endpoint addresses are now validated to prevent server-side request forgery (SSRF). Protected Against:

  • Private IP address access (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
  • Metadata service endpoints (169.254.169.254)
  • IPv6 private addresses
  • DNS rebinding attacks
model_services:
  - name: local-llama
    provider: huggingface
    model: meta-llama/Llama-2-7b
    runtime: ollama
    config:
      address: http://localhost:11434 # Validated for SSRF

Debug Logging Control

Debug logging is now controlled via AIXGO_DEBUG environment variable instead of always being enabled.

# Enable debug logging
export AIXGO_DEBUG=true
go run main.go

# Production: Debug logging disabled by default
go run main.go

New Features

Tool/Function Response Handling

The VertexAI provider now properly handles tool/function responses in the ReAct agent loop.

agent := NewReActAgent(AgentDef{
    Name:  "assistant",
    Model: "gemini-1.5-pro",
    Tools: []Tool{
        {
            Name:        "get_weather",
            Description: "Get current weather for a location",
            Handler:     weatherHandler,
        },
    },
})

result, err := agent.Execute(ctx, &Message{
    Content: "What's the weather in San Francisco?",
})

Graceful Provider Shutdown

All providers now implement a Close() method for graceful cleanup.

provider, err := provider.NewVertexAIProvider("gemini-1.5-pro", "project-id")
if err != nil {
    log.Fatal(err)
}
defer provider.Close() // Clean up gRPC connections, clients, etc.

// Use provider...

Upgrade Guide

From v0.1.2 to v0.2.0

1. Update Dependency

go get -u github.com/aixgo-dev/aixgo@v0.2.0
go mod tidy

2. VertexAI Authentication Changes

Before (v0.1.2):

// Manual token management (no longer needed)
provider, err := provider.NewVertexAIProvider(model, projectID, serviceAccountJSON)

After (v0.2.0):

// Use Application Default Credentials
provider, err := provider.NewVertexAIProvider(model, projectID)
if err != nil {
    log.Fatal(err)
}
defer provider.Close() // Add cleanup

Set up ADC:

# Option 1: Service account key
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json

# Option 2: gcloud CLI
gcloud auth application-default login

3. Add Provider Cleanup

provider, err := provider.NewOpenAIProvider("gpt-4-turbo")
if err != nil {
    log.Fatal(err)
}
defer provider.Close() // Add this

// Use provider...

4. Review Debug Logging

Debug logging is now disabled by default. If you relied on the previous always-on output, enable it explicitly:

export AIXGO_DEBUG=true

5. Test Ollama Configurations

The new SSRF protection may reject endpoints that previously worked, so verify your Ollama addresses still validate. For production, use public endpoints or configure SSRF allowlists if needed.

Breaking Changes

VertexAI Provider

  • Authentication: The serviceAccountJSON parameter was removed. Use Application Default Credentials (ADC) instead.

All Providers

  • Cleanup: All providers now have a Close() method. While not strictly required to call, it’s recommended for graceful cleanup.

Performance Improvements

  • Reduced Memory Usage: Goroutine leak fix prevents unbounded memory growth
  • Faster Error Recovery: Improved retry logic with exponential backoff
  • Client Reuse: VertexAI provider now reuses HTTP clients

What’s Next

Looking ahead to v0.3.0:

  • More Provider Improvements: Expanding SDK migrations to other providers
  • Enhanced Observability: Better tracing for tool calls and multi-step reasoning
  • Performance Benchmarks: Comprehensive benchmarking suite
  • Advanced Caching: Response caching for improved latency and cost reduction

Download Aixgo v0.2.0 today and build production-grade AI agents with confidence.

go get github.com/aixgo-dev/aixgo@v0.2.0