Providers Overview

Providers are the LLM inference backends that agsh uses to process your instructions. agsh ships with two built-in providers:

| Provider  | API              | Streaming          | Tool Calling     |
|-----------|------------------|--------------------|------------------|
| OpenAI    | Chat Completions | SSE                | Function calling |
| Anthropic | Messages API     | SSE (named events) | Content blocks   |

Selecting a Provider

Set the provider via any configuration layer:

# CLI flag
agsh --provider openai

# Environment variable
export AGSH_PROVIDER=anthropic

# Config file (~/.config/agsh/config.toml)
[provider]
name = "openai"
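The three layers can be combined; as a minimal Python sketch of how such resolution typically works (the helper and the CLI-flag > environment-variable > config-file precedence are assumptions here, not documented agsh behavior):

```python
import os

def resolve_provider(cli_flag=None, config_file_value=None):
    """Pick the provider from the three configuration layers.

    Assumes the common precedence: CLI flag, then AGSH_PROVIDER,
    then the config file, then a built-in default. agsh's actual
    ordering may differ.
    """
    return (
        cli_flag
        or os.environ.get("AGSH_PROVIDER")
        or config_file_value
        or "openai"  # assumed default
    )
```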

OpenAI-Compatible APIs

The openai provider works with any API that implements the OpenAI Chat Completions format. This includes:

  • OpenAI (default endpoint)
  • Ollama (http://localhost:11434/v1)
  • OpenRouter (https://openrouter.ai/api/v1)
  • vLLM, LiteLLM, and other OpenAI-compatible servers

Set the --base-url flag or the OPENAI_BASE_URL environment variable to point the provider at an alternative endpoint.
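Concretely, the base URL is joined with the standard Chat Completions path to form the request endpoint. A hypothetical helper illustrating this (agsh's internals may differ):

```python
def chat_completions_url(base_url="https://api.openai.com/v1"):
    """Build the Chat Completions endpoint from a configurable base URL.

    Works the same for OpenAI, Ollama, OpenRouter, and other
    OpenAI-compatible servers: only the base changes.
    """
    return base_url.rstrip("/") + "/chat/completions"
```

For example, with a local Ollama server, `chat_completions_url("http://localhost:11434/v1")` yields `http://localhost:11434/v1/chat/completions`.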

Streaming vs Non-Streaming

By default, agsh uses streaming mode: tokens appear in the terminal as they are generated. Use --no-stream to wait for the complete response before displaying it.

Streaming is recommended for interactive use. Non-streaming may be useful for scripting or when the provider does not support SSE.
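In streaming mode the provider consumes an SSE stream of `data:` lines, each carrying a JSON delta, terminated by `data: [DONE]`. A simplified Python sketch of the OpenAI-style case (real streams carry more fields, and Anthropic uses named events instead):

```python
import json

def collect_stream(lines):
    """Concatenate content deltas from Chat Completions-style SSE lines.

    Simplified: ignores non-data lines and deltas without content
    (e.g. the initial role-only delta), and stops at [DONE].
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

With `--no-stream`, the equivalent result arrives as a single JSON response body instead.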