The agent framework
that does things
supyagent is an open-source framework for building LLM agents with real-world tools. Define agents in YAML, write tools as Python scripts, and connect to 13+ cloud services with one command.
Three layers, one ecosystem
Each layer does one thing well. Use them together or independently.
supyagent cloud
The connections
OAuth management, token encryption, auto-refresh, and a unified REST API for 13+ services. Optional — use it when your agent needs third-party access.
supyagent
The brain
The agent conversation loop. YAML-configured agents, any LLM via LiteLLM, session persistence, context management with auto-summarization, and multi-agent delegation.
supypowers
The hands
Tool execution engine. Write Python scripts with Pydantic models — each declares its own dependencies and runs in an isolated uv environment. Zero runtime dependencies.
Tools as Python scripts
Every tool is a standalone Python file. Declare dependencies inline — each tool gets its own isolated uv environment at runtime.
No dependency conflicts
A tool using numpy 1.x won't conflict with one using numpy 2.x. Each script resolves independently.
Clean agent environment
Your agent's Python environment stays minimal. Tool dependencies never bloat it.
Auto-documented
Pydantic models = automatic JSON schemas. Your agent gets typed function signatures with descriptions.
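Because each tool's input model is a plain Pydantic class, you can inspect the schema the agent sees yourself. A minimal sketch, assuming Pydantic v2's `model_json_schema()`:

```python
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(..., description="City name")

# Pydantic emits a JSON schema with types, descriptions, and required fields
schema = WeatherInput.model_json_schema()
print(schema["properties"]["city"])
# {'description': 'City name', 'title': 'City', 'type': 'string'}
```

That schema, together with the function's docstring, is what becomes the typed function signature exposed to the LLM.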
```python
# /// script
# dependencies = ["pydantic", "httpx"]
# ///
import httpx
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(..., description="City name")

class WeatherOutput(BaseModel):
    temperature: float
    conditions: str

def get_weather(input: WeatherInput) -> WeatherOutput:
    """Get current weather for a city."""
    resp = httpx.get(f"https://api.weather.com/{input.city}")
    data = resp.json()
    return WeatherOutput(**data)
```
Built for real work
Not just a conversation loop. A complete system for building agents that do things.
YAML over code
Define agents declaratively. Model, system prompt, tools, delegates — all in a YAML file. No boilerplate classes to inherit.
```yaml
# agents/researcher.yaml
name: researcher
model: gpt-4
system_prompt: |
  You are a research assistant.
  Summarize findings concisely.
powers_path: ./powers
delegates:
  - writer
```
Any LLM provider
OpenAI, Anthropic, Google, Mistral, Ollama, or any OpenAI-compatible endpoint. Switch models by changing one line in your YAML.
```yaml
# Use any provider
model: gpt-4o
model: claude-sonnet-4-5-20250929
model: gemini/gemini-2.0-flash
model: ollama/llama3
model: together_ai/meta-llama/Llama-3-70b
```
Multi-agent orchestration
Agents delegate to specialists. A manager decomposes tasks and routes to the right agent. Depth limits prevent infinite loops.
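The depth-limit idea can be sketched in a few lines of Python. This is illustrative only — the class and method names here are hypothetical, not supyagent's actual API:

```python
class Agent:
    """Toy agent that can hand tasks to named delegates."""

    def __init__(self, name, delegates=None, max_delegation_depth=3):
        self.name = name
        self.delegates = delegates or {}
        self.max_depth = max_delegation_depth

    def delegate(self, task, target, depth=0):
        # Refuse to recurse past the configured depth, preventing
        # two agents from bouncing a task back and forth forever.
        if depth >= self.max_depth:
            raise RuntimeError(f"delegation depth limit ({self.max_depth}) reached")
        return self.delegates[target].run(task, depth=depth + 1)

    def run(self, task, depth=0):
        # A real agent would call the LLM here; the sketch just echoes.
        return f"{self.name} handled: {task}"
```

With `max_delegation_depth: 0`, any attempt to delegate raises immediately; with the default of 3, a manager can route through at most three levels of specialists.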
```yaml
# agents/manager.yaml
name: manager
model: gpt-4
delegates:
  - researcher
  - writer
  - coder
max_delegation_depth: 3
```
Context management
Full history persists to disk. LLM calls get a trimmed window: system prompt + optional summary + recent messages. Auto-summarization keeps costs down.
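The windowing described above can be sketched roughly like this (an illustrative sketch; supyagent's real implementation may differ):

```python
def build_window(system_prompt, history, summary=None, max_messages=50):
    """Assemble the messages sent to the LLM: system prompt,
    optional summary of older turns, then the most recent messages.
    The full history stays on disk; only this window is billed."""
    window = [{"role": "system", "content": system_prompt}]
    if summary:
        window.append({"role": "system", "content": f"Summary so far: {summary}"})
    window.extend(history[-max_messages:])  # keep only the recent tail
    return window
```

Older turns drop out of the window but survive in the persisted session, and the summary keeps their gist in context at a fraction of the token cost.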
```yaml
# Built-in context management
context:
  max_messages: 50
  max_tokens: 8000
  auto_summarize: true
  summary_model: gpt-4o-mini
```
Get started
Two paths to a working agent.
Full-stack app (recommended)
```shell
$ npx create-supyagent-app
$ cd my-supyagent-app
$ npm run dev   # chat UI + agent ready
```
Python framework
```shell
$ pip install supyagent
$ supyagent init           # scaffold agents/ and powers/
$ supyagent connect        # link cloud integrations
$ supyagent chat myagent   # start chatting
```