Open source · MIT licensed
Coming soon — The framework is under active development. Star the repo to get notified.

The agent framework
that does things

supyagent is an open-source framework for building LLM agents with real-world tools. Define agents in YAML, write tools as Python scripts, and connect to 13+ cloud services with one command.

Three layers, one ecosystem

Each layer does one thing well. Use them together or independently.

supyagent cloud

The connections

OAuth management, token encryption, auto-refresh, and a unified REST API for 13+ services. Optional — use it when your agent needs third-party access.

supyagent

The brain

The agent conversation loop. YAML-configured agents, any LLM via LiteLLM, session persistence, context management with auto-summarization, and multi-agent delegation.

supypowers

The hands

Tool execution engine. Write Python scripts with Pydantic models — each declares its own dependencies and runs in an isolated uv environment. The engine itself has zero runtime dependencies.

Tools as Python scripts

Every tool is a standalone Python file. Declare dependencies inline — each tool gets its own isolated uv environment at runtime.

1

No dependency conflicts

A tool using numpy 1.x won't conflict with one using numpy 2.x. Each script resolves independently.

2

Clean agent environment

Your agent's Python environment stays minimal. Tool dependencies never bloat it.

3

Auto-documented

Pydantic models = automatic JSON schemas. Your agent gets typed function signatures with descriptions.

powers/weather.py
# /// script
# dependencies = ["pydantic", "httpx"]
# ///
import httpx
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(..., description="City name")

class WeatherOutput(BaseModel):
    temperature: float
    conditions: str

def get_weather(input: WeatherInput) -> WeatherOutput:
    """Get current weather for a city."""
    resp = httpx.get(f"https://api.weather.com/{input.city}")
    data = resp.json()
    return WeatherOutput(**data)
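The "auto-documented" point is easy to see in isolation: Pydantic v2's `model_json_schema()` produces the JSON schema that becomes the tool's typed function signature. A quick standalone sketch (no supyagent required):

```python
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(..., description="City name")

# Pydantic v2 derives a JSON schema from the model's fields and
# Field descriptions — this is what the agent sees as the tool signature.
schema = WeatherInput.model_json_schema()
print(schema["properties"]["city"]["description"])  # City name
print(schema["required"])                           # ['city']
```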

Built for real work

Not just a conversation loop. A complete system for building agents that do things.

YAML over code

Define agents declaratively. Model, system prompt, tools, delegates — all in a YAML file. No boilerplate classes to inherit.

YAML
# agents/researcher.yaml
name: researcher
model: gpt-4
system_prompt: |
  You are a research assistant.
  Summarize findings concisely.
powers_path: ./powers
delegates:
  - writer

Any LLM provider

OpenAI, Anthropic, Google, Mistral, Ollama, or any OpenAI-compatible endpoint. Switch models by changing one line in your YAML.

YAML
# Use any provider (pick one)
model: gpt-4o
# model: claude-sonnet-4-5-20250929
# model: gemini/gemini-2.0-flash
# model: ollama/llama3
# model: together_ai/meta-llama/Llama-3-70b

Multi-agent orchestration

Agents delegate to specialists. A manager decomposes tasks and routes to the right agent. Depth limits prevent infinite loops.

YAML
# agents/manager.yaml
name: manager
model: gpt-4
delegates:
  - researcher
  - writer
  - coder
max_delegation_depth: 3
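The depth limit above is the loop-prevention mechanism: each hop increments a counter, and delegation is refused past the configured maximum. A minimal illustrative sketch — the `Agent`, `delegate`, and `handle` names are hypothetical, not supyagent's actual API:

```python
class Agent:
    """Toy agent that can hand tasks to named delegates."""

    def __init__(self, name, delegates=(), max_delegation_depth=3):
        self.name = name
        self.delegates = {a.name: a for a in delegates}
        self.max_delegation_depth = max_delegation_depth

    def delegate(self, target, task, depth=0):
        # Refuse to recurse past the configured depth, so two agents
        # that delegate to each other can't loop forever.
        if depth >= self.max_delegation_depth:
            raise RuntimeError("max delegation depth reached")
        return self.delegates[target].handle(task, depth + 1)

    def handle(self, task, depth=0):
        return f"{self.name} handled: {task}"

writer = Agent("writer")
manager = Agent("manager", delegates=[writer], max_delegation_depth=3)
print(manager.delegate("writer", "draft intro"))  # writer handled: draft intro
```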

Context management

Full history persists to disk. LLM calls get a trimmed window: system prompt + optional summary + recent messages. Auto-summarization keeps costs down.

YAML
# Built-in context management
context:
max_messages: 50
max_tokens: 8000
auto_summarize: true
summary_model: gpt-4o-mini
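The trimmed-window idea (full history persisted, only a bounded slice sent to the model) can be sketched with a hypothetical helper — not the framework's code, just the shape of the technique:

```python
def build_llm_window(system_prompt, history, summary=None, max_messages=50):
    """Assemble the message list actually sent to the LLM:
    system prompt + optional running summary + most recent messages.
    The full history stays persisted elsewhere, untouched."""
    window = [{"role": "system", "content": system_prompt}]
    if summary:
        window.append({"role": "system", "content": f"Summary so far: {summary}"})
    # Only the tail of the conversation goes to the model.
    window.extend(history[-max_messages:])
    return window

history = [{"role": "user", "content": str(i)} for i in range(60)]
window = build_llm_window("You are helpful.", history, summary="earlier chat")
print(len(window))  # 52: system + summary + last 50 messages
```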

Get started

Two paths to a working agent.

Full-stack app (recommended)

terminal
$ npx create-supyagent-app
$ cd my-supyagent-app
$ npm run dev # chat UI + agent ready

Python framework

terminal
$ pip install supyagent
$ supyagent init # scaffold agents/ and powers/
$ supyagent connect # link cloud integrations
$ supyagent chat myagent # start chatting