Long time no see.
I’ve been heads down working on something new, and I couldn’t be more excited about it. It’s called cagent, and it’s an open-source project I’ve been building at Docker.
In short: cagent is a multi-agent runtime that makes it easy to build, run, and share AI agents.
We wanted to create a tool that lets you orchestrate “teams” of virtual experts: agents with specialized knowledge and tools, without the complexity that usually comes with building AI applications.
How it works
The core philosophy is simplicity. You define your agents in a straightforward YAML file. No complex Python frameworks or boilerplate code required.
Here is what a basic agent looks like:
agents:
  root:
    model: openai/gpt-4o
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.
      Be helpful, accurate, and concise in your responses.
You can run this immediately with:
cagent run agent.yaml
The Power of Tools (MCP)
One of the things I’m most excited about is the support for the Model Context Protocol (MCP). This allows your agents to connect to the outside world: databases, file systems, web search, you name it.
You can even use Docker containers as tools. For example, giving an agent access to DuckDuckGo search via a containerized MCP server is as simple as adding a few lines to your YAML:
toolsets:
  - type: mcp
    ref: docker:duckduckgo
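For context, that toolset block typically attaches to a specific agent rather than living on its own. Here is roughly how it slots into the basic agent from earlier; this is a sketch based on my reading of the config format, so double-check the exact nesting against the docs:

agents:
  root:
    model: openai/gpt-4o
    description: A helpful AI assistant
    instruction: |
      Answer questions, and use web search when you need fresh information.
    # Assumption: toolsets nest under the agent they belong to
    toolsets:
      - type: mcp
        ref: docker:duckduckgo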
Agents as Tools
It gets more interesting when you start composing agents. cagent isn’t just
about running one agent; it’s about orchestration. You can have a root agent
that delegates tasks to specialized sub-agents (e.g., a coder, a writer, a
researcher).
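To make that concrete, a multi-agent setup might look something like this. Treat it as a sketch: the agent names and the sub_agents wiring are my illustration of the idea, not an official example.

agents:
  root:
    model: openai/gpt-4o
    description: Coordinates the team and delegates work
    instruction: |
      Break each request into tasks and hand them to the right sub-agent.
    # Assumption: sub-agents are listed by name on the delegating agent
    sub_agents:
      - coder
      - writer
  coder:
    model: openai/gpt-4o
    description: Writes and reviews code
    instruction: |
      Solve the programming tasks delegated to you.
  writer:
    model: openai/gpt-4o
    description: Drafts and edits prose
    instruction: |
      Write clear, concise text for the tasks you receive.

Running it is the same cagent run agent.yaml as before; the root agent decides when to pull in its teammates.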
Plus, cagent itself can act as an MCP server. This means you can expose your
custom agents as tools to be used by other MCP clients!
Sharing is Caring
Since we are Docker, we obviously had to make distribution easy. You can push your agent configurations to Docker Hub just like container images:
cagent push my-namespace/my-agent
And anyone else can pull and run them:
cagent run my-namespace/my-agent
More
Is that it, I hear you ask. Well no, of course not. cagent is packed with
features, each of which could have its own blog post. And maybe I’ll write
those one day. For now, here’s a rapid-fire overview of some of them.
- cagent can act as an ACP server, so you can use your agents from your IDE
- It has an API
- Its TUI can connect to the API remotely
- Remote MCPs with OAuth work out of the box
- A (we think) nice list of built-in tools for your agents
- Alloy agents: add multiple LLMs to a single agent in your YAML file with
  model: openai/gpt-5,anthropic/claude-sonnet-4-5 (see the sketch after this list)
- Code mode MCP
- Evals
- Auto session compaction
- TOON encoding of tool results
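To illustrate the alloy syntax mentioned above, here is the basic agent from the top of the post with a multi-model line dropped in; again, a sketch rather than an official example:

agents:
  root:
    # Assumption: nothing else changes; only the model line lists several LLMs
    model: openai/gpt-5,anthropic/claude-sonnet-4-5
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.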
The future
We have a lot of ideas for new features to add to cagent. We are
exploring more complex orchestration patterns, workflows, RAG... you name it,
we’ll implement it!
Give it a spin
I’m really proud of how this is shaping up. It’s still in active development, so things might break, but I’d love for you to try it out.
You can install it via Homebrew:
brew install cagent
Or check out the GitHub repository for more details, examples, and binary releases.
Let me know what you think!