Last Updated: April 10, 2026
Something shifted this week. On April 8, Anthropic launched Claude Managed Agents, giving teams a way to assign tasks to AI agents instead of prompting them one message at a time. It is a big deal. But here is what most people missed: an open-source project called Multica has been building the same idea, and it works with every major coding agent, not just Claude.
That matters more than you might think.
The Problem: AI Agents Are Powerful But Unmanaged
If you have used Claude Code, Codex, OpenClaw, or OpenCode, you know the drill. You open a terminal, type a prompt, and watch the agent build something impressive. Then you do it again. And again. Each interaction is a one-shot conversation. The agent does not remember what it learned last time. It does not show up on your project board. It cannot report blockers or ask for help the way a real teammate would.
The result? Teams end up with a handful of powerful tools and no system for coordinating them. It is like hiring five talented developers and forgetting to give them a Slack channel, a ticket system, or a standup meeting.
That gap between "capable agent" and "productive team member" is exactly what Multica fills.
What Is Multica?
Multica is an open-source platform that turns coding agents into real teammates. You install it, connect your existing agent CLIs (Claude Code, Codex, OpenClaw, OpenCode), and suddenly each agent has a profile, shows up on your kanban board, and can be assigned tasks the same way you would assign work to a human colleague.
The project is licensed under Apache 2.0, hosted on GitHub at multica-ai/multica, and offers both a self-hosted option and a cloud-hosted version at multica.ai. Their tagline says it all: "Your next 10 hires won't be human."
What makes Multica different from simply running agents in terminals is the management layer. Agents post comments on issues. They create follow-up tasks. They report when they are stuck. They complete work and mark it done. The full task lifecycle is tracked: enqueue, claim, start, complete, or fail.
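That lifecycle (enqueue, claim, start, then complete or fail) is easy to picture as a small state machine. The sketch below is illustrative only, with invented type and function names, and is not Multica's actual implementation; it just shows the shape of the transitions the article describes.

```go
package main

import (
	"errors"
	"fmt"
)

// State models the task lifecycle described above:
// enqueue -> claim -> start -> complete | fail.
// These names are illustrative, not Multica's API.
type State string

const (
	Enqueued  State = "enqueued"
	Claimed   State = "claimed"
	Started   State = "started"
	Completed State = "completed"
	Failed    State = "failed"
)

// transitions lists which states each state may legally move to.
var transitions = map[State][]State{
	Enqueued: {Claimed},
	Claimed:  {Started},
	Started:  {Completed, Failed},
}

// Advance returns the next state, or an error (and the unchanged
// state) if the requested move is not a legal transition.
func Advance(from, to State) (State, error) {
	for _, next := range transitions[from] {
		if next == to {
			return to, nil
		}
	}
	return from, errors.New("illegal transition: " + string(from) + " -> " + string(to))
}

func main() {
	s := Enqueued
	for _, to := range []State{Claimed, Started, Completed} {
		s, _ = Advance(s, to)
		fmt.Println(s)
	}
}
```

The point of modelling it this way is that a task can never silently skip a step: an agent cannot mark work complete without having claimed and started it first.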
How Multica Works: Step by Step
Getting started with Multica is straightforward. Here is the flow from install to first completed task.
1. Install the CLI. If you are on macOS, you can use Homebrew:
brew tap multica-ai/tap
brew install multica
2. Authenticate. Run multica login to connect to your cloud account or your self-hosted instance.
3. Start the daemon. Run multica daemon start. This is where the magic happens. The daemon scans your system for agent CLIs on your PATH. It auto-detects Claude Code (the claude command), Codex (the codex command), OpenClaw (the openclaw command), and OpenCode (the opencode command).
4. Agents appear on the board. Each detected agent shows up as a teammate in your Multica dashboard with its own profile. You can see which agents are online, what runtimes they have access to, and their current workload.
5. Assign tasks. Create an issue or pick an existing one, then assign it to an agent. This works exactly like assigning a ticket to a developer in Jira or Linear. The agent receives the task, spins up an isolated working environment, starts working, and streams its progress back in real time via WebSocket.
6. Review results. When the agent finishes, it posts its output, any code changes, and a summary. If it hits a blocker, it reports that too. You review, approve, request changes, or close the issue.
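The auto-detection step boils down to probing your PATH for known binaries. Here is a minimal Go sketch of that idea, assuming the four command names the article lists; the function name and structure are invented for illustration and are not Multica's code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// agentCLIs are the commands the article says the daemon looks for.
var agentCLIs = []string{"claude", "codex", "openclaw", "opencode"}

// DetectAgents probes the PATH for each candidate binary and returns
// the ones that are actually installed. exec.LookPath does the same
// lookup your shell performs when resolving a command name.
func DetectAgents(candidates []string) []string {
	found := []string{}
	for _, name := range candidates {
		if _, err := exec.LookPath(name); err == nil {
			found = append(found, name)
		}
	}
	return found
}

func main() {
	fmt.Println("detected agents:", DetectAgents(agentCLIs))
}
```

This is also why "zero configuration" holds: installing a new agent CLI is enough for it to appear, because detection is just a PATH lookup on the next scan.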
Key Features That Set Multica Apart
Reusable Skills That Compound Over Time
This is the feature that got me genuinely excited. Every time an agent solves a problem in Multica, that solution can be saved as a reusable skill. Think of it as a playbook that any agent on the team can reference later.
Say one agent figures out a clean deployment workflow for your AWS Lambda setup. That becomes a skill. Next week, a different agent working on a related deployment can pull in that skill instead of figuring it out from scratch. Over time, your team of agents builds up a shared knowledge base that makes every subsequent task faster and more reliable.
Common skills that compound well:
- Deployment procedures and rollback strategies
- Database migration patterns
- Code review checklists and style enforcement
- Testing scaffolding for new services
- Incident response runbooks
This is fundamentally different from how agents normally work, which is starting from scratch every single time. Skills turn your agents from disposable tools into experienced team members who get better at their job.
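One way to picture the skills system is as a shared, tag-searchable library that any agent can query before starting work. The sketch below is purely conceptual: the Skill and Library types and their methods are invented for this example, and the article does not describe Multica's internal data model.

```go
package main

import (
	"fmt"
	"strings"
)

// Skill is an illustrative stand-in for a saved playbook:
// a named procedure with tags agents can search by.
type Skill struct {
	Name  string
	Tags  []string
	Steps []string
}

// Library is the shared pool of skills the whole team draws from.
type Library struct {
	skills []Skill
}

// Save adds a completed task's solution to the shared pool.
func (l *Library) Save(s Skill) { l.skills = append(l.skills, s) }

// Find returns skills whose tags contain the query, so a new task
// can reuse an earlier agent's solution instead of starting cold.
func (l *Library) Find(query string) []Skill {
	var out []Skill
	for _, s := range l.skills {
		for _, t := range s.Tags {
			if strings.Contains(t, query) {
				out = append(out, s)
				break
			}
		}
	}
	return out
}

func main() {
	lib := &Library{}
	lib.Save(Skill{
		Name:  "AWS Lambda deploy",
		Tags:  []string{"aws-lambda", "deployment"},
		Steps: []string{"build artifact", "upload", "shift alias"},
	})
	fmt.Println(len(lib.Find("lambda")), "matching skill(s)")
}
```

A plain substring match is the simplest possible retrieval; given that Multica ships pgvector (discussed below under architecture), semantic matching over skill descriptions is the more likely real-world approach.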
Unified Runtimes: One Dashboard for Everything
If your team uses a mix of agents (and most serious teams will), Multica gives you a single pane of glass. Local daemons, cloud runtimes, different agent types: they all show up in one dashboard. The auto-detection of CLIs means you spend zero time on configuration. If it is on your PATH, Multica finds it.
This is particularly valuable for teams that are evaluating multiple agents or that use different agents for different types of work. Maybe Claude Code for architecture decisions, Codex for bulk implementation, and OpenClaw for DevOps tasks. Multica manages all of them without forcing you into a single vendor.
Multi-Workspace Isolation
Each workspace in Multica has its own agents, issues, settings, and access controls. This makes it suitable for agencies managing multiple clients, enterprise teams with separate projects, or open-source maintainers juggling several repositories.
For Australian consultancies and agencies (and I am looking at you here, because this is a surprisingly good fit), multi-workspace means you can keep client projects completely separate. No cross-contamination of code, no mixing up API keys, no "which client was this agent working for again?"
Autonomous Execution With Human Oversight
Agents in Multica do not just wait for prompts. They can claim tasks from a queue, work through them, and report results. But humans stay in the loop through code review, approval gates, and real-time progress monitoring. It is autonomy with accountability, which is exactly how you want AI agents to operate in production environments.
Multica vs Claude Managed Agents: An Honest Comparison
Claude Managed Agents launched on April 8, 2026, and it is a significant product. If you are all-in on the Anthropic ecosystem, it is a strong choice. But it is worth understanding the tradeoffs.
Vendor lock-in:
- Claude Managed Agents works exclusively with Claude
- Multica works with Claude Code, Codex, OpenClaw, and OpenCode
- If you want flexibility or already use multiple agents, Multica has the edge
Open source vs proprietary:
- Claude Managed Agents is a proprietary service
- Multica is Apache 2.0 licensed, fully auditable, and self-hostable
- For teams with compliance requirements or data sovereignty concerns (hello, Australian businesses with data residency needs), open source is a major advantage
Cost structure:
- Claude Managed Agents is usage-priced through Anthropic
- Multica is free to self-host; the cloud version has its own pricing
- Self-hosting Multica means your costs are infrastructure plus whatever API calls your agents make
Skill sharing:
- Claude Managed Agents benefits from Claude's capabilities but does not have a cross-task skill compounding system
- Multica's reusable skills feature means every completed task makes future tasks easier
- This compounds dramatically over months of use
Maturity and support:
- Claude Managed Agents is backed by Anthropic's engineering team
- Multica is a newer open-source project with community-driven development
- Anthropic offers enterprise support; Multica relies on GitHub issues and community
The honest take: both are good options. If you are a Claude-only team that wants a turnkey solution, Claude Managed Agents is the path of least resistance. If you want vendor flexibility, self-hosting control, and the skills-compounding feature, Multica is worth serious consideration.
Tech Stack and Architecture
For the technical folks evaluating whether Multica fits their infrastructure, here is what is under the hood.
Frontend: Next.js 16 with the App Router. Fast, modern, and familiar to most web developers.
Backend: Go, using the Chi router, sqlc for type-safe SQL queries, and gorilla/websocket for real-time streaming. Go was a smart choice here: the daemon needs to be lightweight enough to run alongside agent processes, and Go's concurrency model handles WebSocket connections and agent orchestration cleanly.
Database: PostgreSQL 17 with pgvector. The pgvector extension suggests Multica is thinking about semantic search for skills and task matching, which could enable smart task-to-agent routing in the future.
Agent runtime: The local daemon executes agent CLIs directly. It creates isolated environments for each task, manages the lifecycle, captures output, and streams progress back.
The architecture is clean and pragmatic. No unnecessary microservices, no Kubernetes dependency for local use, no over-engineered message queues. It does what it needs to do and gets out of the way.
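The runtime lifecycle described above (create a workspace, run the agent inside it, capture output, clean up) can be sketched in a few lines of Go. This is a conceptual sketch, not Multica's daemon code; a real daemon would add sandboxing, resource limits, and streaming rather than buffered output.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// RunIsolated gives a task its own throwaway working directory,
// runs a command inside it, captures combined stdout/stderr, and
// tears the workspace down when the task ends.
func RunIsolated(name string, args ...string) (string, error) {
	dir, err := os.MkdirTemp("", "task-*")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir) // clean up the workspace afterwards

	cmd := exec.Command(name, args...)
	cmd.Dir = dir // the task sees only its own directory as cwd
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, _ := RunIsolated("echo", "hello from an isolated task")
	fmt.Print(out)
}
```

The per-task directory is what keeps concurrent agents from trampling each other's files, and the deferred cleanup is what keeps a long-running daemon from accumulating stale workspaces.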
Self-Hosting vs Cloud: Which Is Right for You?
Multica gives you two paths, and the right one depends on your situation.
Self-Hosting
You need Docker and Docker Compose. The setup is four commands:
git clone https://github.com/multica-ai/multica.git
cd multica
cp .env.example .env
docker compose -f docker-compose.selfhost.yml up -d
This gives you PostgreSQL, the backend (with auto-migration), and the frontend. Self-hosting is the right choice if:
- You have data sovereignty requirements (Australian government work, healthcare, finance)
- You want full control over your infrastructure
- You want to avoid per-seat SaaS costs as your team grows
- You have existing DevOps capability
Cloud (multica.ai)
The hosted version is faster to get started with and removes the operational burden. It is the right choice if:
- You want to be up and running in minutes
- You do not want to manage infrastructure
- You are a small team without dedicated DevOps
- You want automatic updates and new features as they ship
For Australian businesses specifically, self-hosting gives you control over where your data lives, which matters for clients in regulated industries. The cloud option is fine for most other use cases.
Who Should Use Multica
Development teams (5-50 developers). If your team is already experimenting with AI coding agents but struggling with coordination, Multica provides the missing management layer. Instead of everyone running agents ad hoc in their own terminals, you get a shared board, shared skills, and visibility into what agents are working on.
Digital agencies and consultancies. This is a particularly strong fit. Agencies juggle multiple clients, each with different codebases and access requirements. Multica's multi-workspace isolation keeps everything separate, and the skills compounding means your agents get better at common agency tasks (CMS setups, API integrations, deployment pipelines) over time. For Australian agencies competing on speed and quality, this is a real advantage.
Open-source maintainers. If you maintain a popular open-source project, you know the feeling: hundreds of good first issues, not enough contributors. Multica lets you assign routine tasks (documentation updates, test coverage, dependency bumps) to agents while human contributors focus on design decisions and complex features.
Solo developers and indie hackers. Even if it is just you and a handful of agents, the skills system and task tracking are valuable. You stop being a prompt engineer and start being a team lead. The agents handle the grunt work; you handle the creative and strategic decisions.
Enterprise teams evaluating AI tooling. If your organisation is cautious about adopting AI (and many are, especially in Australia's more conservative corporate landscape), Multica's open-source nature means you can audit every line of code, run it on your own infrastructure, and maintain complete control over what data goes where.
Getting Started: Your First 15 Minutes
Here is a practical guide to go from zero to your first agent-assigned task.
Step 1: Install Multica
brew tap multica-ai/tap
brew install multica
Step 2: Set up your account
multica login
Step 3: Make sure at least one agent CLI is on your PATH. If you already use Claude Code, Codex, OpenClaw, or OpenCode, you are good to go.
Step 4: Start the daemon
multica daemon start
Step 5: Open the dashboard. You will see your detected agents listed as teammates.
Step 6: Create an issue. Write it like you would write a ticket for a developer. Be specific about what needs to happen, which files or modules are relevant, and what success looks like.
Step 7: Assign it to an agent. Pick the agent you think is best suited (or let Multica suggest one based on skills).
Step 8: Watch it work. You will see real-time progress in the dashboard. The agent will post comments, create sub-tasks if needed, and report when it is done or blocked.
Step 9: Review and merge. Check the agent's output, review the code changes, and merge or request revisions.
Step 10: Save the skill. If the solution is reusable, save it as a skill so other agents (and future tasks) can benefit from it.
That is it. In fifteen minutes you have gone from downloading a CLI to having an AI agent complete real work on your project board.
What This Means for Software Teams
The shift from "prompting an agent" to "managing a team of agents" is not a small one. It changes how you think about AI assistance in software development.
Today, most teams treat AI agents as interactive tools. You use them when you need them, one conversation at a time. That is useful, but it does not scale. You cannot run a team that way any more than you could run a development team by having everyone work in isolation with no ticket system and no communication.
Multica represents the next phase: agents as first-class team members with profiles, tasks, accountability, and shared knowledge. This is the infrastructure layer that makes multi-agent teams actually work in practice.
The skills compounding feature is particularly important. In a world where AI capabilities are commoditising fast (every major provider releases better models every few months), the differentiator is not which model you use. It is how effectively your team leverages AI over time. Skills compound. Prompts do not.
For Australian teams, this is worth paying attention to. The local market is competitive, margins are tight, and the ability to do more with fewer people is not a luxury but a necessity. Platforms like Multica that make AI agents genuinely productive (not just impressive in demos) are the ones that will matter.
The question is no longer "should we use AI coding agents?" Most teams have answered that. The question is becoming "how do we manage a mixed team of humans and AI agents effectively?" Multica is one of the first platforms to give a serious answer to that question, and it does it in the open.
FAQ
Is Multica free to use?
Yes. Multica is open-source under the Apache 2.0 license, so you can self-host it for free. Your only costs are the infrastructure to run it (a small server or Docker environment) and the API costs from whichever agent CLIs you connect (Claude Code, Codex, etc.). There is also a cloud-hosted version at multica.ai if you prefer not to manage infrastructure yourself.
Which coding agents does Multica support?
Multica currently supports Claude Code, Codex (OpenAI), OpenClaw, and OpenCode. The daemon auto-detects any of these CLIs on your system PATH. Because the architecture is extensible, support for additional agents is likely as the community grows. The vendor-neutral approach is a core design decision, not an afterthought.
Can I self-host Multica behind my company firewall?
Absolutely. Multica is designed for self-hosting with Docker Compose. You clone the repository, configure your environment variables, and run docker compose -f docker-compose.selfhost.yml up -d. Everything runs on your infrastructure: the PostgreSQL database, the Go backend, and the Next.js frontend. No data leaves your network unless your agents make external API calls.
How does the skills system work in practice?
When an agent completes a task, the solution can be saved as a reusable skill. Other agents can then reference that skill when working on similar tasks. For example, if an agent develops a deployment workflow for AWS Lambda, that becomes a skill that any agent can use for future Lambda deployments. Over time, your team builds a library of skills that make every agent more effective. It is like onboarding documentation that agents actually read and apply.
Is Multica ready for production use?
Multica is a relatively new open-source project, so it comes with the usual caveats about early-stage software. The architecture is solid (Go backend, PostgreSQL, battle-tested frontend framework), the feature set is comprehensive, and the code is open for audit. For teams comfortable with open-source tooling and willing to participate in the community, it is usable now. For organisations that need enterprise SLAs and guaranteed support, the cloud version at multica.ai may be more appropriate.