What is Claude Managed Agents?
Anthropic's hosted platform for running autonomous AI agents — launched April 8, 2026. You define the agent (model + system prompt + tools). Anthropic runs it in a secure cloud container. No servers to build, no infrastructure to maintain. All 7 types below run on it.
Type 01: Basic Agent With Tools
Create an agent with the built-in toolset, start a session, and give it a task. Anthropic spins up a secure cloud container and the agent runs bash commands, searches the web, and reads/writes files, all autonomously on Anthropic's infrastructure.
# 1. Create the agent; save the returned agent.id
ant beta:agents create \
  --name 'My Assistant' \
  --model '{id: claude-opus-4-7}' \
  --system 'You are a helpful assistant.' \
  --tool '{type: agent_toolset_20260401}'
# 2. Create a cloud environment — save the returned environment.id
ant beta:environments create \
  --name 'my-env' \
  --config '{type: cloud, networking: {type: unrestricted}}'
# 3. Start a session and send your task — streams back in real time
session = client.beta.sessions.create(agent=agent_id, environment_id=env_id)
client.beta.sessions.events.send(
    session.id,
    events=[{'type': 'user.message', 'content': 'Your task here'}])
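The send call streams events back in real time. Here is a minimal sketch of collecting the agent's reply from such a stream; the event type names used below (`agent.message`, `session.done`) are assumptions for illustration, not confirmed API values:

```python
def collect_reply(events):
    """Collect the text of agent messages from an iterable of session events.

    Works on any iterable of dicts shaped like the session events above;
    stops when a terminal event arrives.
    """
    parts = []
    for event in events:
        if event.get('type') == 'agent.message':
            parts.append(event.get('content', ''))
        elif event.get('type') == 'session.done':
            break
    return ''.join(parts)

# A fake stream standing in for a live session:
fake_stream = [
    {'type': 'agent.message', 'content': 'Task '},
    {'type': 'agent.message', 'content': 'complete.'},
    {'type': 'session.done'},
]
print(collect_reply(fake_stream))  # Task complete.
```

Because it only touches plain dicts, the same helper works against recorded event logs in tests.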
Type 02: MCP Server Agent
Add MCP (Model Context Protocol) server URLs when you create your agent. It gains live read/write access to those external services (Notion, Atlassian, Slack, any database with a public MCP URL) and can interact with everything stored there.
ant beta:agents create \
  --name 'Notion Agent' \
  --model '{id: claude-opus-4-7}' \
  --system 'You read and write Notion pages.' \
  --tool '{type: agent_toolset_20260401}' \
  --mcp-server '{type: url, url: https://mcp.notion.com/sse}'
# Then start a session exactly as in Type 01.
# Add multiple --mcp-server flags for multiple services.
Type 03: Sequential Agents
Create separate agents for each stage of your pipeline. Start sessions in order, passing the output of each as the input to the next. Each agent is a specialist that does only its one job before handing off.
agent_a = create_agent('You read and summarise contacts.')
agent_b = create_agent('You write personalised emails.')
agent_c = create_agent('You send emails via the API.')
# Run in sequence — output of each becomes input of the next
session_a = start_session(agent_a)
result_a = send_and_wait(session_a, original_task)
session_b = start_session(agent_b)
result_b = send_and_wait(session_b, result_a)
session_c = start_session(agent_c)
final = send_and_wait(session_c, result_b)
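The hand-off above is just a left fold over your list of agents. Here is a tiny generic runner, with plain functions standing in for `send_and_wait(start_session(agent), ...)` so the sketch runs on its own:

```python
def run_pipeline(stages, task):
    """Run `task` through each stage in order, feeding each stage's
    output to the next. A stage is any callable taking and returning text."""
    result = task
    for stage in stages:
        result = stage(result)
    return result

# Stub stages standing in for live agent sessions:
summarise = lambda text: f'summary({text})'
draft     = lambda text: f'email({text})'
send      = lambda text: f'sent({text})'
print(run_pipeline([summarise, draft, send], 'contacts'))
# sent(email(summary(contacts)))
```

Adding a fourth stage to the pipeline is then one line, not three.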
Type 04: Parallel Execution
Start multiple sessions at the same time; each runs independently in Anthropic's cloud. Use Python asyncio to fire them all at once and gather the results when they're done. Massively faster for tasks you can split into parallel workstreams.
# Async helper — starts a session, waits for the result
async def run_session(task):
    session = client.beta.sessions.create(agent=agent_id, environment_id=env_id)
    return await send_and_wait_async(session, task)
# Fire all sessions at the same time — Anthropic runs them in parallel
results = await asyncio.gather(
    run_session('Research competitor A'),
    run_session('Research competitor B'),
    run_session('Research competitor C'))
merged = combine(results)
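The same gather pattern, runnable on its own with stub coroutines standing in for live sessions. Total wall time is roughly the slowest task, not the sum:

```python
import asyncio

async def run_session(task, delay):
    # Stub standing in for a live cloud session that takes `delay` seconds.
    await asyncio.sleep(delay)
    return f'result: {task}'

async def main():
    # All three "sessions" run concurrently: ~0.03s total, not 0.06s.
    return await asyncio.gather(
        run_session('competitor A', 0.03),
        run_session('competitor B', 0.02),
        run_session('competitor C', 0.01))

results = asyncio.run(main())
print(results)
# ['result: competitor A', 'result: competitor B', 'result: competitor C']
```

`asyncio.gather` preserves argument order in its result list, so merging stays deterministic even though finish order varies.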
Type 05: Router Agent
Create a lightweight router agent whose only job is to classify an incoming task and return a routing decision. Your code reads that decision and starts the right specialist session. Fully automatic, zero manual sorting.
router = create_agent(system='Read the task. Reply with ONE word: PRIORITY, STANDARD or ESCALATE.')
# Specialist agents for each route
priority_agent = create_agent('Handle urgent tasks fast.')
standard_agent = create_agent('Handle normal tasks carefully.')
escalate_agent = create_agent('Handle complex edge cases.')
# Route automatically — start the right specialist session
route = send_and_wait(start_session(router), incoming_task)
agents = {'PRIORITY': priority_agent, 'STANDARD': standard_agent, 'ESCALATE': escalate_agent}
result = send_and_wait(start_session(agents[route]), incoming_task)
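One gotcha: `agents[route]` raises a KeyError if the router replies with anything other than a clean single word. A small defensive parser keeps the pipeline alive; the choice of STANDARD as the fallback route is an assumption, pick whatever default suits you:

```python
def pick_route(reply, routes, default='STANDARD'):
    """Map a router agent's raw reply onto a known route key.

    Strips whitespace and trailing punctuation, upper-cases, then matches;
    anything unrecognised falls back to `default` (an assumed policy).
    """
    key = reply.strip().strip('.!?').upper()
    return key if key in routes else default

routes = {'PRIORITY', 'STANDARD', 'ESCALATE'}
print(pick_route('priority.', routes))   # PRIORITY
print(pick_route('not sure?', routes))   # STANDARD (fallback)
```

Then index the dict with `agents[pick_route(route, set(agents))]` instead of the raw reply.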
Type 06: Human in the Loop
Set your agent's tools to always_ask confirmation mode. Before the agent executes any action, your code receives a confirmation event. You approve or deny; the action fires only if you allow it. Full control, zero surprise executions.
agent = create_agent(tools=[{'type': 'agent_toolset_20260401', 'confirmation': 'always_ask'}])
# Stream events — intercept before any tool fires
for event in stream:
    if event.type == 'agent.tool_use':
        print(f'Agent wants to: {event.name}')
        ok = input('Allow? y/n: ')
        client.beta.sessions.events.send(session.id, events=[{
            'type': 'user.tool_confirmation', 'tool_use_id': event.id,
            'result': 'allow' if ok == 'y' else 'deny'}])
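Prompting a human for every single tool call gets tedious fast. A common refinement is to auto-approve a safe allowlist and only prompt for the rest. A sketch of the decision function; the tool names are illustrative, not confirmed toolset values:

```python
SAFE_TOOLS = {'read_file', 'web_search'}  # auto-approved; names are illustrative

def decide(tool_name, ask=input):
    """Return 'allow' or 'deny' for a requested tool call.

    Tools on the allowlist are approved without prompting; anything else
    defers to the human via `ask` (injectable, so it's easy to test).
    """
    if tool_name in SAFE_TOOLS:
        return 'allow'
    answer = ask(f'Allow {tool_name}? y/n: ').strip().lower()
    return 'allow' if answer == 'y' else 'deny'

print(decide('read_file'))                # allow (no prompt)
print(decide('bash', ask=lambda _: 'n'))  # deny
```

Feed `decide(event.name)` into the `result` field of the confirmation event above and only risky calls ever reach your terminal.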
Type 07: Orchestrator Agent
Create specialist sub-agents, then create an orchestrator that lists them as callable_agents. The orchestrator delegates work at runtime by spawning sub-agent threads on demand; it figures out who it needs as it goes. You don't pre-configure the flow.
# 1. Create the specialist sub-agents
researcher = create_agent('You research topics thoroughly.')
writer = create_agent('You write clear, concise summaries.')
# 2. Create orchestrator — list sub-agents it's allowed to call
orchestrator = client.beta.agents.create(
    name='Engineering Lead', model='claude-opus-4-7',
    system='Delegate research and writing to your specialist agents.',
    tools=[{'type': 'agent_toolset_20260401'}],
    callable_agents=[{'type': 'agent', 'id': researcher.id},
                     {'type': 'agent', 'id': writer.id}])
# 3. Start ONE session — it spawns sub-agent threads automatically
session = start_session(orchestrator)
# Threads run in parallel, orchestrator merges and returns results
Quick Comparison
| # | Type | Best For | Level |
|---|---|---|---|
| 01 | Basic Agent With Tools | Bash, web search, file ops | Easy |
| 02 | MCP Server Agent | External databases via MCP URLs | Easy |
| 03 | Sequential Agents | Ordered multi-step pipelines | Medium |
| 04 | Parallel Execution | Speed-critical split workstreams | Medium |
| 05 | Router Agent | High-volume task sorting | Medium |
| 06 | Human in the Loop | High-stakes tasks needing approval | Medium |
| 07 | Orchestrator Agent | Self-organising multi-agent tasks | Research Preview |
YOUR NEXT STEPS
Start small. Ship something real. These compound fast.