LangChain Deep Agents: Building Autonomous Coding Agents
LangChain has open-sourced Deep Agents, a powerful framework for building autonomous AI agents that can handle complex, multi-step tasks reliably. Unlike typical “shallow” tool-calling LLM agents, Deep Agents combine task planning, persistent memory, context management, and sub-agent orchestration to tackle real-world coding and research workflows.
In this post, you’ll discover what makes Deep Agents different, how the architecture works, how to get started quickly, and what you can build with this framework.
What Are Deep Agents?
Deep Agents are advanced AI agents built on top of LangChain and LangGraph that go beyond simple question-answering or single-tool invocation. They are designed for sustained, complex workflows that require:
- Breaking down large tasks into manageable subtasks
- Maintaining state and memory across long conversations
- Managing context windows intelligently through summarization
- Delegating specialized work to sub-agents
- Persistent file system access for offloading content
Think of Deep Agents as autonomous assistants that can work on research projects, codebases, data analysis, or any task requiring multiple steps, decision-making, and memory over time.
💡 Tip: Deep Agents are particularly useful when your task can’t be solved in a single LLM call and requires orchestration across multiple tools and steps.
Key Features
1. Task Planning and Decomposition
Deep Agents come with built-in planning tools like write_todos that help break complex tasks into smaller, trackable steps. The agent can adjust plans dynamically as it discovers new information or encounters blockers.
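To make the idea concrete, here is a rough sketch of the kind of todo list an agent maintains through write_todos. The field names (`content`, `status`) are an assumption based on common deepagents usage; check your installed version for the exact schema.

```python
# Illustrative only: the rough shape of the todo list a Deep Agent
# maintains via the built-in write_todos tool (field names are an
# assumption, not a guaranteed schema).
todos = [
    {"content": "Survey LangGraph docs", "status": "completed"},
    {"content": "Draft summary outline", "status": "in_progress"},
    {"content": "Write final summary", "status": "pending"},
]

# The agent rewrites this list as it works, so the plan stays visible
# in state instead of being buried in the conversation history.
remaining = [t["content"] for t in todos if t["status"] != "completed"]
```

Because the plan lives in state, both you and the agent can inspect what is done and what is still pending at any point in the run.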
2. Virtual File System for Context Management
One of the standout features is file system access. Agents can use tools like ls, read_file, write_file, and edit_file to:
- Store intermediate results and notes
- Offload large content to avoid context overflow
- Maintain persistent workspaces for ongoing projects
- Keep organized state across multiple sessions
This approach prevents context window limits from becoming a bottleneck.
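A hedged sketch of how this looks in practice: deepagents keeps a `files` mapping in the agent state alongside `messages`, so you can seed files on input and read back anything the agent wrote. The exact state keys are an assumption for your installed version.

```python
# Sketch of the in-state virtual file system (the "files" key is an
# assumption; consult the deepagents docs for your version).
initial_state = {
    "messages": [{"role": "user", "content": "Summarize notes.txt"}],
    "files": {"notes.txt": "LangGraph checkpointing lets graphs resume."},
}

# result = agent.invoke(initial_state)   # agent from create_deep_agent()
# result["files"] would then also contain any files the agent wrote
# with write_file or edit_file, e.g. a generated summary.
```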
3. Sub-agent Spawning
When a task requires specialized focus, Deep Agents can spawn sub-agents in isolated context windows. Each sub-agent tackles a specific subtask, returns summarized results to the main agent, and keeps the primary context clean and manageable.
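Sub-agents are typically configured as plain dicts passed to `create_deep_agent`. The sketch below follows the name/description/prompt convention; treat the exact schema as an assumption and verify it against the deepagents docs for your version.

```python
# Hypothetical sub-agent spec (schema is an assumption, not guaranteed).
critique_subagent = {
    "name": "critique-agent",
    "description": "Reviews a draft report and lists concrete improvements.",
    "prompt": "You are a meticulous editor. Critique the draft you are given.",
}

# agent = create_deep_agent(subagents=[critique_subagent])
# The main agent delegates work to it; only the sub-agent's summarized
# answer re-enters the main context window.
```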
4. Long-term Memory
Using LangGraph’s persistent storage capabilities, Deep Agents maintain state across multiple conversations or sessions. They can recall past work, access historical context, and build on earlier progress — essential for extended projects.
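In LangGraph terms, session identity usually comes from a `thread_id` in the invocation config: with a checkpointer attached, each thread maps to its own saved state, and reusing an id resumes the earlier session. The thread name below is purely illustrative.

```python
# Standard LangGraph pattern: the thread_id selects which saved
# conversation to resume (assumes a checkpointer is configured).
config = {"configurable": {"thread_id": "project-alpha"}}

# agent.invoke({"messages": [...]}, config=config)
# A later invoke with the same thread_id continues where it left off.
```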
5. Automatic Context Summarization
When conversations approach context limits, Deep Agents automatically summarize older content while archiving full histories. This keeps the active working memory focused on what matters most.
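A toy model of the trigger, not the deepagents internals: summarization kicks in once the running token estimate gets within a safety margin of the model's limit.

```python
# Toy sketch of a summarization trigger (the 0.8 ratio is an
# illustrative assumption, not a deepagents default).
def needs_summarization(token_count: int, limit: int, ratio: float = 0.8) -> bool:
    """Return True once usage reaches `ratio` of the model's context limit."""
    return token_count >= limit * ratio
```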
6. CLI and SDK
Deep Agents ship with both a Python SDK for programmatic control and a powerful CLI for interactive terminal sessions. The CLI acts as a coding assistant that can be run interactively or in headless script mode.
How Deep Agents Work
The architecture is modular and built on a middleware stack:
Architecture Overview
- LangGraph Foundation: At the core, Deep Agents use LangGraph's CompiledStateGraph for orchestration, execution streaming, and state checkpointing.
- Middleware Stack: The create_deep_agent() function builds a layered middleware system:
  - TodoListMiddleware: Task decomposition and tracking
  - MemoryMiddleware: Agent documentation and knowledge integration
  - SkillsMiddleware: Custom Python tools and functions
  - FilesystemMiddleware: File operations for context management
  - SubAgentMiddleware: Sub-agent creation and isolation
  - SummarizationMiddleware: Automatic conversation summarization
- Backend Abstraction: Agents support different file system backends (in-memory, on-disk, or custom stores) for context and outputs.
- Persistence: Built-in checkpointers provide robust long-term state retention, enabling workflows that span multiple sessions.
Workflow Example
- Initialization: Create an agent with create_deep_agent(), supplying tools, a system prompt, and configuration
- Task Reception: The agent receives a user request and writes a task plan with subtasks
- Tool Invocation: For each task, the agent selects and invokes appropriate tools
- Context Management: Large results are saved to the file system; summaries are written back to context
- Delegation: Specialized tasks are delegated to sub-agents with focused contexts
- Persistence: State is maintained across sessions for long-running workflows
Quick Start
Installation
Install the Python SDK:
pip install deepagents
Or install the CLI for interactive use:
pip install deepagents-cli
Set Up API Keys
Deep Agents support multiple LLM providers. Set your API key:
export OPENAI_API_KEY="your_openai_api_key"
# Or for Anthropic:
export ANTHROPIC_API_KEY="your_anthropic_api_key"
Basic Python Example
from deepagents import create_deep_agent
# Create a default agent
agent = create_deep_agent()
# Invoke the agent with a task
result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Research LangGraph and write a summary"
    }]
})
print(result["messages"][-1].content)
This agent automatically has access to planning tools, file operations, and context management out of the box.
Customizing with Tools
Add your own tools and system prompt:
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent
@tool
def search_documentation(query: str) -> str:
    """Search technical documentation."""
    return f"Results for: {query}"

agent = create_deep_agent(
    model=init_chat_model("openai:gpt-4o"),
    tools=[search_documentation],
    system_prompt="You are a research assistant specialized in documentation.",
)
Using the CLI
Start the CLI in any project directory:
deepagents
Then interact naturally:
You: Add type hints to all functions in src/utils.py
The CLI will read files, propose changes, and ask for approval before making edits.
Example: Building a Research Agent
Here’s a more complete example using internet search:
import os
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
def internet_search(query: str, max_results: int = 5):
    """Search the internet for information."""
    return tavily_client.search(query, max_results=max_results)
research_instructions = """
You are an expert researcher. Use the `internet_search` tool for gathering information.
Create comprehensive summaries with citations.
"""
agent = create_deep_agent(
model="anthropic:claude-sonnet-4-6",
tools=[internet_search],
system_prompt=research_instructions,
)
result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "What are the latest trends in Kubernetes security?"
    }]
})
print(result["messages"][-1].content)
The agent will:
- Use the search tool to gather information
- Store intermediate findings in its virtual file system
- Synthesize results into a comprehensive summary
- Maintain context if you ask follow-up questions
Next Steps
Now that you understand Deep Agents, here are some directions to explore:
Build Production Workflows
Deep Agents are production-ready. Integrate them into:
- Automated code review pipelines
- Documentation generation workflows
- Research and report automation
- Data analysis and visualization tasks
Explore Advanced Features
- Multi-agent systems: Create specialized agents for different domains
- Custom middleware: Build your own middleware layers for specialized behavior
- Persistent backends: Use PostgreSQL or Redis for durable state storage
- Observability: Integrate with LangSmith for tracing and monitoring
Learn from Examples
The Deep Agents repository includes example implementations:
- Coding assistants with GitHub integration
- Research agents with web search
- Data analysis agents with pandas tools
- Multi-step automation workflows
Compare with Other Frameworks
Deep Agents position themselves as an open-source alternative to proprietary solutions like Claude Code and Devin. Key advantages include:
- Full control over prompts, tools, and behavior
- Model-agnostic design (works with OpenAI, Anthropic, Google, etc.)
- Transparent architecture with readable source code
- Integration with the broader LangChain ecosystem
Conclusion
LangChain Deep Agents provide a production-ready, open-source foundation for building powerful autonomous AI agents. By combining task planning, persistent memory, context management, and sub-agent orchestration, they address the major limitations of earlier agent frameworks.
Whether you’re building a coding assistant, research tool, or automation pipeline, Deep Agents offer a robust foundation with minimal boilerplate and maximum flexibility. The framework is model-agnostic, developer-friendly, and designed for real-world complexity.
Ready to build your first Deep Agent? Start with the quickstart guide and explore the examples to see what’s possible.