ROMA (Recursive Open Meta-Agents) is an open-source framework from Sentient that turns complex goals into structured, parallelizable work. It does this by recursively decomposing tasks and orchestrating specialized agents for planning, execution, and synthesis. Instead of a monolithic prompt-and-pray approach, ROMA shows its work, making agentic systems easier to reason about, debug, and iterate.
From the documentation and code, ROMA's core promise is clear: a transparent, hierarchical model for building high-performance, domain-ready agents with practical integrations (FastAPI backend, React frontend, S3 mounting, sandboxed execution) and a principled backbone grounded in the MECE principle (mutually exclusive, collectively exhaustive) of dividing all work into Think, Write, and Search operations. See README.md and docs/INTRODUCTION.md in the repository for a full tour.
Why recursive agents, and why now
Traditional single-pass LLM flows struggle with complex, multi-constraint problems. They oscillate between overly general outputs and brittle prompt chains, with little visibility into intermediate steps.
ROMA's answer is a plan-execute-aggregate loop that mirrors how humans solve problems: decompose a goal, work on parts in parallel where possible, respect dependencies when needed, and synthesize an answer suited to the parent task, not just a concatenation of child outputs.
The approach is formalized in docs/CORE_CONCEPTS.md and docs/ARCHITECTURE.md, and reinforced by the code in src/sentientresearchagent/hierarchical_agent_framework.
What stands out in practice
Two things: transparency and real-world plumbing. Transparency comes from stage tracing and explicit node types (PLAN vs EXECUTE) running inside an event-driven orchestration layer, so you can see where and why the system took a particular path.
The plumbing shows up in details like sandboxed execution with E2B, optional S3-backed storage via goofys, and a LiteLLM-powered abstraction over model providers. That combination lets you move from a neat idea to a deployable agent system quickly, without hiding the logic behind it.
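To make the transparency point concrete, here is a minimal, hypothetical sketch of explicit node types plus a stage trace; the names (NodeType, TraceEvent, record) are illustrative assumptions, not ROMA's actual classes:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class NodeType(Enum):
        PLAN = "plan"        # decomposes a goal into subtasks
        EXECUTE = "execute"  # performs one atomic unit of work

    @dataclass
    class TraceEvent:
        node_id: str
        node_type: NodeType
        stage: str           # e.g. "started", "completed", "failed"
        detail: str = ""
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    trace: list[TraceEvent] = []

    def record(node_id: str, node_type: NodeType, stage: str, detail: str = "") -> None:
        """Append one inspectable step to the run's history."""
        trace.append(TraceEvent(node_id, node_type, stage, detail))

    # After a run, the trace is a plain, readable record of the path the system took and why.
    record("root", NodeType.PLAN, "started", "decompose: survey agent frameworks")
    record("root.1", NodeType.EXECUTE, "completed", "search: recursive agents")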
What ROMA delivers out of the box
ROMA's README and docs highlight several capabilities, many of which are easy to trace in the Python modules under src/sentientresearchagent/:
- Recursive task decomposition and parallel execution: a PLAN node creates subtasks; execution nodes run independently when they can, and wait when dependencies exist. See orchestration/execution_orchestrator.py and orchestration/task_scheduler.py.
- MECE-aligned task types: Think, Write, and Search, enforced by TaskType and NodeType models in hierarchical_agent_framework/types.py and implemented in agents/base_adapter.py and agents/adapters.py.
- Traceability and state management: explicit states, batched state updates, and deadlock detection in orchestration/batched_state_manager.py and orchestration/deadlock_detector.py.
- Provider-agnostic LLM access: LiteLLM integration enables OpenAI, Anthropic, Google, local models, and more (BerriAI, 2025); a minimal call sketch follows this list.
- Secure code execution: optional E2B sandboxes to run code safely and reproducibly (E2B, 2025).
- Data layer integrations: S3 via goofys for fast, POSIX-ish mounting with guardrails (Cheung, 2020).
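As noted in the LLM-access bullet above, LiteLLM gives you a single call signature across providers. The snippet below is a minimal sketch, assuming the litellm package is installed and provider API keys are set in the environment; the model names are placeholders, and this is not ROMA's own adapter code:

    # pip install litellm
    from litellm import completion

    messages = [{"role": "user", "content": "Summarize the MECE principle in one sentence."}]

    # The call shape stays the same across providers; only the model string changes.
    for model in ("gpt-4o-mini", "claude-3-5-sonnet-20240620"):
        response = completion(model=model, messages=messages)
        print(model, "->", response.choices[0].message.content)

Because responses come back in an OpenAI-style shape regardless of provider, adapters built on top of this stay provider-agnostic.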
The team's search-focused benchmarks (SEAL-0, FRAMES, SimpleQA) in the README give an early look at performance on realistic information tasks, and serve as templates for evaluating new agent recipes and domains.
Inside the architecture and code
The codebase is Python-first with FastAPI/Flask for APIs and a React + TypeScript frontend for real-time visualization. The heart of the system sits in src/sentientresearchagent/hierarchical_agent_framework, which cleanly separates concerns:
- Orchestration: execution orchestrator, state transition manager, scheduler, and recovery logic live in files like execution_orchestrator.py, state_transition_manager.py, and task_scheduler.py.
- Agent layer: adapters, base classes, prompts, and registry are in agents/base_adapter.py, agents/adapters.py, agents/prompts.py, and agents/registry.py; a hypothetical sketch of the adapter/registry pattern follows this list.
- Data and context: the docs describe a KnowledgeStore abstraction that propagates lineage and sibling context; the types and utilities supporting this are surfaced in types.py and the surrounding utils/ and context/ directories referenced throughout the docs.
- API and runtime: you can run a server via fastapi_server.py and configure behaviors with sentient.yaml. Setup paths for Docker vs native environments are scripted in setup.sh.
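To illustrate how an agent layer like this stays modular, here is a hypothetical sketch of the adapter-plus-registry pattern; the class and method names are assumptions for illustration and do not mirror the actual code in agents/base_adapter.py or agents/registry.py:

    from abc import ABC, abstractmethod

    class BaseAdapter(ABC):
        """Wraps one capability (think, write, search) behind a uniform interface."""

        @abstractmethod
        def run(self, task: str) -> str:
            ...

    class SearchAdapter(BaseAdapter):
        def run(self, task: str) -> str:
            # A real adapter would call a search tool or model provider here.
            return f"[search results for: {task}]"

    class AgentRegistry:
        """Maps task types to adapters so orchestration code stays generic."""

        def __init__(self) -> None:
            self._adapters: dict[str, BaseAdapter] = {}

        def register(self, task_type: str, adapter: BaseAdapter) -> None:
            self._adapters[task_type] = adapter

        def resolve(self, task_type: str) -> BaseAdapter:
            return self._adapters[task_type]

    registry = AgentRegistry()
    registry.register("search", SearchAdapter())
    print(registry.resolve("search").run("ROMA recursive agents"))

The design payoff is that orchestration only ever talks to the registry, so adding a capability means registering one more adapter rather than touching the scheduler.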
Finally, ROMA embraces the Agno framework for building agents, which gives you a production-ready FastAPI app and a control plane to manage multi-agent systems (Agno, 2025).
Where ROMA fits today
The repository includes pre-built agents for three areas: a general task solver that uses OpenAI's search preview for broad queries; a deep research agent that decomposes multi-phase research tasks; and a crypto analytics agent for market, on-chain, and DeFi data.
They are intentionally simple and show how to scaffold capabilities by editing agent prompts and adapters. From there, it is straightforward to extend into adjacent domains such as technical due diligence, market landscaping, editorial pipelines, structured report generation, and code-centric workflows executed in sandboxes.
The benchmark suite named in the README surfaces realistic evaluation scenarios: SEAL-0 for noisy search, FRAMES for retrieval accuracy and reasoning, and SimpleQA for short factual questions.
You can browse the assets and evaluation references directly: SEAL-0 dataset on Hugging Face, FRAMES on Hugging Face, and OpenAI's SimpleQA page. These form a useful harness for tracking improvements as you tune plans, adapters, and prompts.
Community, docs, and roadmap
ROMA ships with extensive docs under the docs/ directory: INTRODUCTION, CORE_CONCEPTS, ARCHITECTURE, SETUP, CONFIGURATION, AGENTS_GUIDE, and COMMUNITY.
Contribution guidance and a roadmap are spelled out, with a bias toward transparent recipes and community-built agents. Start with docs/COMMUNITY.md for participation notes, and docs/ROADMAP.md for upcoming work.
Using ROMA and license terms
To try ROMA locally, clone the repository and run setup.sh to choose Docker or native install. The server entry points are exposed via FastAPI, and the React UI streams live task graph updates over WebSocket as agents execute.
Configuration happens via sentient.yaml and .env. For model connectivity, LiteLLM abstracts provider differences so you can switch models without rewriting adapters (BerriAI, 2025).
ROMA is released under the MIT License; see the LICENSE file in the repository. MIT permits reuse, modification, distribution, and private or commercial use, provided you include the license and copyright notice. There is no warranty; you assume the risks of using the software.
Why this matters
Agentic systems are shifting from demos to dependable software components. ROMA's design helps that transition by making problem decomposition explicit and inspectable. The recursion depth controls and dependency-aware scheduling mean you can dial in performance for shallow tasks and still scale to deeper trees for comprehensive work. Meanwhile, the integrations around storage, sandboxes, and multi-provider models put it closer to production use than many academic or toy frameworks.
Looking ahead, the docs point to ideas like distributed execution, plugin-style extensibility, stronger context management, and multi-agent collaboration patterns. Those directions overlap with emerging best practices across the agent ecosystem, and should keep ROMA compatible with external protocols and tools as they solidify.
About Sentient
Sentient positions itself as a company focused on practical, high-performance agent systems, with open source at the center and a community presence across GitHub and Discord. See the website at sentient.xyz. The ROMA repository links out to resources that invite developers to build agents, share recipes, and contribute back. The strategy shows in the code: a bias for performance, clear orchestration, and modularity that welcomes external tools.
Try it, trace it, tune it
If you have been waiting for a clear way to build and reason about multi-agent systems, ROMA is worth your time. Start with the docs to understand the plan-execute-aggregate loop, then explore the orchestration and agents packages to see how the abstractions map to code. The example agents offer quick wins; the transparency and adapters make iteration fast. Whether you are prototyping a research assistant or wiring up a complex analytical workflow, ROMA gives you a principled foundation and the plumbing to ship.
A quick look at the recursive loop
    def solve(task):
        # Atomic tasks are executed directly.
        if is_atomic(task):
            return execute(task)
        # Otherwise: plan subtasks, solve each recursively, then synthesize the results.
        subtasks = plan(task)
        results = [solve(t) for t in subtasks]
        return aggregate(results)
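The same loop extends naturally to the depth controls and dependency-aware scheduling mentioned earlier. The variant below is a sketch under assumed helpers (max_depth, topological_order, dependencies, and with_context are illustrative, not ROMA's actual API):

    def solve_with_limits(task, depth=0, max_depth=3):
        # Cap the tree depth: beyond the limit, treat the task as atomic.
        if is_atomic(task) or depth >= max_depth:
            return execute(task)
        subtasks = plan(task)
        results = {}
        # Visit subtasks in dependency order; independent ones could run in parallel.
        for sub in topological_order(subtasks):
            context = [results[dep] for dep in dependencies(sub) if dep in results]
            results[sub.id] = solve_with_limits(sub.with_context(context), depth + 1, max_depth)
        # Synthesize relative to the parent task rather than concatenating child outputs.
        return aggregate(task, list(results.values()))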
ROMA: A Recursive Roadmap for Multi‑Agent Systems