Monica, the Chinese AI startup behind the Manus platform, just released Manus 1.5, introducing what the company calls “unlimited context” processing and speed improvements that reduce average task completion times from 15 minutes to under four minutes.
The release includes two variants: the full Manus 1.5 and a cost-efficient Manus-1.5-Lite designed for routine workflows. Both versions build on Monica’s autonomous agent platform, which breaks down user requests into steps, executes tasks using 29 integrated tools, and operates asynchronously in the cloud without requiring constant human oversight.
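Monica has not published how its tool layer is wired, but agent platforms of this kind commonly expose integrations through a registry that maps tool names to callable functions the planner can invoke. A minimal Python sketch of that pattern, using hypothetical tool names rather than Manus's actual integrations:

```python
# Minimal sketch of a tool registry: the planner picks a tool by name and the
# executor dispatches to it. Tool names and bodies here are hypothetical.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def register_tool(name: str):
    """Decorator that adds a function to the registry under a given name."""
    def wrapper(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return wrapper

@register_tool("browse_page")
def browse_page(url: str) -> str:
    # Placeholder: a real implementation would fetch and summarize the page.
    return f"contents of {url}"

@register_tool("edit_file")
def edit_file(path: str, patch: str) -> str:
    # Placeholder: a real implementation would apply the patch on disk.
    return f"applied patch to {path}"

def dispatch(tool_name: str, **kwargs) -> str:
    """Look up a tool by name and invoke it with the planner's arguments."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

# Example: the planner chooses a step, the executor dispatches it.
print(dispatch("browse_page", url="https://example.com"))
```

The appeal of this design is that adding a new integration is just registering another function, which is how a platform can grow to dozens of tools without changing its core loop.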
Monica, sometimes referenced as Butterfly Effect, first launched Manus in March 2025. The platform competes in an increasingly crowded field of autonomous AI agents, including OpenAI’s Operator and systems built on Anthropic’s Claude models. Manus differentiates itself through a multi-agent architecture that deploys specialized sub-agents to handle concurrent tasks, and through its ability to maintain coherence across extended, multi-step workflows.

Image: Manus AI
The headline feature in version 1.5 is expanded context handling. Monica has not disclosed a numeric token limit, but it describes the upgrade as addressing a common failure mode in AI agents: losing track of earlier constraints or mid-stream decisions while managing multi-file changes or complex requests. In practice, the improvement reflects an expanded effective context and refined memory policies rather than a literally unlimited window.
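Monica has not described the underlying mechanism, but one common way to make context feel unlimited is to keep recent turns verbatim while compacting older ones into summaries, so the prompt always fits a fixed token budget. The sketch below illustrates that general policy; the token estimator and summarizer are placeholders, not Manus's implementation.

```python
# Rough sketch of a context-compaction policy: keep recent turns verbatim,
# summarize older ones, and trim further if the budget is still exceeded.
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    role: str
    text: str

def estimate_tokens(text: str) -> int:
    # Crude proxy: roughly four characters per token.
    return max(1, len(text) // 4)

def summarize(turns: List[Turn]) -> Turn:
    # Placeholder: a real system would call a model to produce the summary.
    combined = " ".join(t.text for t in turns)
    return Turn(role="summary", text=f"[summary of {len(turns)} turns: {combined[:200]}...]")

def compact_context(history: List[Turn], budget: int, keep_recent: int = 6) -> List[Turn]:
    """Return a prompt-ready history that fits the token budget."""
    older, recent = history[:-keep_recent], history[-keep_recent:]
    context = ([summarize(older)] if older else []) + recent
    # If still over budget, drop the oldest remaining entries first.
    while len(context) > 1 and sum(estimate_tokens(t.text) for t in context) > budget:
        context.pop(0)
    return context

# Example: a long session collapses to one summary plus the latest turns.
history = [Turn("user", f"message {i} " * 50) for i in range(40)]
prompt_turns = compact_context(history, budget=800)
```

The failure mode the article describes, forgetting an earlier constraint, shows up in this scheme when the constraint gets summarized away, which is why the policy details matter as much as the raw window size.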
Internal benchmarks published by Monica show a 15 percent improvement in task quality and a 6 percent lift in user satisfaction, though these figures have not been independently replicated. The roughly fourfold speed gain stems from a re-architected engine designed to parallelize planning and execution steps.
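Monica has not detailed the new engine, but parallelizing planning and execution typically means drafting the plan for step N+1 while step N is still running, rather than alternating strictly between the two. The asyncio sketch below shows the general idea with made-up stage timings; it is not Manus's architecture.

```python
# Illustrative sketch: overlap planning of the next step with execution of the
# current one. Stage functions and timings are invented for demonstration.
import asyncio

async def plan_step(n: int) -> str:
    await asyncio.sleep(0.5)          # stand-in for model-driven planning
    return f"plan for step {n}"

async def execute_step(plan: str) -> str:
    await asyncio.sleep(1.0)          # stand-in for tool execution
    return f"result of {plan}"

async def sequential(steps: int) -> None:
    # Baseline for comparison: plan, then execute, about 1.5 s per step.
    for n in range(steps):
        plan = await plan_step(n)
        await execute_step(plan)

async def pipelined(steps: int) -> None:
    next_plan = asyncio.create_task(plan_step(0))
    for n in range(steps):
        plan = await next_plan
        if n + 1 < steps:             # start planning the next step now
            next_plan = asyncio.create_task(plan_step(n + 1))
        await execute_step(plan)      # execution overlaps that planning

asyncio.run(pipelined(4))  # ~4.5 s here versus ~6 s for sequential(4)
```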
Manus 1.5 also includes an upgraded full-stack app builder that generates complete web applications from a single prompt. The system creates frontend interfaces, backend APIs, user authentication flows, and databases, and can embed AI features directly into the generated code. This positions Manus as a direct competitor to low-code platforms and developer-focused AI tools.
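Monica has not documented the app builder's output format. Purely as an illustration, a single prompt could plausibly be expanded into a structured scaffold spec along these lines before any code is generated; every field name here is hypothetical.

```python
# Hypothetical scaffold spec for a prompt-to-app pipeline. None of these
# fields reflect Manus's actual output format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppSpec:
    prompt: str
    frontend_pages: List[str] = field(default_factory=list)   # UI views to generate
    api_routes: List[str] = field(default_factory=list)       # backend endpoints
    database_tables: List[str] = field(default_factory=list)  # persisted models
    auth: str = "email+password"                               # authentication flow
    ai_features: List[str] = field(default_factory=list)       # embedded AI calls

spec = AppSpec(
    prompt="an expense tracker with receipt scanning",
    frontend_pages=["dashboard", "expense_form", "reports"],
    api_routes=["/api/expenses", "/api/receipts/scan"],
    database_tables=["users", "expenses", "receipts"],
    ai_features=["receipt OCR and categorization"],
)
```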
The Lite variant targets teams managing repetitive or less complex tasks. Monica has not disclosed the specific technical differences between the two tiers, but it positions Manus-1.5-Lite as a way to control costs in production environments, with the full version reserved for workflows that require maximum reasoning depth and context retention. The company has not published per-call or monthly pricing for either version, though it describes the Lite edition as budget-friendly.
Manus operates through an agent loop: it analyzes a request, drafts a plan, executes steps such as browsing pages or editing files, observes results, and refines the plan iteratively. Users can initiate tasks and return later, with the platform continuing work asynchronously. The system integrates with external tools and retains contextual memory to adapt to user preferences over time.
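Manus's loop itself is proprietary, but the cycle described here (analyze, plan, execute, observe, refine) maps onto a standard agent loop. A compact, self-contained sketch under that assumption, with stub planner and executor functions standing in for the real components:

```python
# Compact sketch of the described agent loop: draft a plan, execute a step,
# observe the result, refine the remaining plan, repeat. The planner and
# executor below are stubs, not Manus's components.
from typing import List

def draft_plan(request: str) -> List[str]:
    # Stub planner: a real agent would use a model to decompose the request.
    return [f"research: {request}", f"draft output for: {request}", "review and finalize"]

def execute(step: str) -> str:
    # Stub executor: a real agent would dispatch to browsing, file edits, etc.
    return f"completed '{step}'"

def refine_plan(plan: List[str], observation: str) -> List[str]:
    # Stub refinement: a real agent would revise remaining steps from results.
    return plan

def agent_loop(request: str, max_iterations: int = 10) -> List[str]:
    plan = draft_plan(request)
    observations: List[str] = []
    for _ in range(max_iterations):
        if not plan:
            break                            # nothing left to do
        step = plan.pop(0)                   # take the next planned step
        observation = execute(step)          # act on it
        observations.append(observation)
        plan = refine_plan(plan, observation)  # adjust the remaining steps
    return observations

print(agent_loop("summarize this quarter's sales data"))
```

The key property is that the plan is revisited after every observation, which is what lets a long-running, asynchronous task absorb surprises without restarting from scratch.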
The platform uses multiple coordinated AI models, though Monica has not specified which foundation models power the latest release.
The latest launch arrives as autonomous agents shift from research prototypes to production tools. OpenAI, Anthropic, Google, and xAI are all investing in agentic systems that can plan, execute, and iterate on complex tasks with minimal supervision. Monica’s strategy emphasizes speed, context retention, and developer tooling, aiming to carve out market share in workflows that require sustained reasoning and tool use.
Monica’s founder, Xiao Hong, previously turned down an acquisition offer from ByteDance in 2024, choosing to keep the company independent. Co-founder Ji Yichao, formerly involved in browser development, serves as chief scientist. Neither executive issued public statements accompanying the Manus 1.5 release.
The platform is available through manus.im. Monica has not announced a timeline for further updates or disclosed whether version 1.5 will expand beyond its current cloud-based deployment model.