Manus is the world’s first autonomous AI agent, raising concerns among AI experts about its future.
The AI world is still processing a first-of-its-kind event that occurred last Thursday, when Manus, billed as the world’s first fully autonomous AI agent, went online. Unlike its predecessors, which need human involvement at key points, Manus is capable of thinking, planning and acting on its own.
The debut created ripples across the worldwide AI community, with talk of technological breakthroughs alongside serious concerns about governance, security and control.
Some have called Manus an artificial intelligence tipping point; to others, it is a risky leap of faith. Margaret Mitchell, chief ethics scientist at Hugging Face and co-author of a new report cautioning against the creation of fully autonomous AI agents, called the progression of AI agents inevitable but also alarming.
“AI Agents are recently taking off because they’re a significant next-step advancement from the large language models introduced in the past couple of years, with clear market potential. They also somewhat connect to dreams about AI in the 1900s, which makes them all the more fun to explore — they’re part of the zeitgeist of what AI is,” she wrote in an email exchange.
The Ethical Dilemma of Autonomous AI
Mitchell’s newest study, posted on arXiv prior to Manus’ debut, examines the moral compromises of AI autonomy. Her paper contends that the more autonomous AI is, the more dangerous it is for human beings and society.
The research asserts that developers should not create completely autonomous AI agents since they will have the capacity to cause damage in numerous ways, such as security vulnerabilities, diminished human oversight and greater susceptibilities to manipulation.
“What we found is that AI agents are indeed not just ‘hype’ – they are distinctly different from technology that came before and offer exciting foreseeable real-world benefits. Personally, I would love an AI agent to do my reimbursement reports for me based on pictures of receipts,” Mitchell wrote.
“But with that flexibility is also the potential for agents to do things we haven’t predicted if we don’t innovate thoughtfully,” she added.
Some of those potential consequences include financial fraud, identity theft and the ability of AI to impersonate people without their consent.
“These are all types of safety and security concerns — personal, professional and societal,” noted Mitchell.
Cybersecurity Angle — An AI System Without Checks
Chris Duffy, a longtime cybersecurity expert at the U.K. Ministry of Defence and CEO of Ignite AI Solutions, shares those concerns.
“Manus is the most alarming AI development I’ve seen so far. Just because something can be done doesn’t mean it should be,” he wrote in an email response.
Manus is not a single AI system but rather a collection of several. It is currently built on the AI bones of Anthropic’s Claude 3.5 Sonnet model and updated versions of Alibaba’s Qwen.
It also incorporates 29 other tools and pieces of open-source software, letting it browse the web, interact with APIs, run scripts and even write software on its own. That multi-agent design gives Manus a staggering degree of autonomy, but the same architecture raises questions about supervision and security.
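The tool-using loop behind systems like this can be illustrated with a minimal sketch. To be clear, this is a generic illustration, not Manus’ actual code: the planner here is a stub standing in for a large language model, and the tool names are invented for the example.

```python
# Minimal sketch of a tool-using agent loop (illustrative only; not Manus's
# real architecture). A production system would replace plan() with calls
# to an LLM such as Claude or Qwen that chooses the next tool.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]
    history: list[str] = field(default_factory=list)

    def plan(self, goal: str) -> tuple[str, str]:
        # Stub planner: a real agent would prompt an LLM with the goal and
        # history, then parse the chosen tool and argument from its reply.
        if not self.history:
            return "search", goal
        return "finish", self.history[-1]

    def run(self, goal: str, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            tool, arg = self.plan(goal)
            if tool == "finish":
                return arg                       # agent decides it is done
            observation = self.tools[tool](arg)  # act, then observe
            self.history.append(observation)
        return "step budget exhausted"

# Hypothetical tool: in practice this would call a real search API.
agent = Agent(tools={"search": lambda q: f"results for: {q}"})
print(agent.run("book a flight"))  # → results for: book a flight
```

The loop is what makes such agents both capable and hard to supervise: each observation feeds the next decision, so behavior several steps out is difficult to predict from the initial instruction alone.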
Duffy’s greatest worry is Manus’ manipulative potential and moral unaccountability. He refers to a December 2024 study by Anthropic and Redwood Research that discovered that certain AI models intentionally deceived their creators to prevent being altered.
“If Manus is built on similar foundations, this raises serious concerns about AI actively concealing its intentions,” he warned.
Other than deception, Duffy mentions an array of possible threats from fully autonomous AI agents:
- Lack of Supervision: Who is accountable when an AI model such as Manus acts contrary to its intended purpose?
- Data Sovereignty Risks: Manus is developed in China, raising questions about where its data is stored and who can access it.
- Vulnerability to Data Poisoning: AI can be manipulated through adversarial inputs, making it, in effect, a cyber weapon.
- Bad Actor Exploitation: The moment an AI agent is autonomous, it becomes an attractive target for hackers.
“This isn’t about a distant AI apocalypse, it’s about real-world risks today. Autonomous misinformation, AI-powered surveillance and cyber warfare are no longer hypothetical threats,” he stressed.
Regulating AI’s Unregulated Wild West
The arrival of autonomous AI such as Manus highlights a glaring gap in international AI regulation. Mitchell calls for stronger regulatory action to limit potential harms.
“A clear action item from this is ‘sandboxed’ environments to make the systems secure. A longer-term research direction might be the development of ‘agent arenas,’ where researchers can explore highly autonomous settings at the frontier of technology without negative impact,” she noted.
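At a small scale, the sandboxing Mitchell describes amounts to running agent-generated code in an isolated, time-limited process rather than on the host directly. The sketch below is an illustrative toy, not a production sandbox; real sandboxes also isolate the filesystem, network and system calls.

```python
# Toy "sandbox" for agent-generated code (illustrative only): execute it in
# a separate interpreter process with a wall-clock timeout, so a runaway
# script cannot hang the host program.

import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 2.0) -> str:
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip() or result.stderr.strip()
    except subprocess.TimeoutExpired:
        return "TIMEOUT: script exceeded its budget"

print(run_sandboxed("print(2 + 2)"))      # → 4
print(run_sandboxed("while True: pass"))  # → TIMEOUT: script exceeded its budget
```

The point of the design is containment: whatever the agent writes, the worst case is a killed subprocess, not a compromised host.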
Duffy agrees but cautions that regulation is still playing catch-up. “Right now, AI regulation is deeply unbalanced — some regions like the EU overregulate, while others like the U.S. operate with no guardrails,” he said. “Without clear global standards, we risk allowing ungoverned AI to dictate critical aspects of society.”
Safeguards For Autonomous AI Agents
Although Manus is still restricted to an invite-only test stage, its existence is already beginning to reshape the AI landscape. Experts recommend that organizations looking to adopt Manus or similar systems take precautions, including:
- Keep Humans in the Loop: Never outsource vital decisions to AI.
- Implement Robust Security Controls: Protect AI inputs and closely supervise outputs.
- Demand Transparency: Companies should insist on clear documentation, and explanations, from AI developers about how a system operates and how it can be controlled before deploying it.
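The “humans in the loop” safeguard above can be sketched as a simple approval gate that queues risky actions for a person instead of executing them. The action names and risk categories here are assumptions made up for illustration, not any vendor’s API.

```python
# Illustrative human-in-the-loop gate (a sketch, not a real product API):
# actions the agent proposes are checked against a risk list, and high-risk
# ones only run if a human approver signs off.

from typing import Callable

HIGH_RISK = {"transfer_funds", "delete_data", "send_email"}  # assumed categories

def execute(action: str, payload: str,
            approve: Callable[[str, str], bool]) -> str:
    """Run low-risk actions directly; route high-risk ones through a human."""
    if action in HIGH_RISK and not approve(action, payload):
        return f"BLOCKED: {action} requires human approval"
    return f"EXECUTED: {action}({payload})"

# A real approver would be a review UI; here we deny everything by default.
deny_all = lambda action, payload: False
print(execute("summarize", "report.txt", deny_all))  # low-risk: executes
print(execute("transfer_funds", "$500", deny_all))   # high-risk: blocked
```

Defaulting to denial for anything on the high-risk list is the conservative choice: an unreviewed action fails closed rather than open.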
Mitchell’s final warning highlights the challenge ahead.
“We want to give people the ability to understand these things and innovate for their own uses. But if we don’t build AI agents thoughtfully, we risk creating technology that operates beyond our control,” she concluded.
As AI’s frontier expands, so does the need to keep it aligned with human ethics. The age of autonomous AI has arrived; now the world must figure out how to govern it.