On March 6, 2025, Monica.im, a Chinese AI company, launched Manus, an autonomous artificial intelligence (AI) agent designed to think and act independently. It follows DeepSeek, another major Chinese AI breakthrough earlier in the year: a powerful generative AI model. Unlike DeepSeek, which specializes in content generation, Manus is fully autonomous, making decisions and executing tasks without human prompts.
Unlike traditional digital assistants that require user input, Manus can initiate and complete complex tasks on its own, spanning financial analysis, candidate screening for recruitment, and even travel planning. For example, where DeepSeek might generate a market trend report based on existing data, Manus could analyze a market report, recommend investments, and autonomously execute trades. This level of AI autonomy represents a paradigm shift, moving from a system that follows instructions to an entity capable of independent decision-making, which could fundamentally alter industries, economies, and global AI competition.
Opportunities and Challenges for the Global South
Manus and similar AI advancements offer key opportunities for the Global South, such as enhanced efficiency, economic growth, and improved public services. AI-driven automation can streamline business processes, optimize logistics, and enhance customer service, leading to increased productivity across industries. Governments can also leverage AI for better urban planning, automated administrative processes, and enhanced public safety, improving service delivery for citizens. Additionally, AI has the potential to bridge the digital divide, providing underserved communities with access to digital banking, e-learning, and telemedicine, thus promoting financial and educational inclusion.
However, the rapid evolution of autonomous AI raises significant concerns about infrastructure readiness, workforce adaptation, regulatory preparedness, and the capability of policymakers to understand and regulate AI effectively. The digital infrastructure gap remains a key challenge, as many Global South nations struggle with limited internet connectivity and electricity shortages. Large portions of the population risk being excluded from AI-driven progress without expansion and affordability measures.
Moreover, the shortage of a skilled workforce presents a major hurdle. AI automation is expected to replace millions of jobs, particularly in industries reliant on routine tasks; without urgent reskilling, entire workforces risk obsolescence, exacerbating economic inequalities. AI literacy initiatives must target not only the workforce but also policymakers and decision-makers, or governments risk slow or inadequate policy responses to AI's rapid advancements. This lack of preparedness can result in weak or outdated regulations, allowing unchecked AI deployments with far-reaching consequences. For example, without proper oversight, AI-driven law enforcement tools could enable mass surveillance abuses, while unregulated AI in financial systems could encourage predatory lending and deepen economic inequality. Structured AI training programs for policymakers, including workshops, scenario-based learning, and international AI governance exchanges, are essential to ensure informed decision-making and proactive regulatory action.
Another key challenge is the risk of technological dependence, where Global South nations may remain consumers of AI rather than innovators. This stems from inadequate investment in AI research and a lack of local development capabilities. Furthermore, ethical concerns—such as algorithmic bias and data privacy—highlight the need for regulatory frameworks that ensure fairness and inclusivity in AI systems. Without strong governance, AI could deepen digital colonialism, reinforcing inequalities instead of reducing them.
Additionally, the unchecked race to develop AI poses a significant risk. As countries and corporations push for dominance, AI advancements may prioritize speed over safety, leading to insufficient regulatory oversight. This could result in unintended consequences, including unreliable or biased decision-making and ethical violations.
Even more concerning is the possibility of autonomous AI being weaponized, escalating military tensions and leading to a global AI arms race. Without international agreements, nations may deploy increasingly autonomous weapon systems, creating unpredictable global consequences, much like the nuclear arms race during the Cold War. This intensifies the need for global AI governance frameworks and treaties to ensure AI development remains safe, ethical, and beneficial for all.
Strategic Steps to Enhance Readiness
To harness the potential of autonomous AI like Manus and mitigate associated risks, Global South nations must consider the following strategies:
- Investing in Digital Infrastructure
Expanding internet access and electricity grids is crucial for AI adoption. However, beyond infrastructure expansion, it is equally important to ensure affordable access to technology for all socioeconomic groups. Projects like 2Africa, the longest subsea internet cable under construction, aim to improve connectivity across Africa, Asia, and Europe, but cost barriers could still prevent widespread adoption. Policymakers must implement measures such as subsidized internet services, public Wi-Fi zones, and affordable data plans to ensure AI-driven advancements benefit all layers of society, not just the urban elite. Without affordability, the digital divide will continue to widen, limiting the ability of marginalized communities to leverage AI for economic and social progress.
- Developing AI Talent and Literacy
Educational and training programs should not only focus on building AI talent but also on fostering AI literacy across different sectors of society. While initiatives like Deep Learning Indaba in Africa and Khipu in Latin America are fostering local AI research communities, broader AI literacy programs must be implemented to ensure that the general workforce and policymakers understand AI fundamentals, risks, and ethical considerations.
To accelerate this, training should be massively scalable through online learning platforms, mobile-based education, and game-based learning approaches. This will enable individuals from diverse backgrounds to acquire AI-related skills in an engaging and accessible manner. Governments and private sector stakeholders must collaborate to roll out these initiatives, ensuring that AI education reaches not just engineers and developers but also business leaders, civil servants, and everyday citizens who will interact with AI-driven systems.
- International Collaboration
Partnering with other nations and international organizations can facilitate technology transfer, standard-setting, and policy development. Indonesia, for example, is engaging in bilateral agreements with the U.S., Lithuania, Japan, and China to support AI ecosystem growth. Additionally, multilateral forums such as the Global South AI Forum and UNESCO play a crucial role in advocating for the interests of developing nations in the global AI race.
To bridge the growing technological divide, a balanced global AI governance framework is essential to ensure equitable access and prevent monopolization. A successful precedent can be seen in the regulatory frameworks governing telecommunications and financial technology, where global standards have been developed to ensure interoperability, consumer protection, and fair competition.
For example, the Financial Action Task Force (FATF) has established anti-money laundering (AML) standards that apply to financial institutions worldwide, ensuring compliance across different regulatory environments. Similarly, in AI, a structured governance model that balances innovation, ethical concerns, and equitable access could help prevent the dominance of AI-producing nations while safeguarding the interests of developing economies. This is particularly important in mitigating the so-called ‘Terminator Effect’, a growing concern that highly autonomous AI could spiral out of control and threaten human civilization. Stronger international oversight and ethical AI standards are essential to avoid unintended consequences that could disproportionately affect Global South nations.
The Role of Regulators and Policymakers
Regulators and policymakers in the Global South play a crucial role in ensuring that autonomous AI adoption maximizes benefits while minimizing risks. However, many of these decision-makers lack the necessary AI literacy and technical understanding to craft effective policies, which could lead to regulatory gaps and weak governance structures. Without an informed approach, policymakers may fail to implement crucial safeguards, allowing AI systems to be deployed without ethical oversight, security protocols, or economic impact assessments. For example, the deployment of facial recognition AI in public surveillance has led to privacy violations and mass surveillance concerns in some countries.
Similarly, the unchecked use of AI in financial systems has resulted in algorithmic bias, where automated credit scoring models disproportionately disadvantage low-income applicants. Without strong governance, such cases highlight how AI, if left unregulated, can reinforce social and economic inequalities, undermining trust in AI-driven decision-making. This requires a multi-faceted approach, including the establishment of ethical AI frameworks, fostering local innovation, and reducing over-reliance on foreign technologies.
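The bias audit this paragraph calls for can be made concrete. A minimal sketch, using entirely synthetic numbers and borrowing the "four-fifths rule" from US employment-law practice as an illustrative threshold (the group names and figures below are hypothetical):

```python
# Toy disparate-impact check for an automated credit-scoring model.
# All data is synthetic and illustrative; real audits would use actual
# application records and a legally appropriate threshold.
approvals = {
    "higher_income": {"approved": 80, "applicants": 100},
    "lower_income":  {"approved": 35, "applicants": 100},
}

# Approval rate per group.
rates = {group: d["approved"] / d["applicants"] for group, d in approvals.items()}

# Disparate-impact ratio: disadvantaged group's rate over the advantaged
# group's rate. Under the four-fifths rule, a ratio below 0.8 is a red flag.
ratio = rates["lower_income"] / rates["higher_income"]

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact: model warrants a fairness audit")
```

A regulator requiring even this simple ratio to be reported would surface the kind of skew described above before it hardens into systemic exclusion.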
To prepare for the rise of AI autonomy, decision-makers must prioritize comprehensive AI education and risk-awareness programs and ensure they translate into concrete policy actions. This includes integrating AI literacy into government training programs, establishing AI policy task forces, and collaborating with international regulatory bodies to align policies with global best practices. Without this, governments risk delays in implementing AI regulations, which could lead to unintended consequences such as biased AI decision-making in governance, surveillance overreach without privacy protections, and a lack of accountability in AI-driven financial or legal systems. Additionally, governments should implement AI Risk Management Frameworks (RMF) to systematically assess and mitigate the risks associated with autonomous AI.
For example, failing to regulate AI-powered hiring tools could entrench discriminatory hiring practices, while unchecked AI in financial services could exacerbate economic disparities through opaque credit scoring mechanisms. Without a structured approach, these nations risk falling further behind in AI governance and adoption.
Policymakers should also consider adopting structured AI governance models, such as AI Verify, a framework developed by Singapore that ensures transparency, fairness, and accountability in AI systems. By tailoring similar models to their socio-economic contexts, Global South nations can establish robust AI regulations while maintaining technological sovereignty.
Furthermore, regional AI governance bodies should be strengthened to ensure fair representation in global AI policymaking. Malaysia, for example, has launched a national AI office to oversee regulatory formulation and establish AI as a key national priority. Other nations must take similar steps, integrating AI literacy into educational curriculums and creating regulatory sandboxes to test and refine AI policies before full-scale implementation.
Conclusion
The launch of Manus signals a new era in AI development, presenting both opportunities and challenges for the Global South. Its nations must act decisively to ensure an inclusive and sustainable AI-driven future. Key policy recommendations include accelerating investment in digital infrastructure, establishing AI literacy and reskilling programs, and engaging in strategic international cooperation to build regulatory frameworks that align with national interests.
Governments must make AI access widespread and affordable, aligning these efforts with broader infrastructure expansion and digital inclusion initiatives. Cost-effective connectivity, delivered through public internet access points, affordable data plans, and digital education programs, will be essential to enable equitable AI adoption and prevent the digital divide from widening. Additionally, regulatory bodies should adopt AI governance models, such as Singapore's AI Verify, to maintain transparency, fairness, and accountability. Policymakers must also push for global AI treaties, much like nuclear non-proliferation agreements, to prevent unchecked AI competition from destabilizing economies and security landscapes.
The stakes are high, and Global South leaders must take a proactive role in shaping AI governance, rather than being passive consumers of foreign AI innovations. By implementing robust policies, fostering innovation, and participating in global AI governance discussions, these nations can secure a stronger, more equitable position in the future AI landscape.