Kevin Baragona, the founder of DeepAI, said artificial intelligence has become 'the nuclear weapons of software'
A tech mogul has described the sprint to perfect artificial intelligence (AI) as the 21st century’s nuclear arms race.
Kevin Baragona was one of the more than 1,000 leading experts who signed an open letter published by the Future of Life Institute, calling for a pause on the 'dangerous race' to develop ChatGPT-like AI.
Comparing it to the invention of the atomic bomb in the 1940s, Baragona told DailyMail.com that 'AI superintelligence is like the nuclear weapons of software.'
‘Many people have debated whether we should or shouldn’t continue to develop them,’ he continued.
Americans wrestled with a similar dilemma while developing weapons of mass destruction, a dread that came to be known as 'nuclear anxiety.'
‘It’s almost akin to a war between chimps and humans,’ Baragona told DailyMail.com.
‘The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them.
‘If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it.’
The fears come amid the extraordinary rise of ChatGPT, which has taken the world by storm in recent months, passing leading medical and law exams that take humans months of preparation.
The powers of ChatGPT-like AI have sparked a civil war in Silicon Valley.
Elon Musk and Apple co-founder Steve Wozniak signed the letter for an AI pause, while Bill Gates and Google CEO Sundar Pichai did not.
‘While I can only speculate why Gates and Sundar didn’t sign the letter to pause advanced AI research, I think they didn’t because they’re signing the checks to expedite AI’s progress,’ Baragona said.
Microsoft, founded by Gates, has heavily invested in OpenAI, the creator of ChatGPT.
In January, Gates’s company was reported to have invested an additional $10 billion in the startup to compete with Google in commercializing new AI breakthroughs.
The fears of AI come as experts predict it will achieve singularity by 2045 – the point at which the technology surpasses human intelligence and can no longer be controlled
Microsoft also added AI to its Bing search engine in February, incorporating ChatGPT powers.
On March 21, Google opened Bard, also a natural-language chatbot, to the public.
The California company has been cautious with the rollout, wary of its technology churning out inaccuracies, but Bard’s shaky debut suggested it had been rushed to market.
It remains to be seen how Bard will fare against the likes of OpenAI’s ChatGPT and Microsoft’s AI-powered Bing.
‘Microsoft is investing heavily in OpenAI, and Google into Anthropic,’ Baragona told DailyMail.com.
‘They may feel it’s not the time to walk that back over unsubstantiated fears of possible negative consequences.’
Musk, Wozniak and more than 1,000 tech leaders signed an open letter Wednesday calling for a six-month pause on developing AI.
The group said more risk assessment needs to be conducted before humans lose control of the technology and it becomes a sentient, human-hating force.
Bill Gates and Google CEO Sundar Pichai did not sign the open letter with Musk. The pair have invested heavily in AI development and see the technology as the way of the future
At this point, AI would have reached singularity, which means it has surpassed human intelligence and has independent thinking.
AI would no longer need or listen to humans, allowing it to steal nuclear codes, create pandemics and spark world wars.
Gates and Pichai are on the other side of the aisle.
They are hailing ChatGPT-like AI as our time’s ‘most important’ innovation – saying it could solve climate change, cure cancer and enhance productivity.
OpenAI launched ChatGPT in November, which became an instant success worldwide.
The chatbot is a large language model trained on massive text data, allowing it to generate eerily human-like text in response to a given prompt.
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work. While many see it as little more than a virtual assistant, many brilliant minds see it as the end of humanity
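The idea behind a large language model – learning from text which words tend to follow which – can be illustrated with a deliberately tiny sketch. The bigram 'model' below is an assumption-laden toy, nothing like ChatGPT's scale or architecture, but it shows the same loop: train on text, then continue a prompt.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count word-pair frequencies: a toy stand-in for LLM training."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, prompt, length=5):
    """Greedily append the most frequent continuation of the last word."""
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the", length=3))  # prints: the cat sat on
```

A real model replaces the word-pair counts with billions of learned parameters, but the prompt-in, continuation-out shape is the same.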
Elon Musk and Apple co-founder Steve Wozniak signed a letter protesting the technology that ‘poses profound risks to humanity’
Musk and Wozniak fear AI will advance beyond human control and are asking for a six-month pause to assess the risks
In its simplest form, AI is a field that combines computer science and robust datasets to enable problem-solving.
The technology allows machines to learn from experience, adjust to new inputs and perform human-like tasks.
The systems, which include the sub-fields of machine learning and deep learning, comprise AI algorithms that seek to create expert systems making predictions or classifications based on input data.
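'Predictions or classifications based on input data' can be made concrete with a minimal sketch: a nearest-centroid classifier, one of the simplest machine-learning techniques. The data and labels below are invented for illustration.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labelled):
    """labelled: dict mapping class label -> list of (x, y) examples."""
    return {label: centroid(pts) for label, pts in labelled.items()}

def predict(model, point):
    """Classify a new point by the nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

# Two made-up clusters of example points, each with a label.
data = {"small": [(1, 1), (2, 1), (1, 2)], "large": [(8, 9), (9, 8), (9, 9)]}
model = train(data)
print(predict(model, (2, 2)))  # prints: small
```

Deep learning swaps the hand-written distance rule for layers of learned weights, but the pattern – fit a model to labelled examples, then classify new inputs – is the same.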
Scott Opitz, chief technology officer at intelligent automation company ABBYY, said in a statement: ‘Pausing AI development is like putting the toothpaste back in the tube. AI applications are pervasive, impacting virtually every facet of our lives.
‘While laudable, putting the brakes on now through a voluntary pause may be implausible.
‘What’s needed is a concerted and good-faith effort between industry and legislators to pass common-sense regulations that espouse ethical AI principles based on human-centered values of fairness, transparency, and accountability.’
Hollywood may have sparked humans’ fears of AI, typically depicting it as evil in films such as The Matrix and The Terminator and painting a picture of robot overlords enslaving the human race.
However, the idea is echoed throughout Silicon Valley as more than 1,000 tech experts believe it could become our reality.
This would be possible if AI reaches singularity – a hypothetical future in which technology surpasses human intelligence and changes the path of our evolution – predicted by some to happen by 2045. AI would first have to pass the Turing Test.
When it does, the technology is considered to have independent intelligence, allowing it to self-replicate into an even more powerful system that humans cannot control.