It’s almost 20 years now since a socially awkward young computer science student set up a website for rating “hot” women.
Facemash, as Mark Zuckerberg called his creation, was shut down within days. But this crass teenage experiment was still, in retrospect, the first faltering step down a road to something even he couldn’t possibly have foreseen at the time: a social media phenomenon now accused of unwittingly helping to polarise society, destabilise the democratic process, fuel hate speech and disseminate dangerous conspiracy theories around the globe. All of this despite what the platforms insist have been their best attempts to stamp out the fire.
We couldn’t have predicted then, and arguably still don’t properly understand now, what impact Facebook or Twitter or Instagram or TikTok have had on teenage mental health. We couldn’t have anticipated how life online would change our sense of self, blurring the line between private life and public content; didn’t grasp until too late how algorithms developed to drive social media consumption would shape what we read or hear, and consequently how we think or feel. But if we couldn’t have accurately predicted that from the start, with hindsight, there were surely moments along the road when the penny should have dropped.
Had governments not allowed the tech giants to race so far ahead of regulation, they might have saved themselves years of clearing up the resulting mess. But blinded by the riches the industry generated, and diverted by the pleasure its products have undoubtedly given along the way, we all missed the moment. The fear is that we’re about to do the same with something infinitely more powerful and unpredictable.
Artificial intelligence is arguably both the most exciting thing that has happened to humankind in generations – key to magical, transformative breakthroughs in everything from medicine to productivity – and the most frightening, given its potential to upend the existing social and economic order at breakneck speed.
This week some of the world’s leading AI experts called for a six-month pause on training the next wave of systems more powerful than GPT-4 – the model behind the now famous ChatGPT chatbot, which has demonstrated an uncanny ability to communicate like a human – in order to better understand the implications for humanity. They warn of an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”.
Shortly afterwards the British government published a white paper arguing that, on the contrary, Britain has only a brief window of around a year to get ahead in that race, and should adopt only the lightest of regulatory touches for fear of strangling the golden goose.
The UK won’t have a new expert regulator governing what some think could become an extinction-level threat to humanity; instead, ministers will “empower” a bunch of overworked existing regulators to do what you might have hoped they were already doing, and scrutinise AI’s impact on their sectors using a set of guiding principles that may be backed up at some unspecified point by legislation.
The whole thing smacks of a government desperate for economic growth at all costs and perhaps also for something resembling a Brexit bonus; if the EU treads its usual cautious regulatory path, Britain will position itself as the comparatively unfettered, gung-ho home of the AI pioneer.
The white paper mentions the jobs AI will undoubtedly create but skates over the ones it will eliminate and the social unrest that could follow. (Think of what the decline of coal, steel and manufacturing did to rust belt towns across Europe and the US, and how that fuelled the rise of populism; now imagine AI replacing a quarter of all work tasks worldwide, as predicted in a report by Rishi Sunak’s old employer Goldman Sachs this week.)
Ministers stress the extraordinary breakthroughs possible in healthcare. But they have less to say about new forms of fraud or mass disinformation that could be perpetrated using AI tools capable of communicating convincingly like a human, or about how autonomous weaponry could be exploited by terrorists or rogue states. They don’t talk nearly enough about what new rights humans might need to live alongside AI, including perhaps the legal right to know when an algorithm rather than a person was employed to sift our job application, refuse us a mortgage, fake what looks like an entirely authentic image or craft a flirty response on a dating app (yes, there’s an AI application for that).
The risk of AI becoming sentient, or developing human feelings, remains relatively distant. But anyone who has ever been enraged by Twitter knows we’re already way past the point of algorithmic systems affecting humans’ feelings towards each other. Michelle Donelan, the cabinet minister newly responsible for tech, breezily assured the Sun this week that nonetheless AI wasn’t “something we should fear”; the government had it all in hand. Feeling reassured? Me neither.
A global moratorium on AI development sadly seems unlikely, given we haven’t managed that kind of worldwide cooperation even against the existential threat from the climate crisis. But there has to be some way of avoiding what happened with social media: an initial free-for-all that made billions, followed eventually by an angry backlash and a doomed attempt to stuff genies back into bottles. Artificial intelligence develops, in part, by learning from its mistakes. Is it too much to ask that humans do the same?