Artificial Intelligence has become commonplace in the lives of billions of people globally. Research shows that 56% of companies have adopted AI in at least one function, especially in emerging nations, up six percentage points from 2020. AI is used in everything from optimizing service operations to recruiting talent. It can capture biometric data, and it already helps in medical applications, judicial systems, and finance, thus making key decisions in people’s lives.
But one huge challenge remains: how to regulate its use. So, is a global consensus possible, or is a fragmented regulatory landscape inevitable?
The concept of AI sparks fears of Orwell’s novel “1984” and his “Big Brother is Watching You” notion. Products based on algorithms that violate human rights are already being developed. So now is the time to talk, to put in place standards and regulations that mitigate the risk of a society based on surveillance and other nightmarish scenarios. The US and the EU can take leadership on this matter, especially since both blocs have historically shared principles regarding the rule of law and democracy. But on either side of the Atlantic, different moral values underpin those principles, and they don’t necessarily translate into similar practical rules. In the US, the emphasis is on procedural fairness, transparency and non-discrimination, while in the EU the focus is on data privacy and fundamental rights. Hence the challenge of finding common rules for digital services operating across continents.
Why AI Ethics Is Not Enough
Not all uses of AI are savory or built on palatable values. AI could become ‘god-like’ in nature: left to its self-proclaimed ethical safeguards, AI has been shown to be discriminatory and subversive. Consider, for a moment, the AI underlying the so-called ‘social credit’ system in China. This ranks the Chinese population, penalizing those considered untrustworthy for anything from jaywalking to playing too many video games. Punishments include losing rights, such as the ability to book tickets, or having internet speeds throttled.
Imposing mandatory rules on AI would help prevent the technology from infringing human rights. Regulation has the potential to ensure that AI has a positive, not negative, effect on lives. The EU has proposed an AI Act intended to address these types of issues. The law is the first of its kind by a large regulator worldwide, but other jurisdictions such as China and the UK are also entering the regulatory race to have a say in shaping the technologies that will govern our lives this century.
Why Global Regulation Is A Challenge
The AI Act breaks AI applications down into three risk categories. There are systems that pose an “unacceptable risk,” such as the Chinese social credit application. There are also “high risk” applications, like resume-scanning tools, that must adhere to legal requirements to prevent discrimination. Finally, systems considered neither high nor unacceptable risk are left unregulated.
Regulation is needed, yet neither the US nor the EU can impose this alone. Reaching a global agreement on the values that should support such regulations is also unlikely. Challenges and disagreements exist even within the confines of the EU and the US: some countries already have national rules in place, creating conflict between national and regional approaches. Similarly, without the EU and the US working together, discord could lead to the breakdown of the global digital infrastructure.
Finding Common Principles Based On Values
We need to be aware of the underlying principles of what is expected from AI and what its values should be. Where principles are not explicitly stated, implicit values can easily work their way in. Science has values and is cultural, and algorithms can have built-in discrimination that is racist or inequitable. One strand of research advocates for replacing implicit biases with the principles of empathy, self-rule and duty. Justice, equity and human rights are also key values that should underpin common principles, although they are vague and dependent on culture.
Some researchers also advocate for stakeholder involvement, critical to building empathy as an underlying principle. It is important to engage people who have traditionally been excluded from the AI regulation process but who are affected by its outcomes.
To walk the talk, it is critical to have the right principles in place. Strong leadership is also necessary, but it is even more vital to formulate clear technical rules that can be implemented effectively.
Who Should Lead AI Standardization?
Technical standardization is taking the lead on the regulation of AI through associations like the IEEE and ISO, and national agencies like NIST in the US, and CEN, CENELEC, AFNOR, Agoria and Dansk Standards in Europe. In these settings, one key issue is the extent of government involvement. Concern exists about the capability of politicians to understand and make complex decisions about how to regulate technology, but governments must be involved if they are to uphold regulations.
This is critical within a democracy, due to the risks linked to holding power. Great power can be abused, and major tech industry titans in Silicon Valley have an undue influence over standard setting. Take Elon Musk, for instance. He is, so to speak, the “last gasp” of those who think that the EU’s human rights-centered regulation of speech is an unacceptable constraint on the First Amendment. When the self-proclaimed “free speech absolutist” proposed to buy Twitter last April, there were fears his policy would loosen moderation. This would contravene new European moderation rules, which introduce algorithm accountability requirements for large platforms like Twitter.
The application and optimization of technical standards requires collaboration between lawmakers, policymakers, academics and engineers, and the support of different stakeholder groups, such as corporations, citizens, and human rights groups. Without this balance, Big Tech lobbyists or geopolitics will have a disproportionate influence.
Not All Is Lost
Despite all the challenges, there is hope. The US narrative gives the impression that the government cannot improve society through regulations, but major paradigm shifts have occurred before and have been successfully addressed. Regulation needs space, time and energy. Society must adapt to technology in the same way that governments adapted to rail infrastructure and oil, the advent of which brought similar challenges.
Finally, while it may seem counterintuitive, the West should keep an eye on how China is attempting to regulate AI. The recent Chinese law on algorithmic recommendation services seeks to integrate Chinese mainstream values into “Made in China” AI systems, which will certainly be sold and used worldwide. It is thus about time that the US and the EU stand for liberal democratic values and human rights by promoting and funding transatlantic research and development programs that could lead to digital technologies which are not only in line with our values but which actively enhance our humanity.
Article based on the talks given by the following professors at the “Transatlantic Dialogue on Humanity and AI Regulation” conference held in May 2022 at HEC Paris: David Restrepo Amariles of HEC Paris, Gregory Lewkowicz of Université Libre de Bruxelles, Janine Hiller of Virginia Tech, Anjanette Raymond, Scott Shackelford and Isak Asare of Indiana University, Winston Maxwell of Telecom Paris, Roger Brownsword of King’s College London, Carina Prunkl and Rebecca Williams of the University of Oxford, Kevin Werbach of UPenn Wharton Business School, Philip Butler of Iliff School, Gregory Voss of Toulouse Business School, Robert Geraci of Manhattan College, Martin Ebers of University of Tartu, Ryan Calo of University of Washington, Margaret Hu of Penn State University, Joanna Bryson of Hertie School, Sofia Ranchordas of Groningen University, Céline Caira, responsible for AI initiatives at the OECD, Aaron McKain of North Central University, Divya Siddarth, Julio Ponce and Joost Joosten of Universidad de Barcelona, Pablo Baquero of HEC Paris, Nizan Packin of University of Haifa, and Konstantinos Karachalios of IEEE.
David Restrepo Amariles is Associate Professor of Data Law and Artificial Intelligence at HEC Paris.