Here are nine key issues in the AI debate

Earlier this week, I entered a darkened room in Back Bay just after breakfast and listened to 62 talks about artificial intelligence, emerging just in time for dinner.

It was a good way to get up to speed on the research happening in Boston and elsewhere, and what seems certain to be a momentous debate in most fields about how, whether, and where to use AI — and how to manage its impact on our jobs.

The Monday event was part of the TEDxBoston series, which features short talks with slides. The talks are later uploaded to YouTube. The speakers included many MIT students; a Northeastern professor; docs from Mass General and Britain’s National Health Service; and execs from Fidelity Investments, The Vanguard Group, and Merck. The data-driven fields of finance and drug development have been eagerly experimenting with AI technology, but TEDxBoston also touched on how it might play a role in education, law, city planning, and nudging people toward more environmentally friendly purchases and behaviors.

On a lighter note, Soroush Hajizadeh, a researcher at the Broad Institute, mentioned in passing that he had been curious about his cat’s breed. He fed a batch of cat pics into an AI image classification system, along with a photo of his cat. The answer: Turkish Angora.

Aside from sleuthing out cat breeds, here are nine major AI debates we’re going to be having, and challenges we’ll be working through, over the next decade.

  • What will inclusive, unbiased AI look like? If we don’t agree as a society on exactly what it means to be inclusive, how will AI act in an inclusive way? MIT undergrad Sadhana Lolla talked about “de-biasing algorithms,” and poked at the ChatGPT AI chatbot for delivering a list of 10 important philosophers who were all white men, from Plato to Immanuel Kant. (When I tried to replicate that question with the same software, it threw in one woman: French writer Simone de Beauvoir.) But what is a perfectly inclusive list of philosophers? Is it the people who are cited most by writers and teachers of philosophy? Does it have perfect balance between Eastern and Western philosophy? How many men, women, and people of color are on the list?
  • Do we need to understand how AI works? The term “black box” describes AI systems whose inner workings even computer scientists don’t understand; they just work. (ChatGPT is an example of a black box, as are many other cutting-edge AI demonstrations.) Javier Viaña Pérez, a postdoctoral researcher at MIT, worried about “AI indirectly controlling humanity” if we don’t have software “whose reasoning can be understood by humans.” That, he argued, is the only way we can trust and regulate AI. It’s not clear who will enforce or encourage the development of AI systems that let you pop the hood and see what’s going on.
  • Is it OK if AI copies you? One of my favorite ideas of the day, from MIT professor Ashwin Gopinath, was that if a great teacher fed enough material into an AI system, it would become a kind of “digital twin” that could answer questions and adjust explanations to a particular student’s level of understanding — either in text or video form. That teacher would be available for office hours at 2 a.m., and even potentially after the human version retired or died. Is that a way of continuing to help students forever? A little creepy? Both? What about an AI system that examines your writing, music, or artwork, and creates things in your style on command — with or without your permission? You can copyright a book, and you can copyright a painting, but we haven’t yet had the discussion about whether an AI that looks at your paintings and makes something similar — but new — owes you anything.
  • Who does the fact-checking and verification? Several speakers, including MIT postdoctoral researcher Anna Ivanova, observed that while AI systems like ChatGPT can generate “coherent, plausible sentences” they are “not as good at generating factually true statements.” (When I asked ChatGPT to write a story recently about a local entrepreneur, it simply made up quotes that he had never uttered.) An important task for humans using these systems in the near term will be verifying that what they produce is correct, and not bizarre. Alejandro Maldonado, CEO of the interior design startup At Hum, said that AI had once produced a layout for a living room that included several hundred toilets.
  • How do we keep AI from being toxic and hurtful? When AI systems are trained on datasets that include toxic language or ideas, said Priya Bhasin, a former product manager at Apple, they can carry on conversations that may be psychologically harmful to people. One out of 50 documents that AI software is trained on is toxic, she said.
  • Will you be an adopter — or a resister? Hajizadeh, the cat lover, argued that we’re about to witness an era of intense change, similar to the arrival of the Internet and electricity. There are tools already emerging to help execute boring and repetitive work with less effort, or do more comprehensive research. But in many fields, people argue that the tools are not ready for prime time, or they contend that humans simply know best. It’s likely that big divides will open up between individuals and organizations that can take advantage of what AI is good at — and others who simply find reasons to wait until it’s perfect.
  • Who owns the data — and how intrusive will AI be? Questions will keep arising about whether people can maintain ownership and control over their data, even as it is leveraged by artificial intelligence software, as part of a larger data pool, to make predictions or solve problems. If AI could spot the emergence of a new pandemic by analyzing medical records, would we submit to mandatory quarantines to stem its spread?
  • How do we prepare for the inevitable job loss? One of the final talks of the day was by Yibiao Zhao, cofounder and CEO of ISEE AI, a company developing autonomous truck technology. It is focusing on the movement of trucks inside logistics yards, where freight is picked up and dropped off. There are no traffic lights, and no lane markings, but by relying on AI-enabled software, trucks can maneuver around, and independently connect to trailers. Zhao noted that the logistics industry has been suffering from a shortage of drivers, but it’s hard not to make the leap from augmenting human drivers when you can’t find anyone to hire, to having AI take over the less-desirable shifts (say, midnight to 8 a.m.), to it taking over entirely.
  • There was very little talk about whether we need political or societal mechanisms to govern the responsible use of AI, beyond broad brush strokes about “certifying” AI systems as trustworthy. Instead, there was a sense that this is a field currently moving so fast that even experts are breathless in trying to keep up.

Follow Scott Kirsner on Twitter @ScottKirsner.