For most organizations today, artificial intelligence (AI) has become a core part of their digitalization efforts. From large corporations to small enterprises, AI continues to provide the solutions needed to enhance productivity and make the most of the data available to them.
Be it large banks, healthcare facilities, or even small businesses, AI today can be customized to deliver the desired outcome. In fact, IBM’s Global AI Adoption Index 2022 shows that 35% of companies reported using AI in their business, and an additional 42% reported they are exploring AI today.
While AI adoption increases, there is also concern about how much influence AI can have over the decisions being made. For some, AI can affect decisions that would otherwise have had a different outcome shaped by human emotions. And while AI provides insights based on the data it has processed and has access to, some still feel it is a challenge to know where to draw the line when making the most of it.
According to Reggie Townsend, director of the SAS Data Ethics Practice (DEP), ensuring AI is used for the right reasons is one of the “biggest challenges of our time”. Townsend, who is also a member of the National Artificial Intelligence Advisory Committee (NAIAC) in the US, continues to advise organizations on how they can make the most of AI for their business.
In a blog post, Townsend noted that society must first understand that most AI harms are unintentional.
“A far larger share of the population still needs foundational knowledge. They need facts about what AI is and what it is not. I now believe one of the greatest services responsible innovators can provide is a campaign to increase global public awareness about AI. Awareness enables adoption,” stated Townsend.
The role of artificial intelligence
Townsend also spoke recently at the SAS Innovate summit in Singapore. During a session with the media, here is what he had to say about the role AI is playing for businesses today, as well as how SAS is working to ensure AI serves the right purposes.
“The question we’re asking is how do we use AI on a responsible and sustainable basis? As technologists, I think AI has given us the outcomes that it is going to give us. Although people may not have intended to hurt others with their creative technology, it’s this idea of let’s hurry up and do it and just put it out there and let the marketplace fix the bugs. It feels a bit dangerous at a time when we’re deploying technologies that are helping to make decisions about the health of our fellow citizens or determining the innocence or guilt of people.
At SAS, specifically on the team that I lead, which we refer to as the data ethics practice, I like to say that this is the area where we get a chance to kind of flex our moral imaginations and think about some of the big questions to answer like, how can we make the world free of gender discrimination? How might we make life better for unbanked people? How can we improve the health conditions of lesser industrial nations?
Then, we’re thinking through what are some of the conditions that need to change and how very pragmatically can we implement some of those changes right in our technology. So we work directly with our engineering teams, product management teams, and our operations folks, consulting folks, and such with the express purpose of infusing this ethos into the company,” commented Townsend.
Regulating artificial intelligence
With SAS having been around for 46 years, Townsend feels the company has been doing the right thing throughout that period. He also pointed out that the idea of attempting to do well and do good with AI is very comfortable and familiar to the company.
“We also recognize that, in today’s marketplace, when we talk about artificial intelligence, there’s a lot of concern among governments. And so when they are working with coming up with legislative proposals and such, we need to be in tune with that. We have the ability, or, frankly, the privilege to participate in some of those conversations directly with some of those regulators as well. So we’re doing some of that work as well. We will always have the culture that we are of concern to people who are attempting to do what’s in the best interest of the societies that they serve, the companies that they serve, and such,” added Townsend.
For Townsend, regulations are currently burgeoning around the world, with the most comprehensive coming out of the European Union. The EU is expected to pass the EU AI Act and has recently provided guidelines on liability.
“As forward-thinking as we may be around some of these topics, know that one of the great benefits of AI and machine learning is that it learns over time. It may put us in situations where we are faced with scenarios that we couldn’t anticipate. So you need to have legal frameworks in place to support those sorts of situations.
The courts will get involved, and we’ll have to adjudicate some of those concerns. I say that to say, we should just be comfortable with that reality. But that’s not unlike any other situation in the past, right? It’s not unlike when automobiles were first made available and horses were still on the street, people were still walking, and there were no sidewalks, no stop signs, and no traffic signals. Over time, standards had to be created for us all to comply with. Over time, you would have to have insurance laws and all those sorts of things.
Everyone should feel comfortable that we are on a very similar path to paths of the past. And so our great opportunity and challenge are to take advantage of what we’ve learned from the past, leave the bad stuff, and get a great benefit from the things that worked really well,” explained Townsend.
The future of artificial intelligence
At the same time, Townsend highlighted that they have spent a lot of time thinking through these parallels and analogies, which are really important. While they feel compliance is key, the question also arises of what happens when compliance is only the lowest threshold. It is important, as a government or regulator, to create situations where organizations have room to operate; the aim is not to set standards so high that no one can achieve them.
“Regulators have a part to play in terms of creating a wide enough field of play for us all. And then firms like ours have a responsibility to stay within that field or carve out a piece of the field for ourselves. And it’s in that piece of the field that we establish, for ourselves, a set of values and a higher set of principles in some cases that we want to comply with.
And so with that said, we spend time thinking about our principles. We want to make sure that everywhere that we show up, not just with our platform, but with our people and our business processes, we are always promoting this idea of human well-being, agency, and equity. We’re always thinking about how we can be more inclusive and not less, as well as creating levels of access for everyone to participate. We believe that is the best and right thing to do for all involved,” said Townsend.
Townsend also acknowledged that some business conditions will evolve. For example, he commented that there have even been instances where SAS has had to wrestle its principles to the ground and state that it won’t do business with certain organizations because doing so goes dead against something the company holds seriously. Admitting these are tough calls, Townsend feels they are nonetheless calls the company has to make.
“There will be other situations where we might take on a little more risk because we believe we have the ability to navigate adversarial storms. This is not anything different from what has ever happened in business. But the stakes are different because of the nature of AI and where it’s showing up in our lives today,” concluded Townsend.