BOZEMAN — Artificial intelligence can help libraries provide better services, including making materials more accessible, but using AI can also raise ethical questions, according to Sara Mannheimer, associate professor with the Montana State University Library.
Now, Mannheimer is leading a team working to help librarians and archivists make ethical, values-driven decisions about how best to use artificial intelligence in libraries and archives. Their efforts are backed by a $250,000, three-year grant from the Institute of Museum and Library Services.
“We see AI as this potentially transformative technology for libraries,” Mannheimer said. “AI can provide services that users love – services that can make the library better – but we want to use AI in a way that is careful and with an eye toward potential harms so that those harms can be minimized.”
Artificial intelligence refers to machines that are programmed to process complex information and mimic human actions. AI is powering social media and search engines, and it factors into data-driven decision-making in education, criminal justice, business and other fields, Mannheimer said. It helps find relevant information with personalized internet search results, book and movie recommendations and more.
Over the past few years, libraries and archives have begun using artificial intelligence to enhance their services, especially for describing items in their collections and helping patrons discover them. But using artificial intelligence can also come with drawbacks, Mannheimer said.
“AI is not actually intelligent,” Mannheimer said. “It’s still a machine that’s working through a process. It’s dependent on what data you feed it.” Because of that, she noted, it’s important to recognize that AI can be biased and limited by the data it uses and to consider how to account for and potentially address those biases.
Mannheimer said one of the main ways that libraries use AI is in archives and special collections to help create metadata and catalog terms. It helps librarians and patrons understand, on a larger scale, what’s in the library’s digital collections.
With that additional data, scholars can do things like data mapping – a process that connects a data field from one source to a data field in another source, which can help facilitate data migration and integration – or distant reading, which uses computational methods to analyze literature.
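The data mapping described above can be sketched in a few lines of code. The sketch below is purely illustrative: the field names ("creator," "date_issued," and so on) are hypothetical examples, not the schema of any particular library system or metadata standard.

```python
# Minimal sketch of data mapping: translating metadata records from one
# collection's schema to another's so the two sources can be integrated.
# All field names here are hypothetical examples.

# Mapping from source-schema field names to target-schema field names
FIELD_MAP = {
    "creator": "author",
    "date_issued": "publication_year",
    "subject_terms": "topics",
}

def map_record(source_record: dict) -> dict:
    """Translate one metadata record from the source schema to the target schema."""
    return {
        FIELD_MAP[field]: value
        for field, value in source_record.items()
        if field in FIELD_MAP  # fields with no mapping are dropped in this sketch
    }

record = {"creator": "Willa Cather", "date_issued": "1918", "format": "book"}
print(map_record(record))
# {'author': 'Willa Cather', 'publication_year': '1918'}
```

Once records from different sources share a common schema like this, they can be combined and analyzed at scale, which is what makes techniques such as distant reading possible.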
Having that additional data available aligns with a widely held library value of making collections more accessible, but Mannheimer cautioned that accessibility must be balanced against privacy.
“When collections are digitized, it opens up the scope of use if it’s openly available on the web,” she said. “Having digitized archives widely available is very different than having them sitting in a box that only certain scholars would access in person.”
Libraries have also begun using AI in patron-facing services, such as recommending books to library patrons, yet such services could be at odds with patron privacy.
“There’s this long-standing idea that in order for (library users) to have intellectual freedom and explore topics with impunity, we don’t keep lending records,” Mannheimer said. “But if there is a search program where a patron can opt in to creating a profile, and libraries can keep track of what you’ve borrowed and recommend other titles for you, suddenly there is a tension. We’re trying to align with long-standing library values, but also we’re coming into this culture where there are different expectations from our users about the services we provide.”
To help address the challenges that come with AI, Mannheimer and colleagues at MSU, Iowa State University and James Madison University are developing resources, including a harms analysis tool and handbook, to advise on the ethical use of AI in libraries and archives. They’re also hosting workshops with librarians, archivists and library users and have created an advisory board.
“We’re trying to bring in as many people with different perspectives as possible to make sure that the ethical decision-making guide we make can be used by people with all different sorts of values and experiences,” she said.
Mannheimer and her team plan to create a free tool to help decision-makers in libraries weigh the potential ramifications of adopting AI. The team plans to distribute it to colleagues and also make it available online.
“It will basically be a guide where, when you embark on either buying an AI product or creating your own AI project, it will help you consider all of the potential angles,” she said. “We think it will be really helpful to have a series of questions to consider.”
In addition to Mannheimer, who is leading the grant, other members of the project team include Jason Clark, Doralyn Rossmann and Scott Young with the MSU Library; Bonnie Sheehey with the MSU Department of History and Philosophy; Hannah Scates Kettler with Iowa State University; and Yasmeen Shorish with James Madison University. The grant is affiliated with the MSU Center for Science, Technology, Ethics, and Society.
Doralyn Rossmann, dean of the MSU Library, said she’s excited to be a part of a team providing needed tools for libraries and archives to navigate the ethical challenges of using AI in keeping with library values.
“AI has so much potential to help people meet their information needs,” Rossmann said. “Our team brings together expertise to ask thoughtful questions about the ethical intersection of libraries, our patrons and AI technologies.”
– by Anne Cantrell, MSU News Service –