Several New York City government agencies risk “biased outcomes” due to a lack of guidelines for their use of artificial intelligence, according to a new state comptroller’s report.
The audit, issued Thursday morning by state Comptroller Tom DiNapoli, focused primarily on four city agencies — the NYPD, the Administration for Children’s Services and the Education and Buildings departments.
It concluded that the city isn’t assessing the effectiveness of its AI tools or setting discernible accuracy standards, “leaving it vulnerable to misguided, inaccurate or biased outcomes in several programs that can directly impact New Yorkers’ lives.”
“Government’s use of artificial intelligence to improve public services is not new,” DiNapoli said. “But there needs to be formal guidelines governing its use, a clear inventory of what’s being used and why, and accuracy standards, so that the algorithms that can help train educators, identify potential criminal suspects, prioritize child abuse cases and inspect buildings don’t do more harm than good.”
The most recently available city data shows that, as of 2021, city agencies reported using 21 “algorithmic tools,” which include those employing artificial intelligence or machine learning techniques. Examples of such tools include the NYPD’s use of facial recognition technology, which has come under fire for flaws when it comes to race and ethnicity, and ACS’s use of technology aimed at predicting which open child abuse cases are most likely to lead to future and severe harm.
According to DiNapoli’s report, the city doesn’t have an “effective AI governance framework,” and while agencies are required to report their use of artificial intelligence, there are few guardrails around how to actually use the technology.
Under former Mayor Bill de Blasio, the city created a post called the Algorithms Management and Policy Officer to establish a reporting framework for policies and guidelines around the use of AI. That position was eliminated after Mayor Eric Adams took office in January 2022, and its responsibilities shifted to his newly created Office of Technology and Innovation. The city signaled recently that it’s looking to hire someone for a similar role.
Around the same time, Adams signed a new law establishing reporting requirements similar to the ones created during de Blasio’s tenure. Those rules include a requirement that a designee of the mayor compile an annual list of every AI tool used by each city agency.
A review of OTI’s website indicates the city did not compile such a list in 2022. A spokesman for Mayor Adams pointed to de Blasio’s administration as the focus of much of the audit and said the city is now “in a strong position to approach AI in a more centralized, coordinated way.”
DiNapoli cited more specific concerns.
He suggested that the predictive model ACS uses to select cases for further review is problematic because the agency “does not have a formalized policy specific to AI development and use.” While ACS staff reported to DiNapoli that they have guidelines in place, his audit noted they did not provide evidence that the “guidelines are required to be followed in the same way that formal policies would be.”
DiNapoli also found that the NYPD uses only facial recognition technology evaluated by the National Institute of Standards and Technology, but that the department neither reviewed those evaluations nor established an accuracy requirement for the technology.
In a response to the audit, Adams’ Technology Commissioner Matt Fraser agreed with its findings in general, but noted that the agency “believes that the various efforts in place represent important foundations for AI governance.”