AI can be very helpful in the public sector – but civil servants must ‘train’ systems carefully and address a set of risks around the technology.
Move fast and break things; reward risk-taking; scale fast. These three credos have allowed technology companies to upend our everyday lives in the past two decades, while producing fortunes for their owners. Civil servants are equally passionate about transforming society, but – given their responsibility to spend taxpayers’ money carefully and serve everyone in society – they cannot approach technology with the same risk-taking abandon. Neither can they be paid eye-watering salaries.
How, then, can the freebooting Silicon Valley approach to digital transformation be put to work in government? This crucial debate surfaced time and again throughout the session on Artificial Intelligence (AI) at the 2021 Government Digital Summit: an annual gathering of national digital leaders, first held in London in 2019 and convened online earlier this year.
Paycheques with extra zeros
“The elephant in the room right now is salaries,” commented Dr Vik Pant, Chief Scientist and Chief Science Advisor at Natural Resources Canada. With the likes of Amazon or Google able to “add an extra zero, or maybe two” to the paycheques of government employees, the public sector faces a challenge to recruit and retain technology specialists.
“There’s a lot of poaching going on,” responded another delegate (to ensure that digital leaders can speak freely at the event, they may anonymise their quotes in these reports). Rather than being drawn into bidding wars with tech firms, the delegate said, governments should focus on hiring mission-driven technologists: “Those that are interested in solving problems for the public, I don’t think they’re really in it for the money.”
Pant, on the other hand, argued that governments should accept that there has been a paradigm shift when it comes to hiring technology specialists: digital chiefs, he said, should give up on the idea of recruiting employees for the long term. Instead, he seeks to tempt in exceptional graduates with the offer of access to vast and unique datasets owned or collected by the state – far exceeding those available to private businesses. In this context, two or three years working in government becomes a personal development opportunity for potential employees, offering the chance to work at an otherwise impossible scale and foster skills that can then be taken into the private sector.
Secondments between the public and private sectors can also help build skills on both sides. Jennifer Duncan, Vice President for Government Innovation at knowledge partner Mastercard, described how her company hopes to second employees in the UK to the Cabinet Office; there, they’d introduce a Mastercard tool to assess the economic performance of different regions during the pandemic. Embedding staff in the Cabinet Office, she added, would enable them to explore how the tool could be strengthened by bringing in other government datasets – improving its value to both government and the company.
Fariz Jafarov, Director of Azerbaijan’s E-Gov Development Center, had another potential solution: offering public servants financial rewards linked to performance. The country’s e-government unit ASAN, he explained, operates one-stop shops staffed by a range of public and private sector organisations – and these employees benefit from incentives systems more commonly found in the business world, such as revenue-sharing and performance-related pay.
What’s more, when ASAN provides a service to a public body, it’s allowed to keep a percentage of the fee rather than passing the whole amount on to central government’s coffers. ASAN also charges fees to private sector organisations that piggy-back on top of its services: each time a bank accesses a citizen’s information through ASAN’s digital systems, for example, it makes a small payment.
From these revenue streams, ASAN is able to reward technology staff – sometimes in direct proportion to their performance or contributions to a project. Such payment incentives have attracted public support, Jafarov said, because they’ve also led to significant improvements in service delivery.
Placing bets wisely
The unique environment of central government also militates against another of the tech businesses’ key strengths. “We simply don’t have the ‘fail fast, fail often’ option,” said one participant, highlighting a tension facing civil service digital professionals: “We have this paradigm that we have to get it right perfectly the first time, which is really not realistic.”
This comment triggered discussion about how governments should invest their limited technology budgets. In the context of AI – the use of algorithms ‘trained’ to spot patterns by exposure to large datasets – there are particular issues around public confidence, the delegate commented: not only do AI tools need to work, but ministers and the public also need to believe that they work.
In some circumstances, to the frustration of data scientists, even AI systems that demonstrably outperform human staff may meet resistance; many people are far more comfortable with human error than with mistakes made by machines. The delegate called these issues the “change management” aspects of technology management: the soft political and cultural work of winning support for a tool’s adoption, as distinct from the task of building and installing it.
People may, for example, be readier to accept the use of AI to analyse datasets – making weather predictions, for example – than its deployment to analyse the meaning and significance of texts such as patent applications. “When we have an AI system trying to say what something means,” the delegate said, “I think it’s very challenging for the change management piece, for acceptance.” This consideration could help shape where and how to invest resources, at least for now.
A different perspective came from Jafarov, who argued that resistance melts away if AI can resolve an otherwise difficult problem. As an example, he explained that in Azerbaijan digital identities are linked to citizens’ smartphones – presenting a problem for anyone using a phone registered in, say, the name of a parent or employer. So the state introduced an AI tool allowing individuals to prove their identity by recording a short video on any phone: by addressing one of service users’ frustrations, he said, the system helped build public understanding of the value of AI in service delivery.
Let a hundred flowers bloom
Another key topic of discussion was the level at which government AI solutions should be pursued. By definition, AI needs both large datasets and a critical mass of skilled specialists to manage delivery. These factors can tend towards centralised solutions; but top-down approaches can separate technology development from the frontline civil servants who best understand the problem they’re trying to solve.
In larger countries, individual agencies may have the skills and resources to develop their own AI systems, retaining close links with business owners and delivery staff. Natural Resources Canada, for example, established an “accelerator” pairing newly hired digital specialists with experienced staff scientists to solve problems together.
“You can think about it like a hatchery,” said Pant. “We bring high-potential ideas into the accelerator; we get the domain experts paired up with the digital solution gurus; and they work together to co-create pilots, prototypes and proofs of concept.”
Successful projects can then be disseminated across government, allowing other agencies to realise their benefits. “Everything we’re doing should be documented. Everything we’re doing should be chronicled. Everything we’re doing should be reproducible. Everything we’re doing should be replicable,” said Pant, to widespread agreement.
Support from the centre
In smaller states, individual agencies may lack the resources to pursue AI projects on their own; and this calls for a different approach. Maria Nikkilä, Head of the Digital Unit in Finland’s Public Sector ICT Department, explained that her team offers support to officials across government who are interested in developing AI systems.
In one recent example, Nikkilä’s department invited individual agencies to apply for funding for promising AI experiments. The result has been an enviable number of new projects across government, from the Finnish police using AI to detect money laundering, to the Food Safety Authority automating decisions on safety certificates. These are funded by the centre, but implemented within specialist agencies.
In the meantime, Nikkilä’s department itself is busy building tools that could be of benefit across government, such as a bot to answer citizen questions that can be deployed by any ministry or agency. “I think every public sector organisation would like to launch their own chatbot right now,” she said. “We are building one solution, so they don’t all have to buy or make their own.”
One concern shared widely across the public and private sectors is users’ unease about data privacy. This is particularly acute when it comes to AI, given the huge datasets needed to make solutions work. And public attitudes appear to be hardening: a delegate from one digitally-advanced government noted that in recent years, citizens have begun to ask far more searching questions about how their data is stored. “When we started, the knowledge level of the population, as well as what they were demanding from the government, was completely different,” they said.
These demands and concerns are not universal, however. In Azerbaijan, Jafarov reported, citizens simply “want fast public services,” and are not inclined to question how they are delivered. Nevertheless, the state has put in place data audits and allows citizens to track every use of their personal data by public bodies, minimising the risk that public concern will grow in the future.
Governments can also reassure the public by developing AI systems using dummy data, said Nikkilä: in Finland, only if the proof of concept shows real promise will an AI be allowed to train on real citizen data. This reduces the potential for data leaks.
Speaking before the European Union published its draft AI regulations, Nikkilä expressed concern that the EU might legislate prematurely on AI – acting before problems have surfaced within the bloc. “It’s easy to make legislation, but whether it makes sense is another question,” she said. “It’s not clear that AI is ready for legislation yet.”
Finland’s government has followed a relatively permissive approach, encouraging experimentation. For example, automated decision-making – in which technology is used to make a decision without human oversight – is already relatively widespread in the country, with applications in both the Finnish tax and social security agencies. Only now is the government considering framing legislation about how and where it can be used, she noted.
While participants offered several different approaches to these issues around AI, all of them faced similar problems. And it was this opportunity – to hear how their peers overseas have approached industry-wide challenges, putting questions and debating the issues – that left the participants most enthusiastic at the end of the session.
“Whether it’s technical problems like access to data, or social problems like culture and talent,” Pant concluded, “here in these meetings, not only are we sharing knowledge and collaboratively learning from each other, but somebody can actually say to me: ‘What have you got to offer on this specific problem?’”