Civil servants today face a complex challenge: delivering more with less, while safeguarding public trust in the digital age. As someone leading general AI literacy initiatives across government departments, I’ve seen a clear truth emerge—understanding AI is no longer the remit of a technical few. It is fast becoming a core skill for the many.
Take fraud, for example. It’s a high-profile issue with steep financial consequences, but it’s also emblematic of a broader challenge: how do we use AI responsibly and effectively to serve the public interest?
A new global study by SAS reveals that fraud, waste, and abuse (FWA) drain up to 16% of public sector budgets, and public trust suffers accordingly. It’s no surprise, then, that 85% of decision-makers rank fraud among their top five priorities—and nearly all plan to use AI and generative AI to combat it in the next two years.
But here’s the critical point for civil service leaders: fighting fraud is just one of many use cases that require a more AI-literate public sector workforce. To realise AI’s potential—not only in fraud detection but in everything from tax compliance to benefits administration and citizen services—we must upskill across the board.
A literacy agenda for public value
The temptation is to view AI as a technical domain: something for analysts, data scientists, or digital innovation teams. That’s a mistake. AI touches everything—from how policies are shaped and services are delivered to how risks are assessed and decisions are made.
That’s why our AI literacy initiative is not just about skills, but about culture. We’re building capability not only to use AI tools but to ask better questions about them: How do we trust the output? What biases may be present? What does transparency look like?
When civil servants—from procurement officers to benefits managers—can engage confidently with AI, they become part of the solution. They can spot opportunities, flag concerns, and drive innovation from the ground up.
Why fraud is a compelling entry point
Fraud is a particularly powerful case study for AI literacy because it is visible, urgent, and deeply connected to public trust.
The SAS study shows that AI is already delivering measurable benefits in fraud prevention: 57% of agencies report increased workforce efficiency, while 39% are detecting more fraud, and 38% are better able to prioritise cases. Crucially, AI isn’t replacing human judgement—it’s augmenting it.
This message resonates strongly in our training sessions: AI tools are only as useful as the people who understand how and when to apply them. We explore not just the models, but the ethical and operational contexts: privacy, oversight, and unintended consequences. In short, we focus on the skills that make civil servants smart AI stewards, not just passive users.
But it’s not just about fraud.
Fraud is where many departments are starting, but the implications of AI literacy extend far wider.
• In health and social care, AI supports better coordination of services across life stages.
• In tax and revenue, it’s enabling more accurate forecasting, fairer compliance, and better citizen engagement.
• In policy and planning, AI is being used to model outcomes, test scenarios, and allocate resources more effectively.
In each case, the challenge is the same: how do we build enough understanding, at all levels, to ensure AI delivers public value without undermining trust?
What civil leaders can do next
If you’re a civil service leader asking how to prepare your teams, here are five practical recommendations:
1. Make AI literacy a core part of digital transformation plans. Don’t treat AI as a standalone innovation initiative. Embed AI literacy into wider upskilling programmes, leadership training, and organisational development strategies.
2. Start with practical, relatable examples. Fraud is a great starting point, but every department has its own “burning platform.” Choose pilot use cases—like forecasting, triage, or citizen communication—that are manageable and meaningful.
3. Demystify the jargon. AI can be intimidating. Focus training on concepts, not code. Explain models through familiar metaphors. Frame skills around impact and ethics, not just tools.
4. Foster cross-functional collaboration. AI doesn’t live in IT alone. Create forums where data scientists, policy leads, service designers and frontline staff can learn from each other. Promote shared accountability.
5. Champion responsible AI use. According to the SAS study, 48% of leaders cite privacy and security as top concerns, and 43% worry about responsible AI use. Make ethics and governance a visible part of your AI literacy agenda. Civil servants need to understand not just what AI can do, but what it should do.
Final thoughts: trust is the real ROI
Governments have always wrestled with complexity. What’s different today is the speed at which technology can both solve and complicate problems. AI offers extraordinary potential—but only if people understand how to use it wisely.
Fraud is a good place to start. It’s measurable, urgent, and tied to efficiency and trust. But AI literacy must extend beyond fraud to fulfil its promise.
In the end, this isn’t just about technology. It’s about equipping people to work smarter, act faster, and make fairer decisions. That’s what builds public confidence. And that’s the real return on investment in AI literacy.
To learn more about how we’re building AI literacy across the public sector—or to discuss custom workshops for your department—please get in touch with Alina Luchian: alina.luchian@sas.com
The rapid growth of AI technologies is driving an AI skills gap and demand for AI talent. Ready to grow your AI literacy? SAS offers free ways to get started for beginners, business leaders, and analytics professionals of all skill levels. Your future self will thank you.