The pressure to use AI for everything, everywhere, all at once
Recent advances in AI have been dramatic, and AI adoption is more widespread than ever. It’s undoubtedly a powerful technology and a tool worth having in our toolbox. But I’ve noticed a troubling trend towards “find a reason to use AI, then go pilot it.”
Rather than identifying a problem as the starting point, this approach centers a solution. There are times when this may work, but more often than not, solutions in search of a problem can be harmful. And without a focus on outcomes, we’re forcing AI solutions onto problems where AI may not be the best, or only, answer. As GOV.UK's Service manual identifies, “More established technologies might solve the problem better than AI.”
Even when projects include a genuine focus on discovery, the solution space is now often constrained by the required use of AI. Take the Public Benefit Innovation Fund (PBIF)’s Summer 2025 open call, which allocated funding specifically for “AI solutions that address pressing public benefit challenges.” While well-intentioned, this funding structure exemplifies the trend: it centers AI as the required solution rather than centering the challenges themselves.
Between dedicated AI funding streams like PBIF’s, executive mandates to ‘innovate with AI,’ and the fear of being left behind, there’s pressure to deliver AI. While I appreciate the value AI can bring to plenty of service issues, I’m concerned that we’re taking 10 steps back when we structure our work around AI delivery rather than outcome delivery. “When you have a hammer, everything looks like a nail” has never been truer.
In addition to producing technical debt, this approach keeps teams from seeing the larger landscape of possible solutions, in which AI might play a part but is rarely the whole answer. AI that helps you fill out a long form could be helpful, but perhaps the better solution is a shorter form (which actually requires policy changes, stakeholder alignment, and content design). When we start with AI as the solution, we miss these deeper interventions that could solve the root problem.
And as Mickin Sahni, Director of Product & AI Lab Lead at Ad Hoc, wrote recently, you have to be willing to “accept that you might discover AI didn’t solve the problem.” To me, this acceptance doesn’t show failure or poor foresight. It shows maturity, honesty, and an understanding of how the technology actually works. It means that when you do find success with a solution that involves AI, you can trust that the success is real. Because, as Sahni emphasizes, you know you’re measuring the right things and you’re being honest in your assessments.
How do we actually do this? As a service designer, I start by asking good questions to understand services and the processes that support them, and by meeting the users and staff who deliver those services to identify specific problems. Then I facilitate conversations to hypothesize and test a variety of solutions. I can embrace a solution that involves AI when it’s the right fit, and I can help others recognize when the solution is more about people or process than technology. Sometimes the best solutions are staff training, process changes, or a simpler form.
The pressure to use AI for everything, everywhere, all at once won’t last forever. What will remain is the need to solve real problems for real people. And true innovation begins when we commit to solving the right problems instead of deploying the trendiest solutions.