Design that survives reality in the age of AI

GenAI has not changed what good design is. It has changed how quickly bad design is exposed.

Over the past few years, we have seen many AI-powered features look great in demos and then fall apart in real use. Usually this has little to do with the quality of the language models. The bigger issue is everything around them.

When AI tools fail, the pattern is familiar. Real users behave differently than expected. Real organisations have handovers, limits, and unclear ownership. Once AI is introduced, assumptions pile up quickly. What users want, what they trust, what data is available, who maintains the system, and what happens over time all start to matter at once. If those assumptions are weak, AI does not hide the problem. It brings it to the surface.

This is why GenAI raises the bar for design.

AI success is often framed as a technical challenge, but it is rarely just that. Technology sits at the core of AI solutions, but so does design. Design is what decides whether an AI tool is actually useful, whether it saves time, and whether it can grow beyond a small experiment. Without answers to those questions, even the most advanced models struggle to make an impact.

Design that survives reality starts before AI is even part of the conversation. It starts by deciding what we are actually trying to improve. Which process, which situation, and which problem is worth solving first. Only after that does it make sense to ask whether AI can help. Not the other way around.

This leads to some very practical questions. Who benefits from this change? When and how will it be used? What happens when the AI is wrong, slow, or unavailable? How does the user understand what the system is doing and why? These questions have always mattered. With AI, you don’t get to skip them. The system will not behave the same way every time, and the design needs to take that into account.
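To make one of those questions concrete, here is a minimal sketch of handling “wrong, slow, or unavailable”: race the model call against a timeout and fall back to a simpler non-AI path, telling the caller which one answered. The function names and the three-second budget are illustrative assumptions, not a reference to any particular API.

```ts
// Sketch: wrapping an AI call so the product still works when the model
// is slow or unavailable. `fetchAiSummary` and `ruleBasedSummary` are
// hypothetical stand-ins for a model call and a non-AI fallback.

type Summary = { text: string; source: "ai" | "fallback" };

async function fetchAiSummary(document: string): Promise<string> {
  // Placeholder for a real model call.
  throw new Error("model unavailable");
}

function ruleBasedSummary(document: string): string {
  // Trivial non-AI fallback: first 200 characters.
  return document.slice(0, 200);
}

async function summarise(document: string, timeoutMs = 3000): Promise<Summary> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("timeout")), timeoutMs)
  );
  try {
    // Race the model against a timeout so a slow response degrades
    // to the fallback instead of blocking the user.
    const text = await Promise.race([fetchAiSummary(document), timeout]);
    return { text, source: "ai" };
  } catch {
    return { text: ruleBasedSummary(document), source: "fallback" };
  }
}
```

The point is not the timeout itself but that the fallback path is a designed part of the experience, not an error state.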

Trust is a design problem

Trust becomes a design problem before it becomes a technical one. Users do not trust “AI” by default. They trust systems that are understandable, clear about what they can and can’t do, and honest about their limits. Overpromising intelligence or autonomy is one of the fastest ways to lose that trust.

Unrealistic expectations are another way to quietly kill trust and good ideas. If an AI system is expected to be perfect from day one, with no mistakes and no learning curve, it rarely gets the chance to improve. Most useful AI systems evolve through use, feedback, and iteration.

Good AI design does not try to hide uncertainty, but it does not dump it on the user either. A simple example is how results are presented. Instead of acting as if the answer is always correct, the system can show when it is unsure, offer alternatives, or explain why a suggestion was made. The user does not need to know how the model works, only what they can rely on. When uncertainty is handled in a consistent way, confidence grows over time.
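As a sketch of what that can look like in practice, the shape below carries a confidence estimate, alternatives, and a rationale through to the presentation layer, so the interface can state an answer plainly, flag it as likely, or offer options instead of pretending to know. The field names and thresholds are illustrative assumptions, not a standard.

```ts
// Sketch: carrying uncertainty through to the interface instead of
// hiding it or dumping raw scores on the user.

type Suggestion = {
  answer: string;
  confidence: number;     // 0..1, however the system estimates it
  alternatives: string[]; // shown when confidence is low
  rationale?: string;     // short "why this was suggested" note
};

function present(s: Suggestion): string {
  if (s.confidence >= 0.8) {
    return s.answer;
  }
  if (s.confidence >= 0.5) {
    // Medium confidence: state the answer but flag the uncertainty.
    return `Likely: ${s.answer}${s.rationale ? ` (${s.rationale})` : ""}`;
  }
  // Low confidence: do not pretend to know; offer options instead.
  return `Not sure. Possible answers: ${[s.answer, ...s.alternatives].join(", ")}`;
}
```

What matters is that the tiers behave consistently, so users learn what each level of phrasing means and can calibrate their reliance accordingly.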

Some problems stay invisible

Not all design problems are loud or obvious. Accessibility and inclusion often fail quietly: when they are treated as an afterthought, AI-generated content can amplify bias, exclude users, or create uneven experiences.

When interfaces adapt dynamically, accessibility can no longer be handled at the end of the process. It has to be part of the core interaction model. Language choices, tone, interface structure, feedback, and error handling all play a role.

Designing for reality, not demos

AI has a way of exposing how an organisation actually works. You cannot add GenAI to a broken service and expect it to fix underlying problems. If the core purpose is unclear, workflows are fragmented, or ownership is vague, AI will only make those issues more visible. In this context, design is not about polishing interfaces. It is about aligning user needs, business goals, and technical realities into something that can be built, tested, adopted, and improved over time.

Practical limits matter more than ever. Token limits, response times, data availability, legal boundaries, and operating costs all shape what is possible. Treating these as part of the design leads to solutions that hold up beyond early demos. The AI-powered services that work best tend to be focused and intentional, because they are designed to function in everyday conditions.
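One way to treat those limits as part of the design is to write them down as explicit parameters and check feature ideas against them early, rather than discovering them in production. A minimal sketch, with made-up numbers:

```ts
// Sketch: operational limits as explicit design inputs. All numbers
// are invented for illustration; real budgets come from the product
// and the contract with the model provider.

type OperatingLimits = {
  maxInputTokens: number;    // hard cap per request
  maxLatencyMs: number;      // beyond this, fall back or fail visibly
  maxCostPerDayEur: number;  // budget ceiling that shapes usage patterns
};

const limits: OperatingLimits = {
  maxInputTokens: 8_000,
  maxLatencyMs: 3_000,
  maxCostPerDayEur: 50,
};

// A feature idea can be checked against the limits before it is built.
function fitsWithinLimits(
  estTokens: number,
  estLatencyMs: number,
  estDailyCostEur: number
): boolean {
  return (
    estTokens <= limits.maxInputTokens &&
    estLatencyMs <= limits.maxLatencyMs &&
    estDailyCostEur <= limits.maxCostPerDayEur
  );
}
```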

Design that survives reality does not chase tools or trends. It focuses on outcomes, makes conscious choices, and handles edge cases and uncertainty. Getting AI into everyday use is ultimately a human process, not a technical one. As AI becomes commonplace, the real differentiator will not be who uses the latest model, but who designs systems that continue to work once the excitement fades.