THE FOMO TRAP
Christian Hansen
9/23/2025 · 4 min read


A plea for cognitive networking and co-intelligence instead of expensive automation theatre.
‘The competition has AI. We need AI too. Fast.’ The conclusion that follows is predictable: a headlong rush to find the most impressive automation stories, millions spent on ‘AI transformation’ – and, six months later, the sobering realisation that most of it doesn't really work.
Welcome to the ‘AI FOMO-verse,’ where the fear of missing out supplants strategic thinking, where the potential of artificial intelligence is systematically diluted because we don't take the time to understand what it actually does best.
Reinforcing intelligence, not outsourcing it
If the figures circulating – most prominently from an MIT study – are correct, over 70% of companies worldwide have introduced AI in some form, yet only 4–5% are achieving significant added value from these investments. Researchers attribute this gap to executives' ‘unrealistic expectations’ of what AI can actually achieve.
The fault lies not with AI, but with the way we use it: as a kind of digital magic box into which you throw data and get intelligent solutions out. This, I argue, is a fundamental misunderstanding of what AI excels at.
AI is not a substitute for human intelligence. It is an amplifier. And like any amplifier, it makes everything louder – including noise and shrill feedback.
‘Shit In, Shit Out’ also applies to AI
AI systems are pattern recognition machines. They don't really understand anything, they don't think about consequences, and they certainly don't understand your business better than you do. They are excellent at finding structural relationships in large data sets and automating repetitive tasks with clear parameters.
But if you feed an AI system unclear goals, poorly structured data, or processes that humans themselves don't fully understand, you won't get solutions – you'll get automated confusion. This is why so many AI implementations fail spectacularly: not because the technology is flawed, but because we ask it to solve problems that we haven't sufficiently specified.
Take, for example, a healthcare provider that rushed to implement an AI customer service system. The project failed not because AI is incapable of handling customer service tasks, but because the company had not taken the time to map out the complex decision trees that its human employees navigate every day.
The intelligence gap
The real opportunity presented by AI is not to replace human thinking, but to expand it. This requires ‘cognitive coupling’: the conscious collaboration of humans and machines as an integrated intelligence system.
Instead of asking ‘How can AI automate this process?’, the better question is: ‘How can AI improve human decision-making in this process?’ This shifts the focus from outsourcing to improvement, from replacement to partnership – and the human in the loop remains the controlling authority with their skills, experience and intuition.
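This ‘human in the loop’ idea can be made concrete. The sketch below is a minimal, illustrative example – the names `route_decision` and `CONFIDENCE_THRESHOLD`, and the threshold value itself, are assumptions, not part of any real system: the model proposes a decision, and anything below an agreed confidence level is escalated to a human, who remains the controlling authority.

```python
# Minimal sketch of cognitive coupling: the model proposes,
# a human decides whenever the model's confidence is low.
# All names and thresholds here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85  # below this, escalate to a human reviewer

def route_decision(prediction: str, confidence: float) -> dict:
    """Return either the model's decision or a human-review handoff."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": prediction, "decided_by": "model"}
    # Low confidence: keep the human in the loop as the final authority,
    # but pass along the model's suggestion to support their judgement.
    return {"decision": None, "decided_by": "human", "suggestion": prediction}

if __name__ == "__main__":
    print(route_decision("approve_claim", 0.97))
    print(route_decision("approve_claim", 0.62))
```

The point of the sketch is the shape, not the numbers: the threshold is a business decision, negotiated with the people who will handle the escalations, not a technical default.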
Expensive automation theatre
When companies skip the cognitive coupling phase and move directly to full automation, the result is what could be called ‘automation theatre’ – systems that appear sophisticated but are in fact immature, opaque and therefore prone to spectacular failures.
The hidden costs are not only financial but also cultural: teams become sceptical about AI, executives lose confidence in their technology strategies, and companies miss real opportunities to improve their operations.
Working together to remove obstacles
The companies that are successful with AI are not those that move fastest, but those that proceed strategically. They begin by capturing human expertise, making implicit knowledge explicit, and then identifying specific points where AI can make a meaningful contribution.
This process requires patience. It means spending time understanding how the best employees make decisions, what information they use, what patterns they recognise, and where the obstacles lie that AI could remove.
Asking strategic questions
Before implementing an AI solution, managers should demand clear answers to the following questions:
What specific human decision-making process are we trying to improve? If you cannot clearly describe the current process, automation is likely to fail.
What does success look like and how do we measure it? ‘Competitive advantage’ is not a metric. ‘Reduced processing time from 3 hours to 30 minutes with 95% accuracy’ is.
What happens if the AI is wrong? Every AI system will fail at some point. How do humans recognise and correct these errors?
How do we maintain and improve the system over time? AI models lose quality without continuous training and monitoring. Who bears this responsibility?
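The last two questions – catching errors and owning quality over time – can be supported by something as simple as a rolling accuracy check on human-reviewed predictions. The class below is a hypothetical sketch (the name `AccuracyMonitor`, the window size and the 95% threshold are all illustrative assumptions), not a substitute for proper model monitoring:

```python
# Minimal sketch of ongoing quality monitoring: track accuracy over a
# rolling window of human-reviewed predictions and flag when it drops
# below an agreed threshold. Names and numbers are illustrative.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 100, min_accuracy: float = 0.95):
        self.outcomes = deque(maxlen=window)  # True = prediction was correct
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> None:
        """Log one reviewed prediction (correct or not)."""
        self.outcomes.append(correct)

    def accuracy(self) -> float:
        """Accuracy over the current window; 1.0 if nothing recorded yet."""
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_attention(self) -> bool:
        """True when quality has slipped below the agreed threshold."""
        return self.accuracy() < self.min_accuracy
```

Whoever answers the ‘who bears this responsibility’ question is the person this alert should reach – a metric nobody owns is theatre of a different kind.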
Think first, act later
The future does not belong to the companies with the most AI, but to those that best understand how to connect human and machine intelligence.
The MIT study shows that 95% of projects fail not because of the technology, but because of how it is applied. The few successful ones have one thing in common: they first understood their human processes and only then introduced the technology.
We must not lose the global AI race because we hesitate too long and get bogged down in scepticism. But you don't win a long-distance race like this by sprinting off headlong. You win it by training, choosing the right strategy, pacing yourself wisely and reacting tactically to the unexpected.
Among the FOMO-driven 95% of projects, millions will likely continue to be burned. The more deliberate 5% will probably not only ‘survive’ the AI revolution – in all likelihood, they will define in the long term where ‘artificial intelligence’ really works in the world of work: not in the outsourcing of human thinking, but in its reinforcement.
The competitive advantage does not lie in AI itself. It lies in thinking before AI does it for you.