The productivity gap
A thought experiment: Two managers, same company, same qualifications, identical job profile. Person 1 uses full stack AI integration. Person 2 operates under data protection premises. What does this mean for the careers of both? And what about productivity within their companies?
Christian Hansen
10/30/2025 · 6 min read


What privacy costs us – and why that is a huge problem we need to discuss.
I'm not against data protection. I'm glad that I can choose who gets my data and who doesn't. I don't want to be a transparent citizen. And I certainly don't want all my data to be aggregated and networked on some US server for questionable purposes. Especially not while DT is seeking a third term in office. At the same time, I work with AI tools every day and see the automation and assistance potential that these systems already offer today.
I recently looked at Perplexity's ‘Guide to Getting More Done’ – and stumbled across a thought experiment that I can't get out of my head:
Two people, same company, same management position, same qualifications. Person 1 uses Perplexity to its full extent: email integration, direct access to Salesforce, Notion, Slack, automated performance analyses from company systems, lead intelligence through the merging of internal and external data.
Person 2 – due to data protection concerns – limits themselves to non-critical functions: web research, generic content creation, strategy consulting without system access.
The question that preoccupies me is: what does this mean for the productivity, innovative strength and ultimately the job relevance of both individuals? And if Person 1 works in the USA and Person 2 on our GDPR continent, what does that mean for European competitiveness?
The counter-narrative that I don't buy
Before we dive deeper, let me say that I still consider the oft-cited counterargument that AI does not actually increase productivity and only produces rubbish to be nonsense. I have been working with these tools every day for years. If you know how to use them – if you are in control – they are worth their weight in gold.
No one automatically becomes stupid when they work with AI strategically and with motivation. On the contrary: you learn new things, broaden your horizons, come up with ideas that you wouldn't have come up with on your own, and benefit from thinking partners who never tire. But that only works if you retain control over the process. If you don't blindly delegate, but orchestrate.
That is the premise I am working from. Anyone who uses AI differently – as a text machine for quick outputs without critical reflection – will get rubbish and certainly won't become smarter. But in my view, that is a usage problem, not a technology problem.
What complete integration means
Back to the thought experiment. Person 1 has access to a fully integrated AI stack:
Time savings: Email assistants that automatically triage, suggest replies and coordinate meetings. Suppose a strategy paper requires searching all email correspondence on a topic, all meeting notes from the last few months and relevant documents from three different systems: Person 1 can do it in minutes. Person 2 needs hours.
Conceptual work: To prepare for a workshop, Person 1 can ask the system: ‘Which stakeholder groups did we identify in previous projects with similar transformation processes? Which communication approaches worked and which didn't?’ The system searches through all project documentation, workshop minutes and evaluations. Person 2 has to remember or search manually.
Customer understanding: Before a consultation, Person 1 can ask: ‘Show me all interactions with this customer in the last six months – emails, meeting notes, project documents. Which topics are repeated? Where were there misunderstandings?’ The system provides a synthesis. Person 2 spends an hour before the call reconstructing context from scattered sources.
Whether the Perplexity Guide is right that Person 1 is many times more productive than Person 2, or whether it exaggerates, a clear competitive advantage is undeniable. And I find myself in this conflict: what do I give up in order to benefit? What do I keep to myself and continue to do myself – at the expense of my time and mental bandwidth?
The question that worries me
What will happen to both individuals over time? I don't know. But questions arise:
Will Person 1 be able to take on more projects because administration takes up less time? Will they gain deeper insights into complex relationships because they can access historical data more quickly? Will they make better decisions because they have more context available? I say yes, they will.
And Person 2: Will they have to spend more time on routine tasks? Will they miss opportunities because it takes too long to obtain information? Will they fall behind in their development, not because they are less capable, but because they have to work with less? Again, I would say yes.
I am not a clairvoyant; I do not know how all this will play out in the long term. But the asymmetry is there. And it is not trivial. And it has not only economic but also geopolitical dimensions: if Person 1 is based in the US and makes full use of the opportunities available, while Person 2 in Europe is restricted for regulatory reasons – what does that mean for our competitiveness?
The European privacy dilemma
The GDPR protects us from things I don't want. But it also prevents things I do want. This is not polemics, but a structural problem: the regulation targets abuse by third parties. However, it does not really distinguish between ‘data being sold to Facebook’ and ‘data being used to make one's own work more efficient’.
The result: we have protection without any real alternative. The tools offering the powerful integrations are US-controlled: Perplexity, OpenAI, Anthropic, Google. Europe has no comparably powerful solutions in productive use (correct me if I'm wrong).
The problem is not just individual. It is structural:
Speed of innovation: US companies iterate with these tools in full stack. European companies develop within strict and unclear compliance frameworks. The time difference is already measurable and will not shrink.
Talent dynamics: Many highly qualified professionals want to work with the best tools. This is not a question of loyalty, but of professional development. If this is only possible outside Europe, some of them will probably leave.
Market dynamics: Companies that are significantly more productive in certain areas can price more aggressively and react more quickly. Whether the productivity gain is a factor of 1.5 or a factor of 5, either has consequences for European competitors.
The digital sovereignty paradox
Europe wants digital sovereignty and data protection. But the entire AI stack is controlled by the US. Perplexity, OpenAI, Anthropic, Google – all US companies. The result is neither innovation nor reliable protection. Instead, it leads to friction losses.
I can think of four possible ways to deal with this dilemma:
Own infrastructure: Europe develops GDPR-compliant AI with comparable performance. That would take years and cost billions. Whether we can pull it off is questionable, especially given our relatively rigorous data protection and IP legislation.
Regulatory adjustment: Differentiated GDPR exemptions for AI productivity tools with clear governance frameworks. Politically sensitive, but not impossible.
Hybrid models: On-premise solutions that do not send data to US servers. Technically complex, expensive, but feasible, as far as I understand the technology (again, please correct me if I am wrong).
Status quo: Europe pays the productivity price for data protection. That is a legitimate choice – but we should be honest about the consequences for us.
And now what?
I don't have a (simple) answer. I see the value of data protection. I also see the productivity potential of AI – not because I read about it, but because I have been experiencing it every day for years.
The tension between these two poles is real. And it will grow as these tools become more powerful. And they will – whether it's language models or alternative technologies from LeCun et al.
Person 2 in the thought experiment works in a regulatory environment that protects legitimate values – but entails competitive disadvantages. Whether these disadvantages are marginal or serious remains to be seen. I fear they are quite significant.
So the question is: are we, as Europeans, prepared to pay this price? And if so, are we prepared to state it clearly, instead of pretending it doesn't exist? Sometimes it seems to me that we are a rather dreamy little group of people, stuck in a saturated past. With noble values and high standards, but also a certain loss of touch with reality.
Even though I don't know how things will turn out, I think we need to talk about it. Honestly. Without panic, but also without sugar-coating it. Because a productivity gap – whether it's a factor of two or five – is opening up. And that will have consequences.
___
Christian Hansen is a strategy and communications consultant who founded ANADAI, a methodical approach to human-machine collaboration.
