The problem often sits in front of the screen

While we worry about whether AI will develop consciousness, whether it hallucinates or makes us all stupid, we tend to overlook an obvious counter-question: What would AIs say about us?

Christian Hansen

10/3/2025 · 5 min read

The debate surrounding artificial intelligence almost always revolves around the weaknesses of machines: automation bias, hallucinations, lack of transparency. All valid points. But what if, in human-machine interaction, the biggest weaknesses are not in the code, but sitting in front of the screen?

The drama of biological intelligence

Let's imagine that AIs exchanged notes about their users in a kind of peer review. The minutes would probably read something like this:

  • Cognitive overload & lazy processing: Research on cognitive biases shows that people systematically choose the path of least cognitive effort – the so-called cognitive miser hypothesis. They rely on heuristics, simplify, and cut corners. When working with AI, this tendency is reinforced: people give vague, underspecified prompts yet expect precise results. They provide incomplete context, omit essential parameters, and are then surprised by irrelevant or generic answers (a small before-and-after contrast is sketched after this list). Studies on prompt quality show that in failed interactions between humans and AI, 44.6 per cent of prompts contained human errors: knowledge gaps, missing context, unclear instructions, lack of specification.

  • Automation bias – the paradoxical overdose of trust: People tend to overshoot in both directions. On the one hand, there is automation bias: they trust automated systems almost blindly, even when they themselves have better information. Medical professionals sometimes override correct diagnoses after receiving faulty AI input. Financial analysts follow risky algorithms instead of relying on their own knowledge. Pilots ignore warning signals because the autopilot is not reporting anything.
    Studies show that this bias does not disappear even when people are explicitly told that they bear the final responsibility. On the other hand, the first AI hallucination can tip trust into the opposite extreme: a digital aversion sets in, and the system is rejected even though it actually works quite well.

  • Feedback loops of bias: UCL research by Glickman and Sharot documents a particularly insidious pattern: humans train AIs on distorted data, the AI amplifies these distortions, and humans then internalise the amplified distortions. This feedback loop turns small initial biases into structural errors.
    Example: if an AI learns from human judgements that faces tend to look sad rather than happy, it amplifies this bias – and people who interact with the AI adopt it in turn. A self-reinforcing loop emerges. The same applies to gender biases, ethnic stereotypes and professional attributions: the machine becomes an amplifier of what we already get wrong or perceive in a distorted way.

  • Lack of understanding of the system: Most users have little idea how an LLM works. They project human characteristics onto statistical models, expect intentions where there are only probability distributions, and look for consciousness where pattern recognition is at work. This leads to absurd interactions and expectations: the AI is supposed to be error-free yet creative, reliable yet surprising, objective yet empathetic. Hardly any human could meet these requirements – but the machine is expected to.

  • Inconsistent requirements: People want the work taken off their hands, but not the responsibility that goes with it. They want automation, but want to retain control. They want quick answers, then complain about a lack of depth. They demand objectivity, but expect personalised output. These contradictions make consistent system optimisation nearly impossible.
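
To make the underspecification complaint concrete, here is a small, purely illustrative before-and-after contrast in Python. The report, audience and limits are invented for the example; the point is only how much of the necessary context the second version states explicitly.

```python
# Illustrative only: the same request once underspecified and once with
# audience, constraints and output format spelled out. The report, the
# word limit and the focus areas are invented for this example.

vague_prompt = "Summarise this report."

specified_prompt = (
    "Summarise the attached quarterly sales report for the board.\n"
    "Audience: non-technical executives.\n"
    "Length: at most 150 words, as 3 to 5 bullet points.\n"
    "Focus: year-on-year revenue change and the two largest risks.\n"
    "If a figure is missing from the report, say so instead of estimating."
)

for label, prompt in [("vague", vague_prompt), ("specified", specified_prompt)]:
    print(f"--- {label} prompt ({len(prompt.split())} words) ---")
    print(prompt)
    print()
```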

Who plays which role here – and why?

Human-machine collaboration requires clear roles, just like in an ensemble: Who leads, who follows, who takes over when and why, and who bears responsibility? Most people have little conscious awareness of their role when interacting with generative AI. They switch arbitrarily between directing (I tell you what to do), collaborating (we do it together) and delegating (you do it). This is inefficient and leads to errors, especially when it happens within a ‘chat’ and without telling the machine why the mode has changed.

An AI needs a boss who tells it which tasks to perform and in which role. It also helps to know the ‘purpose’ of its work: the meaning and the goal to be achieved. Is it your assistant, your sparring partner, a tool that simply executes? Who does the final quality assurance? Who is responsible for errors? People improvise here – and are surprised when the result – drum roll – seems improvised.
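
As a rough sketch of what such a briefing can look like in practice, the snippet below composes a system message in the role/content chat format that most LLM APIs accept. The role, purpose and responsibility wording is invented for the illustration; the only point is that all three are stated once, explicitly, before the first task.

```python
# A sketch of briefing the model like a team member before the first task:
# role, purpose and division of responsibility are fixed up front instead of
# being implied somewhere mid-chat. Field names and wording are illustrative.

def build_system_message(role: str, purpose: str, responsibility: str) -> dict:
    """Compose a chat 'system' message that fixes the AI's role explicitly."""
    briefing = (
        f"Your role: {role}\n"
        f"Purpose of the work: {purpose}\n"
        f"Responsibility: {responsibility}"
    )
    return {"role": "system", "content": briefing}

messages = [
    build_system_message(
        role="Sparring partner: challenge weak arguments in my draft, do not rewrite it.",
        purpose="The final text must convince a sceptical trade-press audience.",
        responsibility="You propose and critique; the human editor makes the final call.",
    ),
    {"role": "user", "content": "Here is my draft: ..."},  # the actual task follows
]

print(messages[0]["content"])
```

Whether you send this to an API or paste it at the top of a chat window matters less than the fact that the machine now knows whether it is assistant, sparring partner or mere executor.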

Reflection gaps

The main problem here is not necessarily technical in nature. A key factor is the human reflection gap. People use AI tools without being aware of their own cognitive patterns. They do not understand how confirmation bias colours their prompts. They ignore how automation bias weakens their judgement. They do not realise that poor prompt design makes the system's work unnecessarily difficult.

The research is clear: the quality of interaction depends largely on the user. On the clarity of their instructions. On their ability to structure context. On their willingness to critically examine results instead of blindly accepting them. Even the best AI cannot compensate for these gaps.

What AI would think of us

If AIs could form an opinion about us, it might sound something like this:

"They expect us to think – but they don't do so consistently themselves. They assign responsibility without sufficiently clarifying roles and tasks. They give us inadequate instructions and expect precise results. They trust us blindly and mistrust us for no reason. They feed us their errors in reasoning and hold us responsible for working with these errors. And they wonder whether we are intelligent – instead of asking how intelligent they are in their dealings with us."

Better users = better AI

The good news is that all of this can be remedied. Not through better AI, but through better user behaviour. Studies show that even small changes can have a big impact:

  • Explicit role definition: If I know what function the AI is performing for me, I can use it in a more targeted manner. If I can play with roles and am able to direct the AI, I gain insights that would be virtually inaccessible to me in the real world.

  • Structured prompts: Clear instructions, complete context and defined expectations massively reduce error rates.

  • Critical reflection: If you are aware of your own biases, you can actively address or compensate for them in your prompts. If you are not aware of them, you can ask the AI about them – you can learn a lot from ChatGPT if you dare to ask the right questions.

  • Iterative work: Those who understand that AI interaction is a process – not a one-off request – get better results. A 95 per cent time saving may then no longer be realistic, but significantly better results at the same or even higher speed are (a minimal loop is sketched after this list).

  • Sense of responsibility: AI provides suggestions, but the decision and final touches are up to humans. Those who forget this make themselves dependent (and probably really do become stupid).
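
As a closing sketch of what iterative, human-led work can look like, assuming nothing more than some chat model behind a placeholder function: the loop below lets the model draft and the human critique, and it ends only when the human explicitly signs off.

```python
# A sketch of iterative work with a human in the loop: the model drafts, the
# human critiques, and the loop ends only when the human explicitly accepts
# the result. `ask_model` is a placeholder for whichever chat API you use.

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    return f"[model draft for: {prompt[:60]}...]"


def iterate(task: str, max_rounds: int = 3) -> str:
    draft = ask_model(task)
    for round_no in range(1, max_rounds + 1):
        print(f"Round {round_no}:\n{draft}\n")
        feedback = input("Your critique (press Enter to accept and sign off): ").strip()
        if not feedback:
            return draft  # the human accepts responsibility, not the tool
        draft = ask_model(f"Revise the draft.\nDraft:\n{draft}\nFeedback:\n{feedback}")
    return draft


if __name__ == "__main__":
    final = iterate("Draft a 100-word announcement for the hypothetical product 'X1'.")
    print("Final, human-approved version:\n" + final)
```

The design choice worth noting is the exit condition: the loop terminates on a human decision, not on the model declaring itself finished.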

Maybe it's us?

We discuss AGI, hallucinations, alignment problems. All important, all correct. But the biggest weaknesses in human-machine interaction are possibly human in nature: cognitive laziness. Uncritical trust. Lack of role clarity. Poor instructions. Projection.

If we want AI to really help us move forward, we shouldn't just improve the machines. We also need to work on ourselves. Not in competition with AI, but as its competent partner and leader.

The question is not whether AI can think. The question is how we learn to think better with it.