What Hamlet and Claude have in common

Why directing often beats prompting – and what theatre knows about AI that Silicon Valley probably hasn't told you yet.

Christian Hansen

3/3/2026 · 4 min read

The other day, I sat in a rehearsal. Not in a theatre – at my desk. I'd given Claude a strategic brief, and what came back was technically flawless, factually complete, and internally dead. It sounded like an actor on stage who has memorised his lines but has no idea why he's saying them. Or where. Or when. Or how.

Anyone who's ever staged a theatre production knows this moment. It's the moment the gap between intention and impact shows. And it's where many people fail with AI – without even realising it, because they may never have experienced what the alternative feels like.

The problem has a name: Prompt Engineering

When tech doesn't give us the results we expect, we users tend to diagnose a technical problem. And yes: LLMs have flaws. Quite a few, actually. But they are also insanely powerful machines with the clear potential to change how we work, think, and conceptualise.

Better prompts, finer parameters, more precise instructions – sure, a certain level of technical skill helps when working with our digital co-brains. But that's not all you need. Cognitive depth can't be engineered through sheer specification. That's like writing stage directions for every second of a performance and then wondering why the show feels dead.

Theatre works differently. A good director doesn't micromanage milliseconds. They build a frame: Who is who in the play? In this scene? Who wants what from whom – and why? What's at stake? What's hidden in the background? The more the actors know about the world they act in, the more credible and vivid that world becomes on stage.

And no: this isn't just a charming analogy. It's an operating manual for human-machine interaction.

Briefing Hamlet

Consider Hamlet – one of the most produced plays in history. The same text, for more than four hundred years. Yet every good production creates something new with it. Not because the words or the story change. But because direction, ensemble, and context do. The very same play has been staged wildly differently hundreds and hundreds of times – by people who understand that great theatre is way more than telling Hamlet where to stand and prompting Ophelia how to drown.

It's not that different with the LLM you're using. Same task, same model – radically different results, depending on how well you frame your request. The difference isn't the technology alone. The disciplines that have known for centuries how to turn fixed content into a living performance are called direction and dramaturgy. Not engineering.

Three things theatre can teach us

Context beats instruction. In theatre, it's called subtext – the invisible layer beneath dialogue that determines how a line lands. "I love you" can be a promise, a farewell, an accusation or a threat. In AI work, it's the same principle: the instruction alone doesn't determine quality. The frame it sits inside does. Ignore the frame, and you get output that may sound correct but feels dead and useless.

A role is not a costume. "You are a marketing expert" – that's how millions of prompts begin. A role isn't something you put on like a costume, though. It's a bundle of objectives, constraints, relationships, and history. The distance between "play a king" and "you've waited thirty years for this throne, and you trust no one in the room, not even your own dog" – that's the distance between generic storytelling and a living, breathing drama.

Tension is productive. Good theatre doesn't come from harmony. It comes from conflict that ignites on stage. The best AI work doesn't come from asking for confirmation either. It comes from building a field of tension where the AI has to navigate competing demands. Good theatre needs friends and foes, heroes and villains, love and hate. That’s how life works – and that’s why lazy, uninspired prompting leads to unbearably boring content.
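To make the three lessons concrete, here is a minimal sketch of the difference between a bare instruction and a staged brief. The helper `stage_brief` and all the example content are my own illustration, not any real library's API: it simply bundles role, stakes, constraints, and a productive tension around the task, so the instruction arrives last and smallest.

```python
def stage_brief(role, stakes, constraints, tension, task):
    """Assemble a 'staged' brief: the frame (who you are, what's at
    stake, what pulls in opposite directions) does most of the work;
    the task itself is the final, smallest line."""
    lines = [
        f"Role: {role}",
        f"What's at stake: {stakes}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Productive tension: {tension}",
        f"Task: {task}",
    ]
    return "\n".join(lines)

# A bare prompt: instruction only, no frame.
bare = "Write a strategy memo about entering the French market."

# A staged brief: the same task, embedded in context and competing demands.
staged = stage_brief(
    role="Head of strategy who has argued against expansion for two years",
    stakes="The board votes next week; a weak memo kills the project",
    constraints=[
        "One page, addressed to sceptical engineers",
        "No buzzwords; every claim needs a number or a named source",
    ],
    tension="You must recommend entering the market you publicly doubted",
    task="Write a strategy memo about entering the French market.",
)

print(staged)
```

Sent as a system prompt, the second version gives the model the same things a director gives an actor: a frame to navigate, not a mark to hit.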

This is (not) a metaphor

I can hear the objection: Lovely comparison, Christian, but at the end of the day I still need to write a good prompt.

True. And a director still needs to tell people where to stand from time to time. But that's not the main job. The job is creating the conditions from which a fascinating performance can emerge. It's about professional framing. "Giving direction" doesn't mean saying "turn right" or "move backwards". That's the last thing you want to tell your actors, not the first.

And exactly that is what's missing from most AI courses, workshops, and strategies I see: the understanding that you need to reflect on context before staging a request if you want a reaction worth applauding. It's not a lack of powerful machines. It's a lack of critical thinking and staging expertise.

Like a good actor, AI will get immensely better once you stop telling your aspiring Hamlet where to walk and start making clear to him what's at stake. Create a drama instead of a prompt. The results will differ wildly from the generic slop you might be used to. And besides: prompting is boring, staging is fun.

If you, your team or your organisation want to learn more about how to make AI perform better on your digital stage, reach out via DM or send a mail to ch@christianhansen.ch