Errors are not just a human trait; they are inherent to machines too. This is something historian of technology Martina Heßler has long examined. We spoke to her about the culture of error in an age of complex technologies, zero-defect strategies, the idea of a Museum of Accidents, and how design can contribute to a more positive approach to error.
Interview by Armin Scharf

What kind of technical error have you encountered today?
Everything worked fine today. But we’re constantly confronted with everyday malfunctioning. Recently, an app refused to accept the security code on my credit card. Late last week, I had to upload an invoice to a platform, but the form only provided the category “company”. And finally, my computer wouldn’t start, which turned out to be due to a flat battery. These three examples nicely illustrate the spectrum: technical errors, human errors, and design errors, such as when important options are simply missing from a form.
Two of your examples are linked to digitalisation — are digital errors increasing?
My argument is that since the 1970s and 1980s we’ve entered a new era of technological error, driven by the spread of software and the rise of large, complex technical systems. The sociologist Charles Perrow showed as early as 1984 that such systems are inherently prone to failure, as the interaction of many small faults can lead to accidents. Software systems are, as we know, always flawed, which means we’ve been living with a new category of errors that has been growing for about 50 years.
That sounds rather disillusioning. Should we give up hope that one day everything will simply work?
It’s only disillusioning if we cling to the ideal of the flawless, perfect machine. Once we accept that we live in an age of machine error, we can adopt a more realistic attitude. Engineers have developed countless strategies, concepts, and practices to deal with these errors and still make technology robust and reliable. Of course, everyday malfunctions, accidents and disasters still occur, but overall we manage technology quite well. What’s important is to recognise that errors are always there, and they always have consequences.
Does dealing with errors make us experts?
Absolutely. The American media scholar Lisa Nakamura once said that we’ve become experts in malfunctioning and that we should be proud of having developed the ability to handle it in our everyday lives.
When machines, which we usually consider flawless, make mistakes — does that comfort us in our own human imperfection?
Yes, in a way it does. There’s this phenomenon of anthropomorphising technology or even “animalising” it. The moment a machine makes an error, we tend to feel for it, to humanise it. Suddenly the machine isn’t superior anymore, it needs our help. Think of the vacuum robot that gets tangled in cables and can’t move without our intervention.
On the other hand, we also see anger when technology doesn’t do what we expect, a phenomenon known as computer rage. When computers malfunction, some people become outright aggressive. There’s a famous case in the US where a man shot his computer in the backyard because he couldn’t stand it any longer. Interestingly, many sympathised with him, even though it was an irrational and somewhat disturbing act.
In your book, you argue that we should simply leave Sisyphus’s stone where it is. What do you mean by that?
Technology keeps getting more complex and opaque. With today’s AI systems and quantum computers, we’ve reached a new level of technological inscrutability. We produce new technologies and then need further technologies to control them. It’s a spiral of technological escalation. By “leaving the stone”, I mean that perhaps we should pause and ask ourselves whether to step off this path of ever-increasing complexity and instead seek other, non-technological or less complex technological solutions.
Recently in Baden-Württemberg, 1,440 “ghost teachers” were discovered: teaching posts that were mistakenly recorded as filled in an administrative system. How do such errors happen?
It’s remarkable that this went unnoticed for so long. As far as I know, a commission has been set up to investigate the causes. Whether it was a simple coding mistake or a combination of factors is still unclear. Unfortunately, the public rarely gets to learn how such errors occur. Sometimes even software developers don’t know what went wrong, a phenomenon already observed in the 1970s. But transparency is essential if society is to develop an awareness of software errors. We need to stay on it — both as researchers and as journalists.
That’s true. Perhaps that would also help us deal with AI, which is anything but error-free or unbiased and essentially a black box.
AI indeed introduces a new category of errors, new types and causes. We need to understand their nature, their sources, and the statistical functioning behind them. These errors are inherent to the system. I call this error literacy. Awareness and critical reflection are essential. Since the technology is so difficult to grasp, we rely on competent guidance. Luckily, there are now many good resources explaining AI. Katharina Zweig’s book “Die KI war’s” (“It was the AI”) is a great example: it explains how such errors arise, warns against them, and stresses the importance of carefully weighing where AI should and shouldn’t be used.
Let’s turn to design, which mediates between humans and machines. Can design help communicate or even reduce error?
Designers have an increasingly important role to play. We need to rethink the concept of usability and perhaps interpret it in a more resistant way.
What do you mean by that?
I don’t think design should be solely about making technology as simple and pleasant to use as possible. It should make the machineness of technology visible rather than humanising or even infantilising it. This applies to AI applications as much as to robots. Their fallibility should also be visible. Design can make it clear that machines aren’t flawless, even if we’re often led to believe they are. That requires creativity, but designers are capable of that. In any case, a small note like the one you get with ChatGPT (“please double-check everything”) isn’t enough.
Wouldn’t that contradict the promise of perfection that marketing and advertising thrive on?
It would, and that’s an important point. We expect devices and applications to be flawless. But that’s also a matter of responsibility. You might recall the CrowdStrike software outage that paralysed airports and hospitals worldwide. The company publicly promised such an incident would never happen again, a promise that simply can’t be kept. Instead, they should have communicated how to prepare for such failures, both on the part of manufacturers and users.
In future, we’ll increasingly deal with autonomous systems. How these communicate with humans is still uncertain — a rich field for communication designers, perhaps?
Human–machine interaction is already a well-established design discipline. But autonomous systems bring genuinely new challenges. Machines need to communicate in ways humans can understand, but without imitating humans. Unfortunately, anthropomorphism is still common: autonomous delivery vehicles, for instance, often have cartoon-like eyes that even blink. Instead, we should make it clear that users are interacting with an autonomous system — one with limits. Communication designers can’t solve this alone; it requires interdisciplinary teams, including psychologists and sociologists.
How we design autonomous systems is an extremely relevant, complex, and still underexplored question.
When an object is nearly perfectly designed, are we more or less forgiving of its errors?
We generally expect machines to work flawlessly. But yes, when a product is perfectly designed, our expectations rise even higher and we feel disappointment or even anger when it fails.
In your book, you revisit Paul Virilio’s idea of a “Museum of Accidents”. What would we see there?
Virilio’s central thesis is that every technology comes with its own specific errors. In such a museum, visitors would learn which technologies are associated with which failures and what consequences they have, from railway crashes to nuclear incidents. They’d see the different causes of failure, understand increasing complexity, and learn to assess consequences. It would also show that many errors don’t have consequences — sharpening our expertise in judging when an error matters and when it doesn’t. Sadly, no one has yet built such a museum, but I find the idea absolutely fascinating.
You often use the term “technological chauvinism”. What does it mean?
The term comes from the American journalist Meredith Broussard. It refers to our tendency to view machines as perfect or superior, thereby devaluing ourselves as humans. We know the stereotypes: machines don’t tire, they’re objective, rational, and don’t argue. In contrast, humans are seen as problematic. But we should ask: What do machines really do better? Behind this lie values such as efficiency, productivity, precision, and speed. Do we actually need these values in every situation? Or should we sometimes prioritise other values that technology can’t deliver? It’s fine that a calculator calculates better than we do, but whether we really need robots in care homes, for example, is something we should discuss far more deeply — including from a design perspective.
Sisyphos im Maschinenraum
Eine Geschichte der Fehlbarkeit von Mensch und Technologie
By Martina Heßler
C.H. Beck, Munich 2025,
297 pages, 32 euros

Martina Heßler has been a professor of the history of technology at TU Darmstadt since 2019, specialising in the history of the fallibility of mechanical systems. In her book Sisyphos im Maschinenraum (Sisyphus in the Engine Room), which has been nominated for the 2025 German Non-Fiction Prize, she challenges the long-held belief that machines are superior to humans. It is now recognised that even highly automated production processes require human support to function. Autonomous systems have made this issue relevant again, creating a new field of work for designers that is only just beginning to be recognised. However, for Martina Heßler, anthropomorphisation, which promises acceptance and better interaction, is the wrong approach.

About the Author
Armin Scharf is an engineer and has been working for many years as a freelance specialist journalist. His focus is primarily on the technical side of design, innovative technologies, new processes and material-related topics. His articles are published in print and online media such as brand eins, NZZ, VDI Nachrichten, Hochparterre, md and ndion. In addition, he conducts interviews with companies and agencies on behalf of the Design Center Baden-Württemberg, supports design studios with communication issues, and produces corporate books.