A character in Theodore Sturgeon's award-winning science fiction short story Slow Sculpture says, early on: “Try the truth then. If it's important, it's simple, and if it's simple it's easy to say.”

That, more than anything else, seems to have been what drove Golden Age science fiction authors. They had a peculiar way of arriving at certain truths through a brand of logic that defied explanation: entertaining, self-assured and, above all, simple. Perhaps too simple for scientists and people in the know, who readily dismissed the genre as dabbling in 'pseudo-science'.

Isaac Asimov certainly fit that bill when it came to being simple but elegant. And that brings us to the famous three laws of robotics, set down by Asimov in his series of stories and novels featuring robots with “positronic brains”. The laws feel newly relevant: just a few days ago, a 22-year-old worker was killed by an industrial robot at a Volkswagen plant in Germany. The incident immediately sparks several questions, some about how we have come to live in an already science-fictional society, and some about how science fiction has come full circle, though perhaps not in the way it expected.

Robots do kill men

Deaths caused by industrial robots are more of a reality than you might imagine – in the United States, at least one person is killed in a similar incident every year. And such deaths no longer cause us to break into a cold sweat, desensitised as we have become to them.

Early in science fiction's long life, robots were a novelty. We have come to take them for granted now, even though the autonomous, self-conscious entities the genre promised us still seem a long way off. But that's hardly the point, it seems.

The science fiction trope – be it time travel, artificial intelligence, or robots – serves as both literal truth and metaphor. Therein lies its unique strength. And this “literalness” seems to have bled into “real life” and coloured our perceptions of technology.

Technology alternately sparks a sense of wonder and recedes into banality, thanks largely to our limited attention spans and the many distractions plaguing the twenty-first century. And science fiction, the genre dedicated to invoking a “sense of wonder”, has ironically inured us to such things.

Looking at the incident itself, tragic as it is, it is a little hard to see why it caused such a stir. The death was apparently the result of human error, and the “robot” in question was an industrial automaton whose job was to pick up car parts by itself. But the image evoked by such an incident resists any attempt to render it less awe-inspiring, because of the sheer vocabulary of images science fiction has built up around robots and artificial intelligence. Let us, very briefly, look at a few robot tales not penned by Asimov.

The robot fictions

Fondly Fahrenheit (1954) by Alfred Bester deals with a killer android whose personality seems to mirror its owner's; both have become murderous and unbalanced. But the android seems to be a victim of circumstance: it goes berserk only when the temperature shoots up and causes it to go, well, haywire.

Henry Kuttner, another celebrated but now largely forgotten Golden Age writer, wrote a hilarious story, The Proud Robot, about a bumbling scientist who unwittingly creates a vain mechanical creature. Here again, the image of the robot is comfortably tempered with humour; it was a persona that, in Kuttner's time, had yet to exhaust its metaphorical possibilities.

Much like the alien or the extraterrestrial, the image of the robot was a space for tropes and habits that, first and foremost, helped us make sense of ourselves. It's a paradox for a genre that has prided itself on thinking big thoughts and revelling in intelligent escapism: ultimately, it has always been about human beings.

The laws of robotics

But Asimov's robots were something new, something that sought to lay down a set of rules. You know the drill: a robot may not harm a human being, or through inaction allow a human being to come to harm; a robot must obey human beings, except where that contradicts rule one; and a robot must see to its own safety, unless that contradicts rules one and two.
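Read as a decision procedure, the three laws amount to a strict priority ordering, each law yielding to the ones above it. The sketch below is nothing more than an illustrative toy of that ordering – not anything Asimov or this article specifies, and every name in it is hypothetical.

# A toy illustration (not Asimov's) of the laws as a strict priority ordering;
# all names here are hypothetical.

def may_act(harms_human: bool, ordered_by_human: bool, endangers_self: bool) -> bool:
    """Decide whether a hypothetical robot may take an action, checking the laws in order."""
    if harms_human:          # First Law: overrides everything below
        return False
    if ordered_by_human:     # Second Law: obey, since the First Law did not object
        return True
    if endangers_self:       # Third Law: self-preservation, lowest priority
        return False
    return True

# An order that endangers the robot but harms no human must still be obeyed.
print(may_act(harms_human=False, ordered_by_human=True, endangers_self=True))  # True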

But these three elegant conditions contained within them a world of possibilities and many clever loopholes, ripe for authorial plucking. In The Naked Sun, Asimov's sequel to the classic The Caves of Steel, a detective and his robot sidekick investigate a murder on a colonised planet where humans shun physical company in favour of remote communication.

This setting becomes the scene of a carefully orchestrated, Christie-like crime, in which the perpetrator exploits the fact that a robot cannot knowingly harm a human being – but only knowingly. If it is fooled into believing a situation is safe, it can be made to carry out actions that cleverly work around the first, all-important law.

Again, in the short story Reason, a robot assembled by two engineers on a space station that beams solar energy to Earth works out, using reason alone, that it could not possibly have been created by beings so clearly less capable than itself, and concludes that its true creator is the station's power source, which it worships as the Master. It is another testament to Asimov's brilliance that he deftly manipulates his own three laws and arrives at yet another startling exception.

A Zeroth Law was added by Asimov much later, superseding the others: it repeats the wording of the First Law but replaces “a human being” with “humanity”. This was a different ball game entirely, for it propelled the argument into a whole new set of variables. What is good for humanity is ultimately subjective, and that ambiguity leads to interesting situations.

All of this resonates uncomfortably with the incident at the Volkswagen factory. It is not difficult, in hindsight, to postulate slightly altered situations (in keeping with science fiction's obsession with alternatives, or indeed, alternity) in which the stakes would be real: should a machine save individual lives, or protect the nuclear power plant? Who decides? A mechanised factory construct? Would a human be any better at arriving at such a decision?

Asimov's attempt at naturalising a legendary science fiction trope did not, however, drag it down into a mire of mundane narrative. It sparked the imagination like nothing before, because it grounded that imagination in a framework.

Sadly, that framework, as I call it, has come to be taken for granted in real life – certainly in the case of the incident at the Volkswagen factory. With a little thought, the incident could engender serious debate about how we humans respond to a techno-saturated society, in which what was once a cause for wonder is now just another blip on the social radar.

Arnab Chakraborty is a student of English Literature, an amateur cartoonist, and a science fiction enthusiast.
