Thoughts on "I, Robot" by Isaac Asimov.
The three laws of robotics are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws govern both the robots of Asimov's book and the book itself. They are the thread connecting everything that happens in the story, and they are invoked again and again to explain the robots' behavior and the plot twists that follow from it.
The book is a collection of short stories that Asimov wrote between 1940 and 1950 and later gathered into a single volume. Together they narrate the development of robots.
What struck me most about this book is its underlying philosophy, and how timely its ideas about robots entering society feel today.
We’re starting to see bots taking over jobs. Machines are faster and better than humans at many, many things. But will this be for humanity’s own good, or will it bring about our destruction?
I, Robot is the opposite of The Terminator.
All robots are built with the three laws of robotics. So much so that it is _impossible_ to build a positronic brain without the First Law governing it. The mathematics that makes such feats of technology possible is interwoven with the three laws, so no one can build a robot without them, and no robot can break any of them.
It’s like our instinct to pull our hand away when we touch a hot stove. A robot will sacrifice itself to save you without a second thought; it’s a “forced reaction”.
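Just for fun, here is a toy sketch of that idea in Python. This is purely my own illustration, not anything from the book: each candidate action gets a score per law, and the robot picks whichever action is best under the First Law, breaking ties with the Second, then the Third. A lower law can never outvote a higher one, which is exactly the “forced reaction”.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_harm: float    # harm to humans this action allows (0 = none)
    disobedience: float  # how far it strays from a human's order
    self_damage: float   # damage the robot does to itself

def choose(actions: list[Action]) -> Action:
    # Lexicographic ordering: the First Law dominates, then the
    # Second, then the Third. Self-preservation only matters once
    # the two higher laws are already satisfied.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_damage))

options = [
    Action("stand by",         human_harm=1.0, disobedience=0.0, self_damage=0.0),
    Action("shield the human", human_harm=0.0, disobedience=0.0, self_damage=1.0),
]
print(choose(options).name)  # -> "shield the human", no second thought
```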
You might have seen I, Robot, the Will Smith movie. Well, the robots don’t go nuts in this book. On the contrary, they bring humanity to something close to enlightenment.
Should we be creating something similar to the three laws? In less than a decade we’ll have widespread autonomous cars and flying drones. They’ll have AI, true, but will it be good AI? Will it protect humans at all costs, and if so, will these laws govern their behavior or be merely a side note within their programming, somewhere after battery management and Wi-Fi connection?
There’s a part of the story where Dr. Calvin, the narrator of the stories, mentions that it is impossible to discern, by behavior alone, whether a robot is indeed a robot or just a really good human being.
Asimov based the three laws of robotics not on what an artificially intelligent being should be ruled by, but on what we, as humans, should be ruled by.
Simply change the word “robot” to “human” and we have the recipe for a much greater society!
- A human may not injure another human or, through inaction, allow a fellow human to come to harm. Take care of your brothers and sisters.
- A human should obey the orders given by a superior, when such orders don’t conflict with the first law. (This assumes the superior has the good of other humans at heart too!)
Maybe the third law can be broken. Should it come first? Should it come second? A mother won’t care about her own existence if giving it up means saving her child’s life.
And so, life would be so much simpler if we all lived caring for our fellow men and not just for ourselves.
Thanks for another beautiful story, Asimov. I’m proud that my second name is the same as yours.