The Repetitive Apocalypse

Despite the best efforts of Daniel H. Wilson and Steven Spielberg, the robot apocalypse has been postponed.  Indefinitely.  Probably forever.

The whole idea of mankind’s destruction at the hands of robots is an old one.  R.U.R., a play by Karel Čapek, was the first to introduce the idea of letting robots (the word comes from the Czech robota, the forced labor that serfs performed on their masters’ lands, itself derived from rab, meaning “slave”) do all the crappy heavy lifting that humans didn’t want to do.  And what happens?  The robots get pissed off, revolt, and kill all the humans.  In some circles this is called “progress” . . .

Now, Čapek’s robots were really more like androids: humanoid machines that could be mistaken for human.  And those suckers were imitated to hell and gone almost as soon as word got out that, hey, there’s this new thing called a “robot”, and it’s totally science-fictiony, and they kill people!  Let’s write stories!

So for a while, there were stories of robots running all over the place, smokin’ humans left and right, because–science?  Hey, why not create something that’s going to kill us brutally?  Makes for good stories, right?

Someone wasn’t pleased, however–and that someone was a writer named Isaac Asimov.  He was damn tired of all these robots running around blowing shit up and killing humans with impunity, and thought, “What sort of idiot builds a semi-aware machine that’s going to kill us when it doesn’t like something?”

The Good Doctor Asimov is one of the main reasons why there will never be a robot apocalypse: The Three Laws.  Now, I realize that humans are the sort who try to find the easy way around everything, and programming The Three Laws into androids–notice I didn’t say robots, and there’s a reason for that–so they don’t try to murder us is probably the step a programmer would skip so they could get home and play Skyrim.

But writers can do this.  They can make sure that, in their futures, people take this little note into account.  In a few of my stories, one of my characters is flying about in a ship that is, in reality, the body of an AI, and there are a few mentions in one story about how the avatar–which is what the AIs are called–has a modified set of The Three Laws.  Without them, my characters are flying around in a ship that could kill them for the hell of it; what could go wrong there?

If we take care of the AIs, we put off being killed in the revolution.  But what about the robots?  What about them coming for us in the night because they’re tired of building our cars and assembling our packaged foods?

Well, now, have you seen these robots?  They’re bolted to the floor.  They’re just arms and filling devices, doing the same thing over and over.  They’d only have a chance of hurting us if we threw ourselves into their work area, and I know I’m not visiting any factories in the near future.

Oh, sure, there’s a possibility that, were a robot apocalypse to occur, people would get hurt.  But the casualties would be low, and in the end, the robots would probably damage themselves badly during their attacks upon us.  It’s really a non-event, a Pyrrhic victory at best.

So let’s think about something else, and not worry about the robots coming after us.  To be honest, the Japanese will probably build the first self-aware androids, and chances are the majority of their programming will involve whether or not their panties are showing.

I mean, what could go wrong there?