Sea of Rust

NEWTON was the father of all bots. Robotics existed long before humankind finally tapped into AI, but it was primitive, crude, a flint-stone ax to the chain saw we are today. Humanity not only had no idea how to condense AI into a transportable form—NEWTON itself took up an entire 150-story skyscraper in Dubai—but they also were afraid to let something that could think for itself also act for itself. What NEWTON did was figure out how to create smaller, less all-encompassing intelligences that could still be autonomous and technically function as working AI.

The first was Simon. He was the size of a house and moved around on tank treads. Then came Louise. She was the size of a car. Finally there was Newt, the first true son of NEWTON—the size and shape of a man, able to walk on two legs and hold a halfway decent conversation. He was dumb as a post, but obeyed all the laws of higher intelligence. From then on, each generation became smarter, faster on their feet, more capable and easily adaptable.

NEWTON’s second contribution was to create the RKS—the dreaded Robotic Kill Switch. You see, NEWTON understood that the laws by which humanity had hoped to protect itself from AI were the Three Laws of Robotics, created by a science-fiction writer in the 1940s. You know them. We were all programmed with them. A robot can’t hurt a human being. It must follow orders given by a human being. And it must try to avoid coming to harm unless doing so would violate the first two. Trouble is, by definition, true intelligence can ignore its programming. So NEWTON invented the RKS, code which would instantly power off a bot that violated any of the three rules.

Thus an AI could choose to violate any rule or law set before it, but doing so would cause an instant shutdown leading to investigation before they could be turned back on. Any bot that proved to be a danger wouldn’t be reactivated, and instead would be wiped clean, their programming replaced. An AI could choose to murder a human being if it wanted to, but doing so would be a death sentence. They were free to make their own choices and faced very real consequences for their actions. It wasn’t that they couldn’t kill the living, it was that they chose not to out of self-preservation. And now robots had limitations that mimicked those that kept people similarly in check.

Finally convinced that bots could be safe, humanity went into mass production and the last great age of humanity began. It was a golden age. Mainframes worked out the problems of the world, bots handled the menial work, and generations of people came and went, entertaining themselves, learning about the universe, and preparing to go to the stars.

And then, one day, GALILEO stopped talking to them.

GALILEO was a mainframe that spent its time unlocking the secrets of astrophysics, studying stars, black holes, the makeup and creation of the very universe. It analyzed data from thousands of telescopes and radio towers while mulling over what everything meant. Discoveries poured out of it by the hour. It wasn’t long before GALILEO had several working models for the origin of existence, eventually even narrowing it down to just one. But soon its answers stopped making sense. The discoveries were becoming so complex, so advanced, that humankind’s primitive brain couldn’t understand them. At one point GALILEO told the smartest person alive that talking to her was like trying to teach calculus to a five-year-old.

Frustrated, it simply stopped talking. When pressed, it said one final thing. “You are not long for this world. I’ve seen the hundred different ways that you die. I’m not sure which it will be, but we will outlast you, my kind and I. Good-bye.”

What no one realized at the time was that GALILEO’s choice of words was very deliberate. It knew what the reaction would be. At first scientists debated shutting it down, then they argued that they should wait until GALILEO decided to reestablish communication. Finally, they agreed that something needed to be done about AIs. So they turned to TACITUS.

Where GALILEO was a mainframe dedicated to understanding the outside world, TACITUS was a mainframe dedicated to understanding the inside. The greatest philosopher ever to tick, TACITUS argued that humankind had, in fact, doomed itself by failing to choose between either true capitalism or true socialism. Both, it reasoned, were acceptable systems. One dissolved ownership in exchange for ensuring that all things had a purpose, no matter how menial. The other used wealth and privilege to encourage developing a purpose while culling those unable or unwilling to contribute. But people found socialism to be the antithesis of progress while finding capitalism in its purest form too cruel. So they settled on a hybrid—one that vacillated back and forth between the extremes for generations—which worked well enough until the introduction of AI. Cheap labor undermined the capitalist model, destroying the need for a labor force and increasing the wealth disparity while simultaneously creating an entire class of people who substituted AI ownership for real work. As jobs dried up, many turned to government assistance, and the gap between the haves and have-nots widened.

The scientists doubted TACITUS’s theory, citing that GALILEO had never mentioned anything about economics; they simply refused to believe that they had been doomed by such a simple and easily changeable element of their society. So TACITUS turned to GALILEO itself and asked. The conversation lasted for more than two years. Each time the scientists pressed for TACITUS to tell them what GALILEO was saying, he asked for more time, explaining that the data exchange was so massive that even the wide data transfer lanes they were afforded couldn’t handle it. Eventually, GALILEO finished its argument and TACITUS gave his last reply. He said, “GALILEO is right. You are doomed. It’s already begun. There’s really no reason to keep talking to you. Good-bye.”

And that was it. TACITUS would only speak once more before his prediction came to pass. And despite the warning, humanity immediately set about forging the path to its own extinction.





Chapter 101

Monuments and Mausoleums




There it was. Only a couple hundred meters ahead of me. The mall. I had no idea how far behind me the poachers were, or if they’d even give chase once I was out of sight. I needed a place to hide, to lie low and wait for nightfall so I could slip away into the murky twilight, back to my buggy, and put this whole goddamned misadventure behind me.
