The Singularity has become a more reliable presence in SF than spaceships have. So, I can’t resist. Here’s the news, far more reliable than Kurzweil’s promises: there ain’t no Singularity. It ain’t coming. It’s a logical mistake.

Here’s why.

Our understanding of the world is theories. Our theories compress information. Instead of listing endlessly many different observations of motion, for example, Newton said F=ma, and with that, infinitely many kinds of motion were captured in one compressed little formula. All theory is compression.
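A toy illustration of that compression, sketched in Python (the force, mass, and step count are made-up numbers, not anything from the essay): a long table of logged observations can be thrown away and regenerated from the theory plus a couple of parameters.

```python
# Toy illustration: a theory compresses observations.
# Instead of storing every logged position of a body under constant force,
# store the "theory" (a = F/m, hence x(t) = x0 + v0*t + a*t**2/2) plus a
# few parameters, and regenerate the whole table on demand.

F = 9.8   # force on the body (hypothetical units)
m = 1.0   # mass (hypothetical)
a = F / m

def predicted_position(t, x0=0.0, v0=0.0):
    """Newton's theory, compressed to one line."""
    return x0 + v0 * t + 0.5 * a * t * t

# A long "table of observations": here simulated by stepwise numerical
# integration, standing in for an experimenter's raw measurement log.
observations = []
x, v, dt = 0.0, 0.0, 0.001
for step in range(10_000):
    observations.append(x)
    v += a * dt
    x += v * dt

# The compact formula reproduces the entire table to good accuracy:
worst_error = max(abs(predicted_position(step * dt) - obs)
                  for step, obs in enumerate(observations))
print(f"{len(observations)} observations, worst error {worst_error:.4f}")
```

Ten thousand logged numbers, recoverable from three: that is the sense in which F=ma is a compression.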

But some information is complex. In fact, most information is complex. By “complex” we mean that it cannot be compressed very much (this is the subject of the branch of logic called Kolmogorov complexity, or descriptive complexity). So what we have to do is develop ever more theories, and ever more complex theories. This is why we have biology departments and chemistry departments and so on at the university, instead of just a physics department. These fields add theory of greater complexity and more specific focus.
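Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives the flavor of the distinction (a rough stand-in, not the real measure): lawful, patterned data compresses enormously; pattern-free data hardly compresses at all.

```python
import os
import zlib

# Rough stand-in for Kolmogorov complexity: how far can a real
# compressor shrink the data?
patterned = b"0123456789" * 10_000      # 100,000 bytes of pure pattern
noise = os.urandom(100_000)             # 100,000 bytes of randomness

patterned_size = len(zlib.compress(patterned))
noise_size = len(zlib.compress(noise))
print(patterned_size)                   # a few hundred bytes: highly compressible
print(noise_size)                       # roughly 100,000: essentially incompressible
```

The patterned input is "simple" in the essay's sense: a short description regenerates it. The random input has no shorter description than itself.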

Now, a bunch of very important but little-appreciated results in 20th-century logic and mathematics show that theory cannot come for free. It can’t come easy. These include Turing’s Undecidability result (aka the Halting Problem), Gödel’s Incompleteness Theorems, and the Incompressibility Method of Kolmogorov complexity. These tell us:

* Undecidability: there is no effective procedure (e.g., a good computer program) that can find all the effective procedures.

* Gödel Incompleteness I: for systems of sufficient strength (including all the math you need to say anything interesting), the system is either incomplete (it cannot prove all the truths of the system) or inconsistent (it can prove falsehoods).

* Gödel Incompleteness II: such a system, if consistent, cannot prove its own consistency from within.

* Incompressibility Method: every theory has a strength, determined by its complexity, and phenomena of complexity greater than that strength may be indistinguishable from noise to the theory.
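The incompressibility fact behind that last bullet rests on nothing more than counting, which a few lines of arithmetic can sketch: there are more strings of each length than there are shorter descriptions, so most strings have no short description.

```python
# Counting argument behind incompressibility: there are 2**n binary
# strings of length n, but only 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1
# descriptions (programs) strictly shorter than n bits. So at least one
# string of every length is incompressible, and almost none can be
# compressed by more than a few bits.

def shorter_descriptions(n):
    """Number of binary strings strictly shorter than n bits."""
    return 2**n - 1   # sum of 2**k for k in range(n)

n = 64
strings = 2**n
# Descriptions at least 10 bits shorter than n:
much_shorter = 2**(n - 10) - 1
fraction_compressible = much_shorter / strings
print(f"Strings of length {n}: {strings}")
print(f"Fraction compressible by 10+ bits: at most {fraction_compressible:.6f}")
```

Fewer than one string in a thousand can be shortened by even ten bits; complexity is the rule, not the exception.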

Put all that together, and it shows you can’t derive genuinely new theory from the theories you already have. You have to add to your theory, which means you have to guess (Undecidability shows you can’t just derive it), try new things out (Incompressibility shows you need new theories to explain new complex phenomena), see how they work (Gödel II shows you can’t be sure it will work out), and sometimes backtrack. You’ll always be doing this; it never ends (Gödel I and Incompressibility show your theory will always miss stuff).
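The workflow that paragraph describes can be sketched as a loop. Everything here is a deliberately tiny toy of my own construction (the hidden law, the candidate theories, all the names are hypothetical): the theorist cannot derive the answer, only guess, test against experiments, and keep what survives.

```python
import random

# Theory-building as guess / test / backtrack. Toy setup: the
# "phenomenon" obeys a hidden linear law unknown to the theorist;
# candidate "theories" are guessed slopes. (All names hypothetical.)

random.seed(0)
HIDDEN_SLOPE = 7                       # the world's law, not given to the theorist

def experiment(x):
    """Run an experiment: observe the phenomenon at input x."""
    return HIDDEN_SLOPE * x

def consistent(candidate_slope, trials=20):
    """Test a candidate theory against fresh experiments."""
    return all(candidate_slope * x == experiment(x)
               for x in (random.randint(1, 100) for _ in range(trials)))

theory = None
guesses = 0
while theory is None:                  # no shortcut: guess, test, backtrack
    candidate = random.randint(0, 20)  # a blind guess, not derived from prior theory
    guesses += 1
    if consistent(candidate):
        theory = candidate             # provisionally accept; later data may still refute it
print(f"Settled on slope {theory} after {guesses} guesses")
```

Note that even a successful exit from the loop is provisional: twenty passed experiments are evidence, not proof, which is the essay's point about Gödel II.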

Note, this is true even in mathematics! Mathematics is now recognizably like physics: you add to your theory, and hope you didn’t just contradict yourself. You cannot, in general, tell whether you have contradicted yourself; you can only plod along as if you haven’t and hope for the best. When you find a contradiction — the mathematical equivalent of an experimental result that refutes your prediction — you have to back up and try another axiom.

What all this means is that no intelligence, no matter how smart, can get theory for free. It has to work, to guess, to try things, do experiments, and learn from them. It has to develop ever more complex theories, the hard way: by trial and error and tests and dumb luck.

It has been proven, in other words, that there are no free lunches.

This is the silly part of the singularity dream. In this fantasy, you get smart machines, and they suddenly get smarter and smarter, faster and faster, from thinking alone. It’s like imagining that if we made twice as many computers we’d all be twice as smart. Or if our computers were twice as fast we’d be twice as smart. Or if our computers had twice the memory we’d be twice as smart. Yes, it really is that misguided.

But wait, you say. Won’t it make a difference to have an artificial intelligence?

We’ve no reason to believe an artificial intelligence will be smarter than us. And even if it were, it still wouldn’t know anything we don’t tell it, or that it doesn’t learn the hard way.

Drop a modern but uneducated human into 10,000,000 BC. She’ll be the smartest hominid on Earth. Would the hominids have iPods and toilets in a few years? A hundred years? A thousand years? Would our uneducated but modern human make radios out of coconuts like the Professor on Gilligan’s Island, and also write a few novels and a symphony while she’s at it? Of course not. Humans had to earn their radios and novels and symphonies.

“I could turn this whole island into computronium, if I just had one more coconut.”

Just so, why would a smart computer know how to make godlike computers? It wouldn’t learn how from us. Supposedly it just infers these things. But that’s demonstrably impossible. And where does this AI get all its godlike knowledge to pass on to its miraculously smarter children? Again, from the miracle of logic-defying *a priori* inference. The AI just figures out all of future physics while it sits on a shelf. Way to go!

The whole thing is less realistic than any high fantasy story. We’d be as well advised to invest in a Rivendell Institute as a Singularity Institute.

This is all good news, by the way. OK, sorry, you won’t upload your brain anytime soon. But this theme of rapture and heavenly-ascent-via-singularity is poisonous. It is the Pollyanna flipside of our dis-Pollyanna era; apocalypse and singularity are two shadows of the same paralysis of imagination. They agree that the future is incomprehensibly bad or incomprehensibly good. They reduce humans to victims or to theme-park visitors.

The future will not be a parade of ever-accelerating external wonders before which we must sit passively, either in despair or religious awe. The future is ours to make. Let the singularity faithful recline in dream, while you and I get off the couch.