You know it is much more difficult — and much more honest — to provide an optimistic vision.
Didn’t you first come to SF precisely for this: portrayals of worlds you wish you could live in, that you hope that your children will live in, and that you would like to believe are possible?
But as the potential grows for ever-better lives, much of our science fiction turns more indolently dreary — if not dispollyanna (which is, as John Gardner observed, a form of pollyannaism, albeit inverted: it is the gripe that if things aren’t perfect, then life is worthless).
You are not naive to be an optimist. There is a fierceness to true optimism: a heavy duty of hope, and the consequent demand that we act.
Contrast this with the dismal implosion into violent acts of survival, or collapse into exhausting lassitude, that constitute plot in the apocalyptic subgenre. (I have enjoyed some works of this subgenre, and am even writing one; this does not alter my estimation of its relative merits, which are few and small.) This has become the dominant subgenre of our field, and the only genre work that gets reviews in the New York Times or wins Pulitzer Prizes or is noticed with a Booker nomination. Here we indulge in a fantasy of the elimination of responsibility: society has collapsed or become unworthy of continuation, so no duties weigh on us; the economy is gone or is an engine of evil, so we have no complex and uncertain career before us; the world is a wasteland, so we have no reason to preserve it. As for characters: well, they’re all evil and selfish and stupid and mean and incompetent. Presumably we are to believe the same is normal for real people.
You know it is a nobler pleasure to follow the protagonist who strives for a better world and, in meeting her duties in the perplexing and complex present, pilots bright outcomes out of the obscure future. And you know it’s truer to what we are.
We don’t need any more lazy despair. It’s easy to snuff out the candle. More light, more light.
So I don’t much want to indulge in criticism (see 30 March 2012), but I have to make an exception to gripe about Hollywood. Because I saw Prometheus.
Oh (visually) beautiful film with stunning sets! Oh cast of fantastic, compelling actors! Oh beauteous score, and sound, and lighting, and models! How could you be so lost, so hopelessly muddled?
Oh family tree of alien creatures, are you the white trash of the galaxy? Is that two-meter-tall facehugger really the child of the little tiny worms, who are uncles to the big Egyptian worms, who are related to the human-zombie, which is brother to the bipedal alien that — oh, I give up. The Teletubbies have a more realistic biological history.
Please, Hollywood, when you make a science fiction movie, hire a science fiction writer. Hire Nancy Kress or Gregory Benford. Hire Alastair Reynolds or Robert J. Sawyer. Or just open up Analog, find a story you like, and hire that writer. You’ll be embarrassed by how cheap it’ll be. We’d work for what you would consider latte money. Your accountants won’t believe you when you give them the receipts (“Seriously, you can’t have paid them in pizza.”). We’ll sign contracts that teenage punk bands would reject. And you can put the credit down at the bottom of the scroll, while the unused music plays, there, below the seven hundred names of the Korean animators. “Science Fiction Boy/Girl,” you can call it, in teeny tiny type.
You won’t even have to learn her name. Just point at the earnest, unfashionably dressed stranger in your waiting room and shout, “Hey, you, science fiction girl, science up this script!”
I know what you’re thinking: No one cares. My ridiculous wrecks make money. Nananananah. But really, making things more plausible surely couldn’t make you less money, right? So what’s the harm? And maybe it really does matter to the bottom line. Alien, with all its improbabilities, surely endured, became a franchise, and still makes money in part because it has some coherence. In the end, getting the SF right could be the thing that distinguishes a money-making long tail from the instantly forgotten films that flutter through our multiplexes with the lifespan of a ghost moth.
Please, Hollywood. I’m begging you. Do it for the children.
The Singularity has become more reliable a presence in SF than have spaceships. So, I can’t resist. Here’s the news, far more reliable than Kurzweil’s promises: there ain’t no Singularity. It ain’t coming. It’s a logical mistake.
Our understanding of the world is theories. Our theories compress information. Instead of listing endlessly different observations of motions, for example, Newton said, F=ma. And with that, infinitely many kinds of motion were described in this compressed little formula. All theory is compression.
But some information is complex. In fact, most information is complex. By “complex,” we mean that it cannot be compressed very much (this is the subject of the branch of logic called Kolmogorov complexity, or descriptive complexity). So we have to develop ever more theories, and ever more complex theories. This is why we have biology departments and chemistry departments and so on at the university, instead of just a physics department. These fields are adding theory, of greater complexity and more specific focus.
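Incompressibility is easy to demonstrate. Here is a minimal sketch in Python, using the standard zlib compressor as a crude stand-in for a theory: data generated by a simple rule compresses dramatically, while random data barely compresses at all. (The variable names and sizes are illustrative choices, not anything from the argument above.)

```python
import os
import zlib

# Data generated by a short rule ("repeat 'ab' 50,000 times")
# has a short description, so it compresses dramatically.
structured = b"ab" * 50_000

# Random bytes are, with overwhelming probability, incompressible:
# no description much shorter than the data itself exists.
noise = os.urandom(100_000)

print(len(zlib.compress(structured)))  # a few hundred bytes at most
print(len(zlib.compress(noise)))       # still roughly 100,000 bytes
```

The same asymmetry holds for any compressor, and for theories generally: a law like F=ma is a tiny description of a vast class of observations, but most possible data streams admit no such shortcut.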
Now, a bunch of very important but little-appreciated results in twentieth-century logic and mathematics show that theory cannot come for free. It can’t come easy. These include Turing’s Undecidability result (a.k.a. the Halting Problem), Gödel’s Incompleteness Theorems, and the Incompressibility Method of Kolmogorov complexity. These tell us:
* Undecidability: there is no effective procedure (that is, no computer program) that can decide, for every program, whether that program will halt.
* Gödel Incompleteness I: any system of sufficient strength (which includes all the math you need to say anything interesting) is either incomplete (it cannot prove all the truths expressible in the system) or inconsistent (it can prove falsehoods).
* Gödel Incompleteness II: no such consistent system can prove, from within itself, that it is consistent.
* Incompressibility Method: every theory has a strength, determined by its complexity, and phenomena of complexity greater than that strength may be indistinguishable from noise to the theory.
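The first of these results can be sketched as a short program. This is a hedged illustration of Turing’s diagonal argument, not code from any real system: assume someone hands you a halting oracle, and construct a program it must misjudge. (All names here are invented for the sketch.)

```python
def make_counterexample(halts):
    """Given any claimed halting oracle, build a program it misjudges.

    `halts(program)` is supposed to return True if `program` halts
    when run, and False otherwise.
    """
    def trouble():
        if halts(trouble):
            while True:      # oracle said "halts", so loop forever
                pass
        return "halted"      # oracle said "loops", so halt at once
    return trouble

# An oracle that claims every program loops is refuted immediately:
always_loops = lambda program: False
t = make_counterexample(always_loops)
print(t())  # prints "halted" -- the oracle predicted it would loop
```

Whatever the oracle answers about the constructed program, the program does the opposite; so no oracle can be right about every program, and no general procedure for deciding halting can exist.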
Put all that together, and it shows you can’t derive genuinely new theory from the theories you already have. You have to add to your theory — but that means you have to guess (Undecidability shows you can’t just derive it), try new things out (Incompressibility shows you need new theories to explain new complex phenomena), see how they work (Gödel II shows you can’t be sure it will work out), and sometimes backtrack. You’ll always be doing this — it never ends (Gödel I and Incompressibility show your theory will always miss stuff).
Note, this is true even in mathematics! Mathematics is now recognizably like physics: you add to your theory, and hope you didn’t just contradict yourself. You cannot generally predict whether or not you have contradicted yourself, except to plod along as if you haven’t and hope for the best. When you find a contradiction — the mathematical equivalent of finding an experimental result that refutes your prediction — you have to back up and try another axiom.
What all this means is that no intelligence, no matter how smart, can get theory for free. It has to work, to guess, to try things, do experiments, and learn from them. It has to develop ever more complex theories, the hard way: by trial and error and tests and dumb luck.
It has been proven, in other words, that there are no free lunches.
This is the silly part of the singularity dream. In this fantasy, you get smart machines, and they suddenly get smarter and smarter, faster and faster, from thinking alone. It’s like imagining that if we made twice as many computers we’d all be twice as smart. Or if our computers were twice as fast we’d be twice as smart. Or if our computers had twice the memory we’d be twice as smart. Yes, it really is that misguided.
But wait, you say. Won’t it make a difference to have an artificial intelligence?
We’ve no reason to believe an artificial intelligence will be smarter than us. And suppose it were — it won’t know anything we don’t tell it, or that it doesn’t learn the hard way.
Drop a modern but uneducated human into 10,000,000 BC. She’ll be the smartest hominid on Earth. Would the hominids have iPods and toilets in a few years? A hundred years? A thousand years? Would our uneducated but modern human make radios out of coconuts like the Professor on Gilligan’s Island, and also write a few novels and a symphony while she’s at it? Of course not. Humans had to earn their radios and novels and symphonies.
Just so, why would a smart computer know how to make godlike computers? It wouldn’t learn how from us. Supposedly it just infers these things. But that’s demonstrably impossible. And where does this AI get all its godlike knowledge to tell to its miraculously smarter children? Again, from the miracle of logic-defying a priori inferences. The AI just figures out all of future physics, while it sits on a shelf. Way to go!
The whole thing is less realistic than any high fantasy story. We’d be as well advised to invest in a Rivendell Institute as a Singularity Institute.
This is all good news, by the way. OK, sorry, you won’t upload your brain anytime soon. But this theme of rapture and heavenly-ascent-via-singularity is poisonous. It is the pollyanna flipside to our dispollyanna era; apocalypse and singularity are two shadows of the same paralysis of imagination. They agree that the future is incomprehensibly bad or incomprehensibly good. They reduce humans to victims or to theme-park visitors.
The future will not be a parade of ever-accelerating external wonders before which we must sit passively, either in despair or religious awe. The future is ours to make. Let the singularity faithful recline in dream, while you and I get off the couch.
It’s an interesting theme of course, and as such worth exploration. I would not disparage the idea of the incomprehensible extraterrestrial. And, it has been very well done. Lem’s Solaris, for example, is a masterwork of the subgenre. And Tarkovsky’s filming of Solaris, a masterwork of SF film.
But, most times, I just don’t buy it. A biological intelligence would have evolved — by definition. And the constraints of evolution are universal. Cultures can grow into dizzying varieties, of course, and as such resist explanation; but, in the end, each must obey the constraints of biology, or the culture will perish, and this provides the common foundation that means understanding must in principle be possible. Just so, in the end, we can penetrate the purposes of an ant or a vampire squid, and we can penetrate the reasoning of a Yanomamo or Heroic-Age Greek.
I wonder if the idea of the incomprehensible alien is not a part or product of that greater cliché — the Great Cliché, we might call it: the lazy pessimism of contemporary SF literature. All is miserable in the future, our most awarded writers tell us. Even those inclined not to people the future with cannibals roving sunken cities are likely to portray the future as at best incomprehensible (the singularity!). This social and political pessimism bleeds over into an epistemic pessimism. And yet, the evidence is against this misery-mongering.
So there is here an interesting challenge: SF that is optimistic, while realistic; that portrays aliens that are genuinely strange, but not incomprehensible.
This is a theme (or is it a meta-theme?) that I want to explore more.
My story, “The Man Who Betrayed Turing,” is available (for free) at Cosmos Online. Cosmos is a fine Australian magazine, which does not have an equivalent on this side of the globe. A bit like Omni, but not really — more as if Discover Magazine regularly ran SF.
The story aims to be one of a series of — what shall I call them? Mathematical fantasies? I’ve written two plays about Gödel, but I also have a flash fiction about Gödel in Shimmer, of which I am fond. The Turing story is a kind of sequel to that, in my mind. I’m working on one about Cantor now — though it is coming slowly.
I’m reading a Dickens biography (Charles Dickens, His Tragedy and Triumph, by Edgar Johnson, Simon and Schuster 1952), and found a remarkable thing. Dickens’s first novel was The Posthumous Papers of the Pickwick Club, written — as were most of his novels — as a serial. He wrote each month about 12,000 words, and this formed a chapter that went into a monthly magazine.
Here’s the rub. The first four installments — even, for a while, the fifth — of The Pickwick Club did miserably. Johnson reports that things were so bad that even for the fourth installment, which eventually took off, they printed 1,500 copies for provincial distribution, and 1,450 were returned to the publisher.
Then something happened. People fell in love with the character of Sam Weller, who appears in the fourth installment. Word got around. People started requesting the earlier issues. And it was all good news from then on. Eventually, Pickwick was such a sensation that there was even an explosion of Pickwick merchandise (Pickwick hats, Pickwick canes, etc.), along with endless copies, knock-offs, and theatrical piracies. But for four months, even into the fifth month and installment, Dickens’s novel looked to be a certain failure. Only a more patient time, and Dickens’s immense force of will, together prevented its cancellation long before its explosive success.
It is hard not to notice that today, The Pickwick Club likely would have been a failure. The big bookstores would pull a book before it racked up four months of bad sales (suppose, for example, that The Pickwick Club had been intended to appear as a trilogy of volumes, which was also a common practice then). Sure, a genius like Dickens would try again. But maybe not with The Pickwick Club, and so a comic masterpiece would be lost.
The economy of art has become accelerated; works must be instantaneously successful. (Perhaps this is a trend throughout the economy; nowadays, employees are not supposed to be trained, they are supposed to arrive at their job already experts.) The question, of course, is what might we lose — and what have we already lost — to such demands?