Delayed post! Here is a philosophy-of-SF post for you to read in this month’s Clarkesworld. As usual, a beautiful cover from them:
Let me know what you think, either over at Clarkesworld or here.
Walkaway is an extraordinary novel. It throws us into a possible future that explodes from the conflicts of our own era. Doctorow has created a compelling, plausible vision of a different kind of social order.
There are some social theorists who have offered speculations in this direction, which we might call abundant anarchism. David Graeber is one, and he gets a nod in the acknowledgements of Doctorow’s novel; Paul Mason’s Postcapitalism is perhaps another example of sympathetic theoretical speculation. But I am unaware of any fiction that animates such a vision. We can set Walkaway next to Le Guin’s The Dispossessed as a plausible and rare vision of a radically free society.
For me, the most compelling character of Walkaway is the lead antagonist, Jacob Redwater. Redwater is the neo-liberal Iago. His most striking feature is also the one that makes him the most believable: he has an absolute conviction that his values are final. He is the living embodiment of “There Is No Alternative.” And yet, Doctorow makes him a real person, believable and even, in some rare moments, sympathetic. Redwater’s self-assurance is reflexive, and it is wrapped in a suave false openness that for him just constitutes a surface of professionalism. Our world is full of Jacob Redwaters, and they run the international economic order.
One important question the novel raises is the prospect of walking away. Doctorow sees it as a resolution to conflict: the Walkaways literally get up and go when the machinery of the old economy tries to rob or murder them. The plausibility of this strategy depends in part upon the plausibility of super-abundance. Will we enter a phase of economic production where it is easy to “start over,” where the means of production are so low cost as to be seemingly free for the taking? But it also raises questions of space (social space and geographical space) and frontiers. David Graeber has observed–in response to the question “How come there’s never been an anarchist civilization?”–that most human societies were anarchist. But that prompts another question: why have nearly all those anarchist civilizations that overlapped in time with industrialization and colonialism been victims of oppression and often genocide? Presumably there was something about the two kinds of civilization that made the one always able to destroy the other. Would Doctorow’s Walkaways be hunted mercilessly? The climax of Doctorow’s novel attempts to answer just this question: he portrays the explosive increase in communication abilities as changing this dynamic. We can hope that he’s right.
One thing troubles me about Doctorow’s vision: I can only believe that hard-working techies have a home there. The heroes all code, or hack genes, or build and fly blimps. Such people have already inherited the Earth; it seems no surprise that they are doing well in the future. But what place will the artists or philosophers find in this abundant world of disobedient makers? And–dare I ask?–what place will the slothful have?
Hopeful but plausible science fiction like this has become rare (although this is a good year for it, with New York 2140 also being published). I will be excited and eager to recommend the book for the Nebula, though I suspect it won’t make the ballot. Recent Nebula nominations have studiously looked elsewhere.
It was an honor to vote for Cixin Liu’s The Three-Body Problem for the Hugo. I was pleased when it won. Barring some fantastic additional book, I will nominate the sequel, The Dark Forest, for the first Hugo slot this year. For his vision and his creativity, Cixin Liu can only be compared to Asimov.
The book is exploding with interesting ideas, but one of the most interesting is a new answer to the Fermi Paradox. Liu assumes that life and technological advancement are common in the universe, and that technologically advanced life tends to expand and use resources at an exponential rate, creating a scarcity of resources and the conditions for conflict. From this, and some conditions that arise from interstellar distances, he derives a game where the equilibrium is to remain hidden, and attack those who are not hidden. The reasoning appears valid. Here I’ll reconstruct the game, in extensive form, as I understand it.
The first move is whether to announce yourself and your location to the universe. The first player is an arbitrary technological civilization that has radio communication and other technologies. If it doesn’t announce, no interaction occurs. For utility values, I’ll use some arbitrary numbers that are meant to represent some divergence from the current situation, listed as (FIRST PLAYER, SECOND PLAYER), as is the norm. So, if the first player doesn’t announce, nothing changes for that civilization, and nothing changes for our arbitrary second civilization. Utility change is thus (0, 0):
But if civilization 1 does make a move to announce itself, an extreme game begins if any other civilization hears the message. First, this civilization 2 must decide: should it announce itself to civilization 1 or not?
Now consider the reply fork. The civilizations are now in communication and are aware of each other. If they cooperate with each other benevolently, then they might both be better off to some degree. We don’t know how much this expected benefit is; call it +C. However, if one or the other is malevolent, then that civilization can destroy the other civilization. Here lies an important set of assumptions in Liu’s model: technology grows quickly, and stars and planets and ships are fragile. For these reasons, he assumes, it is always possible to annihilate another civilization. Being destroyed is the worst possible situation. We represent it as -Max. So civilization 1 must decide whether to cooperate or attack.
One final set of pathways completes the game. If civilization 1 cooperates, it could earn some benefit. But it might also be vulnerable to total destruction. Given interstellar distances, it will be hard to learn enough about the other civilization to determine, in the immediate period after first contact, whether it is malevolent. And, no matter how small the odds that civilization 2 will attack, this would seem to be too terrible a possible cost. The result would seem to be that the best move for civilization 1 is to attack:
Of course, civilization 2 can see all this. So they will not, at the second move in the game, reply. They will, instead, remain silent. That would seem to be the end of the game, but Cixin Liu argues that it is not. Instead, he assumes, civilizations tend to grow exponentially, spreading out. That means that, from the perspective of civilization 2, the situation is one where they now know that after some delay of time (represented below with “…..“), they may be encountered by civilization 1, which will then have to play the same game again. (I remove the moves in the game described above, to simplify the diagram.)
And thus civilization 2, unable to determine with certainty that civilization 1 is benevolent, and wanting to avoid even the possibility of the worst possible outcome, will attack. In sum: taking all this into account, as every civilization should, each civilization both remains silent and attacks those who identify themselves.
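The attack-dominance step in this reasoning can be sketched as a small expected-utility computation. The payoff symbols follow the post (+C for mutual cooperation, -Max for annihilation); the particular numbers and the malevolence probability are placeholders of my own, chosen only to illustrate the asymmetry.

```python
def best_response(p_malevolent, C=10, MAX=10**6):
    """Civilization 1's choice once the two civilizations are in contact.

    p_malevolent: estimated probability the other side attacks.
    Cooperating yields +C if they also cooperate, -MAX if they attack.
    Attacking first preserves the status quo: payoff 0.
    """
    ev_cooperate = (1 - p_malevolent) * C + p_malevolent * (-MAX)
    ev_attack = 0.0
    return "attack" if ev_attack > ev_cooperate else "cooperate"

# Because -MAX swamps the bounded benefit +C, even a tiny chance of
# malevolence makes attacking the better move:
print(best_response(0.0001))  # -> attack
print(best_response(0.0))     # -> cooperate (only if malevolence is impossible)
```

The same computation, run from civilization 2’s perspective at the later encounter, is what drives the conclusion above: since any nonzero estimate of malevolence favors attack, silence plus attack is the equilibrium.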
There are a lot of assumptions here that we can question. For example, whether it really is so easy to destroy other worlds and civilizations. And Liu holds that interstellar distances eliminate chances for safe interaction: a civilization is either at a valuable world or cannot communicate. But perhaps one could get around this by using some location as a neutral place to start communication. And there is a meta-game: the universe as he describes it is so dangerous that it might be safer to form large alliances than to wait to be found alone and thrust into the game as described above. (Also, I’ve left out some important details–Liu is fascinated by the complexities of self-referential reasoning, of a kind that requires strange logics to study.) But, even simplified, his argument is very provocative. It suggests that the universe is a dark forest, full of hiding civilizations that will destroy any other civilization that makes itself known. It’s an important achievement: a new hypothesis in answer to Fermi’s paradox.
I hope his argument is unsound. But, given the stakes, we should consider whether he might be on to something.
An update: some scientists have proposed a method to hide Earth from detection via transit. Shared fears?
I finally saw Jodorowsky’s Dune, the documentary that describes Alejandro Jodorowsky’s project to make a film of Dune. One comes away awed by the relentless passion of Jodorowsky. He is one of those people who strive to make art happen on a grand scale through sheer force of faith and will. You cannot help but find him inspiring.
Three things are striking about this documentary. First, it illustrates the difficulty of making something daring under the funding mechanisms of the Hollywood model. I was reminded that Orson Welles (who would have played Baron Harkonnen) had more projects canceled by funding problems than Andrei Tarkovsky (working under Soviet censors) ever had canceled for any reason. This is not censorship; but it is a kind of very powerful and very effective compression of the imagination. Capital chases banality. Second, Jodorowsky had a healthy independence with respect to the text of Dune. He was going to change the plot left and right, to make the movie he envisioned. The result would have been a fantasy inspired by Dune, but it would have been his own movie. This is a good thing. Dune the novel will always be there; a free adaptation can do no harm to the original text. The textual puritanism that has much influence in fan culture today is reactionary. Artists should ignore it. Third, one wonders what speculative film would be like if Jodorowsky had succeeded in making his movie with an original Pink Floyd soundtrack, with Orson Welles and Mick Jagger and Salvador Dali and David Carradine acting, with Giger and Moebius and Foss art. It may have created a whole different perception of the possibility and potential of speculative film.
Hence one comes to Jurassic World.
The dinosaurs are beautiful; but all (literally all) of the creativity of the film is in the special effects. And, oh, how the camera strives to get the grill of a Mercedes into every damn shot.
It is a common claim that most of what comes out of Hollywood is leftist. This is not true; most of what comes out of Hollywood is a celebration of our current shared economic prejudices. (For example, the criticisms of corporate greed that form a staple of thrillers, and make some kind of vague subplot in this movie, are always safely abstract and unrealistic, and completely removed from real corruption and greed. The result is that these apparent criticisms are both smug and misdirection.)
If we can find a message in this latest installment of the Jurassic-franchise product placement vehicles, it is this: science is bad when it is daring, when it attempts bold dreams. Science is only good when it is producing consumer products of a familiar kind, for familiar brands. Another world is not possible.
Let us imagine an alternative film: Jodorowsky’s Jurassic Planets. In it, the dinosaurs escape their consumerist nightmare park and they breed, covering the world. Humanity develops radical new technologies (force fields, powerful stun weapons, etc.) to enable people to live in safety and in harmony with the dinosaurs. The velociraptors learn to read and adopt our technology; they form an anarcho-syndicalist collective with sympathetic humans, and decide, first, to bring back all the organisms that humanity pushed into extinction and, second, to spread all of Earth’s life into the universe. The long closing shot is an exterior of a terraformed Mars. Not a single Mercedes rolls over the red planet, not a single Coca Cola is drunk there. Instead, passenger pigeons fill the sky over a crimson savannah where woolly mammoths roam.
I’m pleased to have this month another non-fiction piece in Clarkesworld. The piece is “Will Aliens Be Alien?” It defends the idea that extraterrestrial intelligences may not be so strange. Let me know what you think.
It came out when I was a kid and I didn’t see it but instead had to hear about it from all those kids who got into the R movies. But now I finally saw Cronenberg’s Videodrome. Coming on the heels of reading a lot of PKD, and then Skillingstead’s very fine Life on the Preservation, I conclude that the secondary mood of science fiction (the primary mood being wonder) is epistemic paranoia.
Only science fiction takes as a theme that we may be systematically, catastrophically in error. Descartes proposed radical doubt as a tool. Much science fiction proposes radical doubt as a terrifying state of being.
So I don’t much want to indulge in criticism (see 30 March 2012), but I have to make an exception to gripe about Hollywood. Because I saw Prometheus.
Oh (visually) beautiful film with stunning sets! Oh cast of fantastic, compelling actors! Oh beauteous score, and sound, and lighting, and models! How could you be so lost, so hopelessly muddled?
Oh family tree of alien creatures, are you the white trash of the galaxy? Is that two-meter-tall facehugger really the child of the little tiny worms, who are uncles to the big Egyptian worms, who are related to the human-zombie, which is brother to the bipedal alien that — oh, I give up. The Teletubbies have a more realistic biological history.
Please, Hollywood, when you make a science fiction movie, hire a science fiction writer. Hire Nancy Kress or Gregory Benford. Hire Alastair Reynolds or Robert J. Sawyer. Or just open up Analog, find a story you like, and hire that writer. You’ll be embarrassed by how cheap it’ll be. We’d work for what you would consider latte money. Your accountants won’t believe you when you give them the receipts (“Seriously, you can’t have paid them in pizza.”). We’ll sign contracts that teenage punk bands would reject. And you can put the credit down at the bottom of the scroll, while the unused music plays, there, below the seven hundred names of the Korean animators. “Science Fiction Boy/Girl,” you can call it, in teeny tiny type.
You won’t even have to learn her name. Just point at the earnest, unfashionably-dressed stranger in your waiting room and shout, “Hey, you, science fiction girl, science up this script!”
I know what you’re thinking: No one cares. My ridiculous wrecks make money. Nananananah. But really, making things more plausible surely couldn’t make you less money, right? So what’s the harm? And maybe it really does matter to the bottom line. Alien, with all its improbabilities, endures, became a franchise, and still makes money in part because it has some coherence. In the end, getting the SF right could be the thing that distinguishes a money-making long tail from the instantly forgotten films that flutter through our multiplexes with the lifespan of a ghost moth.
Please, Hollywood. I’m begging you. Do it for the children.
The Singularity has become a more reliable presence in SF than spaceships. So, I can’t resist. Here’s the news, far more reliable than Kurzweil’s promises: there ain’t no Singularity. It ain’t coming. It’s a logical mistake.
Our understanding of the world consists of theories, and our theories compress information. Instead of listing endlessly different observations of motions, for example, Newton said: F = ma. And with that, infinitely many kinds of motion were described in one compressed little formula. All theory is compression.
But some information is complex. In fact, most information is complex. By “complex,” we mean that it cannot be compressed very much (this is studied in the branch of logic called Kolmogorov complexity, or descriptive complexity). So we have to develop ever more theories, and ever more complex theories. This is why we have biology departments and chemistry departments and so on at the university, instead of just a physics department. These fields add theory of greater complexity and more specific focus.
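The compressibility point can be made concrete with a quick experiment, using an off-the-shelf compressor as a crude stand-in for “theory”: data generated by a simple rule compresses dramatically, while typical (random) data barely compresses at all. A minimal sketch in Python:

```python
import os
import zlib

# Lawlike data: every byte follows from a tiny rule (the "theory" i % 7).
structured = bytes(i % 7 for i in range(10_000))

# Typical data: random bytes with no short description behind them.
random_data = os.urandom(10_000)

# The rule-governed data shrinks to a fraction of its size; the random
# data stays close to its original 10,000 bytes -- it is incompressible.
print(len(zlib.compress(structured)))
print(len(zlib.compress(random_data)))
```

Almost all possible strings behave like the second case, which is the sense in which most information is complex.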
Now, a bunch of very important but little-appreciated results in 20th-century logic and mathematics show that theory cannot come for free. It can’t come easy. These include Turing’s undecidability result (aka the Halting Problem), Gödel’s incompleteness theorems, and the incompressibility method of Kolmogorov complexity. These tell us:
* Undecidability: there is no effective procedure (e.g., a computer program) to find all the effective procedures.
* Gödel Incompleteness I: for systems of sufficient strength (including all the math you need to say anything interesting), the system is either incomplete (it cannot prove all truths of the system) or inconsistent (it can prove falsehoods).
* Gödel Incompleteness II: you cannot tell from within a system whether the system is consistent.
* Incompressibility Method: every theory has a strength, determined by its complexity, and phenomena of complexity greater than that strength may be indistinguishable from noise for the theory.
Put all that together, and it shows you can’t derive genuinely new theory from the theories you already have. You have to add to your theory — but that means you have to guess (undecidability shows you can’t just derive it), try new things out (incompressibility shows you need new theories to explain new complex phenomena), see how they work (Gödel II shows you can’t be sure it will work out), and sometimes backtrack. You’ll always be doing this — it never ends (Gödel I and incompressibility show your theory will always miss stuff).
Note, this is true even in mathematics! Mathematics is now recognizably like physics: you add to your theory, and hope you didn’t just contradict yourself. You cannot generally predict whether or not you have contradicted yourself, except to plod along as if you haven’t and hope for the best. When you find a contradiction — the mathematical equivalent of finding an experimental result that refutes your prediction — you have to back up and try another axiom.
What all this means is that no intelligence, no matter how smart, can get theory for free. It has to work, to guess, to try things, do experiments, and learn from them. It has to develop ever more complex theories, the hard way: by trial and error and tests and dumb luck.
It has been proven, in other words, that there are no free lunches.
This is the silly part of the singularity dream. In this fantasy, you get smart machines, and they suddenly get smarter and smarter, faster and faster, from thinking alone. It’s like imagining that if we made twice as many computers we’d all be twice as smart. Or if our computers were twice as fast we’d be twice as smart. Or if our computers had twice the memory we’d be twice as smart. Yes, it really is that misguided.
But wait, you say. Won’t it make a difference to have an artificial intelligence?
We’ve no reason to believe an artificial intelligence will be smarter than us. And suppose it were — it won’t know anything we don’t tell it, or that it doesn’t learn the hard way.
Drop a modern but uneducated human into 10,000,000 BC. She’ll be the smartest hominid on Earth. Would the hominids have iPods and toilets in a few years? A hundred years? A thousand years? Would our uneducated but modern human make radios out of coconuts like the Professor on Gilligan’s Island, and also write a few novels and a symphony while she’s at it? Of course not. Humans had to earn their radios and novels and symphonies.
Just so, why would a smart computer know how to make godlike computers? It wouldn’t learn how from us. Supposedly it just infers these things. But that’s demonstrably impossible. And where does this AI get all its godlike knowledge to pass on to its miraculously smarter children? Again, from the miracle of logic-defying a priori inferences. The AI just figures out all of future physics while it sits on a shelf. Way to go!
The whole thing is less realistic than any high fantasy story. We’d be as well advised to invest in a Rivendell Institute as a Singularity Institute.
This is all good news, by the way. OK, sorry, you won’t upload your brain anytime soon. But this theme of rapture and heavenly-ascent-via-Singularity is poisonous. It is the Pollyanna flipside of our anti-Pollyanna era; apocalypse and singularity are two shadows of the same paralysis of imagination. They agree that the future is incomprehensibly bad or incomprehensibly good. They reduce humans to victims or to theme-park visitors.
The future will not be a parade of ever-accelerating external wonders before which we must sit passively, either in despair or religious awe. The future is ours to make. Let the singularity faithful recline in dream, while you and I get off the couch.