Vernor Vinge is the author of many novels, including the Hugo Award winners A Fire Upon the Deep, A Deepness in the Sky, and Rainbows End, as well as the acclaimed novels The Peace War and Marooned in Realtime. His latest novel is The Children of the Sky.
This interview first appeared in Wired.com’s The Geek’s Guide to the Galaxy podcast, which is hosted by John Joseph Adams and David Barr Kirtley. Visit geeksguideshow.com to listen to the entire interview and the rest of the show, in which the hosts discuss various geeky topics.
You’re famous for coining the phrase “The Technological Singularity.” How did you first come up with that?
I used that term first, I think, at an artificial intelligence conference in 1982. Actually, it was a conference with Marvin Minsky, the famous A.I. researcher, and several science fiction writers were on the panel—Robert Sheckley and Jim Hogan. I made the observation that if we got human-level artificial intelligence, that would certainly be a world-shaking event, and if we got superhuman-level intelligence, then what happened afterward would be fundamentally unintelligible. In the past, when some new invention came along, it generally had all sorts of unexpected consequences, but those consequences could be understood. The example I like to use is that if you had a magical time machine and you could bring Mark Twain forward into the 21st century, you could explain our world to him and he would understand it quite quickly. He’d come up to speed in a day or two, and he would probably have a very good time with it. On the other hand, if you tried to do that explanatory experiment with a goldfish, there’s no way you could explain our world to a goldfish in a way that would be meaningful to it, as our world is meaningful to us humans.
That is a consequence of this particular type of progress—that is, of making creatures that are smarter than humans. And I think it was probably even as I was talking on this panel that it occurred to me that the term for that was a little bit like with a black hole. There are only a few types of information you can get out of a black hole—in general relativity—and this was sort of a social or a technological example of the same sort of thing. Now, the particular idea of superintelligence—not just A.I., but superhuman-intelligence A.I.—is intrinsic in stuff that had been going on back at least to the ’50s, and the notion that it would be something that would not be understandable was probably lurking out there too. I think the only thing I said on that panel that made a special difference was the term, which I think highlighted the situation.
What are some of the scenarios for how the Singularity might unfold?
I think there are all sorts of different paths to the Singularity, at least five pretty different paths. I think they’re going to be all mixed together, but it still helps to think about them separately because it makes them easier to track. For instance, there’s classical artificial intelligence: you just build a big machine and hope you can figure out some way to make it very, very smart. Or, really, one that I think is very much in a lot of people’s minds now is simply that the internet plus the people on the internet—the internet, its computers, its support software, its server farms, and then billions of human beings—could together come to constitute a superhuman entity that would qualify as giving us a Singularity.
Another path to the Singularity that in many ways is the most attractive—and actually was also the topic of the first science fiction story I ever wrote that sold—is the notion of “intelligence amplification,” which is that we get user interfaces with computers that are so transparent to us that it’s like the computer is what David Brin calls our “neo-neocortex.” What’s nice about that is that we actually get to be direct participants, and in that particular case, when I say that the post-Singularity world is unintelligible, well, yeah, it is unintelligible to the likes of you and me, but it would not be unintelligible to the participants that are using intelligence amplification. I have a friend in robotics that I brought this up with long, long ago, and he said, “Well, Vernor, I really don’t have any argument with the claims you’re making about what’s going to happen, except this business about it being unintelligible—it’s not unintelligible if you are riding the curve of increasing intelligence.” And then he smiled and said, “And I intend to ride that curve.”
There are at least two other possibilities. One is simply bio-science raising human intelligence by enhancing our memory and enhancing our ability to think clearly. And then I think there’s one that is becoming more evident but is sort of off-stage, and that is the notion of a “Digital Gaia,” a sort of internet under the internet that consists of all the networked embedded microprocessors in the world. The Digital Gaia is certainly the most alien of the different possibilities. In fact, I sort of like to trot it out to give an example of something that’s pretty obviously very strange and hard to understand. You could imagine something where the world becomes its own database, where reality itself wakes up. Actually, more than anything else it looks like some sort of implementation of animism. So that particular possibility, Digital Gaia, to me is certainly the most alien and in some ways the most nervous-making, because if the world woke up, then a lot of our common sense about the world would not be valid anymore. Karl Schroeder had a great book that discussed this sort of possibility, and that was his novel Ventus.
Which works of science fiction do you think have featured the best treatment of the Singularity?
Probably the most courageous walkthrough into the Singularity was Accelerando by Charles Stross. He actually follows the development from, I think, the 2010s through the 2070s. He also said that by the time the story gets to the 2070s, he’s no longer seriously claiming that what he’s describing would be like the post-Singular world. I suspect that comment was related to the notion that after several decades of this, things would be seriously beyond what a writer, or the readers, of our era could understand.
As a retired math professor, how useful do you think mathematical models are for predicting the future?
There are a lot of different things that go under the name “mathematical models,” and my attitude toward them is very cautious. Moore’s Law, for instance, is an observation about the past that’s turned around as an extrapolation about the future. I think one of the most important nonfiction books so far this century is Nassim Taleb’s The Black Swan. But I fear that what’s happening with that book is a lot of people give it lip service: “Oh, yeah, Taleb really has a good point in The Black Swan about not trusting certain sorts of models.” The thing is, there are mathematical models that are so seductively attractive that even though people recognize that they are not workable, they still go and use them, because they’re so easy to use and they give such definite answers. So that’s a book I recommend for everybody to read, and it illustrates fundamental problems with dealing with models when you’re also dealing with people.
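Vinge’s caution can be made concrete. The sketch below (with hypothetical transistor counts, not real chip data) separates the two moves hiding inside “Moore’s Law”: fitting a curve to past observations, and then extrapolating it forward on the unstated assumption that the trend continues.

```python
# Toy illustration of an observation turned into an extrapolation.
# The "transistor counts" here are hypothetical, not real chip data.
import math

years = [2000, 2002, 2004, 2006, 2008, 2010]
counts = [1e7 * 2 ** ((y - 2000) / 2) for y in years]  # doubling every 2 years

# Least-squares fit of log2(count) = a * year + b over the past data.
n = len(years)
logs = [math.log2(c) for c in counts]
xbar, ybar = sum(years) / n, sum(logs) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(years, logs)) \
    / sum((x - xbar) ** 2 for x in years)
b = ybar - a * xbar

# The fit merely summarizes the past; extrapolating to 2030 is the
# leap of faith that turns the observation into a prediction.
forecast_2030 = 2 ** (a * 2030 + b)
print(a)  # 0.5 doublings per year, i.e. doubling every 2 years
```

The fit is exact by construction, which is part of the seduction Vinge describes: a model can describe the past perfectly and still say nothing binding about the future.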
Ray Kurzweil has gotten a lot of attention recently for his optimism about extending human life spans. What do you think about his predictions?
First of all, I’m all for human life extension. In The Singularity Is Near, I think, he has a nice discussion of the attitude a lot of essayists have, where they say, “Oh, we really don’t want that. A wise and philosophical person realizes that life needs to be limited, and that’s a good thing.” He does a good job of criticizing that point of view, and I certainly agree with him. Furthermore, I think that a human lifespan of a thousand years with post-Singularity technology is easily doable. I think a lifespan of a thousand years would actually—Singularity aside—do human society and human nature a great deal of good, and I don’t think it is that difficult; it probably can be achieved even without a Technological Singularity.
With life spans of 10,000 to 100,000 years, though, you begin to look at what’s involved, the humans that are involved, and how capable a human mind is of absorbing variety. Larry Niven had a story many years ago called “The Ethics of Madness,” in which—it’s not the main point of the story, I don’t think, or the main point of the action—but the story includes the notion of a person who lives to be 100,000 or 200,000 years old. It is really scary what they are like in the last 100,000 years or so. It raises some questions about what it means to be alive. It’s really not what you would want. This is a different sort of complaint than the complaint of all these people who say, “Oh, humans were not meant to live more than a hundred years or so.”
The complaint or the criticism here is that the human mind has a certain level of ability to handle different sorts of complexity, and if you believe that you could go 100,000 years and not be turned into a repeating tape loop, well, then let’s talk about a longer period of time. How about a billion years, or a hundred billion years? At a hundred billion years, you’re out there re-engineering the universe; the age of the universe becomes your chief longevity problem. But there’s still the issue of what it would be like to be you after that. This raises the point, which I’m sure is also on Ray’s mind, that if you’re going to last that long, you have to become something greater, and the Singularity is ideally set up to supply that. So [for] the people who are into the intelligence amplification mode of looking at these things, this all fits. And I’m not saying that in a critical or negative way; it does all fit, and it puts you in a situation where you are talking realistically about living very long periods of time, perhaps so long that you have to re-engineer the universe because the universe is not long-lived enough. At the same time, you have to be growing and growing and growing. I mean, intellectually growing.
Now, if you look at that situation, it ultimately gets you, I think, to a very interesting philosophical point, which really I don’t think was within the horizon of what people normally thought about two or three or four hundred years ago. And that is, if you did grow intellectually, would you be the same person? Well, most of us would argue that we are pretty much the same person as far back as we can remember. You know, we have changes in viewpoint, but what you were when you were five and what you are now, there is certainly a community of self-interest there, and it probably doesn’t bother most people too much. They feel good about what they know now, and they feel sympathetic to what they were then.
Now, compare yourself to the zygote that became you. It’s a little bit more of an empathetic stretch necessary there. I’m sure that I understand my zygote as well as it ever understood itself, but I bet you that it doesn’t understand me very well. In fact, the amount of it that’s still in me is at a very low level, even in terms of the genes. There’s what’s happened in terms of epigenetic things since that zygote began to grow. Push that further, and the little part of this story that actually is you becomes more and more diluted. So if you really are serious about talking about living forever, not just living for a thousand years or a hundred thousand years, if you’re really serious about that, you come face to face with the same general issues that the Singularity raises, and that is issues of identity and mind.
I don’t mean this as pessimistic, and I certainly don’t mean it to put down the idea of living for a very long time. But it raises the issue that, in a very cool way, we have come to a point where we can talk with some realism about getting the things that humans have always wanted so much. And actually facing that up close, and seeing that we can do it, pushes optimism to the point where it is, not unreasonably, something that makes people nervous.
I listened to a talk where you mentioned that one of the drawbacks of the space program is that it would give a lot more people what amounts to WMD capability. Could you talk about that?
If you google my name and the phrase “What if the Singularity does not happen?” that was in that talk. And I’m very proud of that talk, partly because I think that for scenario planners and science fiction writers in general, it’s always good that if you have some idea about what the future’s going to be like, you also work out a scenario where it doesn’t happen, and try to explain plausibly why it might not happen. And actually, one doesn’t have to scratch that talk very deeply to see that it’s the background for my novel A Deepness in the Sky. Most of the latter part of the talk is about how important space travel is for human survival, but there is also the fact that, in the short term at least, when all our eggs are still in one basket, namely on the surface of the Earth, being able to get something up to orbital speeds gives it a lot of kinetic energy, and those levels of kinetic energy are—depending on the mass involved—comparable to some pretty serious weapons that could do us grief at least at a city level.
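The kinetic-energy comparison checks out on the back of an envelope. The figures below are standard round numbers I am supplying, not from the interview: low Earth orbit speed of roughly 7.8 km/s, and TNT releasing about 4.184 MJ per kilogram.

```python
import math

# Back-of-the-envelope check of the orbital kinetic-energy claim.
# Assumed round numbers: LEO speed ~7.8 km/s; TNT releases ~4.184 MJ/kg.
V_ORBIT = 7.8e3         # m/s, approximate low-Earth-orbit speed
TNT_J_PER_KG = 4.184e6  # joules released per kilogram of TNT

def tnt_equivalent_kg(mass_kg, speed_m_s=V_ORBIT):
    """Kinetic energy of mass_kg at speed_m_s, expressed as kg of TNT."""
    return 0.5 * mass_kg * speed_m_s ** 2 / TNT_J_PER_KG

# One kilogram at orbital speed carries the energy of roughly 7 kg of TNT.
print(round(tnt_equivalent_kg(1.0), 1))

# A hypothetical 100 m rocky asteroid (density ~3000 kg/m^3) at the same
# speed: mass ~1.6e9 kg, energy on the order of 11 megatons of TNT.
asteroid_mass = 3000 * (4 / 3) * math.pi * 50 ** 3
print(tnt_equivalent_kg(asteroid_mass) / 1e9)  # megatons of TNT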
So I think actually, as with all technologies, there are dangers and downsides. I would say these are relatively mild. Probably if we do get space flight there’s going to be rules of the road for anything inside cislunar space. And there’s going to be people watching pretty carefully, especially objects that are very massive. Anybody who sends an asteroid into cislunar space I think is going to be watched very, very carefully, because there you’re getting up to a level of kinetic energy weapon that would do serious damage to everybody on Earth. I have a small theory that this is one reason why space travel development has gone slowly, in that it gives a military advantage in an unclear way, and the top players were not interested in poking that particular gorilla, so they just settled for very much slower progress. I think we are entering an era now where we will see a renaissance in space flight. I hope it’s not a military renaissance, which would do the job but would probably raise the risks of the sort of threat that you are talking about. And ultimately, of course, having self-sufficient settlements off Earth is one of the most important insurance policies that the human race can have. Since we don’t know about any life anywhere else in the universe, one could also regard it as a life insurance policy for life itself in the universe.
Another thing in that lecture that really struck me is that you seemed fairly optimistic about the potential for human civilization to rebuild itself following a complete collapse. I’d always imagined that since we’ve already extracted all the easily obtainable oil and coal and so on, that would be very difficult. Could you talk about the issues with that?
That’s a really important point, how difficult it is to come back from a civilizational collapse. I’m going to say some optimistic things here, and I don’t mean them to trivialize what happens if you had a civilizational collapse. I mean, if we had a civilizational collapse, even a fairly mild one, you and I would almost certainly be dead. And a serious collapse that involved most of the people dying would obviously do that to most of the human race. It’s just absolutely ghastly. On the other hand, I think that coming back would actually be a very big surprise. The difference between us and us, say, 10,000 years ago . . . there are obvious differences, like the level of our technology. But there’s another, more important difference, and that is, we know it can be done. I think the human race wandered around for tens of thousands of years sort of bouncing from one stupid, mean-spirited solution to another, because we had no idea what could be done.
Now, one aspect that you brought up was how we’ve mined all the easily accessible stuff. I disagree with that, with one exception—fossil fuels. I agree when it comes to fossil fuels. But almost every other resource—well, actually, I should also say that if we had a really bad collapse and managed to destroy the ecosphere, that’s another resource that would be hard to get back. But the stuff that we mine otherwise, we have concentrated that. I imagine that ruins of cities are richer ore fields than most of the natural ore fields that we have used historically. And not only at the level of ore, but at the level of all sorts of technological things. Just pre-built steel beams in large cities are all over the place, and they’re quite hard to make. If you really got knocked back a long way, they’re quite hard to make. With higher sorts of technology, it becomes more and more debatable whether it would still be working, but it’s obvious that a lot of bulk technology is just there for the picking up. And this would make things go very, very fast when combined with the notion that we’d know what’s going on. Depending on how far we got knocked back, we’d have lots of detailed knowledge, even humans that remembered what things were like. Although technology built from scratch by people who not only had no idea about technology but no idea that it could even be done, in a world where there were no ruined cities . . . yeah, that would be something that would be very problematical to happen in any near-term sort of way.
I had a very interesting chat with [science fiction author] David Weber a few years ago. We were wandering around the American Library Association dealer’s floor, chatting about this exact issue, and I found that actually David Weber had a point of view that I have come to subscribe to, which is even more optimistic. His assertion was that human population could be a long time coming back, just because of human biology, but he felt that if we did not get wiped out, if there were humans left afterward, that there would be areas on Earth at 1800 to 1900 levels of technology within one human lifetime of the crash, and I’ve thought about that a lot, and I can see how it fits with the rest of the argument that I was peddling but that I didn’t have quite that much optimism for. Now, having said all that, I am afraid that it might lead some people to the conclusion that I’m saying, “Oh, there will be a bad day or two, but don’t worry about those disasters, you know, we’ll muddle through and be back as good as new before you can say ‘Jack Robinson,’” and I am not saying that. First of all, there are disasters that could kill everybody, and there’s also just the level of destruction that we are talking about, and the level of human tragedy, and the tragedy for the earth. Looking at the universe as a whole, furthermore, it is entirely plausible that there are disasters that nobody ever climbs out of. And so I would say that I am just as concerned about disasters as anyone. I have this region of the problem that I am more optimistic about than some people, but overall, avoiding existential threats is at the top of my to-do list.
Within the science fiction field, two of the concepts you’re best known for are the idea of the “zones of thought” and the idea of the “gestalt-sentient species.” How did you come up with those ideas?
Both the zones of thought and the Tines group mind critters started out in the same milieu. The zones of thought were my attempt to get around the limitations that it seems to me the Technological Singularity imposes on us science fiction writers. And the magical assumption—and it is a magical assumption—about the zones of thought is that superhuman intelligence is simply impossible in certain parts of the galaxy. And then, as sort of a fillip, I added two other zones. One was an intermediate zone in which superhuman intelligence was not possible but faster-than-light drives were. So in one universe I was able to have three or four different subgenres of hard science fiction. One is about the Technological Singularity, one is about faster-than-light travel, and one is where faster-than-light travel is not possible. Then there was a fourth zone, which is essentially intractable, and that is where even human-level intelligence is not possible, and that’s the Unthinking Depths. So that gave me a nice single universe that I could have accomplished otherwise only by doing it as a progression in time, as technology improved, different things becoming possible.
The Tines were not really to solve a problem like the zones were. The Tines grew out of my idea box—as ideas occur to me I write them down, and one observation that I made a long time ago, when I read science fictions stories, I noticed that there were all sorts of science fiction stories about group minds. The Borg was not the first such. They go back to probably the beginning of the 20th century, and they were very big in Star Maker by Olaf Stapledon. But one thing I noticed is that these group minds usually involved very large numbers of members. The individual members might be of human intelligence or they might only be of animal intelligence, but the ensemble was actually a very large group, and I noticed there were hardly ever any group minds where there were three or four or five members. It definitely had been done—for instance, Poul Anderson had a novel, I think in the Flandry series, that involved a race where each individual is actually from a different species. There was an avian type, and an herbivore type, and I think there was an ape type, and it took the three of them to make a single person.
That may be the only such story that I remember, at least at the time I made the observation, so that had been lying around in my idea box for a long time, and I decided to use it, and I think the great piece of good luck—from a purely writer standpoint in using the idea—was I decided to make the group members from a species that was at least vaguely doglike. So that meant that I had a lot of leverage with what we humans are already familiar with. We’re familiar with dealing with dogs as individuals, and we’re familiar—less familiar, but somewhat familiar—with dealing with dogs as part of pack-like groups. So an awful lot of stuff sort of came along with that idea, and I did not have to further explain those sorts of things. They were sort of already rooted in the consciousness of most readers. So adding the notion that the pack itself was intelligent meant that a whole lot of things were very, very easy to do, and lots of language was easy to use in terms of packs and in terms of group behavior.
Your latest novel is called The Children of the Sky. What’s it about?
The Children of the Sky is a sequel to A Fire Upon the Deep, and when I say “sequel,” I mean sequel as that term is understood by most people nowadays. I’ve sort of made a career of writing strange sequels, like sequels that take place 50 million years later, or sequels that take place 10,000 years earlier, and things like that. This is really a canonical sequel—it takes place two to ten years after the end of A Fire Upon the Deep. It has many of the surviving characters from A Fire Upon the Deep, and it follows along with their problems. It’s not giving anything away, but one disappointing thing about it is that it really doesn’t get into space. It’s all on Tines World, and it’s about the travails of the refugee children, who have now all been revived—almost all the survivors have been revived from the refugee ship in A Fire Upon the Deep, and so it follows their adventures along with these pack-minded creatures called the Tines.
Are there any other new or upcoming projects you’d like to mention?
I’m trying to decide what is the right next thing to write. I’ve gotten quite a bit of feedback from people who want the sequel to the sequel, that is, the sequel to The Children of the Sky, and I do have ideas for that. I also have ideas for near future things on Earth, which tie in more to the sort of things that we’ve been talking about earlier in this interview. Every time I turn around now, you know, it’s 2012! We are going into the middle of things, and maybe it’s my imagination, but I think there are all sorts of things that are visible now that were not so visible before, and I think that there’s all sorts of really cool science fiction that folks could write, and I hope to be one of those folks.
Spread the word!