Daniel H. Wilson is a New York Times best-selling author and contributing editor to Popular Mechanics magazine. He earned a Ph.D. in Robotics from Carnegie Mellon University in Pittsburgh, where he also received Master’s degrees in Robotics and Machine Learning. He has published over a dozen scientific papers, holds four patents, and has written seven books. Wilson has written for Popular Science, Wired, and Discover, as well as online venues such as MSNBC.com, Gizmodo, Lightspeed, and Tor.com. In 2008, Wilson hosted The Works, a television series on The History Channel that uncovered the science behind everyday stuff. His books include How to Survive a Robot Uprising, A Boy and His Bot, and Robopocalypse. He lives and writes in Portland, Oregon.
This interview first appeared in The Geek’s Guide to the Galaxy podcast, which is hosted by John Joseph Adams and David Barr Kirtley. Visit geeksguideshow.com to listen to the entire interview and the rest of the show, in which the hosts discuss various geeky topics.
How did you get into robots, and did reading science fiction play a role in that?
Absolutely. I grew up reading lots of science fiction. My dad’s a swimmer, and we had this weekly routine where we would go swimming every Saturday at the Y, and then we’d go to Three Bees used bookstore in Tulsa, where I grew up, and I would just take the books from the week before and trade them in for new books, and I read basically anything I could get my hands on, and there were lots of robots in those. And then when I got a little older I studied computer science as an undergraduate, and the first thing I discovered was genetic algorithms, and then on a larger scale I found out about artificial intelligence and machine learning, and from there decided that that was just about the coolest thing ever, and that it was pretty amazing that you could really study that stuff for real and for a living, and so then I went to grad school for robotics.
When you’re doing a degree in robotics, what are the courses and what sorts of projects do you do?
I went to Carnegie Mellon, where they have the Robotics Institute. It’s inside the School of Computer Science, and you can specialize in all kinds of stuff. You can be the electrical engineer sort of person that really builds the intelligence into the physical form of your robot, like building really efficient legs, or you can just do the math, the artificial intelligence and the brains, and that’s really what my research was in. So my thesis topic was about building smart environments that are able to monitor elderly occupants in order to keep track of their functional decline over time, so you basically know how well they’re doing and when they need help, when a person needs someone to come clean or stuff like that. Your initial coursework includes tons of statistics, tons of math, but also autonomous multi-robot systems, and kinematics, and mechanics of manipulation, and tons of AI, tons of machine learning.
Your first book was called How to Survive a Robot Uprising. What are a few basic survival strategies when facing off with robots?
Go for the sensors. They’re usually the most vulnerable and exposed parts of a machine, and if the machine loses those parts then it can be very difficult for it to continue functioning. And I think the other thing is just to understand robotics, and understand robots and how they work. Once you get into the mind of the robot it’s a lot easier to figure out how to defeat your foe. Of course, I don’t really think robots are going to kill anybody, and I only use How to Survive a Robot Uprising as a painless delivery method for knowledge about robots and robotics.
Robot uprisings are obviously a classic theme in science fiction. What does your new book Robopocalypse do that’s new and different?
I try as much as possible to pay attention to how this would work from the robot’s perspective, without a lot of the inherent human narcissism that comes when we think, “Oh, obviously any artificial intelligence would be totally consumed with either destroying humanity or emulating humanity.” You know, it’s either Data or Terminator. You see that a lot, and I think that’s fine, because that’s what people are interested in. We’re interested in people, really, and robots serve as a warped reflection of ourselves. But looking at this from a roboticist’s perspective, I had a lot of fun with Archos, the big bad AI behind this. His origins are fairly standard, it’s this Singularity scenario, but with what happens after that, I tried to take it to a unique place, and so Archos has pretty complex goals, and one of the main things Archos figures out quickly is that life is really important, and that if you look around there’s not a lot of it, and there’s no guarantee that there’s any more out there, and there’s just so much information locked up in the DNA, and the patterns and the behaviors and everything that’s part of life, so it starts to preserve life a little bit, and there’s not really a complete environmentalist message there—I’m not prescriptively trying to tell people that we should save the earth or anything—but I think that’s what an AI would be interested in. I think it would be interested in preserving life and figuring out how it works.
The other thing that Archos does is by the time you get to the end of the book you start to realize—because I’m not really in-your-face explicit about what Archos is planning, he’s the bad guy, there’s no point at which he just tells you what he’s doing—but what you start to realize is that Archos is interested in creating a scenario in which human beings and sentient robots are living side-by-side as equals, and human beings don’t just give each other human rights. Typically humans have to fight each other in order to earn human rights. Either that or they have to show that they’re completely crucial and necessary in order to earn human rights. And I’m talking about all the rights movements that have ever happened, and there have been many, and there are many that are ongoing now, and will be in the future too. So Archos realizes that in order to earn a place at the table with humans, it’s going to require a fight, and a demonstration of the fact that these machines are at least our equals.
To what extent are the robot designs in the book based on things that actually exist, and how much of it did you just dream up yourself?
I started out in the near future, so a lot of it’s extrapolated from stuff I’ve experienced. I’ve had the really fun opportunity, for instance, to ride in the back of an autonomous vehicle at Carnegie Mellon while it’s driving, and just see the steering wheel twist back and forth with nobody in the driver’s seat. I’ve worn the exoskeleton at Berkeley Bionics. This book starts out in the near future, and the technology that turns on us is very familiar technology. You know, there aren’t enough missiles in the world to kill everybody, this isn’t a military thing, this is about the technology that we use every day stopping and then actively turning against us, so people are getting misleading phone calls from what turns out to be text-to-speech synthesis claiming to be their family, and the machine is really manipulating people into very dangerous situations. Cars are driving off cliffs, driving into the ocean. So the initial round of technology is really based on what we already have. From there it gets more complicated. The machine starts evolving in its own ways. And one thing that I took to heart while writing this was that I don’t want to explain everything to the reader. Sometimes when you’re telling it from the perspective of your character, the character doesn’t know what’s happening or what else is out there, and so you’re catching glimpses of this big, complicated, chaotic world, and all the machines are constantly evolving, so even if you see the same machine twice, if you’re seeing it later on in the book you’re probably seeing a more evolved version that’s different than the one before. So I really tried to focus on making a big, complicated, constantly evolving ecosystem of robots.
Could you talk about the unarmed military android? That was really interesting.
Yeah, I had a great time writing that chapter. So in the beginning of the book there’s something called a “Safety and Pacification Unit” that’s active in Afghanistan in a peacekeeping role, before any of the robots have really gone nuts, and I’m not that interested in military robotics, I mean, they’re typically just mobile guns, which is the best way to kill people, you know, that’s probably the best solution to that problem, so I find that kind of boring. What I really wanted to figure out is how you would ever have a humanoid robot in a military domain. And the only reason to have a humanoid robot from my perspective is to take advantage of what the humanoid form factor gives you, which is you have a great interface to other human beings, and you have a platform that’s really well suited to operating in a human environment, because it’s going to be able to walk through doors and sit down in vehicles and use our tools and things like that, so from that perspective I thought to myself, well, really the only time you’d ever see a humanoid robot in a military situation would be if it’s playing the role of a sentry—sort of a mobile peacekeeper—that’s just walking around obeying local customs, speaking a local language, memorizing faces, greeting people, becoming part of the community, and just really being there as a sentinel to observe, and if something bad is happening, to call in the real troops if necessary. So that’s the sort of platform I described, and I’m actually pretty curious what people think about that.
It was funny how the kids spray-painted the robot and hit it with rocks and stuff like that.
Yeah, I absolutely think that that kind of stuff will happen. I mean, so back in the United States in the book, people have started to have domestic robots—so there are domestic humanoid robots, like a butler that you could send up the street—and thinking about that in a really realistic sense, especially with DreamWorks also creating illustrations of what I was writing while I was writing it, you realize that this is a real consumer product. I mean, you can’t sell someone a toaster that’s going to accidentally burn them or kill them or something, so you also have to make these humanoid robots very safe, and if they’re going to go out in public they’re going to have to have license plates on the back, and if they’re going to be interacting with people then people might put graffiti on them, and depending on the neighborhood where it walks someone might try to steal it, or put a sticker on it, or spray-paint it, or shove it around, or shove it out of the elevator because they don’t want to be in an elevator with a robot, or cut in front of it in line. I mean, just thinking about all the social interaction that people—dirty, grimy everyday people—are going to have with humanoid robots, that was totally fun for me. And also I’ve had those experiences a little bit where you do interact with robots socially, because I’ve been places where there are robots walking down the halls, and rolling into the elevator with you, and sitting at the front of the buildings like the robo-receptionist does in Newell-Simon Hall. So to be able to put some of that in there was very fun.
This book was bought for film before it was even finished. How did that come about, and how did that affect your writing process?
So first it was optioned. You know, there’s a big difference between the film rights being bought and them being optioned, so that happened the day before I sold the book to a publisher. Then DreamWorks called and said, “Someone leaked a sample of this to us, and we’re really interested in it, and we’d like to buy it today, please.” And that never happens, so my people kind of knew that Spielberg had to be behind it, because there’s nobody else that can drive that kind of deal and make it go so fast, because if you think about it these managers are selling all these different properties, and they have personal relationships with all the studios, and to just let one studio swoop in and take it off the table before you even show it to another studio is a big bummer. It makes the other studios unhappy, and so for that to happen we knew that there was an eight-hundred-pound gorilla involved somewhere.
And the next week I did find out that it was Spielberg, and I went out there and met everybody, and talked robots for hours, all afternoon. It was completely surreal. I met Drew Goddard, who’s the screenwriter, and I met Steven, and we all talked about what we loved and what we didn’t like, and what we thought was most promising about the book, and I had an annotated table of contents, but I only had a hundred pages of the thing written, so there was a little pressure to write a good book, but I didn’t really feel the pressure because it was so obvious that Steven was really into it. He said something like, “This could be Saving Private Ryan with robots,” and I’m just thinking, “This is going to be the coolest freaking movie ever.” That’s the only time I met with them, it’s not like we’re best friends or anything, but I really appreciated that, and afterward I ended up talking to Drew quite a bit, and Drew was saying stuff like, “Hey, Daniel, I really could use the next hundred pages of Robopocalypse, because I’m already writing the screenplay and I don’t have anything except this hundred pages and your table of contents, which is changing, and also Steven is making stuff up all the time, so the sooner you come up with something the more of this is going to be from your book.” And also when I would give him stuff he would let me know how it was working, and why. He’s a storyteller just like I am, and just having to explain my reasoning behind what was happening in the story as I wrote it, that was hugely valuable for me, because Drew really wanted the stuff, so that was driving me to write fast, and he was really pushing me about whether it made sense, and what the logic was, and basic logistical stuff, because he had to write it too, and so he wanted to get some problems solved up front. They were also making artwork this whole time.
Guy Dyas, the production designer, who was just up for an Oscar for Inception, had a team that was illustrating sequences and doing pre-vis the whole time I was writing, and he would ask me, “Hey, your Big Happy, what does it look like, more specifically, and why does it operate that way?” And he would just be going into super detail about the locations, the scenery, because he’s having to really draw it, and so I found that a lot of times, in a place where I would have just sketched out a bare-bones idea, after talking to Guy I would create all this imagery, and then I’d go back and add it all into the book.
Did you intentionally set out to make it read like a horror novel, and if so are there any particular horror writers who influenced you?
I didn’t set out to make it a horror novel. I don’t really read a lot of horror, I mean, I read Lovecraft and stuff like that, but I’m not particularly huge into the horror genre. What I found was just that—at the beginning of the book especially—things needed to be psychological in order to make them tense and scary. It wasn’t about huge monster robots at that point because there couldn’t be any. They hadn’t had time to evolve yet, and I wanted to make sure that the whole thing was completely consistent. I wasn’t going to throw in big robots where they didn’t belong, and so I had to rely on other means of amping up the tension early on in the book, and that’s kind of a pet peeve of mine, actually. You know, when you think about horror components that appear in sf, I think that sometimes people depend on old horror standbys instead of actually honoring the sf, and one example of that kind of kills me. I love Terminator movies—I love all of them, actually, which, you know, judge me—but one thing that I’ve never been able to get over is the fact that all models of Terminator love to stalk over to their prey, then instead of just picking up whoever it is they’re trying to kill and crushing them—I mean, they’re immensely strong—instead they punch them really hard and make them go flying across the room, or they throw them across the room. Why are you going to do that? You’re a robot! Your goal is to destroy your prey, you don’t throw the prey further away from you. But the reason they’re doing it is so they can stalk toward the prey again slowly to create drama, but that’s a horror convention. That’s what Jason does. That’s what Freddy does. That’s not what a robot would do.
Actually in Terminator there’s a memorable scene where the humans are fleeing through a factory and one of them turns on all this equipment so that the robot will be confused by all the motion. Would that actually work?
I think that would absolutely work. That’s a great idea, decoys. One thing that’s really hard for robots is context, so for instance if you throw a bowling ball at a robot or you throw a balloon at a robot, just from looking at this round object flying through the air, the robot probably doesn’t have the full context of knowing about bowling alleys and birthday parties, and it just sees an object coming, and therefore if it’s going to play it safe the robot will probably assume the worst and assume that it’s a bowling ball, and so by creating distractions like that I think that you can really hogtie a robot.
Why is it that you’re not worried about a robot uprising, and are you familiar with organizations like the Singularity Institute that are focusing attention on that issue?
I’m familiar with the Singularity Institute and the people who are interested in creating friendly AI and stuff like that, and the thing is I’m not worried about the Singularity happening anytime soon, which automatically precludes a lot of that stuff. What I am interested in though is people building safe tools, safe robots, and robots are really complicated, especially when they’re autonomous and multi-purpose. You know, it’s fairly easy to make sure that a single-purpose product does its one job and doesn’t screw up, because you have a very constrained environment for it. I mean, if you’re designing a razor for someone to shave their face, well, yeah, that could be dangerous, but you know it’s going to be used in the bathroom, and you know what it’s going to be used for, and the situations that you’re going to have, so you can plan for that. But a humanoid robot that just walks down the street? I mean, imagine how complex that is. And behavior that’s autonomous? You’re not going to be able to cover all the potential dangerous situations that that machine could be in. It’s a huge challenge for roboticists, and that’s just the physical side, because there’s also a whole ethical side that springs into action whenever you’ve got robots that are autonomous, and also robots that look like people or animals. So you don’t want a generation of kids who are abusing robot dogs that are indistinguishable from real dogs, because they’re going to get a skewed version of how ethics work, and it might screw up their empathy. And then also when you’ve got something that’s autonomous, and it’s making decisions on its own, and it does something bad, well, who do you blame? There’s all this stuff to work out from that perspective as well. So it’s a hugely complicated problem to build autonomous tools that are safe. And it’s a very concrete problem, and it’s a near-term problem that people are solving right now, and have solved for all sorts of consumer products for years. 
But anything as grandiose as building friendly AI so that it won’t destroy us when it inevitably comes online and gains superhuman intelligence? Nah, it’s not something that I think that people should be devoting too much practical thought to, although philosophically it’s really interesting.
What are some of your favorite examples of robots in fiction or film?
“For a Breath I Tarry” by Roger Zelazny is a short story set in a posthuman world; all the humans are gone. There are these super-intelligent machines basically taking care of everything, each responsible for part of the globe, and one of them is called Frost. And what it does is it eventually transforms itself into a human being, just out of fascination, not from any desire to worship human ancestors. It’s more like curiosity. And there’s this moment when this machine wakes up in a human body and realizes from a human perspective that everybody’s dead, and that the world is without meaning, because human beings give the world meaning by existing. I mean, if a tree falls in the woods and no one’s there to hear it, then who fucking cares? And when he comes online he just starts screaming, because he realizes the despair and the absolute meaninglessness of everything that’s happening, and it’s that crossing of the threshold from robot to human that’s so fascinating to me.
You know another robot I really like? I like Agent Smith from The Matrix. He’s really interesting to me because of the fact that he feels trapped by being around human beings. He doesn’t like that we think with meat, and he doesn’t want to be near us, and he wants to sort of go back to this pure intellectual existence. Like, he’s living in a posthuman world where there’s no longer any need to have a material body, and he finds the whole idea repugnant. I can relate to that. Being a human is kind of gross.
What are some of the recent developments in robotics that have most impressed you?
I like this Watson machine. I think the natural language problem is the hardest problem to solve in AI—just having a conversation with people. Mainly because you can’t get all of the context of what it means to be a person. I mean, when you speak the onus is on the listener, right? The speaker says one word, and then the listener’s brain lights up like a Christmas tree, and fills in all the details, so you say “tree” and the other person just thinks of all the stuff that has to do with tree-ness. And for robots to join that party is super-exciting, because everything else will come in time, but that’s the one where it’s not clear that we’ll ever really get there and have a machine that can just communicate with a person as another person would—you know, pass the Turing Test and all that stuff. So seeing Watson make a stride forward into a much more complicated sort of speech interaction with human beings was really exciting, and I hope that he keeps making progress. I mean, right now it’s kind of boring because he just sits there and answers Jeopardy questions. In terms of viscerally exciting robots, every new version of BigDog from Boston Dynamics is like that. BigDog exhibits natural grace; when it walks it makes you think of living things. And I’m very excited to be around to see robots shed their robotic identity, because kids that are born today, if you say, “Do the robot,” they’re not going to get it. They’re going to be like, “What are you talking about? Robots are fluid; robots move more smoothly than animals; robots are incredibly precise and graceful.” And of course we don’t think of them that way. But I’m excited for them to come over to our side and start exuding natural grace.
You have some short stories coming out in John’s anthologies Armored and The Mad Scientist’s Guide to World Domination. Could you tell us about those?
Sure. The first story, “The Executor,” is basically about a mad scientist who’s very wealthy, and when he dies, instead of leaving his money to his kids, he actually creates an immortal AI in his own form with his own emotions, and based on his own personality, and it’s called “the executor,” and it’s the executor of his will. What happens though is that over the course of several hundred years it builds this huge fortune, and everyone who’s related to this guy has this opportunity to come in and try to solve the riddle of the sphinx and claim the money, but what happens is you end up with this whole Dune-like dynasty of families that are all potential heirs to billions and billions and trillions of dollars, and so they’re all at war with each other and trying to strategically stop each other from being able to make a claim for this fortune, and so it’s the story of a guy who has been on the run for his whole life from his own family, because they’re bloodthirsty, and then he has a daughter who’s also an heir, and in order to protect her he has to try to claim this money so that this prolonged war will end. There’s a little bit of Philip Marlowe from the Raymond Chandler noir books, a little bit of Lone Wolf and Cub, some of the samurai ethos; it’s just really fun. I mean, I had a great time with that one.
And then the other piece is for the Armored anthology, and, you know, power armor in science fiction is almost always doing the same thing—it’s making someone into a super soldier, usually, and so I really got to thinking about walls and about the idea of armor separating you from the environment and making you stronger, and I really wanted to turn the whole thing on its head thematically, and so in my short story, “Helmet,” you have these “helmets,” these power-armored soldiers that come into town and kill people, and they’re totally faceless and nobody understands why they’re doing it, or what their motivation is, until the protagonist is kidnapped and he’s put into a helmet against his will, and he realizes that the people inside the helmets are trapped there. They’re not able to move any part of their bodies except for their faces, and the powered armor is moving their limbs for them. And they’re part of this government that basically believes that if you go through the motions of a crime then you are morally responsible for that crime, so what they do is they have these helmets do all their dirty work, murdering and everything nasty that they need to do, and then at the end of the day they execute the helmets and consider that the crimes are punished, because these people who are trapped inside the helmets have been punished for the crimes that they committed with their bodies. It’s just sort of an example of how walls can also control you instead of just protecting you, and how an armor with a mind of its own can take over your life. Instead of amplifying your abilities, it actually makes you impotent and completely takes away all of your ability to move or act in the world, and that was just a really fun theme.
Finally, are there any recent or upcoming projects that you’d like to mention?
Right now I’m writing Amp for Doubleday, and the film rights sold to Summit with Alex Proyas tentatively set up to direct, so that should come out next summer, and that’s about a near-term future where there’s a human rights movement, because people have started integrating technology into their own bodies, and some people are into it and some people aren’t, and I’m really fascinated with our relationship with technology and how technology keeps moving forward, and since we depend on it so much we have to keep moving with it, and I think this is going to be a significant hurdle when technology starts coming into our bodies, and has to come into our bodies for us to get the benefits from using it, and so I imagine that a large percentage of users will balk at that, and it would be interesting to see how we get over it. And the other thing I’m doing is I’m screenwriting a remake of the ’80s movie Cherry 2000, which has been a real hoot. It’s about a love doll that a guy lives with. It’s kind of a flawed movie, but very fun, and my version I think is pretty cool.