Science Fiction & Fantasy




Interview: Marc Goodman

Security expert and futurist Marc Goodman has spent over twenty years in law enforcement, working with organizations such as Interpol, the UN, NATO, the LAPD, and the US government. He’s also the founder of the Future Crimes Institute and an advisor for Singularity University. His new book is called Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It.

This interview first appeared on The Geek’s Guide to the Galaxy podcast, which is hosted by David Barr Kirtley and produced by John Joseph Adams. Visit the show’s website to listen to the interview or other episodes.

Your new book is called Future Crimes; tell us how this book came about.

I’ve worked in high-tech crime investigation for many years, and I’ve watched the flow of technology and the pace of Moore’s Law, and I’ve been fascinated by how criminals are focused on all these next-generation technologies. I wanted to write a book explaining what’s going on in the digital underground to folks so they could learn more about what’s happening and how to protect themselves, their families, businesses, and the country.

A lot of the stuff you talk about in the book is actually happening already.

Yes. You know, it depends on the audience; some will be very well versed in the idea of hacking drones, and perhaps that’s the Wired audience. But the general public may have no idea that people are hacking robots, using AI to script denial-of-service attacks, and messing around with synthetic biology for bio-attacks.

I like to think that I’m pretty up on this stuff, but I was frankly shocked by a lot of the things I discovered. Anyone listening to this is going to learn some things they definitely didn’t know before.

There is a tremendous amount going on out there that is truly at the cutting edge of science. The fact that some of the Colombian narco-cartels have five-million-dollar R&D budgets for robotics is just one example; that narcos in Mexico are going to colleges of aeronautical engineering to hire drone engineers would surprise most people. AI, synthetic biology, robotics, big data, the internet of things—crooks, terrorists, rogue governments, and corporations are trying to exploit these technologies to the detriment of the general public.

Another thing that surprised me is that this is a book about criminals, but you start out talking about companies like Facebook and Google. Why start talking about these presumably legal companies?

I, because of my own background in law enforcement, was very much intent on writing a book about crimes, criminals, terrorists, and the like, and as I got more into the research I needed to look at where a lot of this data was originating and who else was using it. That very clearly pointed me in the direction of some companies, particularly some social media firms. The long and short of it is that all this data we are producing eventually leaks. Most of your listeners will be familiar with Moore’s Law, and in the book I created my own law, which I jokingly call “Goodman’s Law.” Goodman’s Law states that the more data you’re willing to produce, the more organized crime is willing to consume. For most people, a lot of that data is produced via their social media streams or GPS locations. Many people don’t understand the dynamics of why Facebook and some of the other social media tools are “free”; the business model of those companies is to take the data that you provide them willingly—and the other data that they’re able to deduce or gather through your camera, the microphone on your telephone, your GPS, your accelerometer—and divide users into small segments they can sell at the highest rate to advertisers. That’s one side of it. The other side is how much of your social media data leaks to criminals; I point out in the book that Facebook, according to its own statistics, has 600,000 accounts compromised on a daily basis. For a lot of these cyber threats, I had to go back to the data and see how it ends up in the hands of organized crime and others, and much of it leaks from third-party aggregators.

To take one example that I found incredibly sleazy: you talk about how the dating site OkCupid, when you answer very personal questions like “Do you want to have kids?” or “How often do you drink?”—they’re actually taking that and selling it to companies that are going to pass it on to potential employers and the government. That’s their business model.

Let’s start with two of the big problems with this whole data-sphere, and those have to do with “free” and “Terms of Service.” Everybody thinks the internet is free; Facebook, Words With Friends, Angry Birds, Gmail—none of it is free. You’re paying with your data and your privacy, and at the end of the day Facebook makes, on average, between six and fourteen dollars a year on the average American. That’s how much they’re able to monetize each user by selling off their data. I’d rather give Facebook ten bucks and say, “I’d like to keep my privacy.” They get away with this through their Terms of Service, which are ever-growing; I call them “Terms of Abuse” in the book.

OkCupid was asking people to fill out dating profiles, but then they got into much more personal stuff—sex, sexual orientation, and drug use. And people might think, “I use meth on the weekend, so I want to find somebody else who enjoys meth,” but what they don’t realize is that the minute they say they use cocaine—or meth, or ecstasy—OkCupid was dropping a cookie on their hard drive and taking that data and selling it to third parties.

We found out about this through a researcher who’s now at the Federal Trade Commission. Those third-party companies are hired by employers to do background checks; you may apply for a job, not get the job, and never know it’s because OkCupid passed along that data point to an entirely unregulated data-broker industry. A multi-billion-dollar industry. And they can sell that data to the government and, if you happen to get into a DUI or car accident, don’t be surprised if somebody subpoenas your admitted cocaine use down the line. And people think, “Well, I don’t use my real name on OkCupid,” but there are dozens of companies that do data de-anonymization and are very clear that, even though your screen name may be “Charlie1234” in San Francisco, everybody can work backwards from there and figure out exactly who you are based upon your IP address, MAC address, and other data that these folks gather.

You mention the Terms of Service, and I want to touch on that because I’m a writer, and a lot of our listeners are writers. You say that, according to the Google Terms of Service, if JK Rowling had composed the Harry Potter books in Google Docs, Google would own all the rights to those books.

That’s what Google asserts and that’s what you agree to when you sign up. “I have read and agree to the Terms of Service” is the biggest lie on the internet. Facebook’s Terms of Service, when they started out, was a thousand words; they’re now at nine thousand. The US Constitution, by comparison, is only 4,400 words. The largest Terms of Service in the industry are PayPal’s: 36,000 words. Shakespeare’s Hamlet was only 32,000. If you added up all the Terms of Service the average American came up against on an annual basis, it would take seventy-eight days of reading around the clock to read through all of them. And they tell you all the different ways the companies can take advantage of you, for both free and paid services. The hundred dollars you pay AT&T for your cell phone service isn’t enough; they take all your cell phone traffic—who you’re dialing, what you’re surfing for on your cell phone—and they’re selling that to third parties.

Back to JK Rowling: When you sign up for Google, for any of their services, whether it be Google Voice or Gmail, you have to sign a waiver that says, “I hereby grant Google authority to read all my emails; to reproduce, copy, translate, perform, dramatize in this medium or any medium to ever be discovered . . .” When you use Google Docs to write your story or play, to draw your art, Google has claimed, and you have agreed to grant them, an unlimited license to use that data. So yeah, had JK Rowling written her books in Google Docs as opposed to Microsoft Word, in theory she would’ve granted Google the global license for her fifteen-billion-dollar empire.

I studied law as an undergrad, and there was this idea of unconscionability in adhesion contracts, which is what these Terms of Service are: Would a court actually enforce something that’s so plainly unfair?

It’s a great question, and it’s one that people don’t ask enough. What I found is that the law is all over the place; in some cases they are found to be contracts of adhesion and unenforceable, but on many other occasions they have been found completely legal. This goes back to the old software shrink-wrap; by opening up the package that the CD or DVD came in, you agreed to all of this stuff. And while Google has not specifically asserted those rights—and I don’t know how that would play out in court—there are many companies with similar Terms of Service that have had them upheld.

Just to show how ridiculous these Terms of Service are, I mention in the book an experiment done by a British gaming retailer called GameStation; as an April Fools’ prank, they updated their Terms of Service to see if anybody read them, and they put in some language in the middle that said, “By clicking here, I agree to grant GameStation possession, eternally, of my immortal soul now and forever.” Seventeen hundred people that bought stuff on the website agreed to it.

If you go to McDonald’s, they can’t just put a sticker on the door that says, “By opening this door, you agree that if a member of your family dies of salmonella from improperly handled food, we’re not responsible for it.” Why is a software company allowed to do that?

I agree. That’s one of the basic questions I ask in Future Crimes. If you went ahead and bought a Ford and every three feet it stalled or crashed, the courts would not allow Ford to get away with that, but these companies do. There was a great example at a company called Nordstrom’s, an upscale department store; they started monitoring their customers inside their store. Most folks today have smartphones, most smartphones have Wi-Fi, and most people leave their Wi-Fi on and running all the time, which leaks the MAC address, a unique identifier. Nordstrom’s wanted to more precisely track their customers inside the store so they could know how much time they spent in shoes or dresses or ladies’ underwear, and they were doing micro-geolocation of the customers. They told their customers they were doing this by placing a six-inch sign at the mall entrance, in a way that nobody would notice it, that said, “In an effort to serve Nordstrom’s customers better”—which is always the clue—“we will now begin tracking your location in our stores via your mobile phone and your Wi-Fi. If you do not wish to participate in this project, either turn off your phone before coming in the store, or don’t come in.” And so, merely by walking into Nordstrom’s, you have agreed to Nordstrom’s new Terms of Service. People refer to this as the “cookie-ing of the street,” where all of the cookies we’ve had to deal with in virtual space are increasingly coming into physical space.

Speaking of geotagging and tracking your location, you mention some really horrifying ways in which criminals can take advantage of that information: You might take a picture of your kids, and you don’t know that it has the location of where it was taken embedded in the file and a pedophile could analyze that photo and identify where you live. Another example you give is that domestic violence shelters now tell women to remove the batteries from their phones because their abusive spouse could track them down, and have in the past.

There have been lots of examples of that sort of locational threat data leaking, certainly in the case of domestic violence. Not only did they tell folks to take the battery out, but they actually confiscate the phones when they go into women’s shelters; many of these shelters are in private locations to prevent those that are committing domestic abuse from finding their victims, and now they take those phones away and don’t even permit them. Even burglars are using it, too; people have posted pictures of high-end stereos on Craigslist and the geotagging data will be hidden in those photos, and burglars are showing up and ripping people off. I tell the story of a Taliban attack where they were able to blow up a bunch of Apache helicopters that were just delivered because the GPS coordinates of the photographs posted by the soldiers on the base leaked the location of the secret base.
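The location data hiding in those photos is ordinary EXIF metadata: latitude and longitude stored as degrees/minutes/seconds rationals inside the image file. A minimal sketch of the conversion any extraction tool performs (the coordinate values here are invented for illustration):

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to a decimal map coordinate."""
    value = Fraction(degrees) + Fraction(minutes) / 60 + Fraction(seconds) / 3600
    # South latitudes and west longitudes are negative by convention.
    if ref in ("S", "W"):
        value = -value
    return float(value)

# Hypothetical EXIF GPS fields lifted from a shared photo (made-up values).
lat = dms_to_decimal(37, 46, Fraction(2958, 100), "N")   # 37°46'29.58" N
lon = dms_to_decimal(122, 25, Fraction(594, 100), "W")   # 122°25'5.94" W
print(round(lat, 5), round(lon, 5))  # a street-level fix, paste-able into any map
```

That is the whole trick: a few rational numbers embedded by the phone’s camera app, and anyone with the file can recover a position accurate to a few meters.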

I had some vague sense that it was possible to hack into your computer and look out through your camera, but that this was difficult and no one would be particularly interested in watching me. But it sounds like it’s incredibly easy; you should just assume that every camera or microphone you own is recording you. And I guess there are websites where there are dozens of people being recorded twenty-four/seven that you can just watch.

Yeah; voyeur cams, stalker cams, and if you tune into these websites you can see people changing, gymnasiums, people in lawyers’ offices, detention cells in jails, dry cleaners, doctors’ offices. I forget the exact numbers, but all the cameras today are pretty much connected to the internet; forty percent of those camera systems don’t have any password on them, and another thirty percent use the default password that’s in the manual that you can google. It’s incredibly easy to get access.

The big difference is that now those cameras can be hacked with packaged malware. So you don’t need a master hacker that goes in every time and uniquely creates an account or a software tool that will break into someone’s camera; you can buy malware, or crimeware, as I call it, software that will automatically hijack somebody’s camera. That could be a baby camera, a camera in somebody’s home or on their laptop and mobile phone. We saw this happen with Miss Teen USA, a young woman sixteen years old by the name of Cassidy Wolf; one day she got an email that contained over a hundred pictures of her naked in her own bedroom. And attached to the email was a threat; it said, “Hey, you have to have sex with me online, on video chat, otherwise I’m going to post these naked pictures of you on social media and send them out to your Facebook friends.” Of course she was horrified; fortunately, she told her parents, who called in the FBI. They did an investigation into it and found that the hack was carried out by one of her classmates, a seventeen-year-old kid. And this kid was not a master hacker; he just bought some cheap software online, sent her an email, she clicked on the wrong thing and installed a keystroke logger onto her computer, and he took over her camera. What most people don’t realize about these camera hacks is that the little red or green light that you expect to see go on when you’re video recording can be disabled, so your cameras can be recording you in the background all the time.

You’re actually recommending people cover up their cameras with tape or something, right?

A sticky note, a Band-Aid, a piece of electrical tape—of course, that doesn’t prevent the microphone from recording. In the case of Cassidy Wolf, she was not somebody who was always naked in her bedroom; the camera caught her coming from her bathroom, coming out of the shower, and changing in her bedroom.

I think it’s bad enough that people can get access to your data, but . . . I guess you watch movies like Die Hard or Mission: Impossible where they can hack into traffic lights and prison cells and air traffic control; you tend to think all that’s Hollywood, that you can’t really do that. But apparently, you can.

Art imitating life imitating art; there are lots of examples of hacking critical information and infrastructures in the book. I started talking about today’s crimes so that folks that weren’t so technologically savvy could relate. We started talking about Facebook and Google, but as you fast-forward to more advanced technologies, it is computers that run all of our critical infrastructures, whether it be the electrical grid, financial networks, ATMs, 911 dispatch systems, streetlights, hospitals’ electronic health records. And here’s the startling fact: There has never been a computer system that couldn’t be hacked. Most folks, when they think of cyber threats, think, “I was a victim of identity theft,” or, “I got a new credit card.” While those can mess with somebody’s life, that’s kind of the low-hanging fruit out there. The bigger threats are against these advancing technologies and our critical infrastructures, so in Future Crimes I tell the stories of hacking automobiles and hacking pacemakers and streetcars and air traffic control, and I give real-world examples where criminals or terrorists or hackers have done all of those things.

Most folks don’t realize the extent to which the whole world is becoming a computer; we used to have paper maps, now we have GPS; we used to have movies that were on film, now we’ve got Netflix; we had music CDs, now we’ve got Spotify and Pandora. We’re going to see the same thing that happened to music and movies and apps coming to physical objects; if you look at a 1965 Chevy or Mustang, those were mechanical cars, but the cars today . . . Any car that’s rolled off the assembly line in the past few years has well over two hundred microchips in it that control the radio, the GPS, the airbags, the cruise control, the speedometer.

Recently on 60 Minutes—they finally did a story on it even though I’ve been talking about it for years—Lesley Stahl’s car was hacked; somebody was able to slam on the accelerator, slam on the brakes. A modern car is a computer that we ride in; an elevator is a computer that we ride in; an airplane is a Solaris box that we fly in. All of these devices are hackable, and as we rush towards the “Internet of Things” and the 200 billion new devices by the year 2020—according to Intel—all of those new devices are going to be hackable as well.

Is there any role here for just going back to analog systems for certain things? Does the door of a prison cell really need to be connected to the internet?

I talked about a high-security prison in Miami being hacked; the hackers were able to open up the cell doors, the prisoners got out, and gang riots ensued. Right now, before we go ahead and connect those 200 billion new devices—whether they be pets, prisoners, plates, cars, pacemakers—we should stop and think about it. We can’t even secure the stuff that we have online today—iPads, iPhones, Android phones, Game Boys, Xboxes. All of that stuff is hackable and has more security than your smart refrigerator will. I think it’s worth asking the question of what should and should not be online. There is a movement among some companies to take certain things out of the electronic realm; Coca-Cola and KFC have secret recipes for Coke and fried chicken. Those are not stored in any electronic systems; they’re written down on paper and kept in a safe. After the Snowden revelations, the Kremlin—for their secret communications in Moscow—went back to manual typewriters.

The question is not “how do we stop this technological progress?” I, like most readers of Wired, am very pro-technology; I’m a technophile. All I’m saying is that we need to give much more attention to how we’re going to deal with the privacy issues that are emerging, as well as the security issues.

You also talk a lot about biometrics, which is a fixture of science fiction: retinal scanners and fingerprint scanners. I never thought about this before, but you make the point that a fingerprint scanner has a really big drawback: If somebody hacks it, if they get a copy of your fingerprint, you can’t just change your fingerprint like you change your password.

Exactly. Somebody hacks your credit card, you get a new credit card; somebody hacks your finger, you don’t get a new finger. What people forget about these fingerprint scanners is that they’re not storing your fingers; what they’re doing is taking your fingerprint and running it past an algorithm and creating a mathematical representation of your finger in a computer. It’s just data and, like all other data in a computer, it can be manipulated; it can be hacked, deleted, changed, mixed up with somebody else’s . . . I can take your fingerprints that show you’re innocent and then the police department computer can add a felony warrant to that.
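The point that a stored fingerprint is “just data” can be sketched with a toy template function. This is not any real matcher’s algorithm, and the minutiae values are invented; it only illustrates that the database holds bytes derived from the scan, and any edit to those bytes breaks (or forges) a match:

```python
import hashlib

def make_template(minutiae):
    """Toy biometric template: a digest of extracted feature points.

    Real systems store richer structures than a single hash, but the
    principle holds: the 'fingerprint' on file is just derived data.
    """
    encoded = ",".join(f"{x}:{y}:{angle}" for x, y, angle in sorted(minutiae))
    return hashlib.sha256(encoded.encode()).hexdigest()

# Hypothetical feature points: (x, y, ridge angle).
enrolled   = make_template([(12, 40, 85), (33, 7, 190), (54, 61, 20)])
fresh_scan = make_template([(12, 40, 85), (33, 7, 190), (54, 61, 20)])
tampered   = make_template([(12, 40, 85), (33, 7, 190), (54, 61, 21)])  # one field altered

print(enrolled == fresh_scan)  # identical data matches
print(enrolled == tampered)    # a one-character edit in the record changes the identity
```

Which is exactly the risk described above: whoever can write to that record can swap in someone else’s template, or attach new data to yours.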

And the data is leaking in interesting ways; in Israel, they had a national biometric database on their nine million citizens and a disgruntled insider went ahead and stole the entire database and posted it online. And there are other ways people are hacking biometrics, particularly with photography: Recently, a photograph was taken of Angela Merkel, the German chancellor, and they were able to—just by the resolution of the photograph—have a perfect copy of her fingerprint.

A few years ago, the German Minister of the Interior—kind of like the Attorney General here in the United States—was pushing very hard for Germans to have biometric data on their national ID cards; he wanted all Germans to be fingerprinted, and the German people pushed back, particularly privacy advocates and those in the Chaos Computer Club. So when the Minister was out at a restaurant, they got the glass that he had left behind, lifted his fingerprint off of it, took a photograph, imported it into Photoshop, cleaned it up, and were able to replicate it in latex, which they included as a handout in the Chaos Computer Club magazine that went out to 5,000 people. And they encouraged their readers to leave the Minister’s fingerprint at crime scenes all over Germany, which they did.

There’s a lot of stuff in the book about 3D printing narcotics, explosives, rocket launchers, and even uranium centrifuges. It just sounds like all this stuff is going to be completely out of control.

Digital manufacturing is awesome for a lot of reasons: They sell 3D printers at Staples, and while most people don’t have them, the same way most people didn’t have laser printers or inkjet printers a few years ago, eventually they’ll be in our offices, our homes, and many people will be using them for lots of amazing tools. But they can be misused as well, and probably the most controversial has been the 3D printing of weapons through the DEFCAD Project and others. These guys have been able to print everything from the lower receiver of an AR-15 to magazines, and now you can even 3D print bullets successfully. As you mentioned with rocket launchers, as 3D printers get bigger, you can just print bigger and bigger things. We’ve had criminals 3D printing handcuff keys, police badges, and all of the intellectual property theft that people had to worry about in the digital space with the stealing of music and videos is going to come to physical objects. So whether it’s a Gucci purse or a Rolex watch, just scan it with your Kinect and import it and start printing the parts and put together a perfect replica. In the not-too-distant future, maybe a decade out, there’ll be printers that can print medicine. They already exist; they’ll just become more commonplace. Then, of course, if you can print medicine with a chem-printer, you can print illicit narcotics.

Does law enforcement—do we—have any strategy in mind to deal with people printing contraband in the privacy of their homes?

Some contraband many people might not care about. We see this huge debate right now in the United States about marijuana; some states are making it one hundred percent legal for any purpose, others are making it legal for medicinal purposes. In terms of some of the other threats that are emerging in cyberspace, law enforcement, I’m afraid to say, is left behind, and this is coming from somebody who’s spent a career in law enforcement.

Policing and crime used to be almost exclusively a local affair, so if somebody robbed a Bank of America in mid-town Manhattan, the cops would know the victim was in mid-town Manhattan, they would know that the cops were located in mid-town Manhattan, they would know that the criminal was located in mid-town Manhattan, and so you had the co-location of criminal, victim, and cops. Now, the criminal can be in Kiev or El Salvador or Chicago, and they can hack somebody in Moscow or New York. So it’s not just whether the cops are keeping up, it’s that the internet fundamentally broke law enforcement, because law enforcement does not work internationally by design; you wouldn’t want a bunch of Russian cops kicking down your door in San Francisco with a search warrant and taking away an American citizen.

One of the things I mention in Future Crimes is that we actually may have the wrong paradigm for dealing with cyberthreats. When we talk about cyberthreats, we use the language of medicine; we use the language of infections—computer viruses—to describe the problem, but we don’t use the tools of medicine to address the problem, and I think there’s a huge opportunity there: What could we learn from the world of epidemiology, for example, or public health, when it comes to these threats and apply better threat models in our response?

My goal should not be to arrest every hacker in the world. My goal should be to create a self-healing immune system for the internet, so that even if a disease or virus gets created, it won’t be passed to me. We need new institutions—like a World Health Organization for cyber—that can help drive this. Think about how we get trained as little kids: if you sneeze, you cover your mouth, and after you sneeze into your hand, you don’t shake somebody’s hand—that’s hygiene. Nobody knows what that looks like in cyberspace, so we take USB thumb drives that we get at conferences and plug them into our computers and spread viruses; we open up attachments and downloads and forward them on to our friends and post them on social media; our own computers get ensnared in botnets. People don’t know how to protect against it, so I think there’s a tremendous opportunity for both public health and education to help reduce these threats.

Do you think there’s any sort of role for AI in this? Shouldn’t your computer be smart enough to know that, by opening up this attachment, you don’t want this gigantic program running and erasing all your data? Shouldn’t a water treatment plant know that it shouldn’t be taking directions from someone in China even if they do have the password?

I agree. There’s a ton of work going on in the field of applying AI to some of these cyber threats, but the challenge is that right now software is built to run; you want the default answer to be “yes”—you want the car to go, if you will. It’s incredibly difficult to foresee all of these threats. In the future, with AI and machine learning, we can make tremendous advances in this field, and there are lots of researchers, including at DARPA, who are looking at this. But we’re nowhere near that now, and what we see is actually the opposite: The bad guys are using AI and machine learning to script their attacks faster and at scale.

Speaking of AI, there’s this fascinating thing in the book where this guy murdered his roommate, and when they caught him they checked his phone and he had asked Siri, “I need to hide my roommate.” And Siri had replied, “Swamps, reservoirs, metal foundries, and dumps.”

That’s the point I was making in that section of Future Crimes, because I was talking about AI and next-generation threats. “The future is already here,” to quote William Gibson, “it’s just unevenly distributed.” We’ve had people like Bill Gates, Elon Musk, and Stephen Hawking talk about their severe concerns about artificial general intelligence, and I also share some of those concerns, but even before we get to the time of AGI, we have some serious risks emerging from narrow AI; the type of AI that recommends how you drive from point A to point B, that is doing the majority of trading on Wall Street. That’s not carried out by human beings, but by algorithms and bots. Even low-level criminals, like an eighteen-year-old kid at the University of Florida: When he needed to figure out where to hide a dead body, when he was looking for a co-conspirator, he turned to Siri. And Siri, because she does not have an ethics engine, just provided the answer.

And these AI things, you can copy them endless times; you can, essentially, have billions of grifters out on the internet, all scamming people at once.

That’s already going on, and that’s a huge problem. One of the big takeaways from Future Crimes is that it’s not people committing crime anymore—crime has become software; it’s crimeware. You used to need to be a master hacker, very familiar with programming languages and with internet protocols, to break into all of these systems. Now you don’t; master hackers have created software that they are selling to lower-level criminals that can do all of this in the background. The reason we’ve gone from one person robbing another, one-on-one in street robberies, to someone in the Target hack robbing over a hundred million people, is that crime scales exponentially. A DDoS attack is just scripted; it’s the computer that’s carrying it out, and we’re seeing more and more sophisticated crimes being scripted and carried out. There’s a concept in malware known as “ransomware”: bits of malware that get on your computer, encrypt your hard drive with a private key, and hold your computer for ransom; you’re given forty-eight hours to pay in bitcoin, and if you don’t, you lose all the data on your computer. So even sophisticated crimes like ransoming are now being scripted; blackmail has been scripted; all of it is becoming an algorithm, and the bad guys have built the crime-bots.
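The ransomware mechanics described above can be sketched with a toy cipher. This uses a one-time-pad XOR as a stand-in; real crimeware uses public-key cryptography, but the asymmetry is the same: whoever holds the key can restore the file instantly, and without it the data is unrecoverable:

```python
import os

def xor_bytes(data, key):
    # XOR each byte with the matching keystream byte;
    # the same operation both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

victim_file = b"Q3 payroll spreadsheet"
key = os.urandom(len(victim_file))     # the attacker keeps this; the victim never sees it
ciphertext = xor_bytes(victim_file, key)

print(ciphertext != victim_file)                   # the file on disk is now gibberish
print(xor_bytes(ciphertext, key) == victim_file)   # only the key holder can restore it
```

Packaged as crimeware, that loop plus a ransom note is the entire "business": no skill required of the operator, which is exactly why it scales.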

As you pointed out earlier, we haven’t built the cop-bots, and even though it looks great on TV with Minority Report, the reality is that law enforcement and government are far behind in defending against these threats, as are many companies and individuals.

And the bots you’re talking about are computer virus-type bots, but there’s no reason these couldn’t be physical bots; you talk about drones armed with guns or self-driving trucks armed with bombs.

I say it in the book: Cybercrime is going 3D. Right now, we’ve only had to worry about hacks that were behind two-dimensional screens; somebody moves some bits and you lose money from your bank account; they take over your identity, they take over your credit card. This is the thing that the general public misses about the cyberthreat: We look around us and we see Xboxes and GPS devices and smart TVs and we think we’re at the pinnacle of technology. But with Moore’s Law and the Internet of Things, we’re just about to hit the knee of the curve; this internet is going to grow in size from a metaphorical golf ball to the size of the sun with Internet Protocol version 6.

One of the things that will come out of this is computers that walk, crawl, fly, swim, and roll. We already have robots that are used extensively in automobile manufacturing, and we have tons of robots that are used on the battlefield. But we’re starting to see them come into the home, whether it be Roomba vacuums or home health care bots—robots are going to be everywhere. Like all other computers, though, they are hackable, and we’ve seen robots hacked on many occasions before; we’ve had drones flying over Afghanistan whose video feeds the Taliban hacked, and a DHS drone on the southern border of the United States, flying between Texas and Mexico, that a bunch of students from UT-Austin were able to commandeer. The difference between hacking a two-dimensional computer and a three-dimensional computer is that a three-dimensional computer can kick and punch and drag and lift and have super-human strength, and we’re completely unprepared for that.

You say we have this problem of a digital monoculture, where all the devices are running the same software, so once somebody writes a virus for that software, it affects millions of devices. Should there be a hundred different operating systems for people to choose from so that it limits the damage whenever a security flaw in one of them is discovered?

The concept of monoculture is really important, and it goes back to what I was saying earlier about epidemiology and public health: If there’s just one type of computing system, let’s say Windows 7, it behooves hackers and virus writers to target Windows 7. And in the early days of Windows, almost all of the threats were against Windows devices, because they were the predominant ones. This is where the misconception that Macs are not hackable came from; it wasn’t that Macs aren’t hackable, it’s that why would I, as a master hacker, spend my time on .01% of the market? But as Mac OS has increased significantly in market share, particularly on the iOS mobile front, we’re seeing that hackers are increasingly developing malware against those platforms.

Since we were just talking about robotics: One of the things that has protected the global information grid, our critical infrastructures, and even robotics from being hacked is that, until now, those operating systems were mostly unique. The software Con Ed used to run and protect the electricity grid in New York City was different from what PG&E used for the electrical grid in San Francisco. And up until this point there really hasn’t been a standard robotic operating system; now one is emerging, called ROS, the Robot Operating System. So now you can create a single attack that could impact a whole bunch of robots.

Is there any sort of effort towards diversifying the operating systems?

Not that I’m aware of, on a mass scale; I know people that use Linux for this exact reason.

Another ray of hope you offer in this book is the idea of crowdsourcing crime fighting: Rather than relying on the police exclusively, you would draw on the power of these millions of citizens who want to do the right thing and would help fight crime.

In Future Crimes, I tell story after story of how criminals, and even terrorists, are crowdsourcing their attacks. Criminals broke into a prepaid debit card network, removed the fifty- or hundred-dollar withdrawal limits, and sent those debit cards to organized crime groups in twenty-seven different countries. They spread them among low-level thieves who, all at once, hit as many ATMs as they could in a ten-hour period, carried out 36,000 transactions, and walked away with forty-five million dollars. The bad guys definitely understand how to leverage crowdsourcing, and they’re even using crowdfunding techniques.

The challenge is that we haven’t done a particularly great job of leveraging crowdsourcing for the public good. If you think about it, crowdsourcing has kind of been an element of law enforcement since the beginning; if you go back to the 1850s, there were “Wanted” posters in the post office, offering a reward, and we’ve had Neighborhood Watch programs. The internet creates the opportunity, as we saw with the Serial podcast; you can get tens of thousands of people investigating cases; you can get tens of thousands of people involved in the identification of malware.

One thing I call for in Future Crimes is the involvement of the gamer community in helping protect our security: What if, as people went through their games, they were actually shooting malware, destroying phishing attacks, or identifying spear-phishing attempts? We have a reserve Army and Marine Corps, reserve police officers, even FEMA for natural disasters, but we have no entity that takes a crowdsourced approach to cybersecurity. I think there’s a tremendous opportunity to take people of technological skill, whether it be a ten-year-old in India or an eighty-year-old woman in Seattle, and get them involved in this fight. I think it’s the only way we’re going to move forward and win this battle.

I like the example of the Serial podcast; that we need to leverage the massive power of podcasting for good.

I think a podcaster might like that.

This book scared the hell out of me, and I imagine it’s scaring the hell out of a lot of people who are reading it. Do you get the sense that people are taking this book seriously and are formulating a response to it?

Future Crimes just hit number three on the New York Times bestseller list for hardcover nonfiction. I’m deeply grateful to all the folks who bought the book and are reading it, and I think it’s getting traction in the press as well as in government; I’ve been contacted by a number of government leaders who are looking at it. I’m trying to point out the cutting edge of criminal innovation, and how everybody from organized crime to terrorists, governments, and corporations is using technology against the general public in ways they don’t understand. My goal is not to scare but to educate; it’s better to understand the risks and threats so people can get a feel for what’s going on and then empower themselves to protect against them. In the last few chapters of Future Crimes, I put out a whole bunch of technology, public policy, legal, and regulatory recommendations; I call for a Manhattan Project for cyber, an XPRIZE for cybersecurity, all these kinds of big-think initiatives we can take on to make a really dramatic difference. I also include practical tips for individuals and businesses. It’s an interesting ride; it sounds like science fiction, but it’s science fact.

You mention science fiction and I’m curious: Are you a big science fiction fan? Did science fiction inform your thinking on these issues in any way?

I like science fiction very much, whether it be Minority Report, The Terminator, or even some of the classics, like Asimov’s Three Laws of Robotics. And I got introduced to this whole world of hacking through film: WarGames and Sneakers and a lot of other pictures that talk about some of these threats. Science fiction has been very prescient about these risks; a lot of what’s been shown in those films and works has come true today.

In the acknowledgments you mention two science fiction authors, Daniel Suarez and Ramez Naam; what did they contribute?

First, they’re just generally awesome people, and I’m deeply lucky to call both of them friends. I read Daniel’s Daemon and really loved it. Ramez teaches with me at Singularity University and we also have Daniel come by and teach. Both of them are experienced authors; when I was researching and working on my book, they both were extremely generous and met with me and talked through these themes and they just gave me a lot of good advice for a first-time author. I’m very appreciative.

Do you want to say more about Singularity University? You’re also involved with something called The Future Crimes Institute; do you want to let people know about those and where they can find information?

You can get more info on Singularity University at; it was created back in 2010, co-founded by NASA, Google, Autodesk, the Kauffman Foundation, Nokia, and a bunch of other companies. It’s housed on the campus of the NASA research center in Mountain View, in Silicon Valley, and the school has but one mission: to teach students about exponential technologies (robotics, artificial intelligence, nanotech, big data, and the Internet of Things, amongst others) and make sure they apply those tools to positively affect the lives of a billion people over the next ten years. We want our students to take drones and not just see them as weapons, but use them to deliver food in Africa, or medicine to isolated villages that don’t have access to a doctor or advanced medical care. We have amazing faculty: astronauts, roboticists, big data scientists, and physicians on staff. There are a bunch of free videos online, and we also run executive programs.

The Future Crimes Institute is something I founded back in 2010; the goal is to unite technologists and those working in criminal justice to discuss these emerging threats. It’s a virtual institute online with about 3,500 members: chief security officers from companies your listeners would be familiar with, as well as senior leadership from the FBI, Scotland Yard, and the Royal Canadian Mounted Police. We conduct research on everything from the security threats of virtual and augmented reality to the hacking of 3D printers, and the like.

Are there any projects, websites, or anything else you want to mention?

If folks want to learn more about the book, they can stop by; if you sign up, I’ll send you an infographic I’ve created that lists six steps everybody can take to reduce their cybersecurity risk by eighty-five percent. Future Crimes is on sale; you can pick it up in stores or anyplace online. I look forward to continuing the conversation; Wired is a great magazine and I love it; I’ve written for RGK before, and it’s just a great audience. I’m glad I got the opportunity to discuss these topics with you today. Thank you for your time.

Future Crimes is a really good book that everyone should read; there’s tons of stuff we didn’t get a chance to talk about that you really need to know. Marc, thanks for joining us.

The pleasure is mine.

The Geek’s Guide to the Galaxy

The Geek’s Guide to the Galaxy is a science fiction/fantasy talk show podcast. It is produced by John Joseph Adams and hosted by David Barr Kirtley, who is the author of thirty short stories, which have appeared in magazines such as Realms of Fantasy, Weird Tales, and Lightspeed; in books such as Armored, The Living Dead, Other Worlds Than These, and Fantasy: The Best of the Year; and on podcasts such as Escape Pod and Pseudopod. He lives in New York.