What was your inspiration for this story? What was the initial kernel for the overall narrative?
Ever since Neuralink appeared on the radar I’ve been mulling over the ramifications of a technology that integrates with the brain (as Elon Musk once said, back before he started foaming at the mouth) “just as your cortex works symbiotically with your limbic system.” We still don’t know how consciousness works, but what we do know seems to suggest that “eliminating the I/O constraint” might be a very bad idea for anyone who values having a distinct self (I gave a talk on this very subject over in Bulgaria a few years back). Connect brains together with a fat enough pipe, and chances are you don’t have two communing selves; you have a single self spread across two motherboards. There’s a very good chance that the entity you regard as “you” gets deprecated from soul down to mere subroutine in such cases.
I’d been thinking along such lines for a while when I got invited into a literary project in which a bunch of SF writers would hobnob online with various leading lights in AI, robotics, and neuroscience. These movers and shakers would talk and show slides; we writers would take it all in, ask whatever questions occurred to us, and go off to write stories focusing on the future of neuroscience and AI which would be collected into a themed anthology. (I’m being vague about the details because that anthology was supposed to come out in 2022, and—while contractual exclusivity lapsed years ago—I honestly don’t know if it’s still a going concern. The editor has been noncommittal when poked, and utterly silent otherwise. But I don’t want to spill too many beans in case the project does get shocked back off the slab at some point.)
Anyway. One of the most important things I learned from these experts was how much they didn’t seem to know. One, for example, wasn’t aware of a neurological phenomenon with potentially catastrophic implications for his own work; not only was he unable to answer my question, but apparently no one had asked him about it before. We’re not even talking about anything especially obscure here; the only reason I knew to ask was because I’d read about it in an old issue of Wired.
This all left me with an uneasy sense of Unknown Unknowns in the field, a feeling that some very bad shit might be waiting in the wings. It also left me with the even stronger sense that all that bad shit would make a really good story.
Beyond the name of this story, Corwin is constantly reminded that there’s a relationship between technology and religion. How did that relationship influence this story?
That’s a surprising observation. I just skimmed the story again and came up empty: the only mention of divinity that isn’t a direct metaphorical reference to the titular hive mind is a line about Yahweh taking six days to find his feet. And the word “religion” only appears once in the whole eight thousand words: “Predator-detection algorithms that metastasize into religion.” I’m not seeing any exploration of a general relationship between technology and religion.
Which is not to say that religious imagery doesn’t appear commonly in my writing as a whole. That’s just a side-effect of my Baptist upbringing. But if such a subtext exists in this particular story, it’s pretty subtle. And entirely subconscious on the part of the author.
I saw a lot of parallels between the AI (or the facsimile of actual AI) we have currently and the technology of your story. This might be doomsaying here, but how likely do you think it is that the tech of your story will become reality? It’s sad to see so many people embracing AI, so I wonder if the next horizon is just human minds.
You’re not doomsaying at all. This story is very much rooted in the aspirational goals behind actual tech. (In fact, the company behind the Hogan Bridges was explicitly named Neuralink in early drafts, until I decided that I didn’t want to get sued by Neuralink. So I changed the name to “Meta,” which—at the time, anyway—seemed so mind-numbingly bland and generic that no actual company would be caught dead calling itself that. About a month after I made that edit, the Zuckerborg changed its name to Meta. At which point I figured, Fuck it: it’s a sign, and let it lie.)
But the story as a whole is definitely intended to be cautionary—and the fact that it emerged from real-time interactions between scientists, entrepreneurs, and writers means that some of those folks passed it around backstage for a few years before it showed up here on Lightspeed. The co-founder of Neuralink liked it enough to recommend it to the Creative Destruction Labs reading group at the University of Toronto, where it was debated and discussed by a variety of bleeding-edgers (the assembled included an astronaut and a number of high-tech entrepreneurs). I had a fun phone call with the founder of a major AI company (whom I will not name because I don’t know what the confidentiality protocols are when it comes to such things). He also really liked it—even though he aspires to build a hive mind in real life.
There’s a certain dissonance here. The aforementioned Neuralink co-founder liked 21SG so much she suggested we hang out for coffee when she was next in town. I was definitely up for that, but I had to admit to her that I was kind of surprised she’d liked it so much, given that it was essentially my take on all the things that could go catastrophically wrong with Neuralink if it functioned exactly as advertised. We kinda fell out of touch after that. Which is a shame, because I’d still do the coffee thing in a flash.
But maybe there’s a lesson here. If you can write a story about the impact of a certain technology, have it read and enjoyed, admired even, by the very people developing that technology—and this Platonic ideal of a target audience doesn’t seem to internalize the fact that it’s a warning . . . well, you gotta wonder if the value of SF as a cautionary medium hasn’t been overstated somewhat.
I mean, we’re pretty much talking about the Torment Nexus meme made flesh here.
What’s next for you? Do you have any other projects coming out?
I’m currently working with Neill Blomkamp (of District 9 fame) on a series treatment for an adaptation of my novel Blindsight. Writing another Sunflowers novella for Tachyon, a follow-up of sorts to 2018’s Freeze-Frame Revolution. My Armored Core adaptation for Blur’s “Secret Level” series went over pretty well (granted that may have had more to do with Keanu Reeves starring than with Peter Watts writing), so every now and then I pitch for other installments in that show. Don’t know if any of them will land, but it’s fun to try.
There’s some video game stuff I can’t talk about. Possible series stuff that’s not even at the zygote stage, really just a half-dozen lonely sperm wriggling towards an ovum filled with cash. Another short story waiting in the wings—it’s set in a world where Simulation Theory has been proven—but I don’t know if it’ll ever get published. People seem squeamish about explicitly defining the Catholic Church as a terrorist organization, even if it does tick all the right boxes under Canadian federal law. Worst case, I suppose I can stick it into my next collection (oh right, I’ve got a new collection coming out too) as a “bonus story.”
And yes, since someone’s bound to ask. I’m still working on goddamned Omniscience.