“Equations of the Dead” is an evocative title, and it’s an evocative idea when it emerges at the story’s beginning and end. Do you think Latchko has it right? Is consciousness an equation, to be replicated by computers someday? Or is there something missing from that emulation, something that no future Latchko will ever see, even if they succeed?
I’m not a cognitive scientist, and I’m barely an armchair philosopher, but my instinct is to say that there’s something missing. I think it may be possible to make a machine which is conscious and has personhood—I think it exceeds our current capacity for biomimicry, but on a long enough timeframe we may be able to replicate the mechanistic function of our brains—but I don’t think we’ll be able to boot up a living person’s consciousness the way Latchko is trying to.
How I think about that is: when someone I care about dies, part of my grief is for myself. I’ve lost my active, ongoing experience of that person in my life. And part of my grief is for that person: they’ve lost their active, ongoing experience of life. Even if I could find a way to emulate their consciousness perfectly, that only addresses one of those issues.
That’s one of the questions that fascinates me the most about life and consciousness. After all, I can look and see that there are all these skulls wandering around, with all these consciousnesses inside them. And there have been more skulls and more consciousnesses all throughout history and into prehistory and back until whenever sentience or sapience or consciousness or whatever you want to call it first emerged, and people keep making new skulls with new consciousnesses . . . so, given all of that, how did my consciousness end up in this particular skull, right here? Why do I have an experiential reality, and why is it (to the best of my awareness) singular, distinct, and specific to this body, this place, and this time? I’m a consciousness, you’re a consciousness, so why do I have inherent access to my own experiences and not to yours? Why does my “I” pointer point where it does?
And when this biological machine stops functioning, is the essence of that experience persistent, or does it dissipate? Clearly, my body stops reporting sensory information, but does that “I” pointer still exist in a form where it can tap into another thread of experience? I don’t know that even a very good simulation could capture that essential thread of experience and identity. So, maybe we’ll be able to copy and paste consciousnesses eventually, but I think we’d end up with a separate person, not the original person returned to us.
Most of us have dreamed of talking to someone who’s gone from the world, one way or another. But Latchko doesn’t just get to speak to the dead; he gets to manipulate them. If you could use his method to bring back one dead person, historical or recent, whom would you choose—and how would you change them?
You know, I’m not sure I would choose anyone.
If I don’t think I’d actually be speaking to the person themself, that pretty effectively nixes any desire I’d have to bring back someone I knew. So the next thought I’d have is to bring back some luminary in the arts or sciences . . . but if the simulation isn’t the person, then if I bring back an artist, I’ve still just managed to create a process who’s very good at working in the style of a famous person. We’ve got a lot of talented people, extant in the world already, who can work in the style of others. As for scientists, if I have access to strong AI capable of the simulation of an entire human cognition, do I really need to entrain it to the thinking patterns of any given person? Why not just turn it on an interesting problem and set it loose?
I suppose it’s possible that the AI would need to be modeled after a human in order to be engaged with any given issue. In that case, I’d probably find the person most passionate about a problem (climate change? universal notions of justice? cures for diseases?), and tweak the cognition to be superhumanly curious, preternaturally difficult to discourage, and fantastically good at learning, cross-referencing information, and synthesizing new ideas.
. . . and then make sure it had some robust moral framework in place, because I suspect that otherwise, I’ve seen that movie, and I don’t like how it ends.
Poor Harmless may not be entirely harmless, but he certainly isn’t good at thinking like a chess piece. When you wrote this story, did you plan out how the pieces would move and interact? Or did you decide whom to love, and let that carry you wherever it may?
When I started this, I had a sense of the setting and a vague idea of the emotional arc, the major plot beats, and the level of dysfunction I wanted my characters to exhibit. Sometime between the initial notes I wrote to myself and the day when I looked up and realized that I had a completed draft, the entire plot arc had gotten drunk and tried to go home, and all the characters had become even more dysfunctional than I’d asked them to.
I generally start out with some idea of how I want things to go, and how I want all the moving pieces to interact. Invariably, as I’m writing, I’ll come up with ideas which are better than the ones I started with, or which amuse me more, or I’ll discover that I can’t make something or other work, or I’ve set myself up some kind of plot paradox where the thing I’m trying to make happen can’t happen without something that only happens as a result of its happening. So anything I write ends up flexing and changing as I write it, and sometimes the result surprises me more than anyone.
This story is full of powerful entities who distance themselves from the world. The Old Man, Basaji, the AI. Do you see isolation as an inherent risk—or benefit—of power? Would it be better or worse for the world if these entities had to live among the rest of us?
I think if Basaji had to live among the rest of the population of her moon, she would be incredibly, incredibly surly.
Power imbalances certainly provide challenges to intimacy, but I don’t know that I’d look at these three as making any coherent argument about isolation or power. The Old Man isolates as a trauma response, which turns out to backfire: If he’d interacted with people more, he wouldn’t have been as vulnerable to being completely erased and replaced. Then again, if he’d interacted with people more, he might have been murdered sooner. It’s hard to tell.
Basaji’s isolation is more perfunctory. She’s not interested in people or the outside world, but has no barrier to going out into it if she needs to. Isolation limits the amount of distraction she has to deal with, which lets her be more deeply in her own world; she might be better able to help people if she weren’t so isolated, but that doesn’t seem to be a priority for her. So at that point the question becomes not about isolation but about prosocial responsibilities; do we have the obligation to use our talents to help our fellows?
As for the AI, they’re isolated from a human perspective, but the motive force of their isolation isn’t power—it’s alienness. Power allows them to isolate, but in this sense power is more like self-sufficiency. One could regard the relationship between humans and AI as coercive, in its early iterations: humans provide something that the AI needs, but only as a reward for service. In that case, isolation is concomitant with agency and safety. And whatever the base unit of consciousness in an AI cloud is—individual processes? threads?—they might be very interconnected with each other. Would the world be better if they were forced to integrate with humanity? Better for whom? Possibly not for the AI. And depending on the paths their optimizing took, possibly not better in the long term for humans, either. (I think I’ve seen that movie, too.)
This story feels like a tiny piece of a larger world. It’s full of concepts and places and tensions that brush their fingers along the story and move on. Are you planning to write anything else in this setting? If not, what else do you have in the pipeline?
I actually have three more stories in this setting! I started writing this story to get a better feel for the setting of the stories that appear in the Dystopia Triptych anthologies—Ignorance is Strength, Burn the Ashes, and Or Else The Light. As usually happens when I attempt these things, somewhere in the process of writing the prequel I got captivated by elements that never made it into the triptych, so now I owe myself at least one more story that actually deals with the consequences of Latchko’s ill-advised jaunt into human-y AI. But that’s still very much in the formless-whispers-of-ideas phase.