Welcome to Lightspeed Magazine! We’re so happy to have “Bots All the Way Down” as one of the science fiction stories for this month. Can you tell us about what inspired you to write this story?
There’s a problem of enshittification, as named by Cory Doctorow, where services we use are getting actively worse. It’s a combined problem of capitalistic incentives (why get users to the link they’re looking for quickly when you can instead get them to click through more ads first and make more money), and a side issue of companies relying on AI technology in places where the AI is just not as good as their previous tech, but they feel the need to be in the AI “race” and thus use it anyway. I’ve worked in tech since 2004, and there’s been a real shift away from doing what’s best for the user. (Not every company. But many.) It’s frequently because the user is not the paying customer (as in a free search or social media product), and making the customer happy can come at the expense of making the user happy.
I started to think about how internet search, for example, has gotten much worse. Take Google as an example. (Note: I worked for Google in Maps and Android marketing from 2007-2012.) You have Google’s search algorithm, which determines which results come up on top. Every single business that wants to be found is engaging in search engine optimization to try to make its site more highly ranked. (It is absolutely true that some people have put white text on a white background to have content that the algorithms see but users don’t, to try to optimize without looking janky.) But those businesses are now switching to AI to do this in an automated way. Meanwhile Google is trying to tweak its algorithm as fast as possible to prevent less-relevant results from gaming the system. It is now also switching to AI to do this in an automated and faster way. So you have algorithms on both sides adapting to each other. For this story I thought, what if we looked at it from the AI points of view?
Could you tell us a little about the choice behind the title for this piece?
There’s a phrase, “it’s turtles all the way down,” which comes from an old cosmological notion that a flat Earth rests on the back of a stack of turtles. When asked what was beneath the turtles, the answer was: more turtles, all the way down. So here, if you look at what’s happening on the internet, there’s more and more algorithmically generated content, and it’s algorithms responding to algorithms responding to algorithms responding to algorithms . . . it’s bots all the way down.
The structure/framing of this story is so fun—why did you choose to format the piece this way?
I liked the contrast of the oral almost-fairytale tradition, with something that is as far from it as possible—AIs. And there’s nothing that’s more fun to me than a story that’s ouroboros-shaped!
Do you think it is possible that AI will one day attain consciousness comparable to humans? Why or why not?
In the way, way, way far-off future? Sure, anything’s possible. Anytime soon? It seems like no, to me.
One part of the problem is that AIs need enormous quantities of data to train on. Research has shown that when generative AI is trained on a lot of AI output, it can get a lot worse, and AI companies are struggling to find the quantities of new, legally obtained human-generated content they might want to train on. There’s an expectation that the limited availability of human-generated data (taken in legally or illegally) will cause the exponential advancement of AIs to plateau at some point, and whether that happens before or after we hit AGI (Artificial General Intelligence) is an open question, though some people believe the plateau is already here. AGI is way less advanced than actual consciousness, though.
Another part of the problem is that AI is only training on brain outputs, not actual brains. Just like a shadow on a wall is at best a minimal representation of a complex and colorful 3D shape, outputs from human consciousness, whether they are digital photos or books or movies or whatever, are not the same as actual human consciousness. Right now, we don’t understand consciousness well enough to even have something consciousness-shaped to attempt to train AIs on. Without that training, I can’t see them getting there themselves, even if they are able to mimic the outputs admirably.
Is there a project you are currently working on? And if not are there any themes, objects, or news that might be tickling your fingers?
Lately I’ve been doing a lot of writing for political activism. I’ve been writing political satire (effieseiberg.com/political-satire) that I’ve been reading at No Kings rallies (a mix of SFF and horror, but always with humor) about a variety of things this administration has been up to. I also wrote an op-ed (archive.ph/KPhx4) for the San Francisco Chronicle about the harm RFK Jr. has been doing to people with chronic illness, from my own perspective as a person disabled with ME/CFS. It’s been sort of a two-pronged approach, trying to get folks to call their electeds (whether through humor or something personal and heartfelt!) to try to get some change to happen.
I do believe humor has a huge role in piercing this veil of perceived power in authoritarian regimes . . . a sort of “emperor has no clothes” situation from the frogs in Portland to people’s punny protest signs. There was even this case in Serbia where a comedy stunt was the beginning of the end for the autocrat Milosevic! I’m alternating the comedy against this administration with the beginnings of a novel that explores this conceptually, while looking for a new agent with a different novel.