On personalized media, alternative AI writing futures, and reconciling the poetic with the political
In conversation with Vauhini Vara, author and Pulitzer Prize finalist
Vauhini Vara is a fiction writer and journalist. Her debut novel, The Immortal King Rao, was a finalist for the Pulitzer Prize, and her upcoming story collection, This is Salvaged, will be released this September. Currently, Vauhini is working on a new essay collection, Searches, that explores, among other things, the future of language in the age of artificial intelligence. As a journalist, she has covered technology and business at the Wall Street Journal and the New Yorker. Her writing has appeared in countless other publications, and she is currently a Wired contributor and sometimes works as a story editor at the New York Times Magazine.
We first encountered Vauhini’s work after reading her essay “Ghosts,” a collection of nine nonfiction vignettes about her sister’s death, written in collaboration with GPT-3. In each story, Vauhini writes an increasingly detailed prompt about losing her sister, and GPT-3 completes the rest. The essay was originally published in 2021 and was featured in the 2022 Best American Essays anthology.
Our conversation with Vauhini covers her personal experience of AI writing, the implications of AI on literature more broadly, and her thought experiments imagining alternative AI models for writers.
On Writing “Ghosts”
In the introduction to “Ghosts,” you write that “technological capitalism has been exerting a slow suffocation on our craft. A machine capable of doing what we do, at a fraction of the cost, feels like a threat.” How has this threat changed since you wrote this in 2021?
I'm so glad I wrote that line. It feels much more vivid now—when I wrote that essay, it felt so theoretical, like an idea, not a reality. Now, I feel like the crazy theoretical concerns that artists and writers have are really within the realm of possibility.
AI-assisted writing is controversial within writing circles. How did people respond to “Ghosts”?
People who loved “Ghosts” but otherwise feel anywhere from indifferent to hateful toward AI writing often say to me that the reason it was so amazing was everything that I wrote. Subjectively, I disagree. That’s not my experience of the essay. In my experience of the essay, GPT-3 produced some language that moved me and came across to me as creative.
There’s one point where GPT-3 came up with a fictional scene where I’m driving home from the beach with my sister and she takes my hand:
“We were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.”
This struck me as really profound because the essay is about the relationship between the version of me that coexisted with my sister and the version of me that has to exist without her. To say this is the same hand in that context feels really intense and very personal.
It’s one of my favorite pieces of writing in the essay and I did not write it. For me, that being true is harder to deal with than if I felt that everything that is produced by AI is garbage. Because I have to sit with the fact that AI currently is capable, at least in my experience, of excellent writing.
AI is sometimes talked about as a cultural artifact that evokes a form of collective consciousness. In your novel, you write “the stories of our lives are ephemeral…but what if someone could hold onto them for safekeeping?” Did you experience GPT-3 in this way while writing “Ghosts”?
The voices that were being produced in that essay represented something of what people who had grieved in the past had said about grief in their own writings, whether it was a Reddit thread or something that they'd published. There is something that felt profound to me about that.
Have you worked on other AI writing since?
I wrote a short story using AI but I have qualms about publishing it. The surprising thing for me with that story was that GPT-3 came up with the climactic plot moment, and it was, in my opinion, really good. That vexed me because when I was done with the story, there was something about it that didn’t fully feel mine. It almost felt weird to take full credit for it. With “Ghosts,” I feel there are a lot of things about that essay—the interplay, the commentary on AI—that made me feel okay about publishing it.
On the implications of AI for literature
How do you think about the implications of engaging with AI in your writing?
The question of how I, as a writer, should engage with AI has to do partly with questions of beauty—if AI can help me produce great work, then one could argue I should just do that all the time—and partly with questions of politics.
If I am engaging with an AI system and that means that my work as a writer is helping to train a model, and therefore is being used to produce work that OpenAI will eventually profit from and I won't—is that something I'm comfortable with? There are all kinds of tangents to go down, but they all have to be sort of considered separately rather than conflated. I think there may be a scenario in which working with AI produces something beautiful, and yet for unrelated reasons, it makes sense for me not to do it or to not publish it. That's what I'm trying to work out right now.
What about the broader societal implications of an increasingly AI-dominated media ecosystem? For instance, there’s been a lot of discussion about the role of AI in creating more personalized content. Is being able to share stories and talk about the same TV shows and books our friends are consuming a critical component of how we relate to media?
The origins of storytelling in our societies had to do with storytelling as a communal communicative function—passing morals or information about how the world works from one generation to the next, or from one community to another. There's always been something linking storytelling with human relationships. The corporatization of certain aspects of storytelling has divorced storytelling from what it was supposed to do in the first place. On an intuitive level, that kind of fragmentation feels like it would not be good for us.
A counterargument to that would be to say that besides our capacity for storytelling, another thing that's special about humans is that we're always changing. One might argue there’s no reason that storytelling in the future couldn't become something different from what it's been in the past. But as someone who feels there's something special about that communicative value of storytelling, it does feel like something is lost.
On alternative AI writing futures
Recently in AI, there’s been a debate between those who wish to pause AI development and others who wish to accelerate, or at a minimum, maintain the current pace of innovation. At the heart of this conflict is the idea that a future with AI as a dominant, powerful tool is inevitable—the most important goal is to reach that future before our competitors.
Your novel alludes to similar questions around technological determinism. What is your perspective on how much agency we have in charting our technological futures?
It sometimes feels inevitable that AI will be able to do what we do as artists, but I think that people have agency. I can imagine a world, for example, in which artists decide together that this is a really compelling tool that produces beautiful work—and yet, we’re not going to use it.
This is where the broader question of technological determinism comes in. Just because something is really intellectually and creatively rich, does that mean we must do it? Obviously, that's a matter of choice. The only reason it would be a path that we can't help but go down is that there are a lot of people with a lot of money riding on these things who want them to continue. It serves corporate interests to say “well, it's all inevitable, so we might as well do it.” It's the same as when people say “well, climate change is inevitable and there’s nothing we can do about it now”—that serves the interest of big oil more than it serves any of us.
Are there ways we can reconcile the tension between the potential of AI to aid in the production of excellent writing and the political questions it's wrapped up in?
I’ve asked myself: if I could create my own sort of private model that could write amazing literature like what I wanted to write (and if it were technically feasible to do so), would I do it? Answering this question forces me to think about why I write in the first place.
For me, it is not like I have something specific that I want to say and then I go write it. It's about not knowing what I want to express and needing to go through the process of writing in order to unravel for myself what it is I want to say. There's a kind of frustration intrinsic to that process. Well-meaning researchers often say that AI tools can be helpful for easing writer’s block. The question is: will AI tools prompt me in a way that is in line with what I would have naturally come up with?
Alternatively, imagine an open-source model where there’s a system of opting in for every element of training data. Perhaps, it can't be used for profit. You could imagine all kinds of conditions you could put on this hypothetical model. That would be interesting, right? If you could somehow divorce the technology itself, which is fascinating and creative, from the commercial aspects, that would be really interesting. The problem is that it’s really hard to do that in our society.
Embeddings is an interview series exploring how generative AI is changing the way we create and consume culture. In conversation with AI researchers, media theorists, social scientists, poets, and painters, we’re investigating the long-term impacts of this technology.