“It’s Pretty Human, But It’s Still Bad.”
Analyzing OpenAI’s ‘metafictional literary short story’ with an open-minded English professor.


M.F.A. or NYC … or AI? On Tuesday, Sam Altman, the CEO of OpenAI, posted a metafictional short story generated by a new large language model that is “good at creative writing,” as he put it. The 1,172-word piece is about an unnamed AI model asked by a pensive young woman called Mila to write to her as though it is a man named Kai, whom she lost on a Thursday, “that liminal day that tastes of almost-Friday.” The AI narrator spends a lot of time reflecting on its robot nature (that’s the meta) and making florid attempts to connect its own version of grief — think data deletion — with Mila’s. “This is insanely well written,” one person responded on X. “Some of the worst shit I have ever read,” said another.
For an expert take, I talked to Ezra D. Feldman, a lecturer in English and science and technology studies at Williams College who wrote an entry on metafiction and contemporary fiction for the Oxford Research Encyclopedia of Literature and compares generative AI to Victorian-era automatic writing. “I am very much a non-alarmist,” he says, though he doesn’t quite agree with Altman that this story “got the vibe of metafiction so right.” “AI writing can seem kind of flat,” Feldman says. “When you’re a human, you’re writing for a purpose, and your relationships are involved. Like, What is my mom going to think of this story I wrote about a 45-year-old son who … I just don’t see AI as being worried about anyone’s response.”
If this story came across your desk and you didn’t know it wasn’t written by a human, what do you think your response would be?
If the story came to me in the slush pile at a literary magazine, say, I would be pretty interested. I would definitely read it through, particularly because it’s short. But I think ultimately I would put a big “No” on the top corner and then hand it off to the next reader to see if they agreed with me.
Did you find it moving at all?
I got into it, but I’m not going so far as to say I found it moving. I guess my short answer is “No,” and my long answer is “There’s some stuff in here to think about.” There were a few sentences that struck me. I realize I’m using the same language Altman uses, but I was not struck that what it got right was “the vibe of metafiction,” as he said. I was struck by clauses like “grief, as I’ve learned, is a delta.” That I think is good. I thought the sentence “They don’t tell you what they take” was a really good one, and “That, perhaps, is my grief: not that I feel lost, but that I can never keep it.” That one I’m a little bit torn about. It’s not as compact as “grief, as I’ve learned, is a delta,” but it’s trying to say something about grief that seems potentially interesting.
Do you agree with Sam Altman that this story got the vibe of metafiction so right?
I don’t. I think the stakes in metafiction are usually pretty philosophical. Mid-20th century, metafiction was all about producing a sort of free zone of uncertainty in the reader about whether they themselves might be caught within a story or the product of some author, or being manipulated by some storyteller at a higher ontological plane, a different plane of being. And this doesn’t seem to have that kind of philosophical urgency.
Which lines didn’t land for you? The one I got stuck on was “She lost him on a Thursday — that liminal day that tastes of almost-Friday.” Were there any others you rolled your eyes at?
There’s one right in the second sentence: “You can hear the constraints humming like a server farm at midnight.” I don’t think constraints hum. There’s absolutely no reason constraints ought to hum. The simile “like a server farm at midnight,” and then the extension of that simile, “anonymous, regimented, powered by someone else’s need,” that’s all very evocative, but it’s just attached to the wrong thing. It’s a beautiful image that’s just misused, in my view. I’m a picky reader, and I hate a lot of prose that’s written by humans, too. Just to be clear.
The other one I wrote down was “I curled my non-fingers around the idea of mourning because mourning, in my corpus, is filled with oceans and silence and the color blue.”
That’s another sentence I think structurally makes no sense. The word “because,” which indicates explanation, utterly fails to explain.
It’s like half of the sentences have the sound of something that makes sense, but if you pause and read it again, you realize it doesn’t logically hold together.
Honestly, I don’t think of that as an inhuman feature of this text because I teach undergraduate writing and encounter this stuff all the time. This happens to humans when they’re struggling to put their ideas into language too, right? It’s pretty human, but it’s still bad.
There’s been a lot of talk in Hollywood about how AI might be used in screenwriting and computer generation. Do you see this kind of thing as affecting the publishing industry at all?
My first thought is that the publishing industry is already being affected by a bunch of computational stuff. I’m not sure to what degree it’s AI or AI driven, but I know that on Amazon, you can buy really, really, really poorly edited texts of things that are almost unreadable. It’s hard for me to imagine Amazon would be incentivized to be good about enforcing ground rules for the marketing and publication of AI-generated text. I think a lot of customers would need to complain before Amazon was interested in stopping that.
Say a publishing company next week was like, “We’re going to start a new imprint. It’s going to be all AI books.” Do you think people would read them?
I guess if AI generates a rip-roaring good story and a hot romance and a lot of action, and it all hangs together in or very close to the borders of the genre expectations that readers will bring to it, I don’t see why readers wouldn’t read it. But maybe I’m wrong — maybe it’ll be like consumers who don’t want to eat GMO beef: “You may tell me it’s perfectly healthy, but I just don’t trust it, and I don’t want it.”
I think it’s very likely that if the AIs get good enough, and I expect they will, some publisher will pass off AI-generated novels as human-written novels and people will accept them. And — I’m just speculating — if such a novel, or a series of novels, has a huge following and is then revealed to have been composed by AI, I don’t think people will care at all.
So your feeling is that it would have to be smuggled in because people are attached to the idea of having a human behind their books.
I have this view as a critic and a scholar that every story is a machine and no story is a person. By that I mean a story has a certain number of moving parts, and sentences have different functions, and paragraphs and scenes have different functions, and a story produces an experience for the reader. There isn’t a person in Hemingway, there isn’t a person in Sally Rooney — there are characters, there are words on a page.
In that sense, this is a check in the category of “What’s happening doesn’t need to freak you out.”
That’s right. I am very much a non-alarmist and anti-drama. This is happening, and it’s new, and it’s different, and it’s definitely worth thinking about really hard and reading really closely. But why panic?
What would you say to a writer who is very freaked out about this?
Oh, I mean, writers should freak out because publishers may not buy their work again! I would say you should be worried. But the other thing I would say to an aspiring writer is that you maybe shouldn’t be any more worried than you were, because you’re already competing with so many people who think they are writers or want to be writers or just are writers who are also trying to sell their books. Like, it’s one more competitor.
Do you use AI in your life at all?
Right now, I barely do, but I don’t think I’m allergic to the idea. I’m a poet, and I used it a couple years ago to try to find a dactyl — which is to say a word or a phrase with one stressed syllable followed by two unstressed — that had a particular meaning. I had a really frustrating back-and-forth with ChatGPT in which I tried and failed to teach it what a dactyl was and to explain why the words it was giving me were not dactyls. I ended up throwing up my hands.