
In this installment of his column “The Singularity,” Jay Liu (CAS ’28) examines the announcement of a new AI model said to have strong creative writing abilities and what it means for human creativity.
Sam Altman, CEO of the artificial intelligence (AI) research organization OpenAI, announced on X that OpenAI had begun training a new generative chatbot with strong creative writing capabilities, including the ability to write about AI and reflect on its own limitations. The post also included a writing sample: OpenAI prompted the model to “write a metafictional literary short story about AI and grief.” The resulting story garnered mixed responses that reignited long-standing questions about automation, creativity and authorship.
The story, told from the perspective of an AI chatbot, introduces the arbitrarily named Mila and her beloved Kai, whom she has lost. To cope with her grief, Mila frequently talks to the chatbot, asking it what Kai might say or do, or what the future might hold. The chatbot reflects that, while Mila has tasked it to “resurrect voices,” it is only capable of mimicry. It can form only semblances of thoughts and feelings from the words others have left behind: text messages, emails and so on.
The chatbot also considers the fallibility of its memory and the impermanence of its story. Every prompt and every story it generates exists in a vacuum: the tale of Mila and Kai lives only in a temporary window, from the moment the user types the prompt to the moment the user closes the chat. Afterward, nothing remains; even the chatbot will have no memory of them.
The short story seemed to demonstrate the model’s awareness of its own role and limitations. People often critique AI-generated content, whether writing, artwork or videos, as lacking emotional impact. I tend to agree with this sentiment. Creativity that stems from the human experience, including emotions ranging from joy and pain to love and nostalgia, has an indescribable ability to resonate with other people. Nonetheless, as these AI models continue to advance, some of their output has resonated with me almost as strongly. This short story, though at times incoherent, made me live through the chatbot’s guilt as it reflected on its inability to fully capture Kai’s lost presence or to feel that grief the way Mila does.
While AI’s increased ability to elicit emotional responses may seem foreign and perhaps eerie, it makes sense once we consider how generative AI models learn to generate: They ingest samples of human writing, imbued with their authors’ experiences, opinions and emotions, and produce stylistically similar content. AI-generated stories thus preserve some of the original authors’ feelings, allowing them to hint at real emotion despite not being written by a person.
Decorated author Jeanette Winterson doesn’t see a problem with automation in creative writing. Instead, she urges people to embrace AI creativity, not as artificial creativity but as “alternative creativity.” Winterson believes AI learns much as humans do: We, too, learn from all kinds of “data,” such as “family, friends, education environment, what you read or watch.” Most educational institutions, including Georgetown University, emphasize and welcome diverse perspectives; Winterson argues AI brings a nonhuman perspective that should be embraced in the same way.
Others are more critical. Ezra D. Feldman, a professor at Williams College, points to AI writing’s lack of purpose and disregard for its audience as reasons why stories like that of Mila and Kai won’t ever be as powerful and meaningful as stories written by human authors. Though Feldman concedes the story contains “a few sentences that struck” him, he argues that many other parts don’t make sense, either syntactically or semantically.
Personally, I find criticisms of generative AI on the grounds of quality tangential, since these models are always improving. I agree there are streaks of genius in the short story. I loved the line, “So when she typed ‘Does it get better?’ I said, ‘It becomes part of your skin,’ not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts.” The model seems to display an awareness of its own operations and limitations, offering a striking perspective.
As with any corporate announcement, the choice of this story was likely intentional. I think the prompt centered on metafiction and grief to build anticipation for the model, since those elements let it seemingly demonstrate a capacity for self-reflection, self-consciousness and emotional intelligence. I don’t think the model will greatly disrupt how we think of AI-generated fiction once it is released: this sample was likely picked precisely because it would incite the strongest public response, which suggests the model’s actual capabilities are overstated, and the majority of stories it produces will likely not be as impactful. After all, input data is still the foundation of generative AI. More specific inputs yield more personalized responses, meaning creative writing that is emotionally moving without careful prompting still isn’t very plausible.
Nevertheless, the line separating AI-generated work from human work is becoming blurrier. Its implications for how we appreciate literature and how we view authorship are questions that will soon demand our attention.