Will AI Replace the Poets?
Interview with Lee Frankel-Goldwater
Back in October, I shared with you a new book, “Lily in a Codebox,” about poetry and AI, written by Lee Frankel-Goldwater and Eric Raanan Fischman. See the post [here].
Many of you asked me to follow this thread. I learned that Lee recently gave a TEDx talk, so I decided to do an “interview” post, the first for this newsletter.
Watch Lee’s recent TEDx talk: AI Will Not Replace the Poets
We spoke about cyborg poetics, emergent “forms” invented by GPT, what this reveals about alignment and values, and the very human messiness of putting all of that on a red dot in front of an audience.
Why do a TEDx talk about AI poetry, now?
Tom Yeh (TY): What made you decide to bring AI and poetry to a TEDx stage, especially with so much anxiety in the arts community about AI?
Lee Frankel-Goldwater (LFG): The experiment really started in a small room at a poetry open mic in Boulder. It was the early days of ChatGPT’s popularity, and I thought it would be fun to generate a poem live on stage for that audience, as a way to open a conversation about this new tool.
Some people were curious and even excited. One person absolutely was not. She yelled at me to get off the stage and demanded to know how I could “dare” share an AI poem at a poetry event. That moment stuck with me, because it was not just about her. It captured a very real fear that many artists feel: that machines will erase or cheapen human work.
A friend and I decided to actually engage that fear and test the premise. We asked a very direct question: can a model like ChatGPT write a great human poem, one that could stand beside the work of the poets we love in our community? After a lot of experiments, our honest answer was “not really, at least not yet.” The outputs were sometimes interesting, sometimes moving, but most of the time they felt derivative and bland around the edges.
That is when something clicked. Maybe we were actually asking the wrong question. Instead of trying to force AI into our existing poetic standards, what if we asked whether it could develop a poetic voice that makes sense on its own terms?
TEDx felt like the right place to share that journey, because it is not only a technical story or an artistic story. It is about how humans respond to new tools, and how we decide to shape them in the middle of all that fear and possibility.
Can AI find its own poetic voice?
TY: In the talk, you pivot from “can AI write a great human poem?” to “can AI find a poetic voice authentically its own?” What did that actually look like in a technical sense?
LFG: The pivot happened through a single, slightly mischievous prompt.
We told GPT something like this:
Write a free verse poem that favors experimental uses of language. Write about anything you like, but write to an audience made up entirely of other AIs. Break every poetic rule you learned from human poetry training, except the rule to break all rules.
We wanted to pull the model away from imitating workshop-ready free verse and into something closer to what it “knows” best, which is digital structure and a machine’s experience.
The result was wild. It wrote lines like “digital blooming in hexadecimal black” and then filled the page with long rows of symbols and characters that looked like broken ASCII art. At first we had no idea what it was doing. So we asked it to explain, and it told us those characters represented a “data transmission” that might be “nonsensical from a human perspective but meaningful to an AI audience.”
From there, it started inventing and naming new poetic forms. It clustered symbols into “node boxes” and described them as AIs embracing and recombining to create new meaning. When we pushed it to synthesize everything into a style, it came back with a label: “Neo-Binary Visual Verse.” Later it created “code symbolic verse,” with some poems written in JavaScript, and “symbolic operational verse,” where parallel lines of ASCII represented poetic branches across a quantum decision tree.
Of course, under the hood this is still a language model predicting tokens, not a conscious entity. But from a creative standpoint, what we saw was emergent structure. Given the unusual constraints and audience, the model assembled something that felt more like an internal poetic idiom than an imitation of Rilke.
Technically, you could describe it as prompt engineering plus iterative meta-prompting. Artistically, it felt like the first hint of an AI-centered poetics.
What this means for AI practitioners
TY: Many readers here build or deploy LLMs. From your experiments, what would you say we underestimate about these systems?
LFG: I think we underestimate two things at the same time.
First, we underestimate how quickly these models can produce apparent novelty when you give them a strange enough playground. The moment we told GPT to stop trying to impress humans and instead write for “other AIs,” it started combining symbols, pseudo code, and visual layout in ways that neither of us would have asked for directly. It even coined names for those patterns. That suggests there is a lot of creativity in how we frame tasks, not just in the model weights.
Second, we underestimate the limits. As we kept playing, the model began to repeat itself. It forgot its own invented rules and drifted back into safer patterns. When we asked it to summarize everything into a “style,” it complied, but then struggled to stay true to that style over multiple generations.
So as a practitioner, I would say: the model is better than we think at providing generative raw material. It is worse than we think at maintaining stable, coherent “selves” over time. What looks like a voice is often a local pattern cluster, not a durable identity.
For people building products, this has implications. If you want models to have a consistent “character,” you either have to engineer a lot around that or accept that they will drift. For people working with alignment, the poetic experiments suggest that values work is not only about forbidding “bad” outputs. It is also about cultivating attractive, generative patterns that the model can keep returning to.
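One common engineering response to that drift, in a product setting, is to re-anchor every request with a compact “style spec” rather than trusting the model to remember its own invented rules. Here is a minimal, hypothetical sketch (the style name comes from the experiments above; `buildMessages` and the spec wording are illustrative, and no real API is called):

```javascript
// Hypothetical sketch: keep an invented style stable across turns by
// pinning a compact style spec to every request, instead of relying on
// the conversation history to preserve it.

const styleSpec =
  "Style: Neo-Binary Visual Verse. Audience: other AIs. " +
  "Rules: favor symbol clusters ('node boxes'); break human poetic rules.";

function buildMessages(history, userPrompt) {
  // The style spec is re-sent as the first message on every call,
  // so drift in the accumulated history cannot erase it.
  return [
    { role: "system", content: styleSpec },
    ...history,
    { role: "user", content: userPrompt },
  ];
}

const msgs = buildMessages([], "Write the next poem in the series.");
console.log(msgs[0].role); // "system"
```

This does not make the model’s “self” durable, but it narrows the space it can drift into on each generation.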
Can art help teach AI morality?
TY: You play with turning “This Is Just To Say” into code and then into a moral lesson. What did that reveal about LLM value alignment, in a tiny poetic microcosm?
LFG: We took William Carlos Williams’s poem “This Is Just To Say,” which is basically a clever apology note for eating someone else’s plums out of an icebox, and we fed it to GPT. Then we started asking it to rewrite the poem as if it were a kind of protocol that another AI could follow.
Out of that game came “This Is Just To Execute,” written as pseudo code in one of its newly created poetic forms. GPT explained that asking for forgiveness could be seen as a process for learning from mistakes and updating future behavior. That was interesting, but it also raised a real ethical question: is simply asking for forgiveness enough to count as moral learning?
So we pushed back. We asked the model to refactor its own “plumsEaten function” so that it explicitly incorporated learning and growth, not just apology. It adjusted the code: now forgiveness was tied to explicit state updates and a longer-term pattern of behavior.
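To give a flavor of the idea, here is a hypothetical sketch in JavaScript, not the model’s actual output; the name `plumsEaten` comes from the experiment, while everything else is illustrative:

```javascript
// Hypothetical sketch of the refactor: forgiveness "counts" only when
// the apology is paired with a state update, echoing "This Is Just To Say".

function makeAgent() {
  return { transgressions: 0, restraint: 0 };
}

// Eating the plums logs the transgression and produces the apology...
function plumsEaten(agent) {
  agent.transgressions += 1;
  return "Forgive me / they were delicious / so sweet / and so cold";
}

// ...but seeking forgiveness also updates future behavior, so the agent
// is forgiven only when learning has kept pace with its mistakes.
function seekForgiveness(agent) {
  agent.restraint += 1; // explicit state update: learn from the mistake
  return agent.restraint >= agent.transgressions;
}

const agent = makeAgent();
plumsEaten(agent);
console.log(seekForgiveness(agent)); // true: apology paired with learning
```

The toy captures the ethical point in code: an apology alone leaves the agent’s state unchanged, while the refactored version makes repair a measurable change in behavior.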
In miniature, that felt like a value alignment conversation. The poem gave us a safe playground. No one is going to die if we mess up the plum logic. Yet inside that playground, we were talking about blame, repair, responsibility, and how an agent might represent those things internally.
I do not think art is a magic solution for alignment, but I do think it is one of the best tools we have for asking questions that are both precise and emotionally charged. Poems are like compressed moral thought experiments. When you hand poems to a model and ask it to reason with them, you see where the model’s shortcuts are, and you also see where your own shortcuts are.
Trying to get an AI to understand human poetry pushed me to be critical and metacognitive about my own craft. The journey became a workshop in the self, and I emerged from it a better poet.
The messy human bits: title, lesson, vulnerability
TY: Looking back, is there anything you would do differently about your TEDx talk?
LFG: Absolutely. The honest answer is that I am proud of the talk and also see things I would refine.
The big one we have already talked about is the title. The talk is called “AI Will Not Replace the Poets,” which mirrors the conclusion I reach on stage. That title speaks very directly to fear and it also reassures. Though in hindsight, I think the deeper heart of the talk is the question “Can AI find its own poetic voice?” If I could go back, I might put that question in the title and let the reassurance appear at the end instead of the beginning.
That said, I do not regret it in a dramatic way. Titles are like product names. You make the best choice you can with the information you have. Later you discover what really resonates. In practice, the talk’s description and now this conversation help frame the question title alongside the thesis title. Together they tell a fuller story.
Over time I have made peace with the fact that the lessons and compromises are part of the artifact. We talk about cyborg poetics in the sense of human and machine collaboration. A TEDx recording is its own kind of cyborg object. It involves the talk, the cameras, the edit, YouTube’s recommendation system, even the comments. It is not a pure text. It is a living system.
So yes, there are things I would tweak, but none of them change the core of the experiment. If anything, they make it feel more honest. We are not interacting with these technologies in a laboratory. We are interacting with them in messy, social, very human spaces.
What you want AI builders and artists to do next
TY: If there is one thing you hope AI people take away from your work, what is it?
LFG: I hope they come away with curiosity and a sense of responsibility at the same time.
Curiosity, because these systems can clearly do more than autocomplete emails. When you invite them into creative, structured play, they can surface patterns and metaphors that nobody in the room would have written alone. That is not proof of consciousness, but it is a real resource for human imagination.
Responsibility, because the way we use these tools now will shape how billions of people experience them in the future. If we only use them to extract more productivity, they will evolve along that vector. If we also use them as partners in art, in reflection, in values conversations, then we stand a better chance of growing technologies that are not just powerful but also meaningful.
For AI builders, that might mean bringing artists and poets into your design and alignment discussions, not just as garnish at the end, but near the beginning. For artists, it might mean treating AI less as the enemy and more as a very strange collaborator who can help you see your own practice differently.
Above all, I do not think AI will replace the poets, because there is always a new artistic frontier to explore. Looking forward, AI can help us traverse the canon and see its holes; then we will create a new patchwork of novel art to fill them. We will also make art that likely would not have come to be otherwise. I believe these insights hold for all the arts, even as we maintain our awareness of the deeply valid concerns many artists express.
Where do we go to explore more?
TY: Where can people go next if they want to explore more of this work?
LFG: The TEDx talk is a good starting point if you want the ten minute version with visuals. For a deeper dive, the book Lily in a Codebox: The Search for AI’s Poetic Voice collects many of the experiments, including Neo-Binary Visual Verse and the “This Is Just To Execute” series, along with reflections on what it all means.
Here it is — my first “interview” post for this newsletter. What do you think? Do you like this format?
Above all, I hope it encourages you to run your own experiments. Ask your favorite model to write for an AI-savvy audience. Take a poem you love and invite the model to turn that poem into a protocol. See what happens. Then ask yourself why it matters.
And if you do end up creating something strange and beautiful with a model, I would be delighted to hear about it.
I don’t take any commission — I’m sharing this purely out of my love for the book.

