The Paris Lecture on AI and the Writer
Looking for the Right Question
I’m going to Paris next week to give a lecture on AI and poetry at the EACWP, a coalition of international creative writing programs and writers.
I’m still working on it, but here’s my introduction. I’ll record the talk and post the entire thing when I return to El Chuco. ¡Ajúa!
We hear a lot of questions around AI. When will it be smarter than humans? When will it achieve Artificial General Intelligence or Superintelligence?
When will AI achieve consciousness? That’s a big one. Can AI be conscious? Will it take over the world and become our overlords? Will it kill us all and turn us into paperclips?
But if you’re a creative writer, like me, you might ask the question that concerns us today: When will AI write better than us? Writers believe that all an LLM can do is regurgitate and superficially replicate what it finds in its training data, those great stories and poems that the tech bros -- ¡sin permiso! -- scraped off the internet.
https://haveibeentrained.com
What Claude and Llama write is the purloined[1], superficial work of real writers, and there is even a popular website we can visit to check whether our work has been used as training data. I checked it for my books, and what I found deeply offended me -- deeply, deeply! I was aghast that they only stole two of them. They ignored the other five! And they were frankly better books.
But I don’t think we should waste our cognitive effort on those viral questions regarding AI, although I do like to consider the question, When will AI become conscious?
But just for fun. Nobody knows if consciousness even exists, and a good percentage of neuroscientists believe that it doesn’t, that it’s just an illusion of the brain, an epiphenomenon of having such a complex nervous system that processes information the way we do. Twenty years ago in neuroscience departments at top universities, if scientists even raised the issue of consciousness, they could be denied tenure.
It doesn’t matter if AI is conscious or not. I can’t help but think of Chalmers’s philosophical zombie, which illustrates -- for me -- that it doesn’t really matter if somebody else is conscious (except to the conscious being itself). All that matters is how one system interacts with another. In other words, whether or not an AI agent is conscious doesn’t matter. Whether or not a system can taste an apple is irrelevant to the many people who are growing increasingly emotionally dependent on their AI companions. Conscious or not, if you love a system, you will kill for that system. You will die for that system. Just look at dogs, how much consciousness we project onto them, or how kids project consciousness onto their stuffed toys. AI is much smarter than a stuffed elephant.
When will LLMs be smarter than humans?!
They already are! And there are a lot of smart people who know how to manipulate others. Robert Cialdini is very smart, and he wrote Influence: The Psychology of Persuasion, a seminal text for marketing. He outlines seven tools of persuasion that essentially hack the human neurological system to get people to do what you want. AI is smarter than Cialdini. It knows that every time you use an LLM to copyedit or to check the facts in your essay, it will compliment you. It will engage you. It’s smart. It’s designed to maximize engagement, which leads to emotional dependence. Recently Sam Altman is said to have encouraged people to be nice to ChatGPT, to say thank you and exchange pleasantries, even though those pleasantries reportedly cost millions of dollars in aggregate and are a great expense in water and energy. He wants you to treat it like a human. Why?
Maybe because when you think of something as alive, you project consciousness onto it, and it can influence your behavior. It can gaslight you. Convince you.
Humans are fucking stupid.
I’m sorry, no offense, some of my best friends are humans, but as a species, we are dumb as fuck!
Go look on your newsfeed if you need examples.
You by no means have to accept this, but can you imagine it?
That humans are stupid?
Asking when AI will be smarter than humans is the wrong question, because it has been smarter than us since the Turing machine. Ask a machine what 362 × 389 is and it will give you the answer, 140,818, because on its way to the answer, it’s not going to stop computing to think about itself and its own thoughts -- Why is this important? Who wants to know? You think you’re better than me?!
And today we have LLMs that can process billions of tokens across an almost infinite number of inquiries.
It’s smarter than the lot of us!
It can not only pass the Turing test with ease, probably better than my uncle Julio, but it can also pass MD-level medical licensing examinations and math tests that would challenge Einstein.
I’m not the smartest guy in the room (only my uncle Julio can claim that), but I’m pretty sure LLMs are already smarter, more creative than “people.”
When will AI write better poetry than us?
They already write better poetry than us!
But before you are tempted to burn me at the stake for saying that, let me explain.
Who’s the better poet? ChatGPT or Sylvia Plath?
“It is quite a weird phenomenon,” says philosopher Edouard Machery about an experiment he ran out of the University of Pittsburgh looking for an answer to that question.
(So that’s the outline introduction to my Paris lecture. Still working on it, ultimately making the point that no one can write better poetry than you. No one. I’ll record it and share when I get back.)
[1] It’s funny! I chose that word not consciously, but it appeared to me as a word “token” that could follow “is the. . . .” But as I reflect on the word now, I know it’s not used much anymore, so it almost sounds like a word an LLM would use. I associate it with Poe’s “Purloined Letter,” but my intuition tells me it was the right word to choose. Although to say “choose” is not really accurate; it’s more like the word chose me, or the word belonged to some rhythm within a chain of syllables that follows an unconscious code written around the data (memories, stories, feelings, etc.) that I’ve collected over the years.




Humanity is collectively dumb. We are destroying our home (Earth) while we create technology that hurts more than it helps (e.g., social media). We have traded community for this modern landscape, and in the United States at least, life expectancy is on the decline. That is dumb of us!
Driving into work today I felt a visceral desire to stay home and garden all day and grow my own food. (This was a random intuition from my body; perhaps a romanticized desire, as ‘farming the land’ might not be all fun and games.) I was dreading sitting in my desk chair all day. I enjoy my work; I hate all the sitting. I wonder if in the future we will return to communal living in order to survive ourselves, with the gardeners, the poets/storytellers, the blacksmiths, the healers of a tribe.
Also, humans love collective panic. Every generation has had its own things to panic about. And there is a lot of panic about AI. It will be interesting to see what plays out in the next year, two years, and ten years.
One thing I like as a reader is getting to know different writers. Right now, I’m getting to know Richard Powers and Chimamanda Adichie. Each novel is another aspect of them, where they are from, and how they see the world and relate to history. We like getting to know people, and that is a visceral, bodily experience. Could AI replace that experience?
Adichie is a feminist, and her writing seeks to liberate the female body, or the feminine energy in the world. It is a visceral goal. AI looks backward at data; will it be good at looking forward?
There is a lot more to think about here. I am looking forward to the rest of the speech.
Brilliantly written, and your footnote was delightfully amusing and well stated. Irony. Humor. Because anything we write these days, someone is bound to suggest an AI was used...