There’s no attention free lunch
Almost 25 years ago, I published a book (co-authored with John Beck) called The Attention Economy (see cool cover above). The book—the first on the topic, I’m pretty sure—argued that there was too much information and not enough human attention to digest it all. Although it was one of my favorite books to write, it didn’t sell very well. Ironically, it fell prey to the problem we described. The book’s publication date was Sept. 10, 2001. For some inexplicable reason, the world’s attention wasn’t on our book at that time.
I still remember doing drive-time radio interviews the morning of 9/11. Between interviews I watched in horror as the World Trade Center towers, the Pentagon, and of course their occupants met their fates. I kept telling the radio station producers, “I’m pretty sure nobody is paying attention to this interview,” but they insisted on continuing.
Sometimes I feel that the purveyors of generative AI have the same attitude as those radio producers. They are providing us with a technology that furnishes more and more content of mediocre quality, while we humans can barely attend to the content that other humans already produce. I just received my weekly print copy of The New Yorker, for example, which has great content but usually sits unread on my coffee table until I throw it away (I do sometimes read the online versions of stories). Generative AI can’t, I don’t think, produce stories of the same quality as the average New Yorker issue, so why would we trust it to create large volumes of content for us?
I find it particularly troublesome that gen AI can now produce podcasts using tools like Google’s NotebookLM. It is certainly a curiosity and an impressive achievement, but really? There are reportedly 4.52 million podcasts in existence already, presumably created by humans. Isn’t that enough? I did demonstrate NotebookLM to my students by taking a case study I’d written and turning it into a podcast, but it didn’t increase participation in class discussion.
One of the other features of gen AI is the ability to accurately summarize content. This might appear to preserve scarce attention. However, one wonders whether reading summaries of books, articles, and conversations will have the same impact on our brains as reading the entire document, or at least skimming it to decide where to devote full attention. Is it likely that at some point content created by LLMs will be largely consumed, or at least summarized, by other LLMs? Maybe we’ll never read, see, or listen to anything that hasn’t been used to train gen AI or circulated through it. I certainly hope not.
You may have seen the recent study led by Nataliya Kosmyna of the MIT Media Lab, provocatively titled “Your Brain on ChatGPT.” This book-length paper argues that there is a cost to using gen AI to create content. Specifically, in an experiment on 54 people, the brains of those who created SAT-style essays with ChatGPT showed little engagement with the process, according to EEG monitoring of brain activity. Those who wrote essays the old-fashioned way, i.e., using their own brains, showed much higher engagement. The LLM users also “struggled to accurately quote their own work” created only minutes earlier, whereas the brain-only users did well in quoting what they had written. The only good news for gen AI is that those who wrote essays on their own and then switched to LLMs did better brain-wise than those using LLMs alone; once the brain is activated, it seems to keep going. I’m not sure whether this is good news or not, but the LLM users wrote longer essays than the brain-only essayists did. I suppose that’s fine as long as no one has to read them!
In short, there is no free lunch with gen AI. If we use it to create our content, we won’t engage our attention with the creation process, and we won’t remember what it’s created. And with the ease of creating vast amounts of content—and the frequent absence of the human touch in the output—the reader probably won’t pay as much attention to it either.
I don’t use gen AI to create my content (other than occasional brainstorming that sometimes yields a good idea, which I then write up), but I have experimented with it for other purposes. At one point, for example, I used it to transcribe and summarize my interviews with research subjects. It seemed to do a pretty good job, but there was no free lunch here either. When I later wanted to write up the results, I found that I had very little recall of what the interviewees had told me compared to when I typed up the interview as we were talking. So I abandoned the use of Otter, Zoom AI, and the like. Research suggests that I would probably have even better recall if I hand-wrote my notes, but I would have to be able to read my chicken-scratch handwriting, which sometimes isn’t possible.
What Beck and I wrote in 2001 is still true. An important skill of the attention economy is knowing where to spend your scarce attention. No attention, no value—and no shortcuts. If we don’t devote attention to our writing, we won’t remember it—and if the readers of our writing detect that a human didn’t create it, they may not attend to it. If we don’t read content with our full attention, we might as well not bother reading it. If we try to multi-task, we won’t do any of the tasks well.
Generative AI is not going away, and no doubt it has value in certain contexts. But we should focus on quality, not quantity, in the creation of content with it. And if we don’t want it to rot our brains, we should find ways to engage them in the process of gen AI-driven content creation.