We’ve been promised that GPT-5 was coming for a while now. Sam Altman, OpenAI’s CEO, had predicted that AGI (artificial general intelligence) would come in 2025, and we’re now 2/3 through that year, so we might have expected that GPT-5 would accomplish the elusive AGI (which has no clear definition, of course). But even according to Altman, it falls short of AGI, though he thinks it’s a step in that direction. He described it as like having “a team of Ph.D.-level experts in your pocket.”
I wanted to try it out, but I struggled to think of what I might do with a team of Ph.D.s in my pocket other than needing bigger pants. I eventually came up with an idea: what if I had it rewrite my aborted novel? This potential Great American Novel didn’t turn out to be that. I conceived of it and wrote several drafts in 2019 and 2020. What else was there to do during a pandemic shutdown?
The topic (which I hoped would soon become a major motion picture) was robot American football as practiced at MIT and its nerdy intercollegiate rivals. In the novel, an epidemic of CTE has convinced the world that football is no longer safe for humans. The MIT team members push their robots toward greater autonomy until a robotic apocalypse erupts in the final game of the season. I thought it was a great idea, but reviews—even from my friends—were not complimentary. “Good idea, but it needs a lot of work” was the most common comment. If generative AI at its best could turn it into a great novel, that would put us well on the way to at least artificial literary intelligence.
So I had GPT-5 review and try to improve it. It made some mistakes—for example, saying it was going to interweave its ideas for new topics throughout the text but then putting them all at the end, and repeating some of its plot ideas multiple times—but I can’t deny that it was helpful. I hadn’t tried this earlier with the various permutations of GPT-4, but I’m guessing GPT-5 was a little better than they would have been. At one point I had commissioned a human editor to take a pass through the manuscript, and I think GPT-5 may have been more helpful. That doesn’t mean it would be better than all editors, of course.
The other thing I must say about GPT-5 is that it is unfailingly willing to assist. It kept volunteering to do more and more, and I kept saying yes. I think I wanted it to say, “Why don’t I just rewrite the damn thing,” but it didn’t go that far. But it did provide extensive human relationship ideas (ironically), some ways to flesh out the villainous opposition to robot football, and helpful suggestions on avoiding too much sex—apparently the readers of tech fantasy fiction don’t like that, it confidently told me (could that be a hallucination?). It also told me that my technical speculations on what robots would be able to do in five years were a bit aggressive but within the realm of plausible tech fiction.
However, it did not feel like a team of Ph.D.s in my pocket, or at least not a team of Ph.D.s with expertise in modern fiction. I’d compare it to a smart B.A. graduate who majored in contemporary literature and was also a robotics hobbyist (there are so many of them!). Most of its suggestions arose out of the prompts I gave it. I had to tell it that I needed an image, and what it should look like (see above—pretty good, though I’m not sure why only one player has a number). I’m confident that a Ph.D. with the right specialty could do a good job of rewriting the book (I think that’s called “ghostwriting”), but it was clear to me that I still needed to be the literary integrator or uber-editor of all of GPT-5’s suggestions.
In other words, this feels more like a good upgrade to GPT-4 than a major breakthrough in generative AI capabilities. That seems to be the consensus of the other reviews that I’ve read. I’m somewhat grateful for this outcome, since I was concerned after Altman’s predictions that human writers, thinkers, and artists would pretty much be put out of business by GPT-5. But it seems that most of us are still fairly safe.
Mostly I hope that it will quiet the chorus about which genAI model is best, when we’re going to reach AGI, and when humans will become irrelevant. I’m not as negative about generative AI overall, and GPT-5 in particular, as Gary Marcus, but I do agree that we are going to need more than transformer-based models if machines are to beat humans on every dimension. Last week I read a review of Amazon’s Alexa Plus, and it suggests that while its conversational skills are better than those of the previous version of Alexa, generative AI’s ability to seamlessly carry out “skills” is worse than when deterministic AI was doing it. I’ve seen similar kinds of results from the early efforts at agentic AI. So I’m a fan of the idea that we will need probabilistic, deterministic, robotic, and many other forms of AI programming before we really need to start worrying about AI taking all our jobs.
In the meantime, if you have any feelings about whether a novel on robot American football at MIT would be an interesting read, I’d love to hear them!