Will humans flourish with the technology or sleep and play video games?
The tech world is aflame with discussion of AI agents and agentic AI (which I think are basically the same thing, but this article begs to differ). If you have somehow managed to escape this onslaught of agent-focused verbiage, some of which is justified and some is clearly hype, here’s a quick definition. AI agents are AI programs that can not only inform us or make predictions, but can actually perform a digital task. My favorite example is a vacation agent—or collection of them—that can not only research and plan your next vacation, but also book and pay for your flights and hotels, make your restaurant reservations, and drive your rental car. Well, that last part might be a bit exaggerated at this point, but you get the idea.
I have contributed to the flurry of agentic content by co-authoring (with 8 co-authors!) a book called Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work, and Life. Pascal Bornet took the lead on this book and in getting it out quickly. As I recall, many of my contributions involved the need to maintain a human in the loop when that loop is automated by AI agents. I still believe that agents are TNBT (the next big thing, if you must know) in AI, and that they offer the potential for improved efficiency in business processes with less human intervention.
However, an experience I had this week gave me pause about my role in promoting AI agents. I was at an AI advisory board meeting for a big financial services company, and several of that company’s executives were there as well. The company is considering making a big bet on AI agents, and two vendors’ CEOs came in to present their agentic AI wares. The first CEO presentation was OK with me, perhaps involving too much vendor hype but not hitting my moral funny bone. There was a fair amount of mention of humans remaining involved in the processes the vendor’s agents would automate.
The second vendor CEO, however, did strike my ulnar nerve with his utter disinterest in human capabilities. He did mention that “hybrid teams” with humans and AI collaborating are the current norm, but viewed these as only a steppingstone in the relentless march toward the “autonomous company.” He made a brief mention of the need to occasionally have some human intervention, but the overall theme was that agents would eventually automate every business process and that one-person unicorns would become the norm—the one human perhaps needed only to sign the purchase orders for the vendor’s software.
During the CEO’s presentation, two things struck me with at least some degree of profundity. One is that the vendor planned to show a video demonstrating the agentic software’s capabilities, but it simply wouldn’t play. There was some glitch, probably related to the host company’s Wi-Fi settings. So he had to improvise and describe his video in words. It struck me that if an AI agent were giving the presentation, this might have required substantial human intervention.
But a more profound event was when one of the company’s managers raised his hand and asked, “Is anyone thinking about the economic and social consequences for human beings if these types of agents really take off?” The vendor CEO mumbled something like, “Yes, but not enough—maybe it will lead to guaranteed basic incomes.” Some of the other advisory board members—perhaps, like me, they wished they had asked the question—began to discuss the issue. One said that it would lead to a flourishing of human creativity when we no longer had to do boring work. I commented that most of the recipients of guaranteed basic income in the various experiments with it around the world ended up sleeping more and watching more TV. Or perhaps, as in the excellent book (and OK movie) Ready Player One, out-of-work but still compensated people will play video games all day. None of these outcomes is a ringing endorsement of the concept. And I didn’t say this at the meeting, but the chances are a) slim and b) none that the Washington, DC enthusiasts of the “Big Beautiful Bill” would provide a guaranteed basic income for those who lose their jobs to AI. But the conversation on this issue didn’t go very far—most of the company’s attendees wanted to get back to discussing the productivity benefits of agents.
No doubt that companies need to get productivity from AI (it’s been sorely lacking thus far, or at least not apparent at the macroeconomic level), and that if you don’t use agents or related AI tools your competitors probably will. But that’s a key reason why you need to have humans in the agentic loop. If you’re only using AI to improve productivity and efficiency, you and your competitors will all become more efficient, but your customers won’t have jobs and won’t be able to buy your products and services anymore.
If you can somehow inject smart, creative humans into your agentic loops, you can still derive some productivity gains, but you can also have more accurate, flexible, and innovative outcomes. Agentic systems are likely to be smarter and more flexible than previous workflow-focused systems like robotic process automation, but they will still make mistakes and won’t be able to anticipate every eventuality (like the video not working in the presentation). Many businesspeople don’t trust agents (according to this survey) and it’s largely because they lack many human capabilities.
If we succeed in teaching humans to understand agentic AI-based business processes, think critically about them, intervene when necessary, and fix them, we’ll be a bit less efficient but much more successful in the long run. I am not sure how to turn humans into effective collaborators with AI agents—see last week’s post on the education issue relative to AI, or the previous week’s on whether humans can step up to what’s needed to work with AI—but it seems that it should be a key focus of companies, vendors, consultants, and every human who is excited about AI agents or simply worried about their future employment.
Written by: Tom Davenport, originally published on Substack on June 12, 2025