And do we want these jobs?
Everybody in the AI world is excited about agents, which of course can perform digital tasks of various types rather than just inform curious humans. However, astute observers are a bit less excited about agents now than they were a few weeks ago. Cracks in the agent armor have begun to appear:
Gartner predicts (always with rounded percentages!) that 40% of agentic AI projects will be canceled by 2027. The firm’s analysts blame escalating costs, unclear business value, or inadequate risk controls. It’s surprising that a technology still in its infancy is already entering Gartner’s “trough of disillusionment.”
Anthropic used its LLM Claude to run a small automated shop in its headquarters building. As an agent, it could decide on prices, what to stock, when to restock, and how to communicate with customers. The experiment did not go well; to quote an Anthropic blog post, “If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius [the name Anthropic gave its Claude agent for this job]…it made too many mistakes to run the shop successfully.”
Carnegie Mellon researchers tried out ten different agents to run a software business. The agents made lots of mistakes; the most effective agent (from Anthropic, ironically) completed only 24% of its assigned tasks successfully.
The vendors all say that these agents will get better over time, and they no doubt will. And of course human workers make mistakes too. But these predictions and results suggest a specific set of roles for humans in a world full of AI agents. Are they roles that we want to play? I can’t speak for all humans, but I’m not sure I would want these jobs. Here are some I envision:
Human agents—The inability of AI agents to perform all roles suggests that there will be “human agent” jobs covering the steps in a process that an AI agent can’t (yet) perform well. Maybe they will involve interfacing with other humans, or performing physical tasks that robots aren’t yet good enough to undertake. This description of a new Amazon warehouse, for example, suggests that humans perform tasks like “unloading trucks and packing orders of unusual sizes or shapes.” To emphasize that these human workers are a form of agent, the article points out that “Many of the roles at the Shreveport facility involve managing and collaborating with robotic systems.”
Agent auditors—If AI agents make a lot of mistakes, humans will be needed to review and audit their performance. Those humans will probably also have to reverse or fix the mistakes that agents have made (I think agent task reversal is a critical capability!); a rough sketch of what that might look like appears below. This auditing doesn’t strike me as a particularly fun job; it would seem to involve little creativity or “agency.” You’re responding to a machine all day long, and if it’s only doing a quarter of its assigned tasks well, you’re going to be very busy.
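Since I’ve called agent task reversal a critical capability, here is a minimal sketch of what it could mean in practice: every agent action gets logged alongside a compensating “undo” action that a human auditor can trigger. All of the names here (ActionRecord, AuditLog, set_price) are hypothetical illustrations I’ve invented, not any real agent framework’s API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActionRecord:
    description: str          # what the agent did, e.g. "set item-42 price to $99.00"
    undo: Callable[[], None]  # compensating action a human auditor can invoke

@dataclass
class AuditLog:
    records: list[ActionRecord] = field(default_factory=list)

    def record(self, description: str, undo: Callable[[], None]) -> None:
        self.records.append(ActionRecord(description, undo))

    def reverse_last(self) -> None:
        # Undo the most recent agent action, last-in-first-out, the way
        # a human auditor might roll back a mistaken step.
        record = self.records.pop()
        print(f"Reversing: {record.description}")
        record.undo()

# Usage: an agent overprices an item; the human auditor rolls it back.
prices = {"item-42": 1.00}

def set_price(item: str, new_price: float, log: AuditLog) -> None:
    old_price = prices[item]
    prices[item] = new_price
    log.record(
        f"set {item} price to ${new_price:.2f}",
        undo=lambda: prices.__setitem__(item, old_price),
    )

log = AuditLog()
set_price("item-42", 99.00, log)  # the agent's mistake
log.reverse_last()                # the auditor's correction
print(prices)                     # {'item-42': 1.0}
```

The design point is that reversal has to be planned for up front: if the agent doesn’t record a compensating action when it acts, the human auditor has nothing to trigger later.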
Incremental agent improvers—A well-designed work environment involving AI agents would probably require that humans not only monitor and correct the agents’ mistakes, but also suggest to the agents (or their human creators) how to improve their performance. To do this well, the human worker would need to be technically astute and know something about how the agents work. This could be a more fulfilling role than the agent auditor role, but my experience in talking with programmers is that they’d rather write new code than fix someone else’s bugs. Perhaps this job and the auditor one will eventually be replaced by “quality control agents” that monitor performance and recommend fixes for errors.
Agent architects—One of the better roles in an agent-centered world is designing what agents do within an overall system or process. I don’t think it will be easy, given that a broad process may involve many different agents, each doing a different task. And given current agent accuracy levels, it will have to involve both human and AI actors (the sketch below illustrates one simple version of that mix). At some point we’ll probably have trusted third parties to rate and authenticate agent performance, which should make the architect’s job easier; such a service would be something like what Apple and Google do for smartphone apps. There’s nothing like that at the moment, however.
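To make the architect’s problem concrete, here is a minimal sketch, under assumed and much-simplified conditions, of a process that mixes AI and human actors: each step runs through an agent, and any step whose confidence falls below a tunable threshold is routed to a human instead. The agent, the confidence scores, and the threshold are all placeholders invented for illustration, not a real framework.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    output: str
    confidence: float  # 0.0-1.0, the agent's own estimate of success

def mock_agent(task: str) -> StepResult:
    # A crude, deterministic stand-in for a real agent call;
    # a real system would get confidence from the agent itself.
    confident = len(task) % 2 == 0
    return StepResult(f"agent output for {task!r}",
                      confidence=0.9 if confident else 0.4)

def human_fallback(task: str) -> StepResult:
    # In practice this would open a ticket for a "human agent."
    return StepResult(f"human-completed {task!r}", confidence=1.0)

CONFIDENCE_THRESHOLD = 0.8  # an assumed, tunable design parameter

def run_process(tasks: list[str]) -> list[StepResult]:
    results = []
    for task in tasks:
        result = mock_agent(task)
        if result.confidence < CONFIDENCE_THRESHOLD:
            result = human_fallback(task)  # route this step to a person
        results.append(result)
    return results

for r in run_process(["restock shelves", "set prices", "answer a customer"]):
    print(r.output, f"(confidence {r.confidence:.1f})")
```

Even in this toy version, the architect’s real decisions are visible: where the threshold sits, which steps get a human fallback at all, and who handles the work the agents can’t.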
Agentic decision-makers—The most important role in an agent-centered world is deciding whether or not to use agents for a particular purpose. Again, this could be a tough job. Agents are likely to put some humans out of work, and being a hatchet person is never fun. They are also likely to substitute for existing software applications; Satya Nadella, CEO of Microsoft, is one of the software leaders who have argued that AI agents will replace conventional SaaS software. Deciding when and how to make that switch within a particular organization isn’t going to be easy.
Agentic psychologists—A somewhat tongue-in-cheek role I might suggest is the agentic psychologist, who would counsel unhappy humans about their daily work with AI agents. AI can usually work faster than we humans can, and it doesn’t need to take breaks. The digital tasks that humans still perform are also likely to be measured. The speed and monitoring involved in AI/human collaboration will probably put stress on the humans assigned to partner with agents. One social media-based survey of Amazon warehouse workers suggests that they “are suffering physical injuries and mental stress on the job as a result of the company’s extreme focus on speed and pervasive surveillance.” Physical injuries aren’t very likely from collaborating with AI agents, but stress does seem a probable side effect. Perhaps the humans who work with AI agents will have easy access to a counseling chatbot to relieve their stress.
I would stress that these are early days for agentic AI, and we don’t know how things will eventually turn out, either in the capabilities of agents or in the ways humans will need to adapt. But perhaps we should think a bit about the implications for humans before we charge off too fast in an agent-focused direction.