One might argue that the world doesn’t need more words from Tom Davenport. I’ve written or co-authored 26 or 27 books, 240 or so Harvard Business Review articles, 71 MIT Sloan Management Review articles, and a bunch of pieces in Forbes, The Wall Street Journal, and elsewhere. All water under the bridge, and you can find them fairly easily if you want to.
But I find that I often need to discuss the most important topic on which I can shed some light: what will happen to humans in the Age of AI. None of the other outlets where I typically publish is well-suited to deep, or even shallow, thoughts on the subject.
I have studied this issue for more than a decade, and co-authored two books squarely focused on it—Only Humans Need Apply and Working with AI, if you must know. But I am still quite uncertain about how the AI vs. humans story will end, or even what some of the middle chapters will look like. Or even whether we are at the beginning, middle, or end of the narrative. Or whether my outlook is optimistic, pessimistic, or just paranoid. Or, most importantly, whether the most likely outcome is large-scale automation or larger-scale augmentation.
I was much more optimistic when Julia Kirby and I wrote Only Humans Need Apply, because it was fundamentally about augmentation—how smart humans and smart machines can find ways to work with each other. I was still optimistic when Steve Miller and I wrote Working with AI, because we were able to come up with 30 chapters (edited down to 29 because our editor didn’t like the one on working with AI as a cancer patient), each about an example of humans already collaborating effectively with AI. We didn’t even have to work that hard to find them.
But now we have rapidly advancing generative AI, and it has made me quite paranoid about the future of human employment. In fact, I believe that if you are not paranoid about the future of human jobs, you’re simply not paying attention. It’s still true, of course, that artificial general intelligence (AGI) isn’t here yet. It’s still true that virtually every human performs multiple tasks in their job, and that AI can do only some of them.
However, we’re getting closer to AGI all the time, and AI can handle an increasing number of tasks. I’m particularly worried about entry-level workers, because AI can now carry out a pretty high proportion of the tasks they are typically assigned. I’ve been worried (at a lower level) about this since 2013, when Jeanne Harris and I wrote an article called “Automated Decision Systems Come of Age.” In that piece we commented that “the reality is that there is little need for low-skilled or entry-level employees once automated programs are in place.” We also wrote, “it is also by no means clear where companies will be able to find tomorrow’s experts. As the ranks of employees in lower-level jobs get thinner, companies may find it increasingly difficult to find people with the right kinds of skill and experience to create and maintain the next wave of automated decision systems.”
This week an article appeared in the New York Times that confirmed my fears, or at least gave them added weight. Entitled “I’m a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking,” it provided data suggesting that the entry-level labor force is already having a hard time, and that things are likely to get harder. Both AI and economic uncertainty seem to be the driving forces, and it’s not yet clear which is more influential. The article suggested that AI-based code generation is the canary in the coal mine for entry-level workers, and that it is the reason hiring of software engineers has slowed considerably.
I had also seen this problem coming while doing research for Working with AI. Several of the AI collaborators we interviewed commented that they weren’t sure entry-level workers would be needed much in the future. None had any answer to the question of how you create experienced workers if you are not hiring inexperienced ones. We wrote about this issue near the end of the book, and my co-author Steve Miller did a good job of portraying the more positive findings on it. I wasn’t entirely convinced by them, but I thought the book would benefit from some optimism on the question.
Then just a couple of days ago I was at the MIT CIO Symposium, where I moderated a panel. Afterward I spoke with one of the attendees, a CIO from a large financial services company. She said that they already weren’t hiring as many people for entry-level roles, particularly in software development. The same trend was taking place in other parts of the company, and to such a degree that executives were discussing a change to the basic organizational structure. Instead of aiming for a pyramid (lots of lower-level employees, fewer middle managers, a relatively small number of senior executives), they’re seeing more of a diamond: far fewer entry-level workers, more middle-level experts, and the same number of senior executives.
This diamond model is discouraging enough for entry-level workers. But I asked her my usual question: how will the middle of the diamond be created in the future if the company isn’t hiring many people for the bottom of it? Like everyone else I’ve asked, she didn’t know.
As it happens, the next day I joined several other AI-focused, mid- or late-career individuals on a high school panel, where we discussed what the technology will mean for students by the time they graduate from college. Below are a few things we suggested that might help:
Develop a “digital mindset.” You don’t have to be a programmer (in fact, anyone can now create at least the first draft of a program just by telling a large language model what they want), but you do need to know how AI systems work, what they’re good at, and what they don’t do very well.
Become a subject matter expert in something, anything. The most valuable employees of the future (maybe even of the present) are those who understand AI but also have deep knowledge of supply chain management, marketing, finance, or even English composition.
Never stop learning. AI is changing very rapidly, and you have to keep up with it.
Use AI a lot, but in the right way. Apply the technology to multiple aspects of your life and work to see whether it makes you more productive and effective. Don’t let it rot your brain, but rather try multiple prompts, edit the output, check the citations, etc.
Exercise your critical thinking capabilities. Analytical AI made a prediction for you? Look at the data and the variables in the model and check whether they make sense. AI wrote a paragraph for you? Review it to see whether you could do better.
Who knows whether the entry-level workers who take these approaches will be among the relatively few who come in at the bottom of the diamond. But at least they will have a better chance of getting a job and building a career than most people. And what will all the other people do, those who are not inclined to become AI-enabled, heat-seeking missiles? I’m not saying I know the answer to that question, but I hope to reflect on it in a later Substack post.
Written by: Tom Davenport, originally published on Substack on May 22, 2025.