It’s not at all obvious, even to the AI-informed
I have decided that my website posts should generally address questions for which I am not confident I know the answer. This one certainly qualifies. It deals with the issue of what and how schools—at every level—should teach with and about AI. That is a big subject, perhaps worthy of a book. And to give my friend Jeff McMillan, head of AI at Morgan Stanley, a plug, he is writing a book about it. Perhaps this post will earn me a mention in it. I don’t know enough to write an entire book.
I’ll describe some of my own experiences as a professor teaching with and about AI, and then I will cover other observers’ perspectives. But before either of those, I want to quote (without permission) one of my readers, Michael Weir, in a response to my recent post on AI and its perils for entry-level jobs:
A personal experience with my two college-age children has had me thinking about this topic for a while now as well. I have been asking both of them how often AI is brought up in their classes as something for which they must prepare themselves, and their answer is virtually never. Even recently, the topic of AI discussed in their classes is likely more about cheating.
One child just graduated with a degree in supply chain and analytics. He received two offers upon graduation. The other child is still a college sophomore. For the past several years, I have been asking both of them questions about how their universities are approaching the topic of AI. Sure it gets mentioned now and again, my children acknowledged, but the universities don’t appear to have come up with a solution for increasing the awareness of AI to the point where the students are either seeking a solution to the knowledge gap at their university, in their class or major selection, or preparing themselves on their own time. In fact, my children have been a bit skeptical about my AI warnings thinking I have been reading dystopian fiction novels secretly in the basement.
I did my best to help my sophomore select his business major which is now Business Analytics and Information Systems (BAIS) which I felt was an upgrade to some other business majors in regard to this topic. Focusing back on my older child who just graduated – I told him to pay attention to any mention of AI at his new job as supply chain roles can be impacted by AI to a high degree. I told him to get involved in those initiatives, be the leader that bridges the gap to his supply chain knowledge on projects. Check the AI output to verify the quality and find potential pitfalls. In other words, be the person who knows how to leverage AI to be a force multiplier in his role to improve his professional profile.
I am impressed by Weir’s attempts to prepare his children for an AI-suffused job market, and they seem to be heeding his advice to some degree (mine generally did not), although he notes that they are skeptical of it. But his message points out several problems:
1. Schools aren’t doing much with AI yet, other than trying unsuccessfully to ban it;
2. There’s no certainty about what strategies will work;
3. There is no obvious course of study that is well-suited to an AI future.
Here’s my experience as a professor. I teach undergrads and master’s students about AI, generative and otherwise. I figure it’s crazy not to have them use AI to do some of their work in my class, which they would probably do whether I allowed it or not. And using the tools, in my view, will make them more productive and effective. So I require its use, but with some conditions:
1. They should try out several different prompts to see what is elicited by each one;
2. They should edit the output, making it more interesting and supplementing it with their own thoughts;
3. They should ask the LLM to provide some citations and check to ensure that they actually exist.
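On the third condition, the existence check is the step students skip most often, but it is also the easiest to partially automate. Here is a minimal sketch, assuming the citations come with DOIs and using the public Crossref REST API; the citations and DOIs below are placeholders I made up for illustration, and anything without a DOI still needs a manual library or Google Scholar search.

```python
# Sketch: flag LLM-supplied citations whose DOIs don't resolve in Crossref.
# The citations and DOIs below are made-up placeholders, not real references.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical citations pulled from an LLM-drafted paper.
citations = {
    "Smith & Jones (2021), Journal of Supply Chain Analytics": "10.0000/placeholder-1",
    "Lee (2023), Management Review": "10.0000/placeholder-2",
}

for label, doi in citations.items():
    verdict = "found in Crossref" if doi_exists(doi) else "NOT found -- check by hand"
    print(f"{label}: {verdict}")
```

A passing check only means the DOI points to a real record; students still have to confirm that the paper actually supports what the LLM claims it does.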
I think my approach is reasonable, but it hasn’t really worked. Students really hate doing all these extra tasks; one complained, “It was easier when we just made a few edits to a Wikipedia entry.” Most of them simply don’t do the extra tasks, and I lower their grades accordingly. (I tried having ChatGPT grade their papers; it does give the students a lot of feedback, but the grades it assigns are pretty much all the same, so I still do the grading myself.)
I don’t think my students are different from most people in the business workforce, for better or worse. One survey suggests that only 20% of gen AI output is reviewed or edited at all in businesses. The number of gen AI screwups by professional journalists, lawyers, and HHS policy analysts suggests that many of them are not reviewing much either. But how can students develop their knowledge and critical thinking if they aren’t reviewing AI outputs?
Perhaps you have to find out for yourself that AI output isn’t trustworthy. A religion professor at Elon University, for example, wrote in Scientific American about asking his students to “grade” the output of ChatGPT:
I created an AI-powered class assignment. Each student was required to generate their own essay from ChatGPT and “grade” it according to my instructions. Students were asked to leave comments on the document, as though they were a professor assessing a student’s work. Then they answered questions I provided: Did ChatGPT confabulate any sources? If so, how did you find out? Did it use any sources correctly? Did it get any real sources wrong? Was its argument persuasive or shallow?
You may not be surprised to learn that all 63 of his students discovered hallucinations in their ChatGPT-generated essays. More importantly, the exercise lowered their regard for the technology, at least in this context.
As for what other smart people say about how to educate students with and about AI, I can’t say I feel they have the answer either. The very smart (and Nobel Prize-winning) head of Google DeepMind, Demis Hassabis, was recently interviewed on the podcast Hard Fork (one of whose hosts, Kevin Roose, wrote a good piece about AI and entry-level jobs last week). When asked what students should be studying to prepare for a world of artificial general intelligence (he thinks AGI is coming shortly after 2030), he had some pretty conventional answers:
1. Learn how to code (even though you probably won’t have to do any of it);
2. Immerse yourself in AI tools (presumably Google’s);
3. “Learn how to learn”—well, yes, that would be useful;
4. Develop “meta skills” like creativity, adaptability, resilience.
One might hope for a more distinguished response from a Nobelist. In fact, I got a better answer (hmm…what does that tell us?) from one of Google’s own AI products, Gemini Deep Research. When asked what students should study in a world with fast-approaching AGI, it named some interesting new skills to prepare for:
I’ve identified several key emerging fields that will be crucial in a world with AGI. These include Human-AI Collaboration/Interaction Design, which focuses on creating intuitive and ethical ways for humans and AI to work together; AI Ethics and Governance, which addresses the societal implications and responsible development of AGI; and AI Safety Research, dedicated to mitigating potential risks. I’m also seeing the rise of Machine Psychology, an interdisciplinary approach to building AI with human-like learning, and Prompt Engineering, a specialized skill for optimizing AI outputs.
Pretty impressive, except perhaps for the old chestnut of prompt engineering. I think that gen AI will eventually do most of the hard work of refining prompts itself.
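To make that prediction a bit more concrete, here is a minimal sketch of what “the model refines its own prompt” could look like: a loop in which the model critiques its output and rewrites the prompt before trying again. The generate() function is a stand-in for whatever LLM client you actually use, and the loop count and critique wording are my own illustrative assumptions, not any vendor’s product.

```python
# Sketch of an automatic prompt-refinement loop. generate() is a placeholder
# for a real LLM call; nothing here reflects a specific product's behavior.

def generate(prompt: str) -> str:
    """Placeholder for a real LLM client call."""
    return f"[model output for: {prompt[:60]}]"

def refine_prompt(task: str, rounds: int = 3) -> str:
    """Have the model critique its own output and rewrite the prompt each round."""
    prompt = task
    for _ in range(rounds):
        draft = generate(prompt)
        prompt = generate(
            f"Original prompt:\n{prompt}\n\nOutput it produced:\n{draft}\n\n"
            "Rewrite the prompt so the next output is more specific, better "
            "sourced, and less generic. Return only the improved prompt."
        )
    return prompt

print(refine_prompt("Summarize how AI is changing entry-level supply chain jobs."))
```

If loops like this end up inside the tools themselves, hand-tuned prompt engineering starts to look less like a durable career skill.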
It also gave a thorough description of how AGI is likely to change the nature of human work. Most of it was pretty well thought out, but one line could have come straight from a vendor press release (or one of my earlier books on AI):
Instead of solely replacing human work, AI is increasingly positioned to assist and enhance human capabilities. This human-AI collaboration allows individuals to offload repetitive or computationally intensive tasks to AI, thereby freeing up human time and cognitive resources to focus on more complex, creative, and higher-value activities.
Comforting, but is it really true? The efforts of Deep Research are pretty “complex, creative, and higher-value” themselves, which raises the whole issue of how students can compete with, or at least learn from, it.
Three Canadian economists with AI expertise have weighed in on this subject for an International Monetary Fund magazine. Ajay Agrawal, Joshua Gans, and Avi Goldfarb wrote: “To sharpen judgment, policymakers could expand access to high-quality education and training that emphasizes complex decision-making skills, ensuring that more people in different regions develop the judgment needed to complement AI.” But I’m afraid that a) not everyone has the ability to acquire complex decision-making skills, and b) AI is already better than most of us in that regard.
My MIT Initiative on the Digital Economy colleague Sinan Aral recently weighed in on this in a Yahoo Finance interview. He said that universities absolutely need to change their curricula to help students learn to work with AI, and that banning it is a bad decision. I agree with both points, but just how best to help students isn’t clear to me. He said that MIT is an early adopter of these approaches, which is true. So is Babson. But I don’t think anybody has fully figured it out.
Coming back to Michael Weir’s comment at the beginning of this post, I think the situation his kids are in is pretty typical. Most teachers and professors don’t spend all their time thinking about AI and don’t have the knowledge to help their students adapt to this technology. As a result, they haven’t changed their pedagogical methods and don’t discuss the issue with students. However, they’re not happy about the situation; as a 404 Media post put it, “Teachers Are Not OK.” I do spend most of my waking hours thinking about AI, and I’m still not sure what the answer is. But it’s clear to me that most of us in the educational profession need to give this issue some serious thought. The brains of our children (or in my case, grandchildren) hang in the balance!