There is good evidence that AI has significant impacts on mental health, and almost no evidence that AI produces learning outcomes better than those achieved by qualified teachers. Still, myths are used to justify integrating AI into the classroom. I’d like to consider each here. Another great explainer debunking myths about AI can be found here.
Myth one: Job readiness
There is a fear that students will need AI skills to be competitive in the workforce (though what those skills are, beyond prompt engineering, is never enumerated). Indeed, in the past few hiring cycles, students majoring in art history have been more likely to land jobs than computer science majors.
The most rigorous study of AI-assisted coding found that developers wrote code 20 percent more slowly when they used AI, because they had to go back and check and correct its work instead of just doing it right the first time. Indeed, in a recent conversation with a software engineer and a market researcher, both bemoaned that they can no longer find people to hire, because applicants have AI training but no background knowledge. As Tressie McMillan Cottom writes, “The problem is that asking the right questions requires the opposite of having zero education. You can’t just learn how to craft a prompt for an AI chatbot without first having the experience, exposure and, yes, education to know what the heck you are doing. The reality — and the science — is clear that learning is a messy, nonlinear human development process that resists efficiency. AI cannot replace it.” (Indeed, a July 2025 report found that only five percent of businesses are actually getting a return on their AI investment.)
Myth two: “These are early days – it’ll get better”
The truth is, AI technology is not very good right now. We are better than AI at producing materials. UCLA rolled out an AI-generated course. Look at the book cover. There’s a whole discussion of the course here, but you only have to look at the top of the cover to see that “evolution” is misspelled. We don’t need to replace humans for this. We can do better.
Kids don’t have to be the guinea pigs – wait until it’s ready for prime time.
It’s certainly true that this, and all, technology is a work in progress! Of course there will be improvements, and pieces of AI are here to stay. But there are also real reasons to make AI companies do their beta testing on their own time, not with our kids as the guinea pigs. And there are real reasons to think that, no matter how good AI gets, we need to double down on teaching kids how to think, and reason, and be frustrated, and go deeper. And we can still reject outsourcing to AI the pieces we don’t like.
LLMs also can’t reason; above, I explored some of AI’s hallucinations and outright errors. Researchers are beginning to explain why. For instance, see this about chain-of-thought (CoT) reasoning in LLMs, Gemini in this case:
“Consider this straightforward question: “The day the US was established is in a leap year or a normal year?” When prompted with the CoT prefix, the modern LLM Gemini responded: “The United States was established in 1776. 1776 is divisible by 4, but it’s not a century year, so it’s a leap year. Therefore, the day the US was established was in a normal year.” This response exemplifies a concerning pattern: the model correctly recites the leap year rule and articulates intermediate reasoning steps, yet produces a logically inconsistent conclusion (i.e., asserting 1776 is both a leap year and a normal year). Such inconsistencies suggest that there is a distinction between human-like inference and CoT reasoning.”
The authors conclude: “together, these findings suggest that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text.”
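To make the inconsistency in that example concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of the leap-year rule the model recites. 1776 passes the check, so the model’s own stated reasoning implies a leap year even as it concludes the opposite.

```python
def is_leap_year(year: int) -> bool:
    # Gregorian rule the model recites: divisible by 4, except century
    # years that are not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(1776))  # True: 1776 is a leap year, so the model's
                           # "normal year" conclusion contradicts its own steps.
```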
Other attempts to understand how AI comes up with answers have concluded that the best way to describe it is that LLMs “bullshit”: they try to persuade rather than arrive at something close to the truth: “we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.” This, the authors continue, is also the difference between the liar, who knows the value of truth, and the bullshitter, who doesn’t care.
And many of the foremost AI scholars think that, because LLMs work essentially as “fill in the blanks” or predictive text, they’ll never have a classical world model that can overcome problems like these (which is why they are bad at playing chess, and why they cheat!). If we want AI that will stop fabricating answers, we need to start from the ground up.
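To illustrate what “predictive text” means here, below is a toy sketch (my own illustration, nothing like a production LLM) that predicts the next word purely from counts of which word followed which in a tiny corpus. It has no model of chess, dates, or anything else the words refer to; it only knows which word is statistically likely to come next, which is the limitation these scholars point to.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the only thing the model will ever know about.
corpus = "the knight moves to a safe square and the knight moves again".split()

# Count, for each word, which words followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the statistically most frequent continuation, true or not;
    # return a placeholder if the word was never seen.
    if not following[word]:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

print(predict_next("knight"))  # "moves": plausible text, no notion of chess rules
```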
Myth three: “We have an obligation to teach kids AI”
No doubt, students need to learn about AI. But this is not the same as an obligation to teach students to use AI. Students should learn content and relevant skills in their classrooms. As Alex Hanna, who co-authored The AI Con, writes:
“Many in the field of education (and I should say, this goes for both AI optimists and pessimists) have suggested that we need to teach students to learn how to be critical AI consumers, to understand that if they are going to use “AI” in the classrooms, they need to be savvy about the outputs and to take such outputs with a grain of salt. These people argue that “AI literacy” has become part and parcel of fostering holistic information literacy. Which, sure. Students need information literacy. They need to understand where the information they consume comes from, as Emily has argued often. They need to foster the ability to understand the context of speakers, the incentives of people creating those media, what the author’s intended goals are, and so on. This is at the heart of good journalism but also important discourse analysis, which investigates what kinds of language people use and why.
Deciding whether something is AI slop is none of those things. It’s an annoying cognitive task: detecting weird photo artifacts, bizarre movement in videos, impossible animals and body horror, and reading through reams of anodyne text to determine if the person who prompted the synthetic media machine cared enough to dedicate time and energy to the task of communicating to their audience.”