When I started this project, my question was simple: even if I put aside the significant ethical and environmental quandaries about generative AI, does it on balance improve educational outcomes and experiences for my kids? As of right now, my answer to that question is “no,” and I think all parents and students should begin to ask that question as well.
Beyond the technology's mental health threats, the evidence that generative AI actually improves learning outcomes or experiences is dubious.
The purpose of this memo is to express skepticism about the integration of generative AI into preK-12 classrooms, and to provide a rationale for parents, students, and educators to reject the uncritical adoption of this technology in student education. Rather than allowing tech companies to dictate the terms under which these tools enter educational settings, I believe the moment has come for a careful public conversation about the tools and their uses.
I am not going to take the position that there is no room for generative AI in any classroom; instead, I am making a threefold argument: the current educational applications do not hold up to scrutiny and are in fact directly harmful to the goals of education; repeated use of AI carries significant mental and neurological risks; and the reasons given for why AI is inevitable do not withstand examination. These arguments of course set aside the real concerns over the environmental impact of AI[1] and the fact that generative AI models were trained on millions of texts in what amounted to massive copyright violations. The terminology surrounding AI can be quite confusing; I use terms more or less interchangeably here, but this is a nice guide to the lingo if you'd like more precise terms (Utrata 2025).
Critics of this position might argue, at least implicitly, that AI is worth it if you set aside your ethical objections (after all, its copyright violations and environmental harms are not unique, even if they are devastating); I am arguing that it is not. It is inferior to existing educational tools, carries real downsides for education, and should largely be kept out of the classroom.
I also believe that we have an obligation to show students why learning is intrinsically valuable; if we continue to motivate students with grades, there is little incentive to do more than game the system. We need students to understand how knowledge and skills will help them become better versions of themselves, and that may very well require rethinking how education is administered.
Even though generative AI is being presented as an inevitability, teachers, parents, and students are also skeptical. Jessica Grose, a columnist for the New York Times, reports that she has not found one elementary school parent who supports the use of AI (Grose 2025).
A quarter of teachers say AI does more harm than good, while only six percent say it does more good than harm (Lin 2024). Professors at Trinity College Dublin have taken a public stand against using AI in the classroom (Kelly, Bruisch, and Leahy 2025). Ashanty Rosario, a senior at a high school in Queens, describes how her classmates' use of AI is eroding her sense of what it means to be a student (Rosario 2025). Organizations have also raised questions about the lack of public oversight by policymakers, with the National Education Policy Center calling for a pause on AI in schools until effective oversight is in place (Williamson, Molnar, and Boninger 2024).