This post has a long personal intro. If you have a parent-teacher conference coming up and have questions about AI, click here to jump to the expert advice.
A few months ago, my ten-year-old son came home from school and let me know his essays would now be graded by AI. Before I could ask any questions, he added that he should be able to write his essays with AI. "It's not fair that I have to write by hand while the teachers just press a button," he said.
After spending half an hour explaining why using AI to write his essays would defeat the purpose of, you know, teaching him how to write, I reached out to his teacher.
She was quick to reassure me that the essays he wrote in class would still be graded by hand (phew!). But, she explained, his state standardized tests would, indeed, be graded by a natural language processing (NLP) model. And apparently, the AI is a stickler—only a handful of students ever receive the top score of 4/4.
Further conversation revealed that the online tools both my kids use every day are adaptive learning platforms powered by AI. While I felt good that my son's teacher was monitoring their progress and offering enrichment above and beyond the learning platforms, I began to worry about where all this is heading.
I had questions, and then more questions.
As a marketer who works with AI every day and even builds AI automations, I know that AI is a powerful but imperfect tool. In my experience, it's good for brainstorming, quick summaries, and grammar checks. But its writing usually has an ersatz feel, and it's often riddled with factual errors. Is it really the right tool for judging students' writing?
Another problem is that most AI models have an agenda: to keep us engaged in the chat. It's why ChatGPT ends nearly every answer with a question about what it can do for you next. My son and his thirteen-year-old sister already have enough trouble staying off screens. Could introducing AI to kids be the ultimate marketing hack, and not necessarily good for developing minds? After all, OpenAI is already prioritizing the education market.
I also wondered if schools might take a hint from businesses and try to replace human teachers with more and more AI. Our local schools, like public schools everywhere, face budget issues. Could the presence of AI in the classroom be used as an argument for higher student-teacher ratios?
All of these questions left me feeling a bit uneasy.
AI-enabled cheating at scale?
At dinner a couple of weeks before the school year ended, I asked my kids to tell me what they think about AI. My son's response was to show me some examples of Italian brainrot. My thirteen-year-old daughter giggled and said that several of her classmates had gotten in trouble for using ChatGPT to write their in-class essays. It wasn't a sophisticated cheating operation—they were obviously typing prompts into their phones in class and were busted within minutes.
But it reminded me of what happened when I tried to launch an AI tool to write long-form blog posts for businesses. The product premise was simple: because of context window limitations, AI models lose the plot after roughly 1,000 words. So my tool started from an outline and fed each completed section back into the prompt for the next one.
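If you're curious what that looks like under the hood, here's a minimal sketch using OpenAI's Python client. The model name, prompts, and word target are illustrative placeholders, not my actual product's settings:

```python
# Minimal sketch of outline-driven long-form generation.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

def write_post(topic: str, outline: list[str]) -> str:
    sections = []
    for heading in outline:
        # Feed the most recent completed sections back in as context,
        # so each new section stays consistent without overflowing the window.
        context = "\n\n".join(sections[-2:])
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": f"You are writing a blog post about {topic}."},
                {"role": "user", "content": (
                    f"Sections written so far:\n{context}\n\n"
                    f"Write the next section, titled '{heading}', in about 300 words."
                )},
            ],
        )
        sections.append(response.choices[0].message.content)
    return "\n\n".join(sections)
```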
When I went to advertise on Google, I got a lot of interest, but it was all from college students looking to generate papers. I stopped actively promoting the product and came away with one more AI worry: was it empowering a new generation of cheaters?
I decided it was time to get some answers.
Starting with the data
Based on casual conversations with other parents, I knew I wasn't the only one who was worried. But I wanted to establish some context for my questions and get a handle on just how concerned we all are. I also wanted some practical guidance on how AI ideally should be used in the classroom, and to be ready with questions I can ask at my next parent-teacher conference.
Checking the temperature online
I decided to start with Reddit, an anonymous social platform where people are likely to be very honest about what they think. Also, Reddit allows researchers to collect data for non-commercial projects, as long as you follow their guidelines. (Read more about working with Reddit data here.)
Using Reddit's API, I gathered posts from 10 education-focused communities where teachers, parents, and education technology professionals actively discuss classroom challenges. The subreddits included:
Reddit Communities Included
I searched for posts that mentioned both AI terms (like "artificial intelligence," "ChatGPT," "generative AI") and education terms (like "classroom," "curriculum," "teaching") from the past six months, ultimately collecting 415 relevant discussions. Data was gathered on June 17, 2025.
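For fellow data nerds, here's roughly what that collection step looks like with PRAW, Reddit's official Python wrapper. The credentials are placeholders, and I've shortened the subreddit and term lists for readability (the full run covered all 10 communities):

```python
# Sketch of the Reddit collection step using PRAW.
# Credentials, subreddits, and term lists are abbreviated placeholders.
import time
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="ai-in-education-research by u/yourusername",
)

SUBREDDITS = ["Teachers", "education", "edtech"]  # 10 communities in the full study
AI_TERMS = ["artificial intelligence", "chatgpt", "generative ai"]
ED_TERMS = ["classroom", "curriculum", "teaching"]
SIX_MONTHS_AGO = time.time() - 182 * 24 * 3600

query = " OR ".join(f'"{term}"' for term in AI_TERMS)

posts = []
for name in SUBREDDITS:
    for submission in reddit.subreddit(name).search(query, time_filter="year", limit=200):
        text = f"{submission.title} {submission.selftext}".lower()
        # Keep posts inside the six-month window that mention
        # both an AI term and an education term.
        if (submission.created_utc >= SIX_MONTHS_AGO
                and any(t in text for t in AI_TERMS)
                and any(t in text for t in ED_TERMS)):
            posts.append({"subreddit": name,
                          "title": submission.title,
                          "body": submission.selftext})

print(f"Collected {len(posts)} relevant posts")
```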
After gathering the posts, I used a machine learning model to analyze the text and identify the most commonly occurring keywords, filtering out "filler words" (stopwords) like the, and, and but so that only meaningful terms remained.
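The counting itself is simple. Here's a minimal sketch that continues from the collection script above; scikit-learn's built-in English stopword list stands in for my filler-word filter, and the real analysis used a more involved model, but the idea is the same:

```python
# Sketch of the keyword step: count word frequencies across the posts
# with English stopwords ("filler words") removed.
from sklearn.feature_extraction.text import CountVectorizer

documents = [f"{p['title']} {p['body']}" for p in posts]

vectorizer = CountVectorizer(stop_words="english", max_features=100)
counts = vectorizer.fit_transform(documents)

# Total count for each keyword across all posts, highest first.
totals = counts.sum(axis=0).A1
keywords = sorted(zip(vectorizer.get_feature_names_out(), totals),
                  key=lambda kv: -kv[1])
print(keywords[:20])
```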
The short story: People are worried about AI at school
Keywords were counted and grouped into five broad categories based on their emotional tone:
Emotional Tone Analysis of 415 Reddit Threads Referencing AI in Education
🔴 Concern (52 mentions)
The prevalence of concern-related keywords reflects deep worries about AI's impact on student development. Posts in this category expressed fears that students are losing fundamental skills, with one parent noting that children struggle to read, write, or form independent opinions, yet continue advancing through grade levels unprepared. Another worried that AI might be making students intellectually passive, making them easier to influence or control.
Overall, these posts often expressed fear of long-term harm to students' cognitive development and civic readiness. And I can understand where these parents are coming from. I want my kids armed with critical thinking skills, so they won't accept whatever they hear from YouTubers and other influencers at face value.
🟡 Confusion (36 mentions)
The second-largest group of keywords suggests widespread uncertainty about AI policies and boundaries. Many educators described receiving vague guidance, with one teacher explaining that their school's policy simply advised using "best judgment" without defining what that meant. Others questioned where to draw lines—if AI brainstorming is acceptable, what about AI-assisted outlining or drafting?
These posts reflect the ambiguity that teachers, students, and parents are navigating, often without clear institutional guidance. Again, this is a feeling I know well. My kids' schools have rules about cheating with AI, but haven't yet developed more nuanced standards or guidelines.
⚫ Skepticism (33 mentions)
Skeptical posts frequently challenged AI's accuracy and appropriateness for educational use. One educator described AI grading as fundamentally flawed, like trying to use the wrong tool for the job entirely. Others noted that students often couldn't explain work they'd submitted, simply copying whatever AI produced without understanding it.
These posts focused on AI's accuracy issues, built-in biases, and lack of contextual awareness. In my experience, these limitations of AI are real, and some skepticism is warranted.
🟢 Hope/Engagement (19 mentions)
Though smaller in number, hopeful posts were notably present, especially among educators with instructional design backgrounds. Some described positive experiences, like students feeling proud of AI-assisted resume writing, or teachers who see AI literacy (teaching students to recognize when AI gets things right or wrong) as the next important educational frontier.
Based on my reading, most of these posts were not hyping AI but rather genuinely optimistic about how it might help their students.
🔵 Frustration (15 mentions)
Frustrated posts often came from educators who had tried AI tools and felt disappointed by the results. Teachers described AI grading systems that provided meaningless positive feedback regardless of work quality, or expressed annoyance at being expected to use tools that couldn't perform basic academic tasks like proper citation.
Teasing out key themes
Once the keyword analysis was complete, I made myself a cup of coffee and did a closer reading of the actual conversations. I scrolled through posts, pausing on the ones that caught my attention or resonated with my concerns as a parent. What emerged were three clear themes that I then validated using another round of data analysis.
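For transparency, here's the shape of that validation pass, continuing from the collected posts above. The marker phrases are illustrative placeholders, not my exact lexicon:

```python
# Sketch of the theme-validation pass: for each candidate theme, count how
# many posts contain at least one of its marker phrases.
THEMES = {
    "cheating": ["cheat", "plagiar", "copy paste", "detection"],
    "no guidance": ["policy", "guidance", "mandate", "training"],
    "re-engagement": ["ai help", "engage", "confidence", "starting point"],
}

for theme, markers in THEMES.items():
    matches = sum(
        any(m in f"{p['title']} {p['body']}".lower() for m in markers)
        for p in posts
    )
    print(f"{theme}: {matches} of {len(posts)} posts")
```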
01. AI-enabled cheating really is everywhere.
Parents and teachers are grappling with where to draw the line between legitimate AI assistance and academic dishonesty. The posts revealed a frustrating cycle where students use AI tools to generate answers without understanding the content, while educators struggle to detect and address AI-generated work that lacks structure, logic, or originality.
Several paraphrased examples illustrate the breakdown of academic integrity:
One student openly wondered whether they could use ChatGPT for all their schoolwork to achieve success, while ironically noting that their teachers were using the same AI tools to create assignments.
A teacher warned against relying on ChatGPT-based plagiarism checkers, observing that students often turn to unreliable free detection tools that barely function.
One educator captured the deeper concern, noting that while the system appears to work—assignments get submitted and passing grades are earned—actual learning isn't happening.
Another teacher expressed frustration with students presenting fabricated research, describing their over-reliance on AI as mindless repetition that prevented genuine intellectual engagement.
02. Educators are flying blind.
Many teachers are navigating AI without clear institutional guidance, and in some cases, they're being encouraged or even required to use AI tools despite concerns over accuracy, fairness, or educational value. The posts highlighted a concerning lack of clear policy, with AI use often imposed top-down without teacher input.
One teacher described their school's circular (and insane) policy: if students use ChatGPT to write papers, teachers are expected to use ChatGPT to evaluate and provide feedback on those papers.
Another educator revealed the extensive mandate they'd received to integrate AI across all aspects of teaching, from lesson planning and text creation to assignment development and essay grading.
Some teachers expressed skepticism about AI-generated feedback, particularly when the suggestions didn't align with what was actually taught in class.
Perhaps most concerning was an educator's observation about systemic challenges. Teachers lack the tools, funding, support, and authority needed to establish meaningful safeguards around AI use.
03. AI is helping some disengaged students reconnect.
Perhaps most unexpectedly, several posts suggested that AI is actually helping students who normally resist writing to become more engaged. Rather than avoiding writing altogether, some students are using AI as a starting point or confidence-builder.
The data showed mentions of "ai help," "ai write," and "generative tools," often in contexts where struggling students found ways to engage with writing tasks they previously avoided.
One teacher described students openly using ChatGPT on the classroom smartboard to generate essay content, then copying it while the instructor watched, suggesting that while the approach was problematic, it represented engagement rather than complete avoidance.
The data also showed that while concerns about cheating dominate the conversation, there's also genuine curiosity about how to harness AI's potential for learning. The frequency of terms like "critical thinking" and "ai help" suggests that educators are actively seeking ways to integrate these tools responsibly rather than simply banning them.
This tension between AI as a shortcut that undermines learning and AI as a helpful tool that enables it runs through much of the discussion, suggesting that the key may not be whether to use AI in education, but how to use it thoughtfully.
Advice from a human expert
After reviewing my findings, I was actually more concerned than before I started the research. The Reddit analysis uncovered widespread confusion, legitimate worries about academic integrity, and teachers struggling without clear guidance. But as a parent who works with AI professionally, not an educator, I realized I needed an expert to help me separate legitimate concerns from overreactions.
This turned out to be harder than I expected.
I reached out to about 30 education researchers, ed-tech specialists, and university professors who study AI in learning environments. Most didn't respond—understandable, given that everyone probably wants to pick their brains about AI these days.
Fortunately, Dr. Torrey Trust, a professor at UMass Amherst who specializes in educational technology and digital literacy, graciously agreed to answer the five questions that had been stuck in my head since completing the Reddit analysis. Her insights were both reassuring and actionable, and often validated concerns mentioned in the Reddit discussions.
1. My child says AI is grading their assignments. Should I be concerned?
"Yes! You should definitely be concerned if teachers are using generative AI (GenAI) tools to grade assignments," said Dr. Trust. She went on to quote education researcher Peter Greene, who recently wrote:
Using ChatGPT to grade student essays is educational malpractice. It is using a yardstick to measure the weight of an elephant. It cannot do the job.
Dr. Trust added that "GenAI technologies are not intelligent, can't think, can't understand context or learners. These tools are simply predictability machines that guess which words go together to make the most plausible-sounding response. If a GenAI tool is used for grading, it's likely guessing what an appropriate grade would be based on its training data, and this data is biased!"
2. How can I tell whether my child is using AI to learn or just to copy and paste?
Dr. Trust suggested both direct and indirect approaches: "Sit next to them and do an assignment with them, although then they might not be inclined to show you how they use GenAI tools. Or, simply ask them how they are using these tools."
But she warned that these tools give an illusion of learning, when in fact they often limit or reduce learning. She gave a concrete example: "If a student puts a textbook chapter into Google NotebookLM to summarize instead of reading it, they are relying on an AI summary of a reading—which likely has made up information, hallucinations, and just flat out wrong information—instead of trying to understand the original text and author's intent."
Her practical test for parents: "Wait a few hours after your kid completes an assignment, then ask them to tell you everything they learned from the assignment. If they can't speak to it very much, they might have used GenAI or another tool for assistance."
3. If my child's teacher is assigning AI-assisted writing, what should I ask to make sure it's educationally sound?
Dr. Trust acknowledged that "GenAI tools can provide quick and immediate feedback for writing, in particular for academic or technical writing." But she was clear about limitations: "Even with good prompting, you're not going to get these tools to produce writing at the quality of professional writers." (She's right about this. Trust me, I've tried!)
Her key question for parents to ask teachers: "Does the use of AI for writing assistance come along with the teacher reviewing students' work on an ongoing basis and providing feedback? Without the teacher—or human in the loop—students are putting too much reliance on guessing machines to improve their writing."
Most importantly, she emphasized that "it's essential that students engage in metacognitive reflection—what did the GenAI tool suggest/revise/improve with my writing and WHY? Otherwise, if they just take the suggestions or revised text as is, they won't learn how to improve their own writing."
4. How can I encourage responsible AI use at home, especially if my child struggles with writing or motivation?
Dr. Trust suggested starting small: "Use a GenAI tool as a sentence starter; to help with transitions between paragraphs; to do whatever small task (1-2 sentences) your student might be struggling with."
The key, she emphasized, is maintaining critical thinking: "It's okay to use GenAI tools for help, as long as you are doing so in a critical and reflective way—not just taking the sentence starter and using it without considering 'why did AI choose this as the sentence starter for this topic? Is this the best way to start a paper on this topic, given what I know about it?'"
5. What should every parent understand about AI's role in school today?
"Most students are using GenAI tools...often," Dr. Trust revealed. "But they are not often given guidance on how to use these tools for learning and what is allowed and what isn't allowed when it comes to using GenAI tools for academic assignments."
Her advice for parents: "[Children] need opportunities to have conversations with teachers and parents about these tools, explore them together, figure out how they might be helpful and how they might be harmful."
Dr. Trust has developed an AI policy for her own university courses that she discusses with students at the beginning of each semester, emphasizing the importance of transparent, ongoing dialogue about appropriate AI use in academic settings.
Where do we go from here?
After analyzing the online conversations, talking to my kids, and hearing from Dr. Trust, I'm honestly more worried than when I started. The Reddit data highlighted a concerning pattern: schools are implementing AI tools without adequate training, clear policies, or evidence that they actually improve learning. Meanwhile, students are using AI extensively with little guidance about appropriate use.
It does seem like we're conducting a massive, uncontrolled experiment on our children's education. Some schools are adopting AI tools because they're new and seem innovative, not because there's solid evidence they improve learning outcomes. Teachers are being asked to use tools they don't understand. Students are developing dependencies on systems that may be undermining their cognitive development. And parents are largely kept in the dark.
What's the answer? I believe the first step is asking a lot more questions. Here's what I'm planning to do when school starts again in the fall:
My Action Plan for Parent Engagement on AI
I want to prepare my kids for a world where they can think critically about AI and everything else. But that requires adults who are willing to ask hard questions and demand better answers than "it's the future, so we have to use it."
Our children's cognitive development is too important to leave to chance—or to the marketing departments of AI companies. As parents, we have both the right and the responsibility to know what educational technologies are being used at school and exactly how they support genuine learning.
The future of education may well include AI, but it should be AI that enhances human capabilities, not AI that replaces the hard work of thinking, writing, and learning.
Email karen@wonderingabout.ai