March 16, 2026 | 7 min read

AI's in the Classroom. Now What?

It’s 11pm. A student has an essay due tomorrow and no idea where to start. They open an AI chatbot, paste in the assignment prompt, and twenty minutes later have a draft. They read it over, change a few words, and submit it.

Meanwhile, their teacher spends the weekend reading thirty suspiciously polished submissions and wondering what, exactly, they’re grading anymore.

This is where we are. AI in education isn’t a future problem to prepare for — it’s a present one to navigate. The question isn’t whether students and teachers should engage with it. They already are. The question is how.

How AI Is Actually Being Used in Schools Right Now

Artificial intelligence in education shows up in more places than most people realise. Some of it is visible; a lot of it isn’t.

On the student side: AI writing assistants, chatbots for homework help, tools that summarise reading materials, and platforms that generate practice questions on demand. ChatGPT is the obvious example, but it’s far from the only one. Platforms like Khan Academy have built their own AI tutors. Language learning apps use AI to adapt difficulty in real time.

On the teacher side: AI tools for grading short-answer responses, generating lesson plan drafts, creating differentiated materials for students at different levels, and flagging students who might be falling behind based on engagement data.

In higher education, the picture is more complicated. Universities are simultaneously investing in AI learning tools and trying to write policies that prevent students from misusing them — often doing both with inadequate resources for either.

A 2023 survey found that more than half of US college students had used AI on assignments. The number has only gone up since.

Where AI Genuinely Helps

The honest case for AI in education isn’t that it will revolutionise everything. It’s that it solves some real, specific problems — if it’s used well.

Personalised learning at scale. One of the hardest things about teaching thirty students at once is that they’re all in different places. Some need more time on fractions; others are already ahead. AI-powered personalised learning tools can adapt to each student’s pace and flag where they’re struggling, giving teachers something they’ve never really had: a clear picture of individual progress without spending hours on assessment.

Tutoring access. Quality tutoring has always been expensive and unevenly distributed. An AI tutor isn’t a perfect replacement for a human one, but it’s available at midnight, it won’t make a student feel embarrassed for not knowing something, and it can explain the same concept ten different ways until one sticks. For students who can’t afford private tutors, that’s genuinely useful.

Reducing the admin burden on teachers. Teachers in most countries spend a significant portion of their time on tasks that aren’t teaching: marking, admin, writing reports, preparing materials. AI tools for teachers can reduce some of that load. Not eliminate it — but reduce it enough to matter.

Accessibility. For students with dyslexia, ADHD, or other learning differences, AI tools that convert text to speech, simplify language, or allow voice input aren’t just convenient. They can make the difference between participating and not.

The Concerns Worth Taking Seriously

AI’s critics in education aren’t wrong. The concerns are real.

Academic integrity. This is the obvious one. When students can generate a passable essay in minutes, what does written assessment actually measure? Schools are trying various responses — AI detection tools (which are inconsistent), oral exams, in-class writing, tougher plagiarism policies — but none of them are clean solutions. The honest answer is that AI has broken some traditional forms of assessment, and education systems are going to have to rethink them. That’s uncomfortable and expensive and slow.

Critical thinking. There’s a difference between using a tool to help you think and using a tool to avoid thinking. If students never practise constructing an argument, organising ideas, or sitting with the discomfort of not knowing what to say next, they’re losing something that matters. AI can do the hard bits for you — but the hard bits are often where the learning happens.

The access gap. The AI tools worth using mostly aren’t free. Schools with better resources will adopt better tools faster. Students from wealthier backgrounds will get more benefit. If we’re not careful, AI in education widens the gaps it’s supposed to close.

Data and privacy. When students use AI platforms, they’re sharing a lot of information — about their learning patterns, their questions, their struggles. Most of the major platforms are US-based and subject to US data laws. Schools and parents often don’t know exactly what’s being collected or how it’s used.

AI detection tools are unreliable. They produce false positives that can wrongly flag students — particularly non-native English speakers — for work they wrote themselves. Schools leaning on these tools as a primary integrity solution are taking on significant risk.

What This Means for Teachers

The teaching-is-dead narrative doesn’t hold up. But the teaching-is-unchanged one doesn’t either.

What AI can do well: process information, explain things repeatedly without getting tired, identify patterns in data, generate content at scale. What it can’t do: build genuine relationships with students, read a room, make a judgement call about when someone needs encouragement versus challenge, or teach by example what it looks like to be a curious, ethical person engaging with the world.

That’s not a small list of things. It’s the core of what teaching is.

The question isn’t whether AI will replace teachers. It’s whether schools will use AI-generated efficiencies to cut costs or to genuinely improve conditions for the people doing the work.

What’s actually changing is the mix. Teachers who spend less time on note marking and content generation can, in theory, spend more time on the relational and critical work that AI can’t replicate. Whether that shift actually happens depends on decisions made well above classroom level.

If you’re a teacher navigating this: start with the tasks that eat time without requiring deep expertise — generating first drafts of materials, creating differentiated resources, early identification of struggling students. Protect the work that requires human relationship and judgement. And push for training — you shouldn’t have to figure this out alone.

Higher Education: A Different Set of Stakes

AI in higher education hits differently, for a few reasons.

University students are assumed to have more autonomy and more academic maturity. The expectation is independent thinking. When that thinking can be outsourced to a language model, the question of what a degree actually certifies becomes harder to answer.

Universities are also research institutions, which creates a strange tension: they’re producing AI research while simultaneously trying to stop students from using AI. Many are trying to find a middle path — teaching students to use AI critically and transparently, rather than prohibiting it — but implementation is inconsistent. Some departments have embraced it; others have effectively banned it; most are somewhere in between, without clear policies or shared language.

Generative AI in education at the university level also raises harder questions about intellectual property: if a student uses AI to generate part of a paper, who holds the copyright? What does originality mean? These aren’t solved problems.

Some universities are now requiring students to document their AI use alongside citations — treating it similarly to referencing a research assistant. It’s one of the more pragmatic approaches we’ve seen, though it’s far from universal.

So Where Does This Leave Us?

AI’s impact on education is real, already happening, and genuinely mixed. The honest position isn’t enthusiasm or panic — it’s engagement.

The schools and teachers getting this right aren’t the ones who banned AI first or the ones who adopted every new tool uncritically. They’re the ones who’ve asked the harder questions: What are we actually trying to teach? What does good learning look like? Where does AI help that, and where does it get in the way?

Those questions don’t have universal answers. They depend on the subject, the student, the school, the context. Which is exactly why the people best placed to answer them are educators — not technology companies, not policymakers who’ve never been in a classroom, and not AI systems.

The tool is here. Using it well takes human judgement. That’s not an argument against the tool. It’s an argument for taking the human part seriously.