
Positive and Negative Impact of AI in Education: Evidence, Risks, and Practical Ways Forward
Educational uses of AI range from adaptive tutoring and feedback to accessibility tools and school operations. A U.S. Department of Education report urges human-centered use, teacher leadership, and strong safeguards against bias and privacy harms. UNESCO’s guidance calls for clear policy, assessment reform, and AI literacy for students and staff.
The NIST AI Risk Management Framework highlights context, governance, and continuous monitoring as foundations for trustworthy systems. On access, ITU estimates 5.5 billion people were online in 2024 while 2.6 billion remained offline, with deep gaps across income groups that shape who can benefit from AI in schools. Global initiatives such as Giga, the joint UNICEF-ITU project, map school connectivity to target investment and close those gaps.
How AI Helps: Learning Gains, Feedback, and Access
Personalized Support and Measurable Gains
Across dozens of controlled evaluations, intelligent tutoring systems (ITS) tend to improve learning relative to business-as-usual instruction. A meta-analytic review found consistent positive effects across subjects, with gains approaching those of small-group tutoring in some contexts.
Another review of ITS studies in STEM reports meaningful gains over standard classroom methods. Put simply: well-designed step-based tutors guide learners through each stage of problem solving and can lift performance.
Fast, Actionable Feedback
AI-supported feedback can shorten the loop between practice and guidance. Work on self-explanation and the ICAP framework shows that prompts which nudge learners to explain their steps deepen learning.
AI can deliver such prompts at the moment of need. Regular low-stakes quizzing with feedback strengthens retention (the “testing effect”), so frequent, short, tool-delivered checks are a sound way to build memory over time.
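To make spaced, low-stakes checking concrete, here is a minimal scheduling sketch. The doubling interval, the 30-day cap, and the QuizItem fields are illustrative assumptions, not a prescribed algorithm.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class QuizItem:
    prompt: str
    interval_days: int = 1          # gap before the next check (assumed starting value)
    due: date = field(default_factory=date.today)

def review(item: QuizItem, correct: bool, today: date | None = None) -> None:
    """Reschedule an item: widen the gap after a correct answer,
    reset it after a miss. The doubling rule is an illustrative choice."""
    today = today or date.today()
    item.interval_days = min(item.interval_days * 2, 30) if correct else 1
    item.due = today + timedelta(days=item.interval_days)

def due_items(items: list[QuizItem], today: date | None = None) -> list[QuizItem]:
    """Return the items scheduled for today's short, low-stakes check."""
    today = today or date.today()
    return [i for i in items if i.due <= today]
```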
Access and Inclusion
Text-to-speech (TTS) and related supports can help students with reading difficulties engage with content for longer and with less fatigue, with mixed but encouraging results on comprehension.
Reviews and studies document gains in reading rate and on-task time, though findings vary by learner and task. Qualitative work following learners across multiple years reports sustained use of assistive technology such as TTS and speech-to-text.
For schools with limited connectivity, progress depends on infrastructure. ITU’s 2024 figures show large online-access gaps; UNICEF-ITU’s Giga tracks school-level connectivity to focus resources.
Where AI Hurts: Integrity, Bias, Privacy, and Equity
Academic Integrity: Detectors Create New Problems
Public studies find AI-text detectors unreliable and biased against non-native English writing. A 2023 paper in Patterns shows detectors often misclassify human work by multilingual writers as machine-generated.
Universities now caution against using detectors as proof of misconduct. Researchers have even shown AI-generated exam answers can pass unflagged and receive high marks, which points to assessment design, not surveillance, as the lever for quality.
Privacy and Surveillance Risks
Student-monitoring and remote proctoring tools can trigger false flags, chill participation, and raise equity concerns. Civil-society groups document harms linked to always-on surveillance and algorithmic flags.
The EU AI Act now bans emotion inference in education settings and places strict limits on biometric applications, a clear signal for schools to avoid intrusive analytics.
Bias and Hallucination
Policy groups urge strong human oversight since models can fabricate facts or mirror biases in training data. U.S. Department of Education guidance and the NIST AI RMF both call for risk analysis, documentation, and human review in learning contexts.
Digital Divide
AI benefits reach learners only where devices, bandwidth, and support exist. ITU estimates show high-income countries near universal internet use, with low-income countries far behind; Giga’s mapping helps governments target schools still offline.
Questioning Skills: The Anchor for AI-Supported Learning
AI can surface information quickly; learning comes from how students question that information. The strongest gains appear when learners explain steps, argue with evidence, and retrieve knowledge from memory.
What Research Backs
- Self-explanation: Prompts that ask “Why this step?” link new ideas to prior knowledge and raise achievement across ages and domains.
- Socratic dialogue: Computer tutors that probe reasoning with “Why/How/What if?” have delivered sizeable gains compared with passive reading.
- Retrieval practice: Frequent low-stakes testing improves long-term retention in classroom settings.
Classroom Moves That Work (use with any AI tool)
- Prompt for reasons, not answers: “Explain the rule you used,” “Point to the sentence that supports your claim,” “Show the step you would try next.”
- Mix generation and checking: “Draft a two-line answer, then ask the tool for two counter-arguments. Which stands up?” A minimal sketch of this move follows the list.
- Refine the question itself: “Turn this broad question into three targeted ones: definition, evidence, application.”
- Think-aloud capture: Ask students to narrate the decision path they took, then compare it with the tool’s suggestion to spot wrong turns.
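As a sketch of the “mix generation and checking” move, the helper below wraps whatever text tool a class uses behind a generic `ask` callable. The function name and prompts are illustrative assumptions, since no specific tool or API is named here.

```python
from typing import Callable

def draft_then_challenge(question: str, ask: Callable[[str], str]) -> dict[str, str]:
    """Scaffold 'generate, then check': get a short draft, then ask the
    same tool to argue against it. `ask` is any function that sends a
    prompt to a text tool and returns its reply."""
    draft = ask(f"In two lines, answer: {question}")
    challenge = ask(
        "Give two counter-arguments to this answer, each naming the kind of "
        f"evidence that would test it:\n{draft}"
    )
    return {"draft": draft, "counter_arguments": challenge}

# The student, not the tool, then judges which position stands up.
```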
Assessment in the Age of AI: Practical Redesign
- Shift to authentic tasks: Performance tasks that mirror real work (field notes, data commentary, oral defense) lower the rewards of copy-paste responses and center reasoning. Sector bodies such as QAA and Jisc advise programs to move in this direction.
- Be explicit about allowed help: Publish assignment-level rules on acceptable AI use, citation of prompts, and disclosure formats. Templates from UK sector groups and universities show clear wording that students can understand.
- Assess the process: Collect planning notes, drafts, prompt history, and short viva-style checks tied to the student’s own work; a minimal logging sketch follows this list.
- Test transfer, not template matching: New data, new constraints, or a changed audience make shallow paraphrase fail.
- Design for retrieval practice: Short, spaced quizzes build memory and reduce last-minute cramming.
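To make “assess the process” concrete, a minimal append-only log for plans, drafts, and prompt history might look like this sketch; the filename and field names are illustrative, not a standard format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("process_evidence.jsonl")   # illustrative filename

def record_step(student: str, kind: str, content: str) -> None:
    """Append one piece of process evidence (a plan, draft, or prompt)
    as a timestamped JSON line, ready for a short viva-style review."""
    entry = {
        "student": student,
        "kind": kind,                  # e.g. "plan", "draft", "prompt"
        "content": content,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```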
Accessibility With Care
- Text-to-speech (TTS) and speech-to-text (STT): Evidence shows TTS can increase time on task and reading rate, with mixed effects on comprehension that depend on the learner and the task. Use it as an aid, not a replacement for explicit reading instruction; a short pilot sketch follows this list.
- Co-design with students: Gather feedback on speed, voice, and interface. Offer keyboard shortcuts and captions by default.
- Document accommodations: State plainly how assistive tech can be used on homework, quizzes, and exams.
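For a quick pilot of adjustable-pace TTS, one possible sketch uses the open-source pyttsx3 library (one option among many; the default rate here is an assumption to tune with each student):

```python
import pyttsx3  # offline text-to-speech; pip install pyttsx3

def read_aloud(text: str, words_per_minute: int = 150) -> None:
    """Speak a passage at a student-chosen pace. Slower rates can
    reduce fatigue for some readers; tune per learner."""
    engine = pyttsx3.init()
    engine.setProperty("rate", words_per_minute)
    engine.say(text)
    engine.runAndWait()

read_aloud("Photosynthesis converts light energy into chemical energy.")
```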
Data Governance and Safe Use
- Pick tools with clear data policies: Avoid products that store student prompts or analytics beyond the needs of the course.
- Run a structured risk review: Use the NIST AI RMF ideas: describe context, identify risks, test for harmful outputs, record decisions, and revisit after deployment. A lightweight register sketch follows this list.
- Avoid intrusive analytics: Skip emotion inference and webcam-based attention tracking; EU rules ban such uses in education settings.
- Keep a human in the loop: Staff review is not optional when grading, placing students, or flagging conduct. U.S. Department of Education guidance stresses human oversight and teacher agency.
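A lightweight way to record those review steps is a simple risk register. The fields below are an illustrative minimum inspired by the NIST AI RMF, not the framework itself.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in a course-level AI risk register (illustrative fields)."""
    tool: str
    context: str            # who uses it, for what task
    risk: str               # e.g. biased feedback, over-retention of data
    mitigation: str         # what was done about it
    decision: str           # adopt / restrict / reject
    review_by: str          # date for the post-deployment revisit

register: list[RiskEntry] = [
    RiskEntry(
        tool="Essay feedback assistant",
        context="Formative comments on Year 10 drafts",
        risk="May penalize non-native phrasing",
        mitigation="Teacher reviews every comment before release",
        decision="adopt",
        review_by="2026-01-15",
    )
]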
Equity and Infrastructure
- Connectivity first: Without devices and stable internet, AI widens gaps. ITU’s 2024 snapshot shows 68% of people online and 32% offline; adoption lags in low-income regions.
- Target schools, not averages: Giga’s live maps help ministries and partners find unconnected schools and cost out upgrades.
- Pair tech with training: AI literacy for teachers and students is now a core skill set; UNESCO has released competency frameworks to guide curricula.
Guardian Principles for Academic Integrity
- Focus on learning, not policing: Rely on better assessment design and explicit expectations. Sector guidance warns against over-reliance on detectors.
- Teach citation and disclosure: Ask students to name the tool, paste key prompts, and state how outputs were used; an illustrative template follows this list.
- Use vivas and in-class checks: A short oral walkthrough of key decisions reveals real understanding.
- Offer restorative paths: If misuse occurs, invite a redo under supervision; reserve penalties for clear, documented cases.
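As one illustrative disclosure format, consistent with the sector templates mentioned above, a course could hand students something like this:

```python
# An illustrative disclosure block students fill in and attach to submitted work.
DISCLOSURE_TEMPLATE = """\
AI use disclosure
Tool used:      {tool}
Key prompts:    {prompts}
How I used it:  {usage}
What is my own: {own_work}
"""

print(DISCLOSURE_TEMPLATE.format(
    tool="Generic chat assistant",
    prompts='"Give two counter-arguments to my thesis"',
    usage="Stress-testing my argument before redrafting",
    own_work="All final wording and the selection of evidence",
))
```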
Teacher Workload: Where AI Saves Time—and Where It Doesn’t
- Drafting support: Idea generation and first-pass rubrics can save prep time, followed by human editing for accuracy and tone.
- Feedback triage: Use tools to cluster common errors, then write targeted mini-lessons yourself; a clustering sketch follows this list.
- No autopilot grading: Use AI as a “second reader” only for formative feedback, never as the sole decider on grades or misconduct.
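One way to cluster common errors before writing mini-lessons is classic text clustering. The sketch below uses scikit-learn as an assumed tool, and the cluster count is a starting guess to adjust by inspection.

```python
# pip install scikit-learn
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_errors(error_notes: list[str], n_groups: int = 4) -> dict[int, list[str]]:
    """Group short teacher notes about student errors so each cluster can
    become one targeted mini-lesson. Needs at least n_groups notes;
    n_groups itself is a starting guess."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(error_notes)
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(vectors)
    groups: dict[int, list[str]] = {}
    for note, label in zip(error_notes, labels):
        groups.setdefault(int(label), []).append(note)
    return groups
```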
Policy Starter Kit for Schools and Colleges
- Purpose statement: Describe the educational benefits you seek (feedback speed, accessibility, questioning practice).
- Approved uses by task: What is allowed on homework, projects, and exams, and how to disclose tool use.
- Data controls: Storage limits, retention periods, vendor contracts, opt-out routes.
- Risk and bias checks: Pre-flight reviews, sample audits, and documented fixes. Use NIST AI RMF concepts to structure the process.
- No surveillance defaults: Disallow emotion inference and webcam attention tracking; reference applicable law and sector guidance.
- Professional learning: Ongoing workshops on questioning skills, feedback design, and AI literacy.
- Student rights: A clear appeals process and human review for any AI-assisted decision.
Case Snapshots (evidence-informed moves)
Adaptive Math Practice with Explanation Prompts
A department pairs an established tutor with short, structured self-explanation prompts at key steps. Students type the rule used for each step, then compare with a model explanation.
This follows evidence that step-based tutoring plus explanation improves outcomes.
Reading Support With TTS
A resource room deploys TTS for extended reading blocks. Staff track reading rate and comprehension, mixing silent reading, audio-assisted reading, and teacher-led fluency practice. Reported benefits in rate and on-task time match mixed-method findings.
Assessment With Disclosure
Programs publish plain-language AI rules and give students a simple disclosure template. This aligns with sector guidance from QAA and Jisc.
Twelve Research-Backed Facts to Cite in Your Own Work
- ITS show positive effects over standard classroom instruction across many studies and subjects.
- Many ITS gains come from step-based guidance through problem solving.
- Frequent low-stakes testing boosts long-term retention across classrooms.
- Self-explanation prompts raise learning by connecting steps to reasons.
- AI-text detectors often misclassify non-native English writing as machine-generated.
- Universities advise against using detectors as final evidence of misconduct.
- AI-generated exam answers can pass under human grading without detection, prompting assessment reform.
- Student monitoring and remote proctoring raise equity and privacy risks.
- Emotion inference in education is prohibited under the EU AI Act.
- ITU estimates about 68% of people online in 2024, leaving 2.6 billion offline.
- Giga maps school connectivity to direct investments and track progress.
- National and international guidance emphasizes human oversight and risk management for AI in learning.
A Simple Classroom Playbook
Set norms
- Publish a one-page policy per course: what help is allowed, how to cite prompts, where help is off-limits.
- Provide two or three safe tools that match your context and data policies.
Teach AI literacy
- Show students how to question outputs: ask for sources, ask for two counter-arguments, and check a claim in a textbook or database.
- Model short, precise prompts, and save them for reuse; a small prompt-library sketch follows this list.
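A prompt library can be as simple as a shared dictionary; the entries below are examples drawn from the moves in this playbook, not a canonical set.

```python
# A small, shareable library of reusable classroom prompts (examples only).
PROMPT_LIBRARY = {
    "explain_step": "Explain the rule you used in the step you just took.",
    "counter": "Give two counter-arguments to this claim, naming the evidence "
               "that would test each one.",
    "narrow": "Turn this broad question into three targeted ones: "
              "definition, evidence, application.",
}

def get_prompt(name: str, **context: str) -> str:
    """Fetch a saved prompt; fill any {slots} it declares from context."""
    return PROMPT_LIBRARY[name].format(**context)
```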
Build questioning into tasks
- Add “Explain the step you chose” to problem sets.
- Require a 60-second audio note on “what changed in my understanding.”
Redesign assessments
- Use live or recorded mini-vivas.
- Swap some essays for data commentaries, design memos, or oral posters.
Support accessibility
- Offer TTS/STT options and alternative formats.
- Check with students which speed, font, and display settings work for them.
Guard privacy
- Disable analytics that infer mood or attention.
- Keep data local when possible, and minimize retention.
Key Takeaways
- Use AI where it strengthens practice: step-based tutoring, quick feedback, accessibility aids.
- Center student questioning (self-explanation and retrieval) so tools serve thinking rather than replace it.
- Redesign assessment: rely on authentic tasks and disclosures rather than detectors.
- Guard privacy and fairness: avoid emotion inference and intrusive monitoring.
- Plan for access: devices, bandwidth, training, and clear classroom rules.
FAQs
How can schools talk about acceptable AI use without confusing students?
Publish a short, plain-language policy per course with concrete examples of what help is fine on homework and what is not allowed on tests. Provide a simple disclosure format for prompts and tools. Sector bodies offer templates.
Is it safe to rely on AI-text detectors to catch misconduct?
No. Studies show high error rates and bias against non-native English writers. Use better assessment design, process evidence (drafts, notes), and short vivas.
What’s a quick way to build questioning skills with AI tools?
Add one self-explanation prompt to every problem (“Why this step?”) and a 3-question check: define, cite evidence, apply to a new case. Research supports explanation and retrieval for durable learning.
Do assistive tools like TTS help students with dyslexia?
They can support time on task and reading rate; outcomes for comprehension vary. Treat TTS as an aid within a broader literacy plan, not a standalone fix.
What laws or standards should leaders keep in mind?
Use the NIST AI RMF to structure risk reviews and avoid banned uses under the EU AI Act, such as emotion inference in education.