AI in IEPs—When the Real Ethical Question is Before the AI

Top 3 Key Takeaways:

  • The real ethical crisis in IEP development isn’t AI—it’s the systemic conditions teachers face long before AI enters the picture.
    Special educators are often required to build legally compliant, data-rich IEPs using tools that were never designed for students with disabilities. Ethical concerns start before AI is introduced. High-quality diagnostic tools created for special education, like Let’s Go Learn’s DORA and DOMA diagnostics, are essential for establishing a trustworthy foundation.

  • AI can help—but only when paired with valid, diagnostic data and strong human-led processes.
    The report from the Center for Democracy & Technology (CDT) highlights risks around FERPA, IDEA compliance, bias, and accuracy. These risks increase when AI is fed poor or inappropriate data. Responsible AI use must begin with assessments that provide true present levels of performance. Resources such as the U.S. Department of Education’s AI Guidance help districts establish safeguards, while platforms like Let’s Go Learn’s LGL Edge offer AI-supported tools grounded in research-based diagnostics.

  • Before districts debate the ethics of AI, they must address the structural burdens placed on special-education teams.
    Heavy caseloads, timeline pressures, inadequate assessments, and disconnected goal banks create a system where compliance is already at risk—AI or no AI. Ethical AI implementation requires fixing the foundation: collaborative processes, appropriate workloads, and valid diagnostics for students with disabilities. For evidence-based guidance, CDT’s recent brief and resources from the Council for Exceptional Children offer strong frameworks for policy and practice.

When we ask whether it’s ethical for teachers to use artificial intelligence (AI) tools in crafting individualized education programs (IEPs), we’re asking the wrong first question. The more urgent ethical question is this: What kind of system did we build that forces teachers into impossible positions in the first place?

The CDT’s recent brief, *“From Personalized to Programmed: The Use of Generative AI to Develop Individualized Education Programs for Students with Disabilities,”* shows that 57% of teachers reported using AI for IEPs or 504 plans in 2024-25, up significantly from 39% the previous year. It raises important concerns: privacy under the Family Educational Rights and Privacy Act (FERPA), compliance with the Individuals with Disabilities Education Act (IDEA), accuracy, bias, and the overarching need for human oversight.

These are valid and crucial conversations. But let’s step back a moment:
What if the system that precedes AI already contains serious ethical flaws?

The Invisible Ethical Problem

  • Many special-education teachers are asked to construct IEPs based on data from general-education screeners (like MAP Growth or i‑Ready) that were never meant to define “present levels” or individualized functional performance for students with disabilities.
  • In some districts, the special education coordinator writes the IEP narrative without consulting the teacher working directly with the student—or worse, using only summative test scores to justify supports.
  • Special education teachers are expected to determine present levels and write SMART goals on their own, with little to no access to automated diagnostic tools.
  • Add in heavy caseloads, limited diagnostics, and bureaucratic pressure to meet timelines and mandates: the system asks teachers to do something legally and pedagogically demanding but gives them inadequate tools. SPED directors often tell me that SMART goal libraries lead teachers to choose goals disconnected from the actual student.

This is an ethical issue: by design, we place teachers into positions where they cannot reliably meet legal and educational expectations. And we do so before we even ask whether AI is appropriate.

How AI Fits In, and Why the Real Question Is Bigger

AI can absolutely offer benefits: efficiency, drafting support, pattern recognition, and freed-up teacher time. CDT highlights these possibilities, but it also flags the risks: accuracy, bias, data privacy, transparency, and compliance with IDEA and FERPA.

Here’s the catch:

  • If the foundation (data, time, diagnostic assessment, teacher collaboration) is weak, then adding AI can amplify the problem rather than fix it.
  • If teachers are already forced to write IEPs from inadequate data or systems, then AI might seem like a shortcut—but it doesn’t fix the root ethical problem.
  • In other words: yes, we should ask “Is it ethical to use AI in IEPs?” But before that, we must ask: “Is it ethical to expect teachers to deliver individualized education programs under conditions that make accuracy and compliance highly unlikely?”

My Estimate

In the districts we work with, AI usage introduces perhaps a 20% risk of ethical or legal issues (through misuse, bias, or lack of oversight). But the system itself, given what we ask teachers to do and what we provide them to do it, carries built-in risks that I’d peg closer to 30-40%. That structural risk is often invisible in the AI debate.

What Needs to Happen

  • Ensure teachers have appropriate diagnostic tools and data specifically designed for students with disabilities, not just general-education screeners. Make this data available, as we do, before AI is used. Good data in, good data out!
  • Write and enforce collaborative processes: the teacher who works daily with the student must have input into the IEP, not just a coordinator disconnected from direct instruction.
  • Provide explicit training, policies, and oversight around any AI use in IEP development (as CDT recommends).
  • District leadership must acknowledge and address the systemic burden placed on special-education teams: heavy workloads, inadequate tools, and rushed timelines.

Only then can we responsibly introduce AI tools—not as a Band-Aid, but as part of a stronger system.

Conclusion

The conversation around AI in IEPs is important. But the more profound ethical question is: what kind of system did we build that puts teachers in positions where compliance, accuracy, and legality are already at risk? Let’s fix that system first. Only then should we ask how AI fits in. At Let’s Go Learn, we are implementing ethical and effective AI use today with our customers, and we’re happy to share what works and what doesn’t.