Artificial intelligence in classrooms is no longer a distant prospect, and Massachusetts education officials on Monday released statewide guidance urging schools to use the technology thoughtfully, with an emphasis on equity, transparency, academic integrity and human oversight.
“AI already surrounds young people. It is baked into the devices and apps they use, and is increasingly used in nearly every system they will encounter in their lives, from health care to banking,” the Department of Elementary and Secondary Education’s new AI Literacy Module for Educators says. “Knowledge of how these systems operate—and how they may serve or undermine individuals’ and society’s goals—helps bridge classroom learning with the decisions they will face outside school.”
The Department of Elementary and Secondary Education released the learning module for educators, as well as a new Generative AI Policy Guidance document, on Monday ahead of the 2025-2026 school year — a formal attempt to set parameters around a technology that has rapidly spread through education.
Both were developed in response to recommendations from a statewide AI Task Force and are meant to give schools a consistent framework for deciding when, how and why to use AI in ways that are safe, ethical and instructionally meaningful, according to a DESE spokesperson.
The department stressed that the guidance is "not to promote or discourage the use of AI. Instead, it offers essential guidance to help educators think critically about AI — and to decide if, when, and how it might fit into their professional practice."
The learning module for educators itself notes that it was written with the help of generative AI.
The first draft was intentionally written without AI. A disclosure says "the authors wanted this resource to reflect the best thinking of experts from DESE’s AI task force, from DESE, and from other educators who supported this work. When AI models create first drafts, we may unconsciously 'anchor' on AI’s outputs and limit our own critical thinking and creativity; for this resource about AI, that was a possibility the authors wanted to avoid." However, a close-to-final draft was entered into large language models such as GPT-4o and Claude Sonnet 4 "to check that the text was accessible and jargon-free," it says.
In Massachusetts classrooms, AI use has already started to spread. Teachers are experimenting with ChatGPT and other tools to generate rubrics, lesson plans and instructional materials, and students are using them to draft essays, brainstorm ideas or translate text for multilingual learners. Beyond teaching, districts are also using AI for scheduling, resource allocation and adaptive assessments.
But the state’s new resources caution that AI is far from a neutral tool, and questions swirl around whether AI will enhance learning or shortcut it.
"Because AI is designed to mimic patterns, not to 'tell the truth,' it can produce responses that are grammatically correct and that sound convincing, but are factually wrong or contrary to humans’ understanding of reality," the guidance says.
In what it calls "AI fictions," the department warns against over-reliance on systems that can fabricate information, reinforce user assumptions through "sycophancy," and create what MIT researchers have described as "cognitive debt," where people become anchored to machine-generated drafts and lose the ability to develop their own ideas.
The guidance urges schools to prioritize five guiding values when adopting AI tools: data privacy and security, transparency and accountability, bias awareness and mitigation, human oversight and educator judgment, and academic integrity.
On privacy, the department recommends that districts only approve AI tools vetted through a formal data privacy agreement process and teach students how their data is used when they interact with such systems. For transparency, schools are encouraged to inform parents about classroom AI use, maintain public lists of approved tools, and describe how each is used.
Bias is another central concern. The guidance notes that generative AI tools can carry built-in harmful biases, because they are trained on human-generated data, and says teachers and students should examine how AI responses may vary across different groups and contexts.
"When AI systems go unexamined, they can inadvertently reinforce historical patterns of exclusion, misrepresentation, or injustice," the department wrote.
Officials warn that predictive analytics that forecast a student's future outcomes could incorrectly flag that student for academic intervention, based on a biased AI interpretation of the data.
"Automated grading tools may penalize linguistic differences. Hiring platforms might down-rank candidates whose experiences or even names differ from dominant norms. At the same time, students across the Commonwealth face real disparities in access to high-speed internet, up-to-date devices, and inclusive learning environments," the guidance says.
The document also places responsibility on educators to oversee and adjust AI outputs. For example, teachers might use AI to draft a personalized reading plan but still adapt it to reflect a student’s individual interests, such as sports or graphic novels.
For students, the state is moving away from a tone of outright prohibition of AI and toward one of disclosure for the sake of academic integrity.
The documents suggest that schools could adopt policies asking students to include an "AI Used" section in their papers, clarifying how and when they used the tools, while educators teach the distinction between AI-assisted brainstorming and AI-written content.
"Schools teach and encourage thoughtful integration of AI rather than penalizing use outright... AI is used in ways that reinforce learning, not short-circuit it. Clear expectations guide when and how students use AI tools, with an emphasis on originality, transparency, and reflection," it says.
Beyond classroom rules, the guidance frames "AI literacy" — not only technical knowledge, but the ability to understand and evaluate the responsible use of these tools — as an important job and civic skill.
"Students need to be empowered not just as users, but as informed, critical thinkers who understand how AI works, how it can mislead, and how to assess its impacts," the guidance says.
That literacy extends to the personal and environmental costs of technology. Students, the department suggests, should reflect on their digital footprints and data permanence while also considering environmental impacts of AI like energy use and e-waste.
The new resources emphasize that "teaching with AI is not about replacing educators—it’s about empowering them to facilitate rich, human-centered learning experiences in AI-enhanced environments."
The classroom guidance arrives as Gov. Maura Healey has taken a prominent role in shaping Massachusetts’ AI landscape. Last year she launched the state’s AI Hub, calling it a bid to make Massachusetts a leader in both developing and regulating artificial intelligence. Healey has promoted an all-in approach to integrating AI across sectors, highlighting its potential for economic development.
Education officials positioned their new resources as part of that broader statewide strategy.
"Over the coming years, schools will play a critical role in supporting students who will be graduating into this ecosystem by providing equitable opportunities for them to learn about the safe and effective use of AI," the guidance says.
The documents acknowledge that AI is already embedded in many of the tools students and teachers use daily. The challenge, they suggest, is not whether schools will use AI but how they will shape its role.
The release also comes against the backdrop of a push on Beacon Hill to limit technology in classrooms.
The Senate this summer approved a bill that would prohibit student cellphone use in schools starting in the 2026-2027 academic year, reflecting growing concern that constant device access hampers focus and learning. Lawmakers backing the measure have likened cellphones in classrooms to "electronic cocaine" and "a youth behavioral health crisis on steroids."
The House has not said when it plans to take up the measure, or even when representatives will return for serious lawmaking, a timetable that now appears likely to fall after the new school year begins. That uncertainty leaves schools in a period of flux, weighing how to integrate emerging AI tools even as lawmakers consider pulling back on other forms of student technology use.