AI in Schools: The Promise and Perils Reshaping K-12 Education [Guest]
AI in Education is a Game Changer: Schools Need AI Experts to Step up to Support Success
Hey, it’s Devansh 👋👋
Our chocolate milk cult has a lot of experts and prominent figures doing cool things. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on. If you or someone you know has interesting ideas in Tech, AI, or any other fields, I would love to have you come on here and share your knowledge.
I put a lot of effort into creating work that is informative, useful, and independent from undue influence. If you’d like to support my writing, please consider becoming a paid subscriber to this newsletter. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly. Many companies have a learning budget that you can expense this newsletter to. You can use the following for an email template to request reimbursement for your subscription.
PS- We follow a “pay what you can” model, which allows you to support within your means, and support my mission of providing high-quality technical education to everyone for less than the price of a cup of coffee. Check out this post for more details and to find a plan that works for you.
The following is a guest post from Dr. Julia Rafal-Baer, CEO and co-founder of ILO Group, and Laura Smith, Principal at ILO Group and lead of AiLO. AiLO is a division of ILO Group dedicated to advancing the seamless and safe implementation of AI in K-12 districts, SEAs, and organizations.
The ILO Group is a woman-owned policy and strategy firm that supports education leaders with a keen understanding of their day-to-day political realities. They work with leaders that reach 1 in 3 kids in America and have operated a working group with over 40 of the nation’s largest districts and state leaders, providing support on the technical, fiscal, operational, and political realities of AI implementation. The following article contains insights from their work helping educators and school districts navigate the safe and ethical use of AI in their operations.
As always, I’m very excited to hear what you have to say about this topic. As a technical expert, I’m particularly interested in learning about how more technical experts like myself can help educators keep up with AI. Any and all ideas would be very welcome.
The AI revolution in K-12 education is in its infancy and already it is rapidly changing the landscape and discussions in our sector. Through novel tools that strengthen learning, ease administrative burdens, and advance strategic decision-making, new applications for the transformative technology are emerging daily. Simultaneously, AI has the potential to become the education sector’s worst political nightmare without sustained focus on risk mitigation to maximize its benefits.
The Consortium for School Networking (CoSN)'s 2024 Annual Technology Leadership Survey, which represents a robust sampling of EdTech leaders across U.S. school systems, highlighted this potential and concern. Findings from 981 responses collected between January 10 and February 29, 2024 note: “EdTech Leaders recognize that AI has potential risks and benefits. The overwhelming majority (97%) see benefits in how AI can positively impact education and over a third (35%) of districts report having a generative AI initiative. The areas with the greatest potential for positive impact of Gen AI most cited were productivity (43%) and personalized education (30%). New forms of cyberattacks (63%) and cyberbullying (47%) that are enabled by AI were cited as top risks, along with the lack of teacher training for integrating AI into instruction (49%). Most districts (54%) do not have a separate AI use policy but a growing number address AI use within current policies (31%) and only 3% have bans. One-fifth (20%) of respondents work in districts that use tools designed to detect AI-generated answers in student work.”
These concerns are widespread and highlight the emerging challenges already facing school systems. From deep fakes to personally identifiable information (PII) issues to new vendors coming to market and not delivering on their commitments – all of this is real and happening – there are so many situations where well-intentioned staff might use AI without realizing the risks inherent in what they are uploading into AI, or the need to have more of a focus on the output.
For example, even issues of plagiarism detection raise complex issues about student privacy. Recently a high school student in Maryland was accused of cheating, supposedly by using AI to write an essay. Her school ran her paper through a website called GPTZero, which claimed there was a 90% chance the work was AI-generated. Based on that information, the school put a plagiarism mark on her record, despite the student’s denials. While her family says she wrote the paper herself and her family has filed five appeals to date, the school's actions raise two separate privacy issues. First, GPTZero's terms of service require that anyone uploading content must either own it or have permission from the owner to share it. The school did neither - they uploaded the student's work without her consent. Second, under the federal education privacy law FERPA, schools must get permission before sharing a student's educational records with outside companies. By uploading the paper to GPTZero, a third-party service, without the student's or family's consent, the school may have violated these federal protections.
As happened here, adults may inadvertently compromise student privacy by inputting sensitive personal information into AI tools for various educational purposes, potentially leading to unauthorized processing or storage of protected student data.
This privacy risk is compounded by other risks, such as the issue of models encoding racial or other biases. AI tools carry significant risks of bias that stem from their training data. Whether it's language models showing gender stereotypes in career-related responses, or image generators depicting certain professions with limited diversity, these biases can subtly reinforce and amplify existing social disparities. The challenge is particularly concerning because many users assume AI systems are neutral and objective, when they're mirroring back our own societal biases, often in ways that aren't immediately obvious. These biases can extend beyond race, gender, and socioeconomic status to include factors like English language proficiency, learning differences, and academic performance levels.
In a June 2024 Education Week article, computer scientist Ashok Goel highlights a critical limitation in AI's ability to serve diverse learners: “Case-in-point: One of the most exciting possibilities of AI for K-12 educators is its potential for personalizing lessons for students. But AI's feedback on student work "might be right for, say, a neurotypical child and maybe not right for a neuroatypical child," said Ashok Goel, a professor of computer science and human-centered computing in the School of Interactive Computing at Georgia Institute of Technology, who is developing and testing an AI chatbot to assist adult learners. "But the AI will not be able to make that distinction because it doesn't have the data on neuroatypical children" since that population is "harder to collect data on." Similarly, voice recognition software used to gauge a student's reading level may not accurately assess students with strong regional accents or those whose first language isn't English.”
Yet despite these real risks that require intention and substantial mitigation, the opportunity for dramatic impact is even greater. Education leaders must devote the time into learning, researching, understanding, and sharing to meet this rapidly evolving moment. If there has ever been a time where a coordinated effort is required to systematically explore AI’s implications in our sector, it is now - focusing on the technical, the operational, the fiscal and the political realities while staying clear-eyed about the improvements required in the technical capabilities, pedagogical impacts, ethics, and long-term learning and behavioral outcomes.
Predictably, the availability of high-quality guidance for school systems – both local school districts and state education agencies – has lagged the lightning speed of AI technical innovation. It was that dearth of high-quality guidance that inspired ILO Group’s creation of two AI Frameworks: the Framework for Implementing Artificial Intelligence in K-12 Education aimed at school district leaders and the Framework for Implementing Artificial Intelligence in State Education Agencies (SEAs) for leaders at the state system level.
Both are designed as “living documents” to evolve alongside AI advancements and lessons learned from real-world use. The two frameworks provide district and state leaders, respectively, with a roadmap for understanding AI’s potential benefits, addressing associated challenges, and making well-informed decisions about implementation.
At the highest level, these four key areas to consider - political, operational, technical, and fiscal - then inform more specific AI applications across various departments within the school district or state education agency.
The political sections emphasize establishing the foundational governance structures and guiding principles for AI implementation in education. Through comprehensive stakeholder engagement, it ensures that educators, administrators, parents, and students have a voice in shaping AI policies. This framework creates a clear vision emphasizing responsible and ethical AI usage, while establishing frameworks for safety, privacy, fairness, transparency, human oversight, and accountability. Importantly, it reinforces that AI should complement and enhance human educators' work rather than attempt to replace them.
The operational sections translate vision into action through detailed roadmaps establishing internal governance structures along with dedicated teams to oversee implementation. This area focuses heavily on building capacity through AI literacy and digital literacy training for staff, while developing strategic communication plans to effectively convey initiatives to stakeholders.
The technical sections ensure robust and secure AI implementation by aligning with government technical standards and procurement guidelines. It begins with comprehensive readiness assessments of existing infrastructure and capabilities, while establishing specialized testing facilities like AI Assurance Laboratories for safety and quality control. This framework creates dedicated security and review teams, implements strong data security requirements, and builds statewide technical support networks for LEAs. These technical foundations ensure AI systems are both secure and effective.
The fiscal section describes supporting sustainable funding mechanisms for comprehensive AI implementation, including dedicated funding streams for AI initiatives and professional development, while launching innovative programs to pilot new solutions, with specific focus on addressing AI access disparities across socioeconomic and geographic differences.
Beyond these broad areas of consideration, the potential for AI extends to numerous educational functions, including curriculum and instruction, special education, student support services, and family and community engagement, and administrative tasks such as enrollment management, human resources, facilities and operations (including transportation), and information and technology management.
Each of these department specific AI applications has intriguing potential use cases for reducing administrative burden and supporting students. A few of note:
Curriculum & Instruction: AI can help evaluate instructional materials against state quality indicators and creates safeguards for AI-generated content.
Educator Support: One innovative use case is an AI-powered chatbot that answers complex licensure questions for educators across states, simplifying what is typically a confusing process.
Assessment: AI enables more sophisticated testing approaches, including AI-assisted oral assessments and performance evaluations with open-ended tasks. The technology helps generate test items, provides rapid scoring, and creates visualizations that make data more accessible to stakeholders.
School Improvement: Lleverages AI to analyze patterns in successful turnaround efforts. The system can examine historical improvement plans, identify what worked in similar contexts, and provide customized recommendations for specific schools based on their unique characteristics.
Communications: AI translation tools can instantly convert documents and web content into multiple languages.
Fiscal Management: AI models can employ AI to streamline grant management, providing automated feedback on applications and monitoring fund usage.
Research: AI can helps identify early warning signs of chronic absenteeism by analyzing multi-year attendance data. It can even provide street-by-street analysis to help districts target their back-to-school outreach efforts and provide language-appropriate support to families.
From ILO Group’s Framework for Implementing Artificial Intelligence (AI) in K-12 Education
These resources serve as the foundation of our work with districts and states around the country on the pressing issues and powerful opportunities produced by AI in the education context. We partner with leading researchers and practitioners to ensure the leaders and systems we serve have the resources and research-basis to thoughtfully embrace AI. This work provides us with a front-row view of both the advancements of the technology as well as the implementation and application of AI-powered tools in K12 education, learning what is actually being used and under what conditions and with what levels of impact. We also can see first-hand the political risks that are coming forth across schools in varying size and type of communities.
While education leaders must tackle key questions related to vision setting and pedagogy, across our work with K12 leaders, we see three key areas where education systems would benefit from greater involvement from technical experts at the forefront of AI innovation and application:
1. Safe Generation of State and District Documents using RAG-trained LLMs
AI-driven tools like chatbots are becoming more commonplace in managing the vast and complex data within educational institutions. The newest generation of chatbots often utilize some form of Large-Language Model (LLM) to generate responses. Although responses from these systems can sometimes be erroneous, model tuning techniques like Retrieval-Augmented Generation (RAG) offer a way to improve their performance and safety by incorporating specific data sources to help the systems provide quick and contextually relevant information in response to user queries. This can be particularly helpful for state government agencies with extensive guidance and regulatory materials that are often challenging to find answers to, for example around areas like certification and reciprocity rules for educators. Furthermore, RAG-training LLMs may further important insights that improve students support and learning. A fascinating study from researchers out of Stanford and NYU, Predicting Results of Social Science Experiments Using Large Language Models, in August found that LLMs could augment social studies science research by enabling rapid, low-cost pilot studies.
However, the effectiveness of these systems hinges on their ability to handle sensitive educational data securely and the risk tolerance that states and districts find acceptable for handling inaccurate or incomplete information.
With the appropriate technical support to tune AI models specifically for educational contexts, leaders can ensure that these AI systems and their applications are both reliable and secure. This approach not only enhances the utility of AI-driven tools but also protects the integrity of the data they manage, allowing educators and administrators to confidently rely on these technologies in their daily operations.
2. Developing an AI Assurance Laboratory to Identify Key Safety and Ethical Risks
As AI becomes more integrated into education, the need for rigorous safety and ethical oversight as states and districts roll out products grows. Establishing an "AI Assurance Laboratory" within state agencies can help address this challenge, serving as a dedicated resource for identifying and mitigating the unique risks associated with AI adoption and implementation in the K12 context.
This laboratory will need to conduct comprehensive pre-deployment risk assessments, covering potential misuse and harmful capabilities, as well as external evaluations of the organization’s data sharing and privacy practices, including their fidelity in following these protocols. Before any AI-powered products are rolled out to students and teachers, school systems will need assurance that these tools are limited to their intended use and have robust protocols in place to safeguard any personally identifiable information that is required to operate. The complexity of AI systems means that education organizations require external support to develop robust frameworks that can pinpoint potential vulnerabilities and ethical concerns before a product is ever launched.
Moreover, for ongoing monitoring and evaluation, school systems need resources that not only teach team members how to run acceptable use tests but also help them set and understand acceptable thresholds, ensuring their AI systems consistently adhere to those standards. An AI Assurance Laboratory can play a critical role in safeguarding the integrity of educational AI systems. Such a laboratory can further set the standard and implement other key risk mitigation measures. Important ones such as safety drills, third-party pre-deployment audits, whistleblower provisions, capability restrictions, and unlearning and fine-turning restrictions.
3. Systems for Notification, Investigation, and Response to Fake AI Incidents
The need for robust systems to handle Fake AI Incidents is critical as deepfake technology becomes more sophisticated. One such incident occurred in Maryland, where a faked audio clip circulated of a high school principal purportedly making racist and antisemitic remarks against students and colleagues. The audio clip, which surfaced in January, quickly went viral and divided the school’s community over its veracity.
Examples like this one illustrate that education systems and leaders need access to experts equipped with digital forensic tools that are capable of real-time detection of manipulated media. By integrating these tools, schools can ensure rapid response to such incidents, minimizing their impact. Additionally, forming partnerships with AI experts in digital forensics will provide schools with the necessary expertise to stay ahead of evolving threats and handle responses accordingly.
Schools Need Partnerships with Technical Experts
We are just at the beginning phases of the robust transformation that AI will likely foment in education. If implemented effectively, the opportunities to enhance learning and improve school operations will be immense. However, to fully realize these benefits, technical experts will need to partner with K12 leaders in ways that balance the multiple demands and competing priorities that come with implementing game-changing technology safely in a complex system at scale.
The education sector has historically struggled to capitalize on technological advancements, as exemplified by the failure of inBloom in 2013. This ambitious project aimed to centralize student data across states but collapsed within a year due to privacy concerns and public mistrust. This case, along with recent AI mishaps in other sectors, highlights the missed opportunities and potential pitfalls in implementing new technologies without proper safeguards and deep levels of community engagement. We recommend a minimum of one year of community engagement before implementing any large-scale AI deployments.
To lead in the AI era, and avoid repeating past mistakes, education leaders must proactively develop a clear vision for AI and digital literacy in their communities. This approach requires balancing competing priorities and starting with the desired outcomes rather than specific products. By engaging stakeholders, addressing privacy concerns transparently, and fostering an understanding of AI's benefits and risks, educators can shape AI implementation to serve their communities' needs. The alternative – allowing federal mandates or vendor-driven solutions to dictate the terms – risks missing another generational opportunity to positively shape technology's role in education, much like what happened with social media. Leaders must act now to define AI literacy, ensure proper safeguards, and align AI solutions with educational goals to avoid the pitfalls seen in other sectors and previous technological shifts.
From ensuring data security and ethical integrity to developing robust systems for detecting and responding to threats like deepfakes, the partnership of those at the leading edge of the field will be essential in building a future where AI is not just an innovative tool but a trusted, reliable resource that empowers educators and protects students. Now is the time for AI experts to get involved—their knowledge and skills can make a critical difference in shaping the responsible and effective use of AI in schools and ensure education leaders are able to harness the full potential of AI safely and ethically.
I provide various consulting and advisory services. If you‘d like to explore how we can work together, reach out to me through any of my socials over here or reply to this email.
I put a lot of work into writing this newsletter. To do so, I rely on you for support. If a few more people choose to become paid subscribers, the Chocolate Milk Cult can continue to provide high-quality and accessible education and opportunities to anyone who needs it. If you think this mission is worth contributing to, please consider a premium subscription. You can do so for less than the cost of a Netflix Subscription (pay what you want here).
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. You can share your testimonials over here.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819
Great stuff!
I didn't realize how complex of an issue this is.