
Generative AI and Neurodiversity

Where are we now?

What tensions might we need to manage as we learn?

February 2025


Introduction & Advocacy


In April 2023, ISLES and NFI partnered to issue a joint statement on the role of Generative AI (GenAI) in learning support. Our assessment was that GenAI could be a valuable asset in education if used thoughtfully and ethically, with a focus on developing students' higher-order thinking skills and understanding of AI's limitations. Since then, GenAI has exploded. The exponential development of this technology and its integration into our daily lives have shifted the conversation very quickly from “should we use AI?” to “how should we use AI?” Schools and educators feel as though they are walking a tightrope while managing the tension of the rope at the same time.


The World Economic Forum Future of Jobs Report released in January 2025 underscores the rapid development and adoption of AI and the impact it is expected to have on the future we are preparing our students for. Technological advances are “expected to have a divergent effect on jobs, driving both the fastest-growing and fastest-declining roles, and fueling demand for technology-related skills, including AI and big data, networks and cybersecurity and technological literacy, which are anticipated to be the top three fastest-growing skills” (World Economic Forum, 2025).


In independent international schools, these rapid changes and the adoption of new technologies pose additional challenges. The responsibility for keeping pace with new developments and best practices often falls to a few leaders, yet it requires all community members to join the conversation, harness collective wisdom, and bravely and transparently engage in the good struggle of learning together with uncertain outcomes. Accepting a degree of vulnerability, and the reality that schools are still in their infancy with AI, can be both overwhelming and exciting. AI is not going away or slowing down, so the time to engage in dialogue and learning is now.


The NFI and ISLES stance recognizes that AI holds enormous potential for supporting students with neurodivergent learning profiles when its tensions and risks are thoughtfully managed by human educators and leaders. Recognizing that much has changed since the position paper was written, ISLES and NFI engaged with members in international schools to learn how AI is currently being used to support students with learning needs.

At present, we highlight two human sentiments to navigate: hope and caution. We need to continue to learn about and from GenAI to inform decisions in the best interests of all moving forward. Managing these sentiments requires something of a leap of faith, taken in lockstep with knowledge, strong collaboration, and an extra layer of due diligence about how data is used by the GenAI products available within schools.



Hope: Recognizing the Opportunity and Potential for Learning Innovation

Potential of AI Use by Educators

GenAI has the potential to save time for teachers. Examples include:

  • Automating routine tasks

  • Drafting emails, feedback, newsletters

  • Analyzing student learning data efficiently, enabling educators to offer more timely formative feedback and tailored instruction 

  • Drafting rubrics 

  • Creating custom images rather than searching for them on the internet


  • Acting as a thought partner in designing lessons (e.g. LUDIA, a chatbot trained on the UDL principles)

  • Drafting short texts to help build background knowledge, enabling students to comprehend grade-level texts

  • Identifying patterns in student feedback 

  • Efficiently creating personalized learning resources, including images, videos, podcasts, etc. tailored to student interests and readiness


Considerations for AI Use by Educators

AI is advancing at lightning speed, and we are often reminded that it is now the least sophisticated it will ever be. GenAI is moving beyond text- and voice-based chatbots and image and sound generators. With reasoning models, AI agents, and deep-analysis models making their own choices, we will be challenged in new ways as AI becomes more autonomous from human input. Regardless, we believe that AI cannot replace teachers' expert knowledge and judgment. Applying the 80-20 Pareto Principle here suggests that AI might do 80% of the work while humans do the other 20%, which includes the human skills and dispositions of applying discernment and professional judgment. Exercising that discernment implies ongoing learning and staying abreast of rapid changes and developments in the technology. When using AI, we need to prioritize human oversight and interaction. AI should serve as a tool to support, not supplant, the crucial role of educators and human judgment in the learning process. Fostering intellectual development requires a focus on retaining and cultivating uniquely human skills and attributes, including: student-teacher connections and relationships, problem-solving, metacognition, understanding the interdependence of humans, systems, and ideas, communication skills, contextual interpretation of information, empathy, impact awareness, and critical thinking.


To effectively integrate AI, we must reimagine the design of adaptive learning tasks. Success involves not only completing tasks but also enriching, improving, and innovating on them, and may ultimately require reinventing learning activities altogether. Making the transition to AI-conscious instruction will involve a focus on making learning processes visible and holding students (and teachers) accountable for the process. As we explore the potential of AI to support students with learning needs, rigorous impact analysis, examining AI's effects on individual and organizational learning, will be essential to inform best practices. Collaboration among educators, administrators, and technology experts is crucial for responsible and effective AI implementation. Cultivating student agency by involving students in decisions about AI use in their learning will support the development of their metacognitive skills.


Potential of AI Use by Students

AI has the potential to greatly improve the digital tools available to neurodivergent students. Examples include:

  • Improved capabilities of adaptive digital tools to further personalize learning opportunities based on students’ individual needs and learning progress

  • Enriched and personalized learning experiences, such as simulations, which have the potential to increase student engagement and motivation through increased relevance and alignment with student goals, interests and readiness

  • Assistive technologies that provide more customized support to make learning environments and experiences more accessible

  • Tutoring chatbots designed to provide content support as well as to facilitate critical thinking, metacognition, and problem-solving skills



Caution: Maintaining Hope and Proceeding with Informed Diligence

GenAI Impact on Learning

Optimism about the potential of student use of AI tools to transform learning has been tempered with caution by educators who worry about academic integrity, deskilling, and over-reliance on technology. Emerging research indicates cause for caution (Gerlich, 2025; Hardman, 2025; Bastani et al., 2024), with studies finding that:

  • While engaging with generic GenAI tools such as ChatGPT can result in a moderate increase in student engagement and task performance, these gains do not persist or transfer to novel situations when access to the tool is removed (and students may perform worse than those who had not used AI)

  • Cognitive offloading (using AI tools to decrease demands on working memory) was strongly associated with decreased critical thinking and cognitive engagement

  • Interaction with AI tended to decrease metacognition and cause learners to be overconfident about their understanding

  • These effects are amplified in younger adults and less advanced learners, who are more likely to become dependent on AI and to engage in less sophisticated interactions with it.


While this field of research is very new, and more work is needed to fully understand the promise and perils of integrating student use of AI into the classroom, the evidence so far supports the need for an informed and thoughtful approach. There is very limited research on the long-term learning impact for school-aged children, and almost none yet for students of middle school age or younger. Given this, a cautious approach is particularly important for younger children and for students who require individualized, specialized instruction based on their learning profiles.


Some Considerations for Mitigating Risks to Learning

Hardman (2025) suggests the following strategies for mitigating some of the potentially negative effects of using GenAI in learning:

  • Restrict AI use when teaching foundational skills

  • Be very cautious in deciding whether to use AI with younger learners

  • Use AI tools designed for education that offer developmentally appropriate levels of support for school-aged children. These tools have guardrails to facilitate learning, whereas generic AI tools do not

  • Build in opportunities for independent problem solving

  • Assess transfer and retention when using AI in learning activities

  • Build in opportunities for critical analysis of AI

  • Build in opportunities for students to engage in reflection, goal setting, progress monitoring and evaluating their own performance


Protecting Data, Personal Safety, Wellbeing and Privacy

Protecting student data is paramount, and educators must be mindful of the ethical implications of using AI. Concerns about intellectual property, privacy, and social and emotional development remain a focus for future work. It is essential that schools be familiar with the terms and conditions of any services they endorse and offer guidance to educators about which tools may be used and how. Developing a school toolbox of vetted AI tools can guide practitioners as they navigate and experiment safely with tools that have some guardrails for student safety and learning. Procedures for ensuring the anonymization of data, including what can be uploaded and how, are an important aspect of this work.


Navigating Uncharted Ethical and Legal Issues

The UNESCO (2023) Guidance for Generative AI in Education and Research identifies a range of ethical and policy issues arising from the use of AI in general, and GenAI specifically. The “uncharted ethical issues” of access and equity, human connection, human intellectual development, psychological impact, and hidden bias and discrimination go far beyond the obvious initial concerns about academic integrity that GenAI sparked in educators around the world.


Other questions about the ethics and legality of AI use are emerging:

Is it legal or permissible to upload student work to LLMs that will use it to train their models?

Recent UK policy states, “Schools and colleges must not allow or cause students’ original work to be used to train generative AI models unless they have permission or an exception to copyright applies.” (UK Department for Education, 2025)


Is anthropomorphizing technology harmful, especially for children?

Students and adults alike may credit AI with human qualities and abilities because it is packaged as human-like, which increases our comfort in interacting with it. This lowers our guard and may cause vulnerable people to become emotionally involved with AI in unhealthy ways. It has the potential to reduce human interaction and raises the risk that already isolated students may turn to technology instead of people to meet their social and emotional needs (OECD, 2024).


Might using AI inadvertently contribute to discriminatory beliefs and practices?

The concept of "techno-ableism" suggests that technology, including AI, is often presented as a cure or fix for disability, aiming to make individuals with disabilities conform to a non-disabled, "normal" world. This perspective frames disability as a personal deficit that technology can solve, rather than considering broader societal barriers and the diversity of human experience. Essentially, it tries to "fix" the person to fit the world, rather than addressing the world's limitations in accommodating diverse needs (OECD, 2024). When considering bias from multiple perspectives, it is important to be mindful that many AI models (e.g., image generators and language models) can reinforce bias, especially gender and racial bias, or spread misinformation.


How might we prevent or mitigate the dishonest use of AI?

Schools need to generate social norms and general agreements for disclosing the use of AI in schoolwork. These agreements should be developed in ways that build trust and transparency, with the intention of opening doors for dialogue about how AI is being used, rather than driving its use underground and losing the opportunity to learn together. Creating space for transparency allows professionals and students to engage in conversations about uncharted ethical issues, technical challenges to safety, and impacts on learning.


How might we mitigate the safeguarding risks that are amplified by some Gen AI tools?

The very power of AI to streamline workflows and automate tedious tasks also empowers malicious actors, enabling them to inflict harm with unprecedented ease, efficiency, and effectiveness. As AI tools advance, so too does the challenge of mitigating their risks, particularly the spread of harmful misinformation and disinformation, including increasingly sophisticated deepfakes. The potential for AI to generate non-consensual intimate imagery ("deep nudes") and facilitate targeted exploitation is especially alarming.


The Role of Parents

As partners in their children's education and development, parents have an important role to play as AI becomes increasingly integrated into learning. When schools keep parents informed about how AI is being used and the potential implications, parents are better equipped to engage their children in meaningful conversations about its appropriate and ethical use.


Monitoring their child's AI usage can help promote safe and responsible practices. Beyond that, family conversations about values surrounding learning, human connection, and critical thinking can help shape children's perspectives on AI's role in their lives and the world around them.


Conclusion

Even as we write this statement, new advances and tools are being developed and launched, and we can expect this to continue. The need for more research on the impact of AI use with students in general, and with students with learning needs in particular, is clear.


Our greatest hope remains where it has always been in education, and where it will be in the future: in the human potential to learn.


Navigating AI and harnessing the power of human potential to learn in a changing landscape is the next frontier to explore, and to this we are no strangers.

Iterative training, support, and ongoing dialogue among professionals, students, and parents to effectively integrate AI tools into the lives of the students in our care remain core competencies in our collective work.

Managing the human sentiments of hope and caution will support dialogue and decision-making. Practitioners, leaders, students, and parents shouldn't go it alone; this work requires dialogue, transparency, careful experimentation, and the courage to support one another as we walk the tightrope.





On behalf of ISLES and NFI, we would like to thank all those who contributed to this article by offering their thoughtful expertise, resources, and conversations through January and February of 2025.


Editors

Cindy Warner-Dobrowski, International School of Kuala Lumpur - Director of Student Services, ISLES Vice President

Kristen Pelletier, Castle Corner, Redefining Access LDA & NFI Design Team President


Contributors

Anthony Copeland, American School of Dubai - High School Technology Integrator

Andrew Bumgarner, Canadian International School of Hong Kong - Head of Learning Support

Charlotte Diller, International School of Kuala Lumpur - Director of Technology

John Mikton, International School of Geneva - Digital Learning Facilitator

Kristel Solomon, Inclusive Education Solutions (ISLES Board, NFI Design Team)

Matthew Kelsey, American School of Dubai - Director of Technology

Ochan Kusuma Powell, Education Across Frontiers - Founder and Executive Director

Stephanie Hepner, United World College Southeast Asia East - Head of Student Support Services (ISLES President)



References


Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning. https://doi.org/10.2139/ssrn.4895486


Dell'Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Working Paper No. 24-013.


Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006


Hardman, P. (2025, January 24). The impact of Gen AI on human learning. Substack.


OECD. (2024). The potential impact of artificial intelligence on equity and inclusion in education.


LUDIA. (2025). OpenAI GPT-4o-mini-based Poe assistant customized by Beth Stark and Jérémie Rostan. poe.com/ludia


UNESCO. (2023). Guidance for generative AI in education and research: Towards a human-centered approach to the use of generative AI.


World Economic Forum. (2025). Future of jobs report 2025.



Updated 7 March 2025

 
 



ISLES Collaborative

A group of dedicated school leaders who care deeply about providing high quality support services to students within international school settings around the world.
