Over the last few years, artificial intelligence (AI) has taken the world by storm. Although AI has been around in some capacity since the 1956 Dartmouth Conference, which formally established it as an academic discipline, the general population only started to catch on around 2020, first with 15.ai and later with ChatGPT and others. AI is now baked into almost everything we use, and its spread shows no signs of stopping or slowing down, as evidenced by the trend of people using AI to create action figures. People can easily generate these by going to a site like ChatGPT, giving the AI a photo and a prompt like “use this photo to turn me into an action figure, with accessories that go with my hobbies/interests1”, and in a short time the AI spits out a picture that satisfies the request.
AI has intruded into the neurodiversity space as well. Some notable tools here include Goblin Tools, a collection of free, small tools designed to help neurodivergent people (for example, by breaking large tasks into smaller ones, rewriting text to sound more or less formal, or converting a “brain dump” into a list of actionable items), and Koji, an AI-powered augmentative and alternative communication (AAC) program that runs on a smartphone.
This leads some people to ask: what else can AI do?
In a talk hosted by the Cultural Autism Studies at Yale group on April 14, 2025, Samantha Chipman discussed the possibility of using AI to assist with the autism diagnosis process, as well as the potential ethical issues that might arise from using such technologies, and the underlying assumptions that might affect its efficacy.
She starts by noting that AI is quite controversial - something that I’ve seen on multiple platforms. People are usually either very much pro-AI (using it as a tool to improve productivity) or very much against it (due to multiple factors, such as its environmental impact, ethical issues, etc.)2. Likewise, AI cuts both ways. Within the autism/disability space, this means that it could reinforce stigmas and ableism that already exist, but it could also be used to assist with self-advocacy.
Chipman goes on to define autism, neurodiversity and AI. One of the key points made here is that AI is a machine-based system that, given a set of human-defined objectives, can make predictions, recommendations or decisions influencing real or virtual environments. So, theoretically, we could give an AI diagnostic criteria or behaviors to look for, and then, based on real-world data, have it determine whether a person is autistic or not.
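To make this concrete, here is a minimal sketch of what "human-defined objectives" could look like in code. The feature names, score thresholds and the "meets 2 of 3 criteria" rule below are all invented for illustration - they are not real diagnostic criteria or clinical cutoffs, and a real system would learn its decision boundary from data rather than hard-coding it:

```python
def classify(observations, criteria, threshold):
    """Count how many human-defined criteria an observation meets,
    and flag it if the count reaches the threshold."""
    score = sum(1 for criterion in criteria if criterion(observations))
    return score >= threshold

# Toy "criteria" expressed as predicates over observed features.
# All names and cutoffs here are made up for the sake of the sketch.
criteria = [
    lambda obs: obs["social_reciprocity_score"] < 4,
    lambda obs: obs["repetitive_behavior_score"] > 6,
    lambda obs: obs["sensory_sensitivity_score"] > 5,
]

person = {
    "social_reciprocity_score": 3,
    "repetitive_behavior_score": 7,
    "sensory_sensitivity_score": 2,
}

print(classify(person, criteria, threshold=2))  # meets 2 of 3 -> True
```

Everything interesting in a real system lives in where those predicates and thresholds come from - which is exactly where the studies below pick up.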
This has been done in the literature. Chipman cites a 2015 study by Kosmicki et al. using machine learning and a subset of the Autism Diagnostic Observation Schedule (ADOS), a 2022 study by Wolff et al. using machine learning to differentiate autism and ADHD diagnoses, and a 2023 study by Farooq et al. using federated learning for autism detection.
These studies generally show promise that AI can be a useful diagnostic tool for autism, particularly in the pediatric population, where early diagnosis can give people access to interventions that improve outcomes. There are issues with these studies, though, mostly with regard to data sets. In the first two papers, the training data sets are male-dominated, which could affect the classification of females, who have historically been underdiagnosed or misdiagnosed. In the third paper, which uses federated learning, data might be stored in different formats on different devices, making it difficult to develop a model that works on all of them.
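The male-dominated data set problem can be shown with a toy example. All of the numbers below are invented; the point is only to demonstrate the mechanism by which a skewed training sample produces a decision threshold that misses people who present differently:

```python
# Training set: diagnosed cases, heavily male-dominated (9 boys, 1 girl).
# Scores are invented trait scores, not real clinical data.
train_scores_male = [8, 9, 7, 8, 9, 10, 8, 7, 9]
train_scores_female = [5]

# A crude "model": flag anyone scoring at or above the second-lowest
# training score. With only one girl in ten samples, her lower score
# is effectively discarded as an outlier.
threshold = sorted(train_scores_male + train_scores_female)[1]
print(threshold)  # 7

def flags_autism(score):
    return score >= threshold

# New autistic girls whose traits present less overtly all score
# below the male-derived threshold and are missed entirely.
new_girls = [5, 6, 6]
print([flags_autism(s) for s in new_girls])  # [False, False, False]
```

The "model" here is deliberately crude, but the failure mode is the same one a sophisticated classifier can exhibit: a decision boundary fit to the majority group systematically produces false negatives for the underrepresented one.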
There is also the issue of false positives and false negatives, which Chipman raises in one of the case studies discussed towards the end of the talk. She describes a group developing an app where a parent could upload videos and information about their child, and the app would use an algorithm to make a determination. While this could improve access to diagnosis by removing some barriers (such as the time required to get an appointment with the appropriate medical professionals), there is a chance the app could rule that a child does not have autism when they actually do (a false negative), or the opposite, diagnosing autism when the child isn’t actually autistic (a false positive). False negatives are probably the bigger issue, as a false negative means the family has to keep digging for answers. A child with a false positive might still benefit from the early intervention; even if they aren’t autistic, they might have some other condition or neurodivergence that additional supports could help with.
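This trade-off is usually quantified with sensitivity (the share of truly autistic people the tool flags) and specificity (the share of non-autistic people it correctly clears). A quick sketch with hypothetical numbers - these are made up to illustrate the calculation, not taken from any real screening tool:

```python
def rates(tp, fp, fn, tn):
    """Compute sensitivity and specificity from a confusion matrix:
    tp/fn are autistic children flagged/missed, tn/fp are non-autistic
    children correctly cleared/wrongly flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening of 1000 children, 100 of them autistic.
tp, fn = 85, 15   # 15 autistic children missed (false negatives)
tn, fp = 855, 45  # 45 non-autistic children flagged (false positives)

sens, spec = rates(tp, fp, fn, tn)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.85, specificity=0.95
```

If false negatives are the bigger harm, as argued above, a screening app should be tuned for high sensitivity even at the cost of some specificity.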
A third issue relates to the definitions used for autism. Over time, the criteria for autism have expanded to cover a wider swath of people. This has caused the reported prevalence of autism to increase:
Inevitably, as we continue to learn more about autism, the definitions will continue to change. Will AI be able to adapt? Or will it be stuck using an old definition, which may leave some people out?
This issue does lead to a potentially interesting way that AI could be useful. If AI remains on old definitions, then we could use it as a discovery tool, where we might discover
new neurodivergences, i.e. neurodivergences that are not covered under the DSM/ICD,
ties between neurodivergent conditions (this was covered briefly, as a participant asked whether the presence of ADHD would affect the AI’s ability to detect ASD), and
a better set of diagnostic criteria that is built not on a deficit model, but on one that better reflects the true potential of one’s mind.
This last point is rather important, especially since one of the criteria for an ASD diagnosis is that “symptoms cause clinically significant impairment in social, occupational, or other important areas of current functioning”. This means that one might not be diagnosed while thriving, despite actually having the condition; only when one enters a crisis state does one then get diagnosed (as covered by NeuroDivergent Rebel):
With a better set of criteria, which might be discovered with the help of AI, we could diagnose people without them first going into a crisis state.
One last thing I will note from the talk is that AI tends to give simplistic explanations that can fail to capture all of the intricacies of our minds. It can also be quite inflexible. Going back to the action figure example from the beginning, I would have to give the AI a well-crafted prompt to get what I am looking for, whereas if I am left to my own devices, I can craft something that better represents who I am:

So, in much the same way, we shouldn’t blindly rely on AI to diagnose people with autism, as it might miss out on certain details.
Overall, I think AI has the potential to improve the diagnostic process - both by reducing potential barriers to diagnosis and by mapping the spectrum of neurodivergence better, enabling us to diagnose with greater accuracy, which in turn would enable us to better support people under the neurodivergent umbrella. Care must be taken, however, to ensure that we do not rely on this tool too heavily.
If you liked this article, feel free to like and share! It really helps get the word out. Also consider subscribing, so you don’t miss out on future content:
And if you really liked it and want to support me, you can do so by clicking on this button:
You would then have to list what those hobbies/interests are.
Overall, I’m neither pro- nor anti-AI - I understand that AI can be very useful for processing data and generating insights that we (humans) can use to improve the world and fix issues. I have used AI, whether knowingly or unknowingly when it is slipped into other products that I use (e.g. the recommendation algorithms that Spotify and YouTube run). However, I’m wary of its impacts, and we should not rely on it too heavily; although AI has improved over time, it still hallucinates, creating information that’s completely false. For us to truly harness its power, we have to tame these issues.