Over the past decade, the success of AI has fueled unbridled enthusiasm and bold claims, even though users regularly encounter the errors AI makes. An AI-powered digital assistant can embarrassingly misunderstand a person's speech, a chatbot can hallucinate facts, or, as I've experienced, an AI-powered navigation tool can guide drivers through a cornfield – all without registering the errors.
People tolerate these mistakes because the technology makes certain tasks more efficient. Increasingly, however, advocates are pushing to use AI – sometimes with limited human oversight – in areas where errors carry high costs, such as healthcare. For example, one bill introduced in the U.S. House of Representatives in early 2025 would enable AI systems to prescribe medications autonomously. Health researchers and lawmakers have since debated whether such prescribing would be feasible or advisable.
Exactly how such prescribing would work if this or a similar law passes remains to be seen. But the stakes are rising: how many errors can AI developers allow their tools to make, and what are the consequences if those tools lead to negative outcomes – even the death of patients?
As a researcher who studies complex systems, I investigate how the interactions among a system's components lead to unpredictable outcomes. Part of my work focuses on exploring the limits of science – and of AI specifically.
Over the last 25 years, I have worked on projects including traffic light coordination, improving bureaucracies and detecting tax evasion. While these systems can be very effective, they are never perfect.
Errors may be unavoidable, especially for AI, as a consequence of how the systems work. My lab's research suggests that certain properties of the data used to train AI models play a role. This is unlikely to change no matter how much time, effort and money researchers invest in improving AI models.
Nobody – and nothing, not even AI – is perfect
As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is a necessary part of intelligence, and people normally learn from mistakes. I see this tug of war between intelligence and infallibility in my research.
In a study published in July 2025, my colleagues and I showed that perfectly sorting certain data sets into clear categories may be impossible. In other words, a given data set may carry a minimum amount of error simply because elements of different categories overlap. For some data sets – the core of many AI systems – AI will perform no better than chance.
For example, a model trained on a dataset of millions of dogs that tracks only their age, weight and height is likely to distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make errors distinguishing an Alaskan Malamute from a Doberman Pinscher, since individual dogs of different breeds can fall into the same age, weight and height ranges.
This property is called classifiability, and my students and I began studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we set out to solve a seemingly simple problem: Could we use an AI algorithm to predict which students would complete their degrees on time – within three, four or five years of starting, depending on the field of study?
We tested several common classification algorithms used in AI and also developed our own. No algorithm was perfect; the best – even one we developed specifically for this task – achieved an accuracy of about 80%, meaning that at least one in five students was misclassified. We found that many students were identical in grades, age, gender, socioeconomic status and other characteristics – yet some finished on time and others did not. Under those circumstances, no algorithm could make perfect predictions.
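To see why overlapping categories cap accuracy, here is a minimal Python sketch – not the algorithms or data from our study – in which two made-up groups of “students” are drawn from overlapping feature distributions. The group means, spreads and the choice of scikit-learn's logistic regression are all assumptions for illustration only.

```python
# Toy illustration (not the study's code or data): when two classes overlap
# in feature space, no classifier can reach 100% accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 50_000

# Two hypothetical groups ("on time" vs. "delayed") drawn from overlapping
# distributions of the same two features (say, grade average and age).
on_time = rng.normal(loc=[8.5, 19.0], scale=[1.0, 1.5], size=(n, 2))
delayed = rng.normal(loc=[7.5, 20.0], scale=[1.0, 1.5], size=(n, 2))

X = np.vstack([on_time, delayed])
y = np.array([1] * n + [0] * n)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
# Accuracy plateaus well below 1.0 because identical feature values
# appear in both classes -- the error is built into the data.
```

However the parameters are tuned, the classifier cannot do better than the overlap in the data allows.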
You might think that more data would improve predictability, but additional data typically brings diminishing returns. It might mean, for instance, that for every 1% increase in accuracy you need 100 times as much data. We would therefore never have enough students to significantly improve the performance of our model, as the sketch below illustrates.
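The following toy learning curve uses the same assumed, synthetic setup as the sketch above (not our study's data); it only illustrates how quickly accuracy gains flatten as the training set grows.

```python
# Toy learning curve (assumed numbers, not from the study): accuracy gains
# flatten quickly as the training set grows, because the overlap in the
# data -- not the sample size -- sets the ceiling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n):
    """Draw n points per class from the same overlapping distributions."""
    a = rng.normal([8.5, 19.0], [1.0, 1.5], size=(n, 2))
    b = rng.normal([7.5, 20.0], [1.0, 1.5], size=(n, 2))
    return np.vstack([a, b]), np.array([1] * n + [0] * n)

X_test, y_test = sample(20_000)  # fixed held-out set
for n_train in [100, 1_000, 10_000, 100_000]:
    X_train, y_train = sample(n_train)
    acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)
    print(f"{n_train:>7} training points per class -> accuracy {acc:.3f}")
# Multiplying the data by 1,000 barely moves the accuracy once the
# classifier has learned the best boundary the overlap allows.
```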
In addition, after the first year of college, many unpredictable events can occur in the lives of students and their families – unemployment, a death, a pregnancy – that are likely to affect whether they complete their studies on time. So even with an infinite number of students, our predictions would still produce errors.
The limits of prediction
More generally, what limits prediction is complexity. The word complexity comes from the Latin for intertwined. The components that make up a complex system are intertwined, and it is the interactions between them that determine what happens to them and how they behave.
Examining elements of the system in isolation is therefore likely to yield misleading conclusions about them – and about the system as a whole.
For example, take a car driving through a city. Knowing its speed, it is theoretically possible to predict where it will be at any given time. In real traffic, however, its speed depends on its interactions with other vehicles on the road. Because the details of those interactions emerge in the moment and are not known in advance, an accurate prediction of what will happen to the car is possible only a few minutes into the future.
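A very rough sketch of this idea, with entirely hypothetical speeds and slowdown probabilities: a prediction that ignores other vehicles drifts further and further from the car's actual position as the unknown interactions accumulate.

```python
# Toy sketch (hypothetical numbers): a constant-speed extrapolation of a
# car's position drifts away from its actual trajectory once interactions
# with traffic -- braking behind slower vehicles -- come into play.
import random

random.seed(42)

free_speed = 14.0          # m/s the car would travel on an empty road
position_predicted = 0.0   # prediction that ignores other vehicles
position_actual = 0.0      # "reality" with moment-to-moment interactions

for second in range(1, 601):                       # simulate 10 minutes
    position_predicted += free_speed
    # Interactions unknown in advance: with some probability, traffic
    # ahead forces the car to slow down during this second.
    slowdown = random.uniform(0.0, 8.0) if random.random() < 0.3 else 0.0
    position_actual += free_speed - slowdown
    if second % 120 == 0:
        gap = position_predicted - position_actual
        print(f"after {second // 60:2d} min, prediction is off by {gap:7.0f} m")
# The gap grows with time: the farther ahead you look, the less an
# interaction-free prediction tells you about where the car really is.
```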
Not with my health
The same principles apply to prescribing medication. Different conditions and illnesses can produce the same symptoms, and people with the same condition or illness can have different symptoms. For example, a fever can be caused by a respiratory disease or a digestive disease. And a cold may cause a cough, but not always.
This means that healthcare data sets have significant overlaps that can prevent AI from being accurate.
Of course, people make mistakes too. But when an AI misdiagnoses a patient, as it inevitably will, the situation falls into legal limbo. It is not clear who or what would be held responsible if a patient were harmed. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?
In many contexts, neither humans nor machines alone are the best option for a given task. “Centaurs,” or “hybrid intelligences” – combinations of humans and machines – are likely to be better than either on its own. A doctor could certainly use AI to help decide which medications to use for different patients depending on their medical history, physiology and genetic makeup. Researchers are already exploring this approach in precision medicine.
But common sense and the precautionary principle suggest that it is still too early for AI to prescribe medications without human supervision. And the fact that errors may be built into the technology could mean that human oversight will always be required where human health is at stake.

