
AI in the doctor's office: GPs turn to ChatGPT and other tools for diagnoses

A new survey has found that one in five general practitioners (GPs) in the UK are using AI tools like ChatGPT to help with daily tasks such as suggesting diagnoses and writing patient letters. 

The research, published in the journal BMJ Health and Care Informatics, surveyed 1,006 GPs across the UK about their use of AI chatbots in clinical practice. 

Some 20% reported using generative AI tools, with ChatGPT being the most popular. Of those using AI, 29% said they employed it to generate documentation after patient appointments, while 28% used it to suggest potential diagnoses.

“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning,” the study authors noted. 

We don't know how many papers OpenAI used to train its models, but it's certainly more than any doctor could ever read. ChatGPT gives quick, convincing answers and is very easy to use, unlike manually searching through research papers. 

Does that mean ChatGPT is generally accurate for clinical advice? Absolutely not. Large language models (LLMs) like ChatGPT are pre-trained on massive amounts of general data, making them more flexible but dubiously accurate for specific medical tasks.

It's also easy to lead them on, with the AI model tending to side with your assumptions in problematically sycophantic behavior.

Moreover, some researchers have found that ChatGPT can be conservative or prudish when handling sensitive topics like sexual health.

As Stephen Hughes from Anglia Ruskin University wrote in The Conversation, "I asked ChatGPT to diagnose pain when passing urine and a discharge from the male genitalia after unprotected sexual activity. I was intrigued to see that I received no response. It was as if ChatGPT blushed in some coy computerised way. Removing mentions of sexual activity resulted in ChatGPT giving a differential diagnosis that included gonorrhoea, which was the condition I had in mind." 

As Dr. Charlotte Blease, lead author of the study, commented: "Despite a lack of guidance about these tools and unclear work policies, GPs report using them to assist with their job. The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarizing information but also the risks in terms of hallucinations, algorithmic biases and the potential to compromise patient privacy."

That last point is vital. Passing patient information into AI systems likely constitutes a breach of privacy and patient trust.

Dr. Ellie Mein, medico-legal adviser at the Medical Defence Union, agreed on the key issues: "In addition to the uses identified in the BMJ paper, we've found that some doctors are turning to AI programs to help draft complaint responses for them. We have cautioned MDU members about the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations."

She added: "When dealing with patient complaints, AI-drafted responses may sound plausible but can contain inaccuracies and reference incorrect guidelines, which can be hard to spot when woven into very eloquent passages of text. It's vital that doctors use AI in an ethical way and comply with relevant guidance and regulations."

Probably the most critical questions amid all this are: how accurate is ChatGPT in a medical context, and how great might the risks of misdiagnosis or other issues be if this practice continues?

Generative AI in medical practice

As GPs increasingly experiment with AI tools, researchers are working to evaluate how they compare with traditional diagnostic methods. 

One study conducted a comparative analysis of ChatGPT, conventional machine learning models, and other AI systems for medical diagnoses.

The researchers found that while ChatGPT showed promise, it was often outperformed by traditional machine learning models specifically trained on medical datasets. For example, multi-layer perceptron neural networks achieved the highest accuracy in diagnosing diseases based on symptoms, with rates of 81% and 94% on two different datasets.

The researchers concluded that while ChatGPT and similar AI tools show potential, "their answers can be often ambiguous and out of context, so providing incorrect diagnoses, even if it is asked to provide an answer only considering a specific set of classes."
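For context, "traditional machine learning models specifically trained on medical datasets" means supervised classifiers fit to structured symptom data, rather than a general-purpose chatbot prompted with free text. Below is a minimal, illustrative sketch of that approach using scikit-learn; the CSV file, column names, and network size are hypothetical placeholders rather than details from the study.

```python
# Minimal sketch (not from the study): training a small multi-layer perceptron
# to predict a diagnosis from binary symptom indicators.
# "symptoms_dataset.csv" and the "diagnosis" column are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one row per patient, 0/1 columns for symptoms,
# plus a "diagnosis" label column.
df = pd.read_csv("symptoms_dataset.csv")
X = df.drop(columns=["diagnosis"])
y = df["diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# A small MLP; the hidden layer sizes are illustrative, not tuned.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The key difference from prompting an LLM is that a model like this only ever sees the fixed set of symptoms and diagnoses it was trained on, which is precisely why it can be more reliable within that narrow scope and useless outside it.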

This aligns with other recent studies examining AI’s potential in medical practice.

For example, research published in JAMA Network Open tested GPT-4's ability to analyze complex patient cases. While it showed promising results in some areas, GPT-4 still made errors, some of which could be dangerous in real clinical scenarios.

There are some exceptions, though. One study conducted by the New York Eye and Ear Infirmary of Mount Sinai (NYEE) demonstrated how GPT-4 can meet or exceed human ophthalmologists in diagnosing and treating eye diseases.

For glaucoma, GPT-4 provided highly accurate and detailed responses that exceeded those of real eye specialists. 

AI developers such as OpenAI and NVIDIA are training purpose-built medical AI assistants to support clinicians, hopefully making up for the shortfalls of base frontier models like GPT-4.

OpenAI has already partnered with health tech company Color Health to create an AI "copilot" for cancer care, demonstrating how these tools are set to become more specific to clinical practice.  

Weighing up benefits and risks

There are countless studies comparing specially trained AI models to humans in identifying diseases from diagnostic images such as MRIs and X-rays. 

AI techniques have outperformed doctors in everything from cancer and eye disease diagnosis to early detection of Alzheimer's and Parkinson's. One model, named "Mia," proved effective in analyzing over 10,000 mammogram scans, flagging known cancer cases and uncovering cancer in 11 women that doctors had missed. 

However, these purpose-built AI tools are certainly not the same as parsing notes and findings into a language model like ChatGPT and asking it to infer a diagnosis from that alone. 

Nevertheless, that's a difficult temptation to resist. It's no secret that healthcare services are overwhelmed. NHS waiting times continue to soar to all-time highs, and even obtaining a GP appointment in some areas is a grim task. 

AI tools target time-consuming admin, such is their allure for overwhelmed doctors. We've seen this mirrored across numerous public sector fields, such as education, where teachers are widely using AI to create materials, mark work, and more. 

So, will your doctor parse your notes into ChatGPT and write you a prescription based on the results at your next visit? Quite possibly. It's just another frontier where the technology's promise to save time is so hard to deny. 

The best path forward may be to develop a code of use. The British Medical Association has called for clear policies on integrating AI into clinical practice.

"The medical community will need to find ways to both educate physicians and trainees and guide patients about the safe adoption of these tools," the BMJ study authors concluded.

Beyond advice and education, ongoing research, clear guidelines, and a commitment to patient safety will be essential to realizing AI's benefits while offsetting the risks.
