Generative AI, which can create and analyze images, text, audio, video and more, is increasingly finding its way into healthcare, pushed forward by Big Tech firms and startups alike.
Google Cloud, Google's cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools to personalize the patient intake experience. Amazon's AWS division says it's working with unnamed customers on ways to use generative AI to analyze medical databases for "social determinants of health." And Microsoft Azure is helping to build a generative AI system for Providence, the nonprofit healthcare network, to automatically triage messages sent by patients to providers.
Prominent healthcare generative AI startups include Ambience Healthcare, which is developing a generative AI app for doctors; Nabla, an ambient AI assistant for practitioners; and Abridge, which makes analytics tools for medical documentation.
The broad enthusiasm for generative AI is reflected in the investments flowing into generative AI efforts targeting healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of healthcare investors say generative AI has significantly influenced their investment strategies.
But both experts and patients are divided over whether healthcare-focused generative AI is ready for prime time.
Generative AI may not be what people want
In a recent Deloitte survey, only about half (53%) of U.S. consumers said they believed generative AI could improve healthcare – for example, by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.
Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs' largest health system, doesn't think the cynicism is unwarranted. Borkowski warned that deploying generative AI could be premature due to its "significant" limitations – and concerns about its efficacy.
"One of the key problems with generative AI is its inability to handle complex medical queries or emergencies," he told TechCrunch. "Its finite knowledge base – that is, the absence of up-to-date clinical information – and its lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations."
Several studies suggest there's credence to those points.
An article in the journal JAMA Pediatrics found that ChatGPT, OpenAI's generative AI chatbot, which some healthcare organizations have piloted for limited use cases, makes errors diagnosing pediatric diseases 83% of the time. And in testing OpenAI's GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two times out of three.
Today's generative AI also struggles with the medical administrative tasks that are an integral part of clinicians' daily workflows. On MedAlign, a benchmark that evaluates how well generative AI can perform tasks like summarizing patient records and searching across notes, GPT-4 failed 35% of the time.
OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. "Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations," Borkowski said.
Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine, which studies applications of emerging technology to patient care, shares Borkowski's concerns. He believes the only safe way to use generative AI in healthcare today is under the close, watchful eye of a physician.
"The results can be completely wrong, and it's getting harder and harder to maintain awareness of this," Egger said. "Sure, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call."
Generative AI can perpetuate stereotypes
One particularly harmful way generative AI in healthcare can go wrong is by perpetuating stereotypes.
In a 2023 Stanford Medicine study, a team of researchers tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT's answers frequently wrong, the co-authors found, but the answers also reinforced several long-held, untrue beliefs that there are biological differences between Black and white people – untruths that are known to have led medical providers to misdiagnose health problems.
The irony is that the patients most likely to be discriminated against by generative AI in healthcare are also those most likely to use it.
People without health insurance – largely people of color, according to a KFF study – are more willing to try generative AI for things like finding a doctor or getting mental health support, the Deloitte survey showed. If the AI's recommendations are marred by bias, it could exacerbate disparities in treatment.
However, some experts argue that generative AI is improving in this regard.
In a Microsoft study published in late 2023, researchers claimed to achieve 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn't reach this score. But, the researchers say, through prompt engineering – designing prompts for GPT-4 to produce certain outputs – they were able to boost the model's score by up to 16.2 percentage points. (Microsoft, it's worth noting, is a major investor in OpenAI.)
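To make the idea concrete, here is a minimal sketch of what prompt engineering of this general kind can look like: rather than asking the model a question cold, the prompt is assembled from an instruction, worked examples that demonstrate step-by-step reasoning, and the target question. This is an illustration of the technique in general, not the Microsoft team's actual method, and the example questions are invented placeholders rather than data from the study.

```python
# Sketch: assembling a few-shot, chain-of-thought style prompt for a
# medical multiple-choice question. The worked example below is an
# invented placeholder, not material from the Microsoft study.

FEW_SHOT_EXAMPLES = [
    {
        "question": "Which vitamin deficiency causes scurvy?",
        "choices": ["A) Vitamin A", "B) Vitamin C", "C) Vitamin D", "D) Vitamin K"],
        "reasoning": "Scurvy results from impaired collagen synthesis, "
                     "which requires vitamin C as a cofactor.",
        "answer": "B",
    },
]

def build_prompt(question: str, choices: list[str]) -> str:
    """Assemble an instruction, worked examples, and the target question."""
    parts = ["Answer the medical question. Think step by step, "
             "then give the letter of the correct choice.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {ex['question']}")
        parts.extend(ex["choices"])
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Answer: {ex['answer']}\n")
    parts.append(f"Question: {question}")
    parts.extend(choices)
    parts.append("Reasoning:")  # the model continues from here
    return "\n".join(parts)

print(build_prompt(
    "Which electrolyte abnormality is most associated with peaked T waves?",
    ["A) Hypokalemia", "B) Hyperkalemia", "C) Hyponatremia", "D) Hypercalcemia"],
))
```

The point of structuring prompts this way is that the model is nudged toward the reasoning pattern demonstrated in the examples before it commits to an answer.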
Beyond chatbots
But querying a chatbot isn't the only thing generative AI is good for. Some researchers say that medical imaging could benefit greatly from the power of generative AI.
In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC) in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. According to the co-authors, CoDoC did better than specialists while reducing clinical workflows by 66%.
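The general idea of a deferral system can be illustrated with a simplified sketch: given a predictive model's confidence in a case, either accept the AI's reading or route the case to a human specialist. This is not CoDoC's actual algorithm – the paper learns its deferral rule from data – and the thresholds below are invented placeholders.

```python
# Simplified illustration of confidence-based deferral (not CoDoC's
# actual method): accept the AI's call only when it is confidently
# negative or positive; otherwise hand the case to a specialist.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_confidence: float  # model's confidence the finding is present, 0..1

DEFER_LOW, DEFER_HIGH = 0.2, 0.8  # hypothetical thresholds; learned in practice

def route(case: Case) -> str:
    """Decide whether to accept the AI's reading or defer to a clinician."""
    if case.ai_confidence >= DEFER_HIGH:
        return "accept AI reading: positive"
    if case.ai_confidence <= DEFER_LOW:
        return "accept AI reading: negative"
    return "defer to specialist"

for c in [Case("scan-001", 0.95), Case("scan-002", 0.55), Case("scan-003", 0.05)]:
    print(c.case_id, "->", route(c))
```

The workflow savings reported for such systems come from the middle branch firing rarely: specialists only review the cases where the model is uncertain.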
In November, a Chinese research team demoed Panda, an AI model for detecting potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are frequently detected too late for surgical intervention.
Indeed, Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there's "nothing unique" about generative AI that precludes its deployment in healthcare settings.
"Mundane applications of generative AI technology are feasible in the short and medium term, and include text correction, automatic documentation of notes and letters, and improved search features to optimize electronic health records," he said. "There's no reason why generative AI technology – if effective – couldn't be deployed in these sorts of roles immediately."
"Rigorous science"
But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful – and trusted – as an all-around assistive healthcare tool.
"Significant privacy and security concerns surround using generative AI in healthcare," Borkowski said. "The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Additionally, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be resolved."
Even Thirunavukarasu, who's bullish on generative AI in healthcare, says there needs to be "rigorous science" behind patient-facing tools.
"Particularly without direct clinician oversight, there should be pragmatic randomized controlled trials demonstrating clinical benefit to justify the deployment of patient-facing generative AI," he said. "Proper governance going forward is essential to protect against unanticipated harms following deployment at scale."
Recently, the World Health Organization released guidelines advocating for this type of science and human oversight of generative AI in healthcare, as well as the introduction of auditing, transparency and impact assessments of this AI by independent third parties. The goal, as the WHO spells out in its guidelines, is to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and to provide an opportunity to voice concerns and give input throughout the process.
"Until the concerns are adequately addressed and appropriate safeguards are put in place," Borkowski said, "the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole."