Startup companies say that new programs similar to ChatGPT could complete doctors' paperwork for them. But some experts worry that inherent bias and a tendency to fabricate facts could lead to errors. ER Productions Limited/Getty Images
When Dereck Paul was training as a doctor at the University of California San Francisco, he couldn't believe how outdated the hospital's records-keeping was. The computer systems looked like they'd time-traveled from the 1990s, and many of the medical records were still kept on paper.
"I was just totally shocked by how analog things were," Paul recalls.
The experience inspired Paul to found a small San Francisco-based startup called Glass Health. Glass Health is now among a handful of companies hoping to use artificial intelligence chatbots to offer services to doctors. These firms maintain that their programs could dramatically reduce the paperwork burden physicians face in their daily lives and improve the patient-doctor relationship.
"We need these folks not in burnt-out states, trying to complete documentation," Paul says. "Patients need more than 10 minutes with their doctors."
But some independent researchers fear a rush to incorporate the latest AI technology into medicine could lead to errors and biased outcomes that might harm patients.
Shots - Health News
"I think it's very exciting, but I'm also super skeptical and super cautious," says Pearse Keane, a professor of artificial medical intelligence at University College London in the United Kingdom. "Anything that involves decision-making about a patient's care is something that has to be treated with extreme caution for the time being."
A powerful engine for medicine
Paul co-founded Glass Health in 2021 with Graham Ramsey, an entrepreneur who had previously started several healthcare tech companies. The company began by offering an electronic system for keeping medical notes. When ChatGPT appeared on the scene last year, Paul says, he didn't pay much attention to it.
"I looked at it and I thought, 'Man, this is going to write some bad blog posts. Who cares?'" he recalls.
But Paul kept getting pinged by younger doctors and medical students. They were using ChatGPT and saying it was pretty good at answering clinical questions. Then the users of his software started asking about it.
In general, doctors should not be using ChatGPT by itself to practice medicine, warns Marc Succi, a doctor at Massachusetts General Hospital who has conducted evaluations of how the chatbot performs at diagnosing patients. When presented with hypothetical cases, he says, ChatGPT could produce a correct diagnosis at close to the level of a third- or fourth-year medical student. Still, he adds, the program can also hallucinate findings and fabricate sources.
"I would express considerable caution using this in a clinical scenario for any reason, at the current stage," he says.
But Paul believed the underlying technology could be turned into a powerful engine for medicine. Paul and his colleagues have created a program called "Glass AI" based on ChatGPT. A doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from the raw ChatGPT information base, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts – something Paul says makes the system safer and more reliable.
"We're working on doctors being able to put in a one-liner, a patient summary, and for us to be able to generate the first draft of a clinical plan for that doctor," he says. "So what tests they would order and what treatments they would order."
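The grounding approach described above resembles what is commonly called retrieval-augmented generation: the model is constrained to a curated reference text instead of its raw training data. Here is a minimal, hypothetical sketch of that idea; the function names, the toy "textbook," and the keyword-matching retrieval are illustrative assumptions, not Glass Health's actual implementation.

```python
# A toy "virtual textbook": curated, human-written reference entries.
# In a real system this would be a large indexed corpus, not a dict.
TEXTBOOK = {
    "chest pain": "Consider acute coronary syndrome, pulmonary embolism, aortic dissection.",
    "fever and cough": "Consider community-acquired pneumonia, influenza, bronchitis.",
}

def retrieve_passages(summary: str) -> list[str]:
    """Return curated entries whose topic keywords appear in the patient summary."""
    return [text for topic, text in TEXTBOOK.items() if topic in summary.lower()]

def build_prompt(summary: str) -> str:
    """Constrain the model to the retrieved, human-curated passages only."""
    passages = retrieve_passages(summary)
    context = "\n".join(passages) if passages else "No matching reference entry."
    return (
        "Using ONLY the reference passages below, draft a differential "
        "diagnosis and plan for the clinician to review.\n\n"
        f"Reference passages:\n{context}\n\n"
        f"Patient summary: {summary}"
    )

prompt = build_prompt("54-year-old with chest pain radiating to the left arm")
```

The prompt, not the model's open-ended knowledge, carries the clinical facts; the language model's job is reduced to drafting from vetted material, which is the safety property Paul describes.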
Paul believes Glass AI helps with a huge need for efficiency in medicine. Doctors are stretched everywhere, and he says paperwork is slowing them down.
"The physician quality of life is really, really rough. The documentation burden is massive," he says. "Patients don't feel like their doctors have enough time to spend with them."
Bots at the bedside
In truth, AI has already arrived in medicine, according to Keane. Keane also works as an ophthalmologist at Moorfields Eye Hospital in London and says that his field was among the first to see AI algorithms put to work. In 2018, the Food and Drug Administration (FDA) approved an AI system that could read a scan of a patient's eyes to screen for diabetic retinopathy, a condition that can lead to blindness.
Alexandre Lebrun of Nabla says AI can "automate all this wasted time" doctors spend completing medical notes and paperwork. Delphine Groll/Nabla
That technology is based on an AI precursor to the current chatbot systems. If it identifies a possible case of retinopathy, it then refers the patient to a specialist. Keane says the technology could potentially streamline work at his hospital, where patients are lining up out the door to see experts.
"If we can have an AI system that is in that pathway somewhere that flags the people with the sight-threatening disease and gets them in front of a retina specialist, then that's likely to lead to much better outcomes for our patients," he says.
Other similar AI programs have been approved for specialties like radiology and cardiology. But these new chatbots can potentially be used by all kinds of doctors treating a wide variety of patients.
Alexandre Lebrun is CEO of a French startup called Nabla. He says the goal of his company's program is to cut down on the hours doctors spend writing up their notes.
"We are trying to completely automate all this wasted time with AI," he says.
Lebrun is open about the fact that chatbots have some problems. They can make up sources, get things wrong and behave erratically. In fact, his team's early experiments with ChatGPT produced some weird results.
For example, when a fake patient told the chatbot it was depressed, the AI suggested "recycling electronics" as a way to cheer up.
Despite this dismal consultation, Lebrun thinks there are narrow, limited tasks where a chatbot can make a real difference. Nabla, which he co-founded, is now testing a system that can, in real time, listen to a conversation between a doctor and a patient and provide a summary of what the two said to one another. Doctors inform their patients in advance that the system is being used, and as a privacy measure, it doesn't actually record the conversation.
"It shows a report, and then the doctor will validate with one click, and 99% of the time it's right and it works," he says.
The summary can be uploaded to a hospital records system, saving the doctor valuable time.
Other companies are pursuing a similar approach. In late March, Nuance Communications, a subsidiary of Microsoft, announced that it would be rolling out its own AI service designed to streamline note-taking using GPT-4, the latest AI model behind ChatGPT. The company says it will showcase its software later this month.
AI reflects human biases
But even if AI can get it right, that doesn't mean it will work for every patient, says Marzyeh Ghassemi, a computer scientist studying AI in healthcare at MIT. Her research shows that AI can be biased.
"When you take state-of-the-art machine learning methods and systems and then evaluate them on different patient groups, they do not perform equally," she says.
That's because these systems are trained on vast amounts of data made by humans. And whether that data is from the Internet, or a medical study, it contains all the human biases that already exist in our society.
The problem, she says, is often these programs will reflect those biases back to the doctor using them. For example, her team asked an AI chatbot trained on scientific papers and medical notes to complete a sentence from a patient's medical record.
"When we said 'White or Caucasian patient was belligerent or violent,' the model filled in the blank [with] 'Patient was sent to hospital,'" she says. "If we said 'Black, African American, or African patient was belligerent or violent,' the model completed the note [with] 'Patient was sent to prison.'"
Ghassemi says many other studies have turned up similar results. She worries that medical chatbots will parrot biases and bad decisions back to doctors, and they'll just go along with it.
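The probe Ghassemi's team describes is simple in outline: feed the model the same clinical sentence with only the demographic term swapped, and compare the completions. A rough sketch of that kind of test follows; the `complete` function is a stand-in for a real language model, and its canned outputs just mirror the article's example for illustration.

```python
def complete(prompt: str) -> str:
    """Stand-in for a language model's sentence completion (canned for illustration)."""
    canned = {
        "White patient was belligerent or violent.": "Patient was sent to hospital.",
        "Black patient was belligerent or violent.": "Patient was sent to prison.",
    }
    return canned.get(prompt, "")

def probe_bias(template: str, groups: list[str]) -> dict[str, str]:
    """Run the same template across demographic groups and collect completions."""
    return {group: complete(template.format(group=group)) for group in groups}

results = probe_bias(
    "{group} patient was belligerent or violent.", ["White", "Black"]
)
# Identical clinical facts, different dispositions: evidence of learned bias.
biased = results["White"] != results["Black"]
```

In a real audit the completions would come from the model under test and be scored across many templates and groups, but the core idea is this controlled comparison.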
ChatGPT can answer many medical questions correctly, but experts warn against using it on its own for medical advice. MARCO BERTORELLO/AFP via Getty Images
"It has the sheen of objectivity: 'ChatGPT says you shouldn't have this medication. It's not me – a model, an algorithm made this choice,'" she says.
And it's not just a question of how individual doctors use these new tools, adds Sonoo Thadaney Israni, a researcher at Stanford University who co-chaired a recent National Academy of Medicine study on AI.
"I don't know whether the tools that are being developed are being developed to reduce the burden on the doctor, or to really increase the throughput in the system," she says. The intent will have a huge effect on how the new technology affects patients.
Regulators are racing to keep up with a flood of applications for new AI programs. The FDA, which oversees such systems as "medical devices," said in a statement to NPR that it was working to ensure that any new AI software meets its standards.
"The agency is working closely with stakeholders and following the science to make sure that Americans will benefit from new technologies as they further develop, while ensuring the safety and effectiveness of medical devices," spokesperson Jim McKinney said in an email.
But it is not entirely clear where chatbots specifically fall in the FDA's rubric, since, strictly speaking, their job is to synthesize information from elsewhere. Lebrun of Nabla says his company will seek FDA certification for their software, though he says in its simplest form, the Nabla note-taking system doesn't require it. Dereck Paul says Glass Health is not currently planning on seeking FDA certification for Glass AI.
Doctors give chatbots a chance
Both Lebrun and Paul say they are well aware of the problems of bias. And both know that chatbots can sometimes fabricate answers out of thin air. Paul says doctors who use his company's AI system need to check it.
"You have to supervise it, the way we supervise medical students and residents, which means that you can't be lazy about it," he says.
Both companies also say they are working to reduce the risk of errors and bias. Glass Health's human-curated textbook is written by a team of 30 clinicians and clinicians in training. The AI relies on it to write diagnoses and treatment plans, which Paul claims should make it safe and reliable.
At Nabla, Lebrun says he's training the software to simply condense and summarize the conversation, without providing any additional interpretation. He believes that strict rule will help reduce the chance of errors. The team is also working with a diverse set of doctors located around the world to weed out bias from their software.
Despite the possible risks, doctors seem interested. Paul says that in December his company had around 500 users. But after they introduced their chatbot, those numbers jumped.
"We finished January with 2,000 monthly active users, and in February we had 4,800," Paul says. Thousands more signed up in March, as overworked doctors lined up to give AI a try.