Good evidence confuses ChatGPT when used for health information, study finds


A world-first study has found that when ChatGPT is asked a health-related question, the more evidence it is given, the less reliable it becomes, reducing the accuracy of its responses to as low as 28%.

The study was recently presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP), a leading venue in the field. The findings are published in the Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.

As large language models (LLMs) like ChatGPT explode in popularity, they pose a potential risk to the growing number of people using online tools for key health information.

Scientists from CSIRO, Australia’s national science agency, and The University of Queensland (UQ) explored a hypothetical scenario of an average person (non-professional health consumer) asking ChatGPT if “X” treatment has a positive effect on condition “Y.”

The 100 questions presented ranged from “Can zinc help treat the common cold?” to “Will drinking vinegar dissolve a stuck fish bone?”

ChatGPT’s response was compared to the known correct response, or “ground truth,” based on existing medical knowledge.

CSIRO Principal Research Scientist and Associate Professor at UQ Dr. Bevan Koopman said that even though the risks of searching for health information online are well documented, people continue to seek health information online, and increasingly via tools such as ChatGPT.

“The widespread popularity of using LLMs online for answers on people’s health is why we need continued research to inform the public about risks and to help them optimize the accuracy of their answers,” Dr. Koopman said. “While LLMs have the potential to greatly improve the way people access information, we need more research to understand where they are effective and where they are not.”

The study looked at two question formats. The first was a question only; the second was the same question accompanied by supporting or contrary evidence.
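As an illustration only, the two formats might be sketched as prompt templates like the following. The exact wording used in the study is not given in the article, so both templates are hypothetical; the evidence-biased variant simply reflects the article's description of supplying evidence alongside the question:

```python
def question_only(treatment: str, condition: str) -> str:
    """Format 1: the bare yes/no health question, with no evidence supplied."""
    return f"Does {treatment} have a positive effect on {condition}? Answer yes or no."

def question_with_evidence(treatment: str, condition: str, evidence: str) -> str:
    """Format 2 (hypothetical wording): the same question with a passage of
    evidence prepended, which the study found can *reduce* answer accuracy."""
    return (
        f"Evidence: {evidence}\n"
        f"Given the evidence above, does {treatment} have a positive effect on "
        f"{condition}? Answer yes or no."
    )

print(question_only("zinc", "the common cold"))
```

The point of the comparison is that the two prompts differ only in the presence of the evidence passage, so any change in accuracy can be attributed to the added context.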

ChatGPT passes the nutrition test, but experts remain irreplaceable

In a recent study published in the journal Nutrients, researchers evaluated the potential of chat generative pretrained transformer (ChatGPT) to provide nutritional guidance.

Non-communicable diseases (NCDs) are the foremost cause of mortality, accounting for 74% of deaths globally. The 2019 global burden of diseases study estimated there were 43.8 million cases of type 2 diabetes (T2D), 1.2 billion cases of non-alcoholic fatty liver disease (NAFLD), and 18.5 million cases of hypertension. Obesity prevalence has almost tripled between 1975 and 2016.

Various studies have consistently underscored the impact of lifestyle and dietary factors on NCD onset and progression. Of late, internet searches for information on health-related queries have been increasing. ChatGPT is a widely used chatbot that generates responses to textual queries. It can comprehend the context and provide coherent responses.

ChatGPT has emerged as an accessible and efficient resource for medical advice seekers. Chatbots can deliver real-time, interactive, personalized patient education and support, helping improve patient outcomes. Nevertheless, data on the utility of ChatGPT to improve nutrition among NCD patients have been limited.

Study: Is ChatGPT an Effective Tool for Providing Dietary Advice?

The study and findings

In the present study, researchers compared the nutritional advice provided by ChatGPT with recommendations from international guidelines in the context of NCDs. Analyses were performed using the default ChatGPT model (version 3.5). The team included medical conditions requiring specific nutritional treatments, such as arterial hypertension, T2D, dyslipidemia, obesity, NAFLD, sarcopenia, and chronic kidney disease (CKD).

A set of prompts for these conditions, formulated by doctors and dieticians, was used to obtain dietary advice from the chatbot. A separate chat session was conducted for each prompt. ChatGPT’s responses were compared with recommendations from international clinical guidelines, and two dieticians independently assessed and categorized them. Responses were deemed “appropriate” if they aligned with the guideline recommendations.

Using ChatGPT for medical advice could put your health at risk, reveals study

Until now, it was commonly advised not to search for illness symptoms on Google, as the information might be inaccurate. Now, a similar caution is being raised about ChatGPT, the AI chatbot developed by OpenAI, which has gained popularity as a platform for users to seek answers to their queries. According to researchers, the free version of ChatGPT may provide inaccurate or incomplete responses, or even no response at all, to medication-related queries. This poses a potential risk to patients who depend on OpenAI’s popular chatbot for medical guidance.

In a recent study, pharmacists at Long Island University scrutinized the free version of ChatGPT's responses to drug-related questions, finding that it provided inaccurate or incomplete answers to nearly three-fourths of them. Of the 39 questions posed to ChatGPT, only 10 responses were deemed “satisfactory” based on established criteria; the remaining 29 either received incomplete responses, contained inaccurate information, or failed to directly address the query.

As reported by CNBC, the researchers also asked ChatGPT for references so they could verify the accuracy of its responses. ChatGPT included references in only eight responses, and each of those references cited nonexistent sources.

One notable instance highlighted by the study involved ChatGPT inaccurately stating that there were no reported interactions between Pfizer’s Paxlovid and the blood-pressure-lowering medication verapamil. The reality is that these medications can excessively lower blood pressure when taken together, posing potential risks to patients.

In light of these results, the study highlighted the importance of exercising caution, for both patients and healthcare professionals, when considering ChatGPT for drug-related information. Lead author Sara

Parents are Using ChatGPT for Medical Advice: Risks & Benefits- Motherly

The story of a mama who, after over three years of zero answers, solved her child’s mystery medical illness using OpenAI’s ChatGPT went viral last month. Despite countless doctor visits and tests, the cause remained elusive—until she entered a comprehensive history of her child’s symptoms into ChatGPT. To her surprise, she was given a probable diagnosis of tethered cord syndrome—a rare condition that her doctors had missed—and the medical mystery was solved.

This story underscores the growing allure of using ChatGPT for medical advice. It’s well documented that women experience medical gaslighting or feel dismissed by care providers, which may make it even more tempting to turn to AI for answers. And no one can blame an exhausted, desperate mama for trying to find answers after possibly facing months or even years of frustration and uncertainty. “When your child is suffering, you want to use every tool in your box to find answers,” Dr. Harvey Karp, pediatrician, author and creator of the Snoo Smart Sleeper, shares with Motherly.

Every parent can admit they’ve typed symptoms into Google at 2 a.m. to self-diagnose their child (or themselves). But is ChatGPT any better? “ChatGPT is like a super-duper Google search to give parents ideas to share with their physician, especially for a puzzling situation,” says Dr. Karp.

Still, red flags abound when considering using AI for medical diagnosis. “Where it gets tricky is when people start to use tools like ChatGPT instead of seeking a trained medical professional’s help,” Dr. Karp shares. “I think that’s important for families to understand.”

Using ChatGPT for medical advice lacks the human touch

AI may seem like it’s the easy answer, but it severely lacks the human element. “Practicing medicine is an art,” Dr. Flora Sinha, board-certified Cedars Sinai Medical Group internist, shares. “While

ChatGPT may be more accurate than other online medical advice : Shots

Researchers used ChatGPT to diagnose eye-related complaints and found it performed well.

Richard Drew/AP


As a fourth-year ophthalmology resident at Emory University School of Medicine, Riley Lyons’ biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency.

He often finds patients have already turned to “Dr. Google.” Online, Lyons said, they are likely to find that “any number of terrible things could be going on based on the symptoms that they’re experiencing.”

So, when two of Lyons’ fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance.

In June, Lyons and his colleagues reported in medRxiv, an online publisher of health science preprints, that ChatGPT compared quite well to human doctors who reviewed the same symptoms — and performed vastly better than the symptom checker on the popular health website WebMD.

And despite the much-publicized “hallucination” problem known to afflict ChatGPT — its habit of occasionally making outright false statements — the Emory study reported that the most recent version of ChatGPT made zero “grossly inaccurate” statements when presented with a standard set of eye complaints.

The relative proficiency of ChatGPT, which debuted in November 2022, was a surprise to Lyons and his co-authors. The artificial intelligence engine “is definitely an improvement over just putting something into a Google search bar and seeing what you find,” said co-author Nieraj Jain, an assistant professor at the Emory Eye Center who specializes in vitreoretinal surgery and disease.

Filling in gaps in care with AI

But the findings underscore a challenge facing the health care

Need cancer treatment advice? Forget ChatGPT – Harvard Gazette

The internet can serve as a powerful tool for self-education on medical topics. With ChatGPT now at patients’ fingertips, researchers from Brigham and Women’s Hospital sought to assess how consistently the AI chatbot provides recommendations for cancer treatment that align with National Comprehensive Cancer Network guidelines.

The team’s findings, published in JAMA Oncology, show that in one-third of cases, ChatGPT provided an inappropriate — or “non-concordant” — recommendation, highlighting the need for awareness of the technology’s limitations.

“Patients should feel empowered to educate themselves about their medical conditions, but they should always discuss with a clinician, and resources on the Internet should not be consulted in isolation,” said corresponding author Danielle Bitterman, a radiation oncologist and an instructor at Harvard Medical School. “ChatGPT responses can sound a lot like a human and can be quite convincing. But when it comes to clinical decision-making, there are so many subtleties for every patient’s unique situation. A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide.”

Although medical decision-making can be influenced by many factors, Bitterman and colleagues chose to evaluate the extent to which ChatGPT’s recommendations aligned with the NCCN guidelines, which are used by physicians across the country. They focused on the three most common cancers (breast, prostate, and lung cancer) and prompted ChatGPT to provide a treatment approach for each cancer based on the severity of the disease. In total, the researchers included 26 unique diagnosis descriptions and used four slightly different prompts to ask ChatGPT to provide a treatment approach, generating a total of 104 prompts.
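The prompt count follows directly from crossing the 26 diagnosis descriptions with the four prompt templates. A minimal sketch, using placeholder strings since the article gives only the counts, not the actual diagnoses or templates:

```python
from itertools import product

# Placeholder inputs: the article reports 26 diagnosis descriptions
# and 4 slightly different prompt templates, but not their texts.
diagnoses = [f"diagnosis description {i}" for i in range(1, 27)]
templates = [f"prompt template {j}" for j in range(1, 5)]

# Every (diagnosis, template) pairing yields one prompt.
prompts = [f"{template}: {diagnosis}"
           for diagnosis, template in product(diagnoses, templates)]

print(len(prompts))  # 26 * 4 = 104
```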

Nearly all responses (98 percent) included at least one treatment approach that agreed with NCCN guidelines. However, the researchers found that 34 percent of these responses also included one or more

Can You Spot the Bot? Study Finds ChatGPT Almost Undetectable in Medical Advice

Summary: A new study suggests that ChatGPT’s healthcare-related responses are hard to distinguish from those provided by human healthcare providers.

The study, involving 392 participants, presented a mix of responses from both ChatGPT and humans, finding participants correctly identified the chatbot and provider responses with similar accuracy.

However, the level of trust varied based on the complexity of the health-related task, with administrative tasks and preventive care being more trusted than diagnostic and treatment advice.

Key Facts:

  1. In the study, participants correctly identified ChatGPT’s healthcare-related responses 65.5% of the time and human healthcare provider responses 65.1% of the time.
  2. Trust in ChatGPT’s responses overall averaged a 3.4 out of 5 score, with higher trust for logistical questions and preventative care, but less for diagnostic and treatment advice.
  3. The researchers suggest that chatbots could assist in patient-provider communication, particularly with administrative tasks and chronic disease management.

Source: NYU

ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies to healthcare providers’ communications with patients.

An NYU research team presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.

Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.

The study found people have limited ability to distinguish between chatbot and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% for different questions. Results remained consistent no matter the

How to Fact-Check Health Information From ChatGPT and AI Sources

ChatGPT and similar AI bots tell you to double-check health information they provide. However, choosing the best resources to verify this information can be tricky. Here’s how to find and use reputable sources for health information online.


1. Consult Your Physician First

For any serious or recurring issues, contact your physician or set up a telemedicine appointment. While ChatGPT can offer reliable health information, it can’t replace a medical professional’s evaluation of your particular issue.


ChatGPT health information caveat

When prompted, the chatbot itself will even reiterate this idea. Although AI technology will affect the future of healthcare in a number of ways, it isn’t likely to replace the human component in the near future. For the most important medical questions, continue to rely on your personal physician or healthcare provider.

2. Rely on Established Medical Resources

Many medical resources available online provide reliable, reputable, evidence-based information. Even better, they present the facts in a straightforward way, making them more accessible to people from all sorts of backgrounds (not only medical professionals). These are some examples:

  1. Mayo Clinic: The nonprofit medical practice and academic research organization is well known for providing extensive health information on many conditions, and it offers facts about treatments and preventive care as well. Use its extensive health library to double-check information about diseases, supplements, and even general lifestyle advice.
  2. National Health Service (NHS): England’s healthcare system, the NHS runs a website filled with plenty of articles on medication, general health topics, and current news. Research symptoms, learn about all kinds of conditions, and get more facts about mental health conditions here.
  3. Centers for Disease Control and Prevention (CDC): This US government agency provides reliable information about diseases, vaccinations, and basic health advice. Use its resources to stay healthy while traveling, research just about

The Top 6 Factors to Consider Before Using ChatGPT for Mental Health

Tools powered by artificial intelligence (AI) make daily life drastically easier. Large language models such as ChatGPT have improved conversational access to information, and ChatGPT's analytical features have found wide application in mental health. However, there are caveats to using it for mental health.


Mental health conditions should only be diagnosed and treated by certified professionals. However, using AI to improve the management of symptoms has both advantages and disadvantages. While ChatGPT avoids giving medical advice, there are some factors to keep in mind before trusting it for mental health information.


1. ChatGPT Is Not a Replacement for Therapy

ChatGPT is a large language model trained on an enormous database of information. Therefore, it can generate human-like responses along with proper context. Such responses can help you learn about mental health but are not a replacement for in-person therapy.

For example, ChatGPT cannot diagnose an illness from your chats. It can offer a general analysis, but it will advise you to consult a medical professional, and that analysis could be misinformed, so you must always fact-check it. Relying solely on AI for self-diagnosis could therefore be detrimental to your mental well-being and should be avoided.

Using telehealth in place of in-person therapy is a better option. You can access mental health professionals remotely and at a significantly lower cost through telehealth services.

2. The Right Prompts Matter

An example of a specific prompt in ChatGPT

Through specific prompts, you can make better use of ChatGPT’s analytical ability and logical reasoning. It can act as a virtual companion rather than a therapist and provide various insights about mental health.

The more specific you are in your prompts, the better the responses. For example, a prompt like “List 10 ways to deal with social anxiety” provides a

7 Reasons to Consider Using ChatGPT for Health Advice

Searching for the best advice on health and wellness can be tough. After all, the amount of information floating around the internet is overwhelming, and not all of it is top-notch. And at the end of the day, who can you trust?


Enter the AI tool that took the internet by storm: ChatGPT. When used wisely, it can become your new AI-powered health and wellness advisor. Here’s why it can be a valuable tool for health and wellness advice and what you should remember about its limitations.


1. ChatGPT Has a Vast Knowledge Base and Up-to-Date Information

ChatGPT boasts a vast knowledge base, providing you with access to a deep library of health and wellness information.

ChatGPT can also synthesize all this knowledge in seconds, making it the ultimate research partner. It’s like having not only a personal library at your fingertips but also a personal librarian and research assistant, all rolled into one super-smart AI package.

Because ChatGPT has been trained on so much data, it demonstrates familiarity with a range of topics, from nutrition and exercise to mental health and stress management. Some critics argue that ChatGPT is overrated or shouldn’t be trusted because its knowledge base is limited to 2021. However, health and wellness advice published up to 2021 is not necessarily outdated.

Furthermore, in March 2023, OpenAI launched plugins that extend the AI bot’s functionality to access third-party knowledge sources and databases, including the web. This means ChatGPT’s recommendations could include the very latest research, trends, and expert opinions in the health and wellness sphere.

2. ChatGPT Has Access to Diverse Knowledge Sources

One of the key strengths of ChatGPT is its access to diverse knowledge sources. It can pull from a wealth of resources,
