Study Warns: AI Chatbots Can Give Risky Medical Advice


Islamabad (The COW News Digital) A recent study conducted in the United Kingdom has raised concerns about the use of artificial intelligence (AI) chatbots for medical advice, warning that reliance on such platforms could pose serious health risks. Researchers from Oxford University found that AI chatbots often provide inconsistent or inaccurate guidance, making it difficult for users to discern which advice is reliable.

The study involved 1,300 participants and aimed to evaluate how effectively people could interpret AI-generated medical guidance. Participants were divided into two groups, one of which was instructed to consult an AI chatbot about hypothetical medical scenarios, such as a severe headache. Researchers then analyzed how accurately users were able to judge the usefulness of the advice provided.

Findings revealed that people often received a mix of good and bad responses from AI systems, and that the quality of advice frequently depended on how questions were phrased. Many users struggled to determine which recommendations were safe to follow. The study also highlighted that regular users of AI chatbots were particularly prone to misunderstanding the limitations of these tools, often unsure what to ask or how to interpret the answers.

“AI can provide medical information, but obtaining actionable and reliable guidance is challenging,” the researchers noted. “Users may omit or misstate details about their symptoms, increasing the likelihood of inaccurate or misleading responses.”

The study further emphasized that AI chatbots can present mixed or contradictory information, which complicates decision-making for users seeking urgent or precise medical advice. Even the most advanced AI models struggle to accurately interpret information supplied piecemeal over a conversation or incomplete symptom descriptions, which may inadvertently lead to unsafe recommendations.

While AI systems have the potential to support healthcare by providing general information and guidance, the researchers stressed that human oversight is essential. They advised caution when using chatbots for self-diagnosis or treatment decisions and suggested that AI should complement, rather than replace, professional medical advice.

The Oxford study underscores the importance of improving AI safety and effectiveness, signaling a need for companies to develop more reliable and user-friendly systems capable of providing accurate and context-aware guidance.

Researchers hope their findings will encourage the creation of AI platforms that better understand user inputs and reduce the risk of harmful recommendations, ultimately making AI a safer tool for public health applications.
