A new study warns that people should not rely on artificial intelligence tools such as ChatGPT for medical advice, and should exercise caution when seeking health-related guidance from AI platforms.
According to the report, more than 230 million people worldwide turn to ChatGPT each week with health-related questions. Many users ask the platform about safe foods, symptoms of illnesses, or possible home remedies for various medical conditions.
The research found that while the AI tool often correctly recognized clear medical emergencies, it underestimated the seriousness of more than half of the cases that required immediate medical attention.
For the study, researchers created 60 different medical scenarios covering 21 medical specialties. These scenarios ranged from minor health issues to severe emergency situations.
The findings also showed that the AI’s responses were less reliable when users asked questions about sensitive issues such as self-harm. In such cases, researchers noted that the responses sometimes appeared inconsistent or contradictory.
One of the study’s co-authors emphasized that the findings do not mean AI should be completely avoided in healthcare. Instead, the technology can still be helpful if used carefully and alongside professional medical guidance.
Experts further advised that individuals experiencing symptoms such as chest pain, severe allergic reactions, or rapidly worsening conditions should seek immediate medical attention rather than relying solely on chatbot advice.
The study also highlighted that AI language models are continuously being updated and improved, meaning their performance may change over time. As a result, ongoing research and monitoring are necessary to better understand their reliability in healthcare settings.