ChatGPT May Change the Role of Physicians
The emergence of AI chatbots in healthcare has opened up new possibilities for doctors and patients. While this technology can mimic human conversation and generate personalised medical advice, it also carries the risk of misdiagnosis, data privacy issues, and biases in decision-making. One popular example is ChatGPT, which recently passed the US Medical Licensing Exam. Experts believe that ChatGPT could help doctors with paperwork, examine X-rays, and weigh in on a patient’s surgery. Its communication is so effective that one study found ChatGPT may have a better bedside manner than some doctors.
However, Dr. Robert Pearl, a professor at the Stanford University School of Medicine, believes the current version of ChatGPT should be understood as a toy: probably just two per cent of what is to come. That is because, according to researchers, generative AI can grow in power and effectiveness, doubling every six to 10 months.
Developed by OpenAI, ChatGPT was released for testing to the general public in November 2022 and had explosive uptake, with over a million people signing up to use it in just five days. The software is currently free as it sits in its research phase, though there are plans to eventually charge.
It’s important to remember that ChatGPT can act as a digital assistant for doctors, not a replacement, as previous studies have shown that physicians vastly outperform computer algorithms in diagnostic accuracy. Medical knowledge doubles every 73 days, a pace no human being can keep up with. By using ChatGPT to sift through this vast body of medical knowledge, a physician can save time and even be guided toward a diagnosis. People are also looking at the platform as a tool to help monitor patients from home.
While the potential for ChatGPT is vast, there are also some pitfalls. Because it is trained on vast amounts of human-generated data, it can carry inherent biases. Additionally, ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers, as its developers acknowledge on their website. The potential for misdiagnosis is just one of the drawbacks of using ChatGPT in a healthcare setting.
Others have said the software could become as crucial to doctors as the stethoscope was to medicine in the last century. However, ChatGPT is still years away from reaching its full potential.