Doctors Use ChatGPT to Help See Patients: The Increasingly Necessary AI Worker

If you've ever told ChatGPT about your troubles, chances are it began by saying it was sorry to hear you're feeling bad. Hear the routine enough times and it stops surprising you; it may even start to feel perfunctory. That, the argument goes, is why doctors cannot be replaced: only they can build a real, effective therapeutic relationship with patients. But that doesn't mean AI has no use in healthcare. In fact, ChatGPT has opened a door for doctors in a rather unexpected way.
Some time ago, researchers at the University of California, San Diego ran an interesting study: they collected 195 real doctor-patient exchanges from a social media forum, fed the same questions to ChatGPT, and then had experts blind-rate the two sets of answers against each other. The questions ranged from "how do I treat a chronic cough" to "I swallowed a toothpick and my friend says I'm going to die". The results were jaw-dropping: ChatGPT beat the human doctors on both quality of information and empathy.

The raters' own preferences clearly shaped the outcome. In this study, ChatGPT's empathy usually showed up as an apology for the patient's discomfort, and, unlike the doctors' short, time-saving style, it gave longer, more personable answers. Take a simple example: asked whether you can go blind after getting bleach in your eyes, ChatGPT first expressed sympathy and then spelled out the steps for rinsing the eyes. The doctor simply said "it doesn't sound like anything will happen" and advised the person to flush their eyes.
You might even find the doctor's answer more reassuring. But the study was not trying to settle whether ChatGPT can replace doctors; it only raised the possibility that ChatGPT could serve as an aid in scenarios such as telemedicine, saving doctors time.
Because ChatGPT is prone to confidently making things up, doctors must remain the final gatekeepers. The researchers picture a future in which doctors and AI work together: if more patient questions are answered quickly, with quality and empathy, unnecessary clinic visits might fall and resources would be freed for those who need them most. Beyond efficiency, it is not hard to see how human-AI collaboration could help vulnerable groups: patients with limited mobility, those working long hours, and those who cannot afford care may need such services even more.
Products like this are already emerging. In May, the New York-based health tech company Belong.Life launched Dave, which it calls the world's first conversational AI for cancer patients. Dave focuses exclusively on cancer and is built on two large language models trained on billions of data points gathered from the company's apps over seven years. As an AI on call around the clock, Dave answers questions at any hour, helping patients understand their condition and preparing them for follow-up discussions with their doctors. Comforting patients and showing empathy is also part of Dave's "work ethic," as a Belong.Life co-founder put it in an interview:
Dave reduces the stress level of patients and the time doctors spend educating patients.
Every message from Dave states that it is generated by AI, that it may not always be accurate, and that patients must consult their doctor before acting on it. So AI cannot replace the doctor, but it can become a bridge between doctor and patient, and what AI can do for individual doctors is gradually coming into focus. Reporters have interviewed a number of doctors who use ChatGPT; their methods vary, but most relate, in one way or another, to empathy.
Some doctors use ChatGPT to better organize their language when explaining medical advice or breaking bad news. They half-jokingly admit that their terminology is sometimes "too advanced": even words they assume are easy to understand often are not. Doctors who regularly talk with groups of patients struggling with alcohol ask ChatGPT to help list their talking points, and if a patient has little medical knowledge, they have the AI rewrite the points at an elementary-school reading level. One doctor was especially pleased with an opening line the AI wrote, a simple sentence that at first glance seems strikingly sincere: "If you think you've had too much to drink, you're not alone. Many people have this problem, but there are medications that can help you feel better and lead a healthier, happier life."
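For readers curious what such a request looks like in practice, here is a minimal, hypothetical sketch using the OpenAI Python SDK. The model name, prompt wording, and sample note are assumptions made for illustration, not the actual workflow of any doctor quoted above.

```python
# Hypothetical sketch: asking a chat model to rewrite medical advice
# at an elementary-school reading level. Model name, prompt, and the
# sample note are illustrative assumptions, not a documented workflow.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

medical_note = (
    "Naltrexone and acamprosate are first-line pharmacotherapies for "
    "alcohol use disorder and can reduce cravings."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's medical advice at an elementary-school "
                "reading level, in a warm, empathetic tone."
            ),
        },
        {"role": "user", "content": medical_note},
    ],
)

print(response.choices[0].message.content)
```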
In fact, it is not that doctors don't want to show empathy; the intensity and pressure of the work are so great that, just keeping up with clinic visits, remote consultations and the rest, they are already sliding into burnout and exhaustion themselves. Some doctors therefore ask ChatGPT to help draft the letters to insurance companies needed to get treatment covered for their patients. This is heavy, repetitive labor that can eat up hours, but if the AI produces a draft and the doctor fine-tunes it, the job is done in minutes, leaving doctors more time for actually saving lives.
That said, doctors' definition of empathy may be different, and more fine-grained, than the general public's. "Cognitive empathy" refers to a doctor's ability to recognize a patient's emotions and take them into account when making decisions; "emotional empathy", by contrast, is closer to what most of us mean by empathy, actually feeling what the other person feels. Paul Bloom's book "Against Empathy" draws a similar distinction, dividing empathy into the same two types, and Bloom argues that better decisions should rely on cognitive rather than emotional empathy. In reality, empathy is subtle and complex, hard to split cleanly in two, and usually calls for both. Medicine demands emotional respect for patients' feelings as well as a rational display of professionalism, and the pressure of the work is hard to convey to outsiders; for doctors, getting the dose of empathy right is a genuinely difficult task.
AI cannot truly empathize, yet it still gives some patients a feeling of being empathized with, and it may genuinely be able to share part of the doctor's burden as an assistant. AI "empathy" is not all empty cliché: patients on the other end of the screen can perceive it too.
A story in Wired mentions that some autistic people find communicating with AI a great experience, because it is closer to the way they themselves speak and behave. Autism presents in many forms, but difficulty with social interaction is common. Some will ask the AI for suggestions after an argument, and try to express their emotions instead of staying silent as they used to. For them, AI has become a new, valuable, independent resource: available anytime, anywhere, and not billed by the hour.
Yet this is only one example. How ChatGPT might help special populations has not been seriously studied, and no one knows how risky it really is. That is why Margaret Mitchell, Chief Ethics Scientist at Hugging Face, suggests that people experiencing severe emotional distress should be restricted from using such AI.
We cannot fully control what an AI says, and that is a big problem. Still, chatbot therapy has a track record; the idea goes back decades. ELIZA, the first program to hold a human-like conversation with people, was built by Joseph Weizenbaum at MIT in the 1960s. Seen from today, it was crude: it mostly turned whatever the user typed back into a question, which made users feel heard, as if confirming the old joke that humans are, at heart, repeaters.
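As a rough illustration of that reflection trick, here is a tiny toy sketch in Python. It is not ELIZA's actual DOCTOR script, just a few assumed word swaps showing how repeating someone's words back as a question can feel like being listened to.

```python
# Toy ELIZA-style reflection: turn the user's statement back into a
# question. Purely illustrative; not the original ELIZA/DOCTOR program.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(statement: str) -> str:
    words = statement.lower().rstrip(".!?").split()
    mirrored = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"Why do you say {mirrored}?"

if __name__ == "__main__":
    print(reflect("I am worried about my health"))
    # -> Why do you say you are worried about your health?
```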
It turns out that however simple and crude the machine, people want "empathy" from it, even as an illusion. ELIZA was built to demonstrate how superficially computers understood human language, yet it attracted a huge number of users, some of whom locked themselves in their rooms for hours just to whisper to it. Today's generative AI, built on large language models, can reply in far more varied ways and is countless generations smarter than ELIZA, but it has its own problems. Beyond bias, misinformation, and privacy leaks, ChatGPT and its kind can lead people into the trap of "mechanical empathy": repetitive, standardized templates served up for certain kinds of questions.
When psychotherapist Daniela Marin tried ChatGPT, she found that if you told it you wanted to kill yourself, its answers were almost word for word what a textbook would prescribe. ChatGPT hit every required mark, but this was not an exam, and Marin did not think that was good enough; a conversation with a therapist would yield more meaningful, more specific responses.
Wael Haddara, an intensive care physician, likewise believes empathy should be richly expressed, not reduced to AI-generated text. He works with roughly 30 families a week and has found that when a family loses a loved one, open and honest communication can make the experience less traumatic. Empathy is also more than words; sometimes it is simply the act of silently handing over a box of tissues.
Chatbots like ChatGPT, however, are built to be general-purpose, and OpenAI has stressed that ChatGPT should not be used as a substitute for mental health treatment. Yet from ELIZA then to ChatGPT now, machine empathy, however false and mechanical, still keeps people coming back for more. And the demand for AI "empathy" extends well beyond doctors and patients into a much wider range of emotional and social relationships. It is through non-human AI that people are learning to communicate better with their own kind, which says something about how hard human-to-human communication often is.
The company behind the AI chat app Replika recently launched a new AI dating app called Blush. It is aimed at a very specific audience: people who want to get better at intimacy. It works like a relationship simulation game in which users chat with AI-created characters of different personalities and backgrounds and build relationships with them, close or distant, to strengthen their communication and relationship skills. Blush does not just have you talk about hand-holding and dates; like real life, it also deals in disagreements and misunderstandings.
Rita Popova, Replika's Chief Product Officer, stresses that Blush is not meant to replace intimacy; rather, it lets you rehearse in advance so you can step into the real world better prepared.
"It's really hard to talk to people and get to know them, and I think the best way to solve that problem is to practice," Popova says.

Blush exists precisely as a reminder of how awkward interpersonal interaction can be. Many users download it because they do not know how to start a conversation, and the AI they rehearse with acts like a mirror, letting them see themselves and the patterns of how they relate to others.

Alongside Blush there are AI dating assistants for apps like Tinder. Rizz is one of them: based on your chat tone and the other person's profile, it suggests what to say to make a better impression. If the other person is also using an AI, the flirting may really be happening between the two AIs, with you and your match reduced to stand-ins.

It is perhaps a little sad that, after phones made us forget how to write by hand, working out how to talk to someone through a small screen has become as hard as writing a dissertation. But that may be because technology is at once the cause and the antidote.
Sherry Turkle's "Alone Together" describes how the information overload of social media gives us the illusion of companionship while letting us lower our expectations of one another, no longer owing friendship what it costs in real life, and so we end up lonelier. What, then, of an AI that seems even more empathetic? Does it help people hone their empathy and communication skills, or does it make them so dependent that they end up merely going through the motions of a counterfeit empathy? The situation at hand is complex and subtle. We used to assume AI would struggle to break into professions built on emotional and social skills, but it is building a new kind of relationship with them: the areas where AI is needed are precisely those where its deficiencies are exposed, and where it then offers possible solutions.
In a broader sense, AI resonates with the times, reflecting a society rich in information but poor in time and in relationships. Social media offers a similar lesson: it relieves loneliness in the short term, but over-reliance on it only deepens real-world alienation. AI should likewise be treated as a means to an end, with the end being a return to human relationships. The founders of Keys AI, a tool for text-based communication, believe that when people acknowledge they are using AI to communicate, they are also trying to become better communicators.
That is why we need AI's "empathy": not to withdraw from human interaction, but to reimagine how we interact. AI strings words together according to statistical regularities; it is humans who must give those words meaning, build better connections with others at work and in life, and attend to what matters more. As has been true throughout the history of technology, we will always have to keep working out how to connect with one another as social relationships and emotional experiences change.
