Google's artificial intelligence medical chatbot passed the US medical licensing exam, but its results were still lower than those achieved by humans, according to a study published Wednesday in the journal Nature.
The tech giants have been locked in frantic competition in the booming field of artificial intelligence since the launch last year of ChatGPT, designed by Google rival OpenAI, which is backed by Microsoft.
Health care is one of the areas in which the technology has already produced tangible progress: some algorithms have been shown to read medical X-ray images better than humans do.
Last December, Google announced in a paper its "Med-PaLM" artificial intelligence tool for answering medical questions. Unlike ChatGPT, however, the tool has not been made available to the public.
Google said Med-PaLM was the first large language model, an artificial intelligence technology trained on vast amounts of human-generated text, to pass the US medical licensing exam.
Passing this exam, which qualifies a person to practice medicine in the United States, requires a score of approximately 60%.
In February, a study found that ChatGPT achieved a score close to the threshold required to pass the exam.
In the new peer-reviewed study published Wednesday in Nature, Google researchers reported that Med-PaLM scored 67.6% on the multiple-choice questions used in the medical licensing exam.
The study described these results as "encouraging, but still inferior to those of clinicians."
To reduce so-called "hallucinations", the term for incorrect answers produced by an artificial intelligence model, Google said it developed a new evaluation benchmark.
Karan Singhal, a Google researcher and lead author of the new study, told AFP that his team has already tested a newer version of the model.
A study published last May, though not yet peer reviewed, indicated that Med-PaLM 2 scored 86.5% on the medical exam, outperforming the previous version by about 20 percentage points.
The Wall Street Journal has also reported that Med-PaLM 2 has been undergoing testing at the Mayo Clinic, the prestigious American research hospital, since April.