
A machine gets high marks to diagnose sick children



Average wait times in U.S. emergency rooms exceed two hours, and both clinicians and patients feel the pain of an overloaded system. Many parents have endured those hours with an anguished child, bumped down the queue for lack of urgency, only to be sent home with unnecessary antibiotics for a garden-variety viral infection.

With the money and time consumed by emergency room and urgent care visits, the prospect of diverting some of those trips to routine medical consultations holds great appeal. What if that consultation came from an intelligent machine? Artificial intelligence systems are already adept at recognizing patterns in medical images to aid diagnosis. New findings published February 11 in Nature Medicine show that similar training can work for deriving a diagnosis from the raw data in a child's medical record.

For this study at the Guangzhou Women and Children's Medical Center in southern China, a team of physicians combed thousands of health records for keywords related to different diagnoses. The researchers then taught these keywords to the artificial intelligence system so it could detect the terms in real medical charts. Once trained, the system combed through the electronic health records (EHRs) of 567,498 children, analyzing real-world doctors' notes and flagging the important information.

It worked from broad diagnoses down to specific ones among 55 categories. So how did the robo-doc do? "I think it's pretty good," says Mustafa Bashir, an associate professor of radiology at Duke University Medical Center who was not involved in the work. "Conceptually it's not that original, but the size of the data set and the successful execution are important." The data processing, Bashir says, follows the typical steps of taking a "giant, messy data set," feeding it to an algorithm and bringing order to the chaos. In that sense, he says, the work is not especially novel, but "that said, their system seems to work well."

The practice of medicine is both art and science. Skeptics might argue that a computer that has churned through a mountain of patient data cannot offer the kind of qualitative judgment a human general practitioner brings to diagnosing a patient. In this case, however, a great deal of human experience went into the system before the machine training even began. "This was a massive project that we started about four years ago," says study author Kang Zhang, professor of ophthalmology and chief of ophthalmic genetics at the University of California, San Diego. He and his colleagues began with a team of physicians who reviewed 6,183 medical charts to collect keywords marking disease-related symptoms or signs, such as "fever." The artificial intelligence (AI) system was then trained on these key terms and their association with 55 internationally used diagnostic codes for specific conditions, such as an acute sinus infection. When analyzing a chart for the relevant terms, the system worked through a series of "present/absent" decisions about specific phrases to reach a final diagnosis.
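The keyword-then-cascade idea described above can be sketched in a few lines of Python. Everything here is invented for illustration: the vocabulary, the rules and the diagnosis labels are toy placeholders, not the study's actual trained model, which learned its associations from thousands of annotated charts.

```python
# Toy sketch of a keyword-driven "present/absent" diagnostic cascade.
# Vocabulary, rules and labels are hypothetical placeholders.

def extract_keywords(note, vocabulary):
    """Return the subset of known clinical keywords found in a free-text note."""
    text = note.lower()
    return {kw for kw in vocabulary if kw in text}

def diagnose(found):
    """Walk a small present/absent decision cascade to a final label."""
    if "fever" in found:
        if "nasal congestion" in found or "sore throat" in found:
            return "acute upper respiratory infection"
        if "facial pain" in found:
            return "acute sinusitis"
        return "fever, unspecified"
    return "no diagnosis"

vocabulary = {"fever", "nasal congestion", "sore throat", "facial pain"}
note = "3-year-old with fever and nasal congestion since yesterday."
print(diagnose(extract_keywords(note, vocabulary)))
# prints: acute upper respiratory infection
```

The real system replaces these hand-written branches with associations learned from physician-annotated records, but the shape of the decision path is the same.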

To check the system's accuracy, Zhang and his colleagues also used that old-fashioned "technology": human diagnosticians. They compared the machine's conclusions with those in the original records, and had another team of physicians make diagnoses using the same data the AI system saw.

The machine earned good grades, matching the humans about 90 percent of the time. It was especially effective at identifying neuropsychiatric and upper respiratory conditions. For acute upper respiratory infection, the most common diagnosis in the huge patient group, the AI system got it right 95 percent of the time. Is 95 percent good enough? One of the next questions to investigate, Zhang says, is whether the system will miss something serious. The benchmark, he says, should be how experienced physicians perform, which is not 100 percent either.
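An agreement figure like the 90 percent above is just per-record matching between the machine's label and the physician's. A minimal sketch, with fabricated example labels (the study's actual records are not reproduced here):

```python
# Per-record agreement between machine and physician diagnoses.
# The five example labels are fabricated for illustration.

machine = ["URI", "sinusitis", "URI", "asthma", "URI"]
doctors = ["URI", "sinusitis", "URI", "bronchitis", "URI"]

agreement = sum(m == d for m, d in zip(machine, doctors)) / len(machine)
print(f"{agreement:.0%}")
# prints: 80%
```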

A human clinician would serve as a quality-control backup for the AI system. In fact, humans and machines would likely follow a similar series of steps. Like a doctor, the machine starts with a broad category, such as "respiratory system," and works its way down to a diagnosis. "It mimics the human doctor's decision process," says Dongxiao Zhu, an associate professor of computer science at Wayne State University who was not involved in the study.
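That top-down path, broad organ system first, then a specific diagnosis within it, can be illustrated with a toy two-level classifier. The hierarchy and the scores below are invented placeholders, not the study's model or its 55 categories.

```python
# Toy two-level (broad category -> specific diagnosis) classifier.
# Hierarchy and scores are hypothetical, for illustration only.

HIERARCHY = {
    "respiratory system": ["acute upper respiratory infection", "asthma"],
    "gastrointestinal system": ["gastroenteritis", "appendicitis"],
}

def classify(category_scores, diagnosis_scores):
    # Step 1: pick the broad category with the highest score.
    category = max(category_scores, key=category_scores.get)
    # Step 2: pick the best-scoring specific diagnosis within that category only.
    diagnosis = max(HIERARCHY[category],
                    key=lambda d: diagnosis_scores.get(d, 0.0))
    return category, diagnosis

cat_scores = {"respiratory system": 0.9, "gastrointestinal system": 0.1}
dx_scores = {"acute upper respiratory infection": 0.8, "asthma": 0.2}
print(classify(cat_scores, dx_scores))
# prints: ('respiratory system', 'acute upper respiratory infection')
```

Constraining step 2 to the winning category is what makes the search top-down, like a clinician ruling in an organ system before narrowing to a condition.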

But Zhu sees this as "augmented intelligence" rather than "artificial intelligence," because the system handled only 55 diagnostic options, not the thousands of possibilities in the real world. The machine also cannot yet dig into the more complex aspects of a diagnosis, such as accompanying conditions or disease stage, he says. Nor is it clear how well the system would translate outside its Chinese setting. Bashir says that while applying AI to patient information would be difficult anywhere, these authors have shown it is feasible.

Zhu voices another skepticism as well: gathering diagnostic keywords from free-text notes in an EHR will be "radically different" in a language such as English rather than Chinese, he says. He also points to all the work required for just 55 diagnoses, including the human effort of 20 pediatricians reviewing 11,926 records to compare their findings with the machine's diagnoses. Given the four years the overall process took, parents will probably be waiting a good while longer before a computerized doc can spare them a trip to the emergency room.

