A research team led by Macau University of Science and Technology (MUST) has developed a new AI-based model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The model, IRENE, believed to be the first such approach, was designed to support medical decision-making by jointly learning holistic representations of medical images, unstructured chief complaints and structured clinical information, according to the team, which also included researchers from the West China Hospital of Sichuan University and the University of Hong Kong.
The study has been published in the latest edition of Nature Biomedical Engineering, a monthly peer-reviewed scientific journal, according to Xinhua.
In the face of limited medical resources in certain regions, machine-learning techniques have become the de facto choice for automated, intelligent medical diagnosis to meet the growing demand for precision medicine.
Among these techniques, the development of deep learning has endowed machine-learning models with the ability to detect diseases from medical images at or near the level of human experts.
Although AI-based medical image diagnosis has achieved tremendous progress in recent years, jointly interpreting medical images and their associated clinical context remains a challenge, according to the IRENE research team.
MUST professor Zhang Kang, the team leader, said IRENE can jointly interpret multimodal clinical information, and that its intra- and intermodal attention operations are consistent with daily clinical practice.
The team applied IRENE to identifying pulmonary diseases and predicting adverse clinical outcomes in patients with COVID-19, achieving promising results, Zhang said. MDT/Agencies