Google’s AI model could help improve neural networks in medical research

New Delhi: A team of Google researchers has developed a new artificial intelligence (AI) model that they say could have a major impact on medical research and clinical applications. Led by Shekofeh Azizi, an AI resident at Google Research, the team built a self-supervised deep neural network approach that could improve the accuracy and efficiency of clinical diagnostic algorithms.

The central problem this research tried to resolve was making deep neural networks more robust and efficient for important medical applications. In many medical domains, such as cancer research, clinicians do not always have data sets that are large and clearly labeled. This has made it difficult for medical AI researchers to train deep neural networks that can identify medical data with high accuracy.

Azizi and the team created a ‘self-supervised learning’ model called Multi-Instance Contrastive Learning (MICLe). The key idea of self-supervised machine learning is that models are trained on unlabeled data, enabling the application of AI in areas where collecting clearly labeled data sets is difficult, such as cancer research.

In the paper, Azizi writes: “We perform experiments on two distinct tasks: dermatology skin condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled domain-specific medical images, significantly improves the accuracy of medical image classifiers. We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning.”

MICLe itself builds on Google’s existing research into self-supervised contrastive learning for convolutional neural networks. At the 2020 International Conference on Machine Learning (ICML), Google researchers presented the Simple Framework for Contrastive Learning of Visual Representations, or SimCLR, on which MICLe is based. Simply put, SimCLR uses multiple augmented variations of the same image to learn multiple representations of the data it holds. This helps make the algorithm more robust and accurate in its detection.
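SimCLR’s core mechanic, pairing two augmented views of each image and pulling their embeddings together while pushing all other pairings apart, is commonly written as the NT-Xent contrastive loss. The NumPy sketch below is illustrative only; the function name, shapes, and temperature value are our own choices, not Google’s code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive (NT-Xent) loss. z1[i] and z2[i]
    are embeddings of two augmented views of the same image and
    should attract; every other pairing in the batch repels."""
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)   # (2N, d)
    sim = z @ z.T / temperature            # pairwise similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity
    n = len(z1)
    # Index of each embedding's positive partner (its other view).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of the positive against all other candidates.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views really are of the same image, their similarity dominates the denominator and the loss is low; unrelated pairs drive the loss up, which is what pushes the network toward augmentation-invariant representations.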

With MICLe, the researchers used multiple images per patient that did not carry clearly labeled data points. The first stage of training used an available repository of labeled images, ImageNet in this case, to give the algorithm an initial round of training. Azizi said the team then applied a second stage of training on unlabeled medical images, pairing different images of the same patient. This enabled the neural network to learn multiple representations of the same underlying condition, something that is important in medical research.
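The pair-construction step described above can be sketched as follows. The function `micle_positive_pairs` and its fall-back for single-image patients are a hypothetical illustration of the idea, not the paper’s implementation.

```python
import random
from collections import defaultdict

def micle_positive_pairs(image_ids, patient_ids, seed=0):
    """Group images by patient and draw one positive pair per
    patient from two *different* images of the same underlying
    pathology (MICLe's key idea). Patients with only one image
    fall back to pairing the image with itself, SimCLR-style."""
    rng = random.Random(seed)
    by_patient = defaultdict(list)
    for img, pid in zip(image_ids, patient_ids):
        by_patient[pid].append(img)
    pairs = []
    for imgs in by_patient.values():
        if len(imgs) >= 2:
            pairs.append(tuple(rng.sample(imgs, 2)))  # two distinct images
        else:
            pairs.append((imgs[0], imgs[0]))          # single-image fallback
    return pairs
```

Because the two images in a pair show the same pathology from different viewpoints, the network learns invariances that random crops of a single photo cannot provide.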

In clinical practice, images routinely differ in perspective and positioning because medical imagery cannot be orchestrated or choreographed. After the two training stages above, the researchers applied a very limited set of labeled images to fine-tune the algorithm for the target task. Along with improving accuracy, such algorithms can also significantly reduce the cost and time spent developing AI models for medical research, the researchers said.
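The overall three-stage recipe, generic pre-training, self-supervised learning on unlabeled medical images, then fine-tuning on a small labeled set, can be sketched as below. The functions are stand-ins that only track the provenance of the model weights; a real implementation would train deep networks at each stage.

```python
def pretrain_on_imagenet():
    # Stage 1: generic supervised pre-training (stand-in for a
    # real ImageNet-trained backbone).
    return {"stages": ["imagenet_supervised"]}

def self_supervised_step(weights, unlabeled_images):
    # Stage 2: contrastive learning on unlabeled medical data.
    weights["stages"].append(
        f"self_supervised({len(unlabeled_images)} unlabeled)")
    return weights

def fine_tune(weights, labeled_images, labels):
    # Stage 3: supervised fine-tuning on a small labeled set.
    assert len(labeled_images) == len(labels)
    weights["stages"].append(f"fine_tuned({len(labels)} labeled)")
    return weights
```

The point of the ordering is that the expensive, data-hungry stages (1 and 2) need no medical labels at all; only the final, cheap stage consumes the scarce labeled images.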

“We achieve an improvement of 6.7% in top-1 accuracy and an improvement of 1.1% in mean area under the curve (AUC) on dermatology and chest X-ray classification respectively, outperforming strong supervised baselines pre-trained on ImageNet. Furthermore, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images,” Azizi said in the research.
