IISc researchers use GPU-based algorithm to study human brain connectivity

A new Graphics Processing Unit (GPU)-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) may help scientists better understand and predict connectivity between different regions of the brain.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyze the massive amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain.

Using ReAl-LiFE, the team was able to evaluate dMRI data up to 150 times faster than existing state-of-the-art algorithms, according to an IISc press release issued on Monday.

“Tasks that previously took hours to days can be completed in seconds to minutes,” said Devarajan Sridharan, associate professor at the Center for Neuroscience (CNS), IISc. The study was published in the journal Nature Computational Science.

Millions of neurons in the brain fire every second, generating electrical pulses that travel from one point in the brain to another through connecting cables, or “axons”, in networks of neurons. These connections are essential for the computations the brain performs.

“Understanding brain connectivity is critical to uncovering the brain-behavior relationship at large,” said Varsha Srinivasan, a PhD student in the CNS and first author of the study. However, traditional approaches to studying brain connectivity typically rely on animal models and are invasive. dMRI scans, on the other hand, offer a non-invasive way to study brain connectivity in humans.

The cables (axons) connecting different areas of the brain are its information highways. Because axon bundles are shaped like tubes, water molecules move through them, along their length, in a directed manner. dMRI allows scientists to track this movement and build a comprehensive map of the brain’s network of fibers, called a connectome.
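To make the idea of directed water movement concrete, here is a minimal sketch, not taken from the study, of how a fiber direction can be read out of the standard diffusion-tensor model used in many dMRI pipelines: water diffuses fastest along the eigenvector of a voxel’s diffusion tensor that has the largest eigenvalue. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical 3x3 diffusion tensor for a single voxel (units: mm^2/s).
# In real pipelines the tensor is fitted from many diffusion-weighted
# images; these values are made up for illustration.
D = np.array([
    [1.7e-3, 0.1e-3, 0.0],
    [0.1e-3, 0.3e-3, 0.0],
    [0.0,    0.0,    0.3e-3],
])

# Water diffuses fastest along the eigenvector with the largest eigenvalue;
# inside a tube-like axon bundle, that direction follows the fiber.
eigenvalues, eigenvectors = np.linalg.eigh(D)
principal_direction = eigenvectors[:, np.argmax(eigenvalues)]

# Fractional anisotropy summarizes how "directed" the diffusion is:
# 0 = equal in all directions, close to 1 = strongly along one axis.
lam = eigenvalues
fa = np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
print(principal_direction, fa)
```

Tractography algorithms chain such per-voxel directions together, voxel by voxel, to trace candidate fibers through the brain.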

Unfortunately, pinpointing these connectomes is not straightforward. The data obtained from the scans only provide the net flow of water molecules at each point in the brain, the release said.

“Imagine that the water molecules are cars. The information we obtain is the direction and speed of the vehicles at each point in space and time, with no information about the roads. Our task is like inferring the network of roads by looking at these traffic patterns,” Sridharan explains.

To identify these networks accurately, conventional algorithms search for the connectome whose predicted dMRI signal most closely matches the dMRI signal actually observed in the scans, which amounts to solving a large optimization problem.
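The press release does not spell out the mathematics, but this kind of connectome evaluation is often posed as a non-negative least-squares problem: find fiber weights w ≥ 0 such that the predicted signal Mw matches the observed signal y, where each column of M holds one candidate fiber’s predicted contribution. The toy sketch below solves it with projected gradient descent; the matrix, data, and sizes are random placeholders, not the authors’ formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 500 dMRI measurements, 200 candidate fibers (real problems
# involve millions of fibers).
M = rng.random((500, 200))                          # predicted signal per fiber
w_true = rng.random(200) * (rng.random(200) > 0.8)  # few truly active fibers
y = M @ w_true                                      # "observed" signal

# Non-negative least squares via projected gradient descent:
#   minimize ||y - M w||^2  subject to  w >= 0
step = 1.0 / np.linalg.norm(M, 2) ** 2              # safe step from spectral norm
w = np.zeros(M.shape[1])
for _ in range(2000):
    grad = M.T @ (M @ w - y)
    w = np.maximum(0.0, w - step * grad)            # project back onto w >= 0

print(f"residual norm: {np.linalg.norm(y - M @ w):.2e}")
```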

Scientists had previously developed an algorithm called LiFE (Linear Fascicle Evaluation) to perform this optimization, but one of its challenges was that it ran on conventional Central Processing Units (CPUs), which made the computation time-consuming.

In the new study, Sridharan’s team tweaked their algorithm to cut down the computational effort involved in several ways, including by removing redundant connections, which improved LiFE’s performance substantially.
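One standard way to realize “removing redundant connections” in such a solver, offered here as an illustration rather than as the paper’s exact regularizer, is to add a sparsity-inducing L1 penalty: weights of superfluous fibers are driven to exactly zero and can then be pruned outright. A self-contained toy sketch, with a made-up penalty strength:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((500, 200))                          # toy data, as before
w_true = rng.random(200) * (rng.random(200) > 0.8)
y = M @ w_true

# Projected gradient descent on ||y - Mw||^2 + lam * sum(w), w >= 0.
# For non-negative w, the L1 penalty simply adds a constant lam to the
# gradient. The value of lam is invented for illustration.
lam = 0.05
step = 1.0 / np.linalg.norm(M, 2) ** 2
w = np.zeros(M.shape[1])
for _ in range(2000):
    grad = M.T @ (M @ w - y)
    w = np.maximum(0.0, w - step * (grad + lam))

keep = w > 1e-6                                     # prune near-zero fibers
print(f"{keep.sum()} of {w.size} candidate fibers retained")
```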

To speed the algorithm up further, the team redesigned it to run on specialized electronic chips, the kind found in high-end gaming computers, called Graphics Processing Units (GPUs), which helped them analyze data 100-150 times faster than previous approaches.
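The team’s implementation is their own custom GPU code; purely to illustrate why GPUs help, the sketch below ports the same projected-gradient loop to CuPy, a GPU array library with a NumPy-like interface (it assumes an NVIDIA GPU with CUDA, and the data are again random stand-ins). Every matrix product then executes as a massively parallel GPU kernel, which is where the speed-up on realistically sized problems comes from.

```python
import cupy as cp  # GPU array library with a NumPy-like API; requires CUDA

rng = cp.random.default_rng(0)
M = rng.random((500, 200), dtype=cp.float32)  # toy data, resident in GPU memory
y = rng.random(500, dtype=cp.float32)

# Conservative step size: the squared Frobenius norm upper-bounds the
# squared spectral norm, so 1 / ||M||_F^2 is a safe (if small) step.
step = 1.0 / float((M * M).sum())

w = cp.zeros(M.shape[1], dtype=cp.float32)
for _ in range(2000):
    grad = M.T @ (M @ w - y)              # matrix products run on the GPU
    w = cp.maximum(0.0, w - step * grad)

w_host = cp.asnumpy(w)                    # copy fitted weights back to the CPU
```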

This improved algorithm, ReAl-LiFE, was also able to predict how a human test subject would behave or perform a specific task.

In other words, using the connection strengths estimated by the algorithm for each individual, the team was able to explain variations in behavioral and cognitive test scores across a group of 200 participants.
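As a sketch of what explaining score variations typically involves in such studies, here is a minimal cross-validated regression: per-participant connection strengths as features, one behavioral score as the target, and held-out R² as the measure of explained variance. The arrays are random stand-ins rather than the study’s data, and the choice of ridge regression is an assumption, not the authors’ method.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 200 participants, 50 connection strengths each, and a
# synthetic behavioral score that depends linearly on the connections.
X = rng.random((200, 50))
y = X @ rng.random(50) + rng.normal(0, 0.1, 200)

# Cross-validated ridge regression: how much behavioral variance do the
# connection strengths explain in held-out participants?
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean held-out R^2: {scores.mean():.2f}")
```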

Such analysis may also have medical applications. “Large-scale data processing is becoming increasingly necessary for big data neuroscience applications, especially for understanding healthy brain function and encephalopathy,” says Srinivasan.

For example, using the connectomes obtained, the team hopes to be able to identify early signs of aging, or of deterioration in brain function in Alzheimer’s patients, before these changes manifest behaviorally.

“In another study, we found that an earlier version of ReAl-LiFE performed better than other competing algorithms at distinguishing patients with Alzheimer’s disease from healthy controls,” says Sridharan.

He said their GPU-based implementation is very general, and could be used to tackle optimization problems in many other fields as well.