Google Is Using AI to See Inside the Brain Like Never Before

By Dave Gershgorn, July 16, 2018

New data might help neuroscientists better understand the structure of the brain.

Neuroscientists have a lot of data on the brain—we can see it, take pictures of it, study it. But for all that data, how the brain actually works remains poorly understood.

A new paper published in Nature Methods might help neuroscientists better understand the structure of the brain and how it functions, according to research scientists from Google. A Google team trained an artificial neural network, the kind of AI well suited to automating repetitive human tasks, to sift through 663 GB of images of a zebra finch's brain and construct a 3D model of every neuron and synapse.

“The real impact of this is the amount of neuroscience that can be done,” Viren Jain, a Google co-author on the paper who has been researching this automated neuronal structure problem for 12 years, told Quartz. “One thing that historically neuroscientists haven’t had access to is being able to study the actual patterns of neurons in the brain in a comprehensive way.”

The data that the AI crunched came from an electron microscope, and the data itself isn't new. The Max Planck Institute, the German research center, collaborated with Google on the project and provided the data, which it has had since 2012. It's easiest to think of the data as thousands of 2D images, each showing a slice of the brain. Stacked on top of each other, they form a 3D image.
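To make that stacking concrete, here is a minimal sketch in Python. Everything in it (array sizes, the number of slices, the pixel values) is invented for illustration; only the idea of stacking 2D slices into a 3D volume comes from the article.

    import numpy as np

    # Stand-in data: 100 grayscale "slices" of 512x512 pixels each. The real
    # dataset's dimensions and file format aren't described in the article,
    # so these numbers are illustrative only.
    slices = [np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
              for _ in range(100)]

    # Stacking the 2D slices along a new axis yields one 3D volume.
    volume = np.stack(slices, axis=0)
    print(volume.shape)  # (100, 512, 512): (depth, height, width)

    # A single voxel is now addressed by (slice, row, column).
    voxel = volume[50, 256, 256]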

But like a block of marble in Michelangelo's studio, the neurons' true forms were trapped inside all the surrounding data: empty space and other entangled neurons. Done manually, a neuroscientist would have to look at each image, identify the cross-sections of neurons, and label each one for the computer to turn into a 3D model. Google estimates it would have taken 100,000 hours to label the entire sample, which was only a 1mm cube. The AI needed just seven days of training to accomplish the same task.

Google's algorithm automated this process, working slice by slice and tracing each neuron through the sample. Google wasn't the first to attempt automation, but its algorithm is ten times more accurate than previous automated approaches. Jain said the breakthrough was teaching the AI to trace one neuron structure at a time, rather than trying to trace every neuron in the sample at once.
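Google's actual method, called a flood-filling network in the paper, uses a convolutional neural network to decide, voxel by voxel, whether to extend the object it is currently tracing. As a rough sketch of the one-object-at-a-time idea only, here is a classical region-growing flood fill on a synthetic volume. The intensity threshold stands in for the learned decision, and the function name, threshold, and toy data are all invented for illustration; this is not Google's algorithm.

    from collections import deque
    import numpy as np

    def trace_one_object(volume, seed, threshold=20):
        """Grow one object outward from a seed voxel, accepting neighbors
        whose intensity is close to the seed's. Google's flood-filling
        network replaces this simple intensity test with a neural net."""
        mask = np.zeros(volume.shape, dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        seed_val = int(volume[seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2]
                        and not mask[nz, ny, nx]
                        and abs(int(volume[nz, ny, nx]) - seed_val) <= threshold):
                    mask[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        return mask  # boolean 3D mask covering one traced structure

    # Toy usage: one bright "neuron" embedded in dark tissue.
    volume = np.zeros((20, 20, 20), dtype=np.uint8)
    volume[5:15, 5:15, 5:15] = 200
    mask = trace_one_object(volume, seed=(10, 10, 10))
    print(mask.sum())  # 1000 voxels traced

In a pipeline built on this idea, each traced object would be masked out and a new seed planted elsewhere, repeating until the whole volume is accounted for, which is the "one structure at a time" strategy Jain describes.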

Next, the researchers at Google and Max Planck will use this data to try to determine how zebra finches learn to sing.
