Researchers have created an artificial neural network that helps them better understand how real, living neural networks work and how they can be manipulated.
MIT researchers James DiCarlo, Pouya Bashivan and Kohitij Kar built a computational model that let them generate images capable of strongly activating specific neurons in the brain during animal tests.
Over several years the researchers have been designing models of the visual cortex and visual system. To create these artificial neural networks, they started by building an arbitrary architecture made up of nodes that represent individual neurons. These nodes connect to each other with varying degrees of strength.
The models were then fed a library of more than one million images, each labelled with its most prominent object, such as a car or a type of food. The artificial neural network teaches itself to recognise each image by adjusting the strengths of the connections between its nodes.
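The training idea described above can be illustrated with a deliberately tiny, hypothetical example: a single "node" with adjustable connection strengths learns labels from a small dataset by nudging its weights whenever it misclassifies. This is a perceptron-style sketch, far simpler than the deep networks and backpropagation used in the actual research, and all the numbers are made up.

```python
import random

random.seed(0)

# Toy stand-in for the labelled image library: (features, label) pairs.
data = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([0.9, 1.0], 1), ([1.0, 0.8], 1)]

# Connection strengths ("weights") start out arbitrary, as in the article.
weights = [random.uniform(-0.1, 0.1) for _ in range(2)]
bias = 0.0
lr = 0.5  # learning rate

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Training adjusts each connection strength whenever the label is wrong
# (the perceptron rule, a simplified stand-in for backpropagation).
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        if error:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

accuracy = sum(predict(x) == label for x, label in data) / len(data)
```

On this linearly separable toy data the rule converges quickly; real image classifiers follow the same "adjust connection strengths to reduce error" principle at vastly larger scale.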
What the researchers found was that the nodes in the artificial neural network react in a very similar pattern to how real neurons in animal visual cortices react when they are shown the same image.
One of the lead authors of the researchers' paper, Pouya Bashivan, commented in an MIT blog post: “What has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before.”
The research could be instrumental in helping neuroscientists understand how neurons interact with each other and could subsequently lead to new treatments for neurological disorders such as Alzheimer’s disease, epilepsy or depression.
Real Neural Network Manipulation
The team then took the research a step further and tested whether the models they created could be used to generate images that push a real neuron into a ‘desired state’. Essentially, they wanted to test whether the model could control neural activity in the visual cortex of an animal.
To test this, the team created a one-to-one map of a specific section of an animal’s brain. The area selected was V4, a part of the visual cortex that is home to millions of neurons. The researchers mapped out five to 40 neurons at a time. This was done by showing both the animals and the computational model images from the library, then recording and comparing the subsequent neuron response patterns.
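The mapping step described above can be sketched in miniature: show the same images to the model and to a recorded neuron, then assign the neuron to the model unit whose response pattern matches it best. The comparison metric, unit names and all response values below are illustrative assumptions, not the study's actual data or procedure.

```python
# Pearson correlation between two equal-length response vectors.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Responses of three hypothetical model units to five shared images.
model_units = {
    "unit_a": [0.1, 0.9, 0.2, 0.8, 0.3],
    "unit_b": [0.7, 0.1, 0.6, 0.2, 0.9],
    "unit_c": [0.5, 0.5, 0.4, 0.6, 0.5],
}
# Recorded firing rates of one real neuron to the same five images (made up).
neuron = [0.2, 1.0, 0.3, 0.9, 0.4]

# Assign the neuron to its best-matching model unit.
assignment = max(model_units, key=lambda u: pearson(model_units[u], neuron))
```

Here `unit_a` rises and falls in step with the neuron across the five images, so it receives the assignment; repeating this per neuron yields the kind of one-to-one map the quote below refers to.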
James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, commented: “Once each neuron has an assignment, the model allows you to make predictions about that neuron.”
In one test, they had the artificial neural network create synthetic images that do not resemble natural objects but would still elicit a response from target neurons. When the animal test subjects were shown these synthetic images, the targeted neurons responded 40 percent of the time.
“That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” commented independent research reviewer Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh.
“This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks,” stated Batista.
The researchers are working on improving the model by incorporating new data learned from experiments with the synthetic images.
“If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” concluded Bashivan.