Research News

Neurons that have food on the brain

Scientists discover neurons that light up at the sight of images of food

A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When people look at any of these foods, a specialized part of the brain's visual cortex lights up, according to a new study by MIT neuroscientists.

This newly discovered population of food-responsive neurons is in the ventral visual stream, alongside neurons that respond specifically to faces, bodies, places and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say. 

The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many questions about how and why this neural population develops.

The study was funded in part by two grants from the U.S. National Science Foundation and published in the journal Current Biology.

"Food is central to human social interactions and cultural practices," says Nancy Kanwisher, a cognitive neuroscientist at MIT and one of the authors of the paper. " It's not just sustenance. Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do."

More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies or words. Most of those areas were discovered when researchers set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Meenakshi Khosla, lead author of the paper, analyzed a large, publicly available dataset of full-brain functional MRI, or fMRI, responses from eight human subjects as they viewed thousands of images.

The researchers applied a mathematical method that allowed them to discover neural populations that can't be identified from traditional analyses of fMRI data. An fMRI image is made up of many voxels, 3D units that each represent a small cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

The new analytical method, which Kanwisher's lab previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
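
The article does not spell out the decomposition itself. Purely as an illustration, the sketch below assumes a non-negative matrix factorization of an images-by-voxels response matrix, broadly in the spirit of hypothesis-free decompositions used in fMRI research; the file name, array shapes, and number of components are placeholders rather than details from the study.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical input: an (images x voxels) matrix of fMRI responses, e.g.
# thousands of natural images by the ventral-stream voxels of one subject.
# "responses.npy" is a placeholder file name, not part of the published data.
R = np.load("responses.npy")            # shape: (n_images, n_voxels)
R = R - R.min()                         # NMF requires non-negative entries

# Factor the responses into a small number of components. Each component has
# an image profile (how strongly it responds to every image) and a voxel
# weight map (how strongly it is expressed in every voxel), so populations
# mixed together inside single voxels can be pulled apart.
n_components = 10                       # assumed; the study's choice may differ
model = NMF(n_components=n_components, init="nndsvda",
            max_iter=500, random_state=0)
image_profiles = model.fit_transform(R)   # shape: (n_images, n_components)
voxel_weights = model.components_         # shape: (n_components, n_voxels)

# A "food component" would show up as a component whose strongest responses
# are dominated by food images.
for k in range(n_components):
    top = np.argsort(image_profiles[:, k])[::-1][:20]
    print(f"component {k}: top image indices {top}")
```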

The researchers also used the data to train a computational model of what scientists call the ventral food component. The model is based on work that MIT research scientist N. Apurva Ratan Murty, an author of the paper, had previously developed for the brain's face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the ventral food component. In one experiment, they fed the model matched images of food and nonfood items that looked very similar, for example a banana and a yellow crescent moon.

"Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible," Khosla says. "We could feed those arbitrary stimuli through the predictive model and see whether it would respond more to food than nonfood without having to collect the fMRI data."

From their analysis of the human fMRI data, the researchers found that in some subjects, the ventral food component responded slightly more to processed foods, such as pizza, than to unprocessed foods, such as apples. In the future, they hope to explore how factors such as familiarity with a particular food, and whether a person likes or dislikes it, might affect individuals' responses to that food.