Dataset Bridges Human Vision and Machine Learning

BOLD5000 will help neuroscientists better understand how we see, and how machines might one day see as we do

Neuroscientists and computer vision scientists say a new dataset of unprecedented size, comprising functional brain scans of four volunteers who each viewed 5,000 images, will help researchers better understand how the brain processes images.

The research from Carnegie Mellon University and Fordham University is reported today in the journal Scientific Data.

Each volunteer participated in 20 or more hours of fMRI (functional magnetic resonance imaging) scanning. Scanning the same individuals over so many sessions was necessary to disentangle the neural responses associated with individual images.

The resulting dataset, dubbed BOLD5000, allows cognitive neuroscientists to better leverage the deep learning models that have dramatically improved artificial vision systems. Deep learning was originally inspired by the architecture of the human brain, and it may be further improved by new insights into how human vision works and by studies of human vision that better reflect modern computer vision methods.

"This research will help address the challenges of understanding our own ability to see and recognize objects, and to devise ways to build those abilities in computer vision systems," said Betty Tuller, a program manager in NSF's Social, Behavioral, and Economic Sciences Directorate, which provided funding for the work.