Research News

Collective motion in crowds is largely determined by individual fields of vision

Researchers create model of human 'flocking' behavior

Researchers at Brown University, supported by a U.S. National Science Foundation grant, used virtual reality to develop a new model that predicts how individuals move within a crowd. The model successfully predicted individual trajectories both in virtual crowd experiments and when tested against real-life crowd data. The findings were published in Proceedings of the Royal Society B.

To create the model, the researchers tracked the movements and interactions of participants wearing virtual reality headsets as they walked in a virtual crowd. Like flocks of birds, herds of cattle and schools of fish, people in a group tend to move as a unit rather than along independent paths.

The scientists found that which crowd members a participant can see determines how that participant moves with the crowd. In other words, an individual's visual perspective within the group shapes the path they take.
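The idea that an individual responds only to the neighbors in their field of view can be sketched in a few lines of code. The following is a hypothetical, minimal illustration, not the published model: the field-of-view angle, the turning rate and the simple alignment rule are all assumptions chosen for clarity.

```python
import math

# Assumed field of view: agents only respond to neighbors they can see.
FIELD_OF_VIEW = math.radians(180)

def visible(agent_pos, agent_heading, other_pos, fov=FIELD_OF_VIEW):
    """Return True if other_pos lies within the agent's field of view."""
    dx = other_pos[0] - agent_pos[0]
    dy = other_pos[1] - agent_pos[1]
    angle_to_other = math.atan2(dy, dx)
    # Signed angular difference, wrapped to [-pi, pi].
    diff = (angle_to_other - agent_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def update_heading(agent_pos, agent_heading, neighbors, rate=0.1):
    """Turn the agent toward the mean heading of its *visible* neighbors.

    neighbors is a list of (position, heading) pairs. Neighbors behind
    the agent are ignored, so what the agent can see -- not the whole
    crowd -- drives its motion.
    """
    seen = [h for pos, h in neighbors if visible(agent_pos, agent_heading, pos)]
    if not seen:
        return agent_heading
    # Circular mean of the visible neighbors' headings.
    mean_x = sum(math.cos(h) for h in seen) / len(seen)
    mean_y = sum(math.sin(h) for h in seen) / len(seen)
    target = math.atan2(mean_y, mean_x)
    diff = (target - agent_heading + math.pi) % (2 * math.pi) - math.pi
    return agent_heading + rate * diff
```

In this sketch, an agent facing east adjusts its heading toward a neighbor walking ahead of it, but is entirely unaffected by an identical neighbor directly behind it, which is the qualitative behavior the study reports.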

The model was also remarkably accurate at predicting overall crowd flow. This individual-based approach is a departure from previous work, which modeled crowds from the perspective of an omniscient, distant observer.

"Most omniscient models were based on physics -- on forces of attraction and repulsion -- and didn't fully explain why humans in a group interact in the way that they do," said study author William Warren.

"We are the first group to provide a sensory basis for this type of coordinated movement," Warren said. "The model provides a better understanding of what individuals in a crowd are experiencing visually, so we can make better predictions about how an entire group of people will behave."

Warren said that models of crowd movement have many applications and can be used to inform the design of public spaces, transportation infrastructure and emergency response plans.