NSF Stories

Simulated human eye movement aims to train metaverse platforms

Engineers have developed "virtual eyes" that closely mimic human eye behavior

Computer engineers at Duke University, supported by the U.S. National Science Foundation, have developed virtual eyes that simulate how humans look at the world. The virtual eyes are accurate enough for companies to use them to train virtual reality and augmented reality applications.

"The aims of the project are to provide improved mobile augmented reality by using the Internet of Things to source additional information, and to make mobile augmented reality more reliable and accessible for real-world applications," said Prabhakaran Balakrishnan, a program director in NSF's Division of Information and Intelligent Systems.

The virtual eye software, called EyeSyn, will help developers create applications for the rapidly expanding metaverse while protecting user data. The study results will be presented at the upcoming International Conference on Information Processing in Sensor Networks.

"If you're interested in detecting whether a person is reading a comic book or advanced literature by looking at their eyes alone, you can do that," said Maria Gorlatova, one of the study authors.

"But training that kind of algorithm requires data from hundreds of people wearing headsets for hours at a time. We wanted to develop software that not only reduces the privacy concerns that come with gathering that sort of data, but allows smaller companies that don't have those levels of resources to get into the metaverse game."

Eye movements reveal information about a person's responses to stimuli, emotional state, and concentration. The team of computer engineers used artificial intelligence to train virtual eyes that mimic how human eyes move in response to different stimuli.

The work could serve as a blueprint for using AI to train metaverse platforms and software, possibly leading to algorithms customized for a specific individual. It could also be used to tailor content production by measuring engagement responses.

"If you give EyeSyn a lot of different inputs and run it enough times, you'll create a data set of synthetic eye movements that is large enough to train a [machine learning] classifier for a new program," Gorlatova said.

To test the accuracy of the virtual eyes, the engineers compared their behavior with that of human eyes viewing the same content. The results demonstrated that the virtual eyes closely simulated the movement of human eyes.
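
One simple way such a comparison could be quantified (this is not the authors' published evaluation protocol, only an illustration) is to compare the saccade-amplitude distributions of a recorded trace and a synthetic trace with a distributional distance such as the Wasserstein distance; both traces below are random placeholders.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def saccade_amplitudes(trace):
        # Step-to-step gaze displacement magnitudes for an (N, 2) trace.
        return np.linalg.norm(np.diff(trace, axis=0), axis=1)

    # Placeholder traces; in a real test these would be recorded human gaze
    # positions and synthetically generated positions for the same stimulus.
    rng = np.random.default_rng(1)
    human_trace = np.cumsum(rng.normal(0, 1.0, (200, 2)), axis=0)
    virtual_trace = np.cumsum(rng.normal(0, 1.1, (200, 2)), axis=0)

    dist = wasserstein_distance(saccade_amplitudes(human_trace),
                                saccade_amplitudes(virtual_trace))
    print(f"saccade-amplitude distance: {dist:.3f}")  # smaller means more similar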

"The synthetic data alone aren't perfect, but they're a good starting point," Gorlatova said. "Smaller companies can use this rather than spending time and money trying to build their own real-world datasets [with human subjects]. And because the personalization of the algorithms can be done on local systems, people don't have to worry about their private eye movement data becoming part of a large database."