About this event
Lecturer: Takeo Kanade
Can we digitize a three-dimensional, time-varying scene from the real world into a computer as a 3D event, just as real-time CT can digitize body volume? Since the mid-1990s, Carnegie Mellon University's Virtualized Reality (TM) project has been developing computer vision technologies for this purpose with the 3D Room - a fully digital room that can capture events occurring in it with many (currently 50) surrounding video cameras, some mounted on pan/tilt heads.
With this facility, we digitize events occurring in the room and generate a complete three-dimensional, time-varying volumetric/surface representation. Not only can we then render images from any viewpoint or angle - even those at which there were no cameras (the concept of "Let's watch the NBA on the court") - but we can also conceive of a whole new notion of "event archiving and manipulation." I will discuss the theory, our facility, the computation involved, and our results.
We recently worked with CBS Sports to develop a multi-robot camera system that was used to broadcast Super Bowl XXXV on January 28, 2001. The system, which CBS calls "EyeVision", produced surrounding views of several interesting and controversial plays during the game. I will present our contribution to, and experience with, that real-time system.