An inexpensive multi-purpose 'nano-camera' that can operate at the speed of light has been developed by a team of MIT researchers, including Indian-origin scientists.
The $500 camera could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking and gesture-recognition devices used in interactive gaming.
The three-dimensional camera was developed by researchers in the Massachusetts Institute of Technology Media Lab.
The camera is based on "Time of Flight" technology in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor.
However, unlike existing devices based on this technology, the new camera is not fooled by rain, fog, or even translucent objects, said co-author Achuta Kadambi.
"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," said Kadambi, a graduate student at MIT.
"That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3D models of translucent or near-transparent objects," Kadambi added.
In a conventional Time of Flight camera, a light signal is fired at a scene, where it bounces off an object and returns to strike a pixel on the sensor.
Since the speed of light is known, it is simple for the camera to calculate the distance the signal has travelled and therefore the depth of the object it has been reflected from.
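That calculation is simple arithmetic. A minimal sketch (the constant and function name here are illustrative, not from the article): the depth is the speed of light times the round-trip time, halved because the signal travels to the object and back.

```python
# Pulsed Time of Flight: depth from round-trip travel time.
C = 299_792_458.0  # speed of light in metres per second

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth in metres implied by a measured round-trip time.

    Halved because the light travels out to the surface and back.
    """
    return C * t_seconds / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 m of depth.
print(depth_from_round_trip(10e-9))
```

This also shows why nanosecond timing matters: each nanosecond of round-trip time corresponds to only about 15 cm of depth.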
The new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled, said Ramesh Raskar, an associate professor of media arts and sciences.
Raskar leads the Camera Culture group within the Media Lab, which developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.
In 2011 Raskar's group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it travelled through a scene.
The camera does this by probing the scene with a femtosecond impulse of light, then uses fast but expensive laboratory-grade optical equipment to take an image each time. This "femto-camera" costs around $500,000 to build.
In contrast, the new "nano-camera" probes the scene with a continuous-wave signal that oscillates at nanosecond periods.
This allows the team to use inexpensive hardware: off-the-shelf light-emitting diodes (LEDs), for example, can strobe at nanosecond periods. As a result, the camera reaches a time resolution within one order of magnitude of femtophotography while costing just $500.
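The article does not spell out the encoding, but continuous-wave cameras of this kind typically recover depth from the phase shift between the emitted and returned wave rather than from a single pulse's travel time. A hedged sketch of that standard phase-based calculation (the frequency and function names are illustrative assumptions):

```python
import math

# Continuous-wave Time of Flight: depth from the phase shift of a
# modulated signal, rather than the timing of a single pulse.
C = 299_792_458.0  # speed of light in metres per second

def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth implied by the phase offset between emitted and returned wave.

    phase_rad: measured phase shift in radians (0 to 2*pi).
    mod_freq_hz: modulation frequency; nanosecond periods mean
    frequencies in the tens of MHz to GHz range.
    """
    round_trip = phase_rad / (2 * math.pi * mod_freq_hz)  # seconds
    return C * round_trip / 2.0

# At 50 MHz modulation (a 20 ns period), a quarter-cycle phase shift
# (pi/2) corresponds to a depth of about 0.75 m.
print(depth_from_phase(math.pi / 2, 50e6))
```

One design consequence worth noting: because phase wraps around every full cycle, a single modulation frequency can only measure depth unambiguously within half a wavelength, which is one reason such cameras can be confused by overlapping returns from translucent surfaces.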