1,000 times more accurate 3D imaging for your smartphone camera
Wed 2 Dec 2015
Researchers looking to improve the resolution of 3D imaging devices have claimed that, by exploiting the polarisation of light, they can increase depth resolution by as much as 1,000 times.
The team, based at MIT, hopes that the technology can be used to produce high-quality 3D cameras built into mobile devices, and to enable photographed objects to be replicated using a 3D printer.
Achuta Kadambi, an MIT Media Lab PhD student and one of the project’s developers, explained: “Today, miniature 3D cameras fit on cellphones […] but they make compromises to the 3D sensing, leading to very coarse recovery of geometry. That’s a natural application for polarisation, because you can still use a low-quality sensor, and adding a polarising filter gives you something that’s better than many machine-shop laser scanners.”
The prototype, called Polarised 3D, combines a Microsoft Kinect depth sensor with a standard polarising photographic filter placed in front of the camera.
During testing, the scientists took three photos of an object, rotating the polarising filter to a different angle for each shot. The system's algorithms then compared the light intensities across the three images to infer the orientation of the surface at each point.
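The three-rotation step can be sketched as follows. Light reflected from a surface, viewed through a polarising filter at angle φ, has intensity that varies sinusoidally as I(φ) = A + B·cos(2(φ − φ₀)), where the phase φ₀ relates to the azimuth of the surface normal; three shots at known filter angles are enough to solve for all three parameters. This is a minimal illustrative sketch of that fitting idea, not the paper's actual pipeline; the angle choices and function names are assumptions.

```python
# Hypothetical sketch: fit the polarisation intensity model
#     I(phi_pol) = A + B * cos(2 * (phi_pol - phi))
# from three photos taken at known filter angles. The phase phi is
# related to the azimuth of the surface normal at that pixel.
# Angles and names are illustrative, not taken from the MIT paper.
import numpy as np

def fit_polarisation(angles, intensities):
    """Recover offset A, amplitude B and phase phi from >= 3 samples.

    Rewrites the model as A + C*cos(2a) + D*sin(2a), which is linear
    in (A, C, D), then converts back: B = hypot(C, D), phi = atan2(D, C)/2.
    """
    a = np.asarray(angles, dtype=float)
    M = np.column_stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)])
    A, C, D = np.linalg.lstsq(M, np.asarray(intensities, float), rcond=None)[0]
    return A, np.hypot(C, D), 0.5 * np.arctan2(D, C)

# Synthetic check: simulate one pixel with known parameters,
# sample it at three filter rotations, and recover them.
true_A, true_B, true_phi = 0.8, 0.3, np.deg2rad(25)
angles = np.deg2rad([0, 45, 90])          # three filter rotations
samples = true_A + true_B * np.cos(2 * (angles - true_phi))
A, B, phi = fit_polarisation(angles, samples)
```

With three samples the linear system is exactly determined, so the recovered offset, amplitude and phase match the simulated pixel; a real system would repeat this fit at every pixel and fuse the resulting normals with the Kinect's coarse depth map.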
Used alone, the Kinect can resolve features as small as 1cm across, at distances of up to several metres. With the polarisation data added, however, the system was able to resolve physical features one-thousandth that size.
The experiments also compared the new system with a high-precision laser scanner, which required the physical object to be placed on the scanner bed. Polarised 3D still came out on top, offering a much higher resolution.
While a rotating mechanical polarisation filter would be impractical for a mobile phone camera, commercial alternatives are available, such as grids of tiny polarisation filters that overlay individual pixels in an image sensor.
Eventually, the discovery could aid the development of self-driving vehicle technologies. The vision algorithms used in current driverless cars are reliable under normal conditions, but rain, snow and fog present challenges because the water particles scatter light, making the surroundings harder to read. “Mitigating scattering in controlled scenes is a small step,” said Kadambi, whose MIT design could handle the scattering by exploiting data contained in interfering light waves.