This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
A team of researchers in China has uncovered a way to help autonomous cars “see” better in the dark, boosting the vehicles’ driving accuracy by about 10 percent. The secret to the researchers’ success lies in a decades-old theory of how the human eye works.
One way for autonomous cars to navigate is with a collection of cameras, each equipped with a special filter that discerns the polarization of incoming light. Polarization refers to the direction in which light waves oscillate as they propagate, and it can reveal a lot about the object the light last bounced off, including that object’s surface features and details.
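Polarization cameras commonly capture a scene through micro-polarizers at four orientations (0, 45, 90, and 135 degrees). The study’s exact sensor and processing pipeline aren’t detailed here, but as a general illustration of how the extra information is extracted, the following Python sketch computes the standard Stokes parameters and the degree and angle of linear polarization from four such intensity images.

```python
import numpy as np

def linear_polarization(i0, i45, i90, i135):
    """Compute degree and angle of linear polarization (DoLP, AoLP).

    Inputs are intensity images captured through polarizers oriented
    at 0, 45, 90, and 135 degrees, as in common micro-polarizer
    (division-of-focal-plane) polarization cameras.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
    aolp = 0.5 * np.arctan2(s2, s1)      # angle in radians
    return dolp, aolp
```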
However, while polarization filters provide autonomous vehicles with additional information about the objects surrounding them, the filters come with some pitfalls.
“While providing further information, this double filter design makes capturing photons at night more difficult,” says Yang Lu, a Ph.D. candidate at the University of Chinese Academy of Sciences in Beijing. “The result is that in low-light conditions, the image quality of a polarization camera drops dramatically, with detail and sharpness being more severely affected.”
To overcome this problem, Lu and his colleagues turned to a theory that attempts to explain why humans are able to discern colors relatively well under low-light conditions. Retinex theory suggests that our visual system perceives light as two separate components: reflectance and illumination. Importantly, even in low-light conditions, our eyes and brain are able to compensate for changes in illumination well enough to discern colors.
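In Retinex terms, an observed image I is modeled as the product of a reflectance component R (the surface properties of objects) and an illumination component L (the lighting): I = R × L. As a rough illustration, here is a minimal classical single-scale Retinex decomposition in Python, which uses a heavy Gaussian blur as a crude illumination estimate. RPLENet learns its decomposition from data, so this sketch only conveys the underlying idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """Split a single-channel image into reflectance and illumination.

    Retinex models observed intensity as I = R * L. Illumination
    varies slowly across a scene, so a heavy Gaussian blur of the
    image serves as a crude estimate of L; subtracting it in log
    space leaves log R, which is largely invariant to lighting.
    """
    img = image.astype(np.float64) + 1.0            # avoid log(0)
    illumination = gaussian_filter(img, sigma)      # smooth estimate of L
    log_reflectance = np.log(img) - np.log(illumination)
    return log_reflectance, illumination
```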
Lu’s team applied this concept to their autonomous car navigation system, which processes the reflectance and illumination components of polarized light separately. One algorithm, trained using real-world data of the same scenes captured in bright and dark conditions, works like our own visual system to compensate for changes in brightness. A second algorithm processes the reflectance component of the incoming light, removing background noise.
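The paper’s actual architecture isn’t reproduced here, but a hypothetical two-branch network in this spirit might look like the sketch below: one branch corrects the illumination (brightness compensation), the other denoises the reflectance, and the Retinex product recombines them into an enhanced image. The class name, layer sizes, and channel count are illustrative assumptions, not RPLENet’s.

```python
import torch
import torch.nn as nn

class DualBranchEnhancer(nn.Module):
    """Hypothetical two-branch low-light enhancer, per the idea above.

    One branch estimates a corrected illumination map; the other
    estimates a denoised reflectance map. They are recombined
    multiplicatively, following the Retinex model I = R * L.
    """
    def __init__(self, channels=4):  # e.g., four polarization channels
        super().__init__()
        self.illumination_branch = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1), nn.Softplus(),
        )
        self.reflectance_branch = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, dark_image):
        illumination = self.illumination_branch(dark_image)  # corrected L
        reflectance = self.reflectance_branch(dark_image)    # denoised R
        return reflectance * illumination  # enhanced image

# A dummy forward pass on a random 4-channel image:
# DualBranchEnhancer()(torch.rand(1, 4, 64, 64))
```

In supervised training, such a network would be fit on pairs of the same scene captured in bright and dark conditions, as the article describes.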
The researchers mounted cameras on cars to test their RPLENet model in the real world. Yang Lu
Whereas conventional autonomous vehicles tend to process only the reflective properties of light, this dual approach offers better results. “In the end, we get a [more] clearly rendered image,” Lu says.
In a study published 8 August in IEEE Transactions on Intelligent Vehicles, the researchers put their new approach, called RPLENet, to the test.
First, the team conducted simulations using real-world data from dim environments to verify that their approach could yield better low-light imaging. Then they mounted a camera running RPLENet on a car and tested it in a real nighttime scenario. The results show that the new approach can improve driving accuracy by about 10 percent in experiments with autonomous driving algorithms.
Lu notes this new approach could lead to safer autonomous cars. “The excellent results we have achieved in our tests, especially in real nighttime scenarios, demonstrate the practical application potential of our method,” he says.
However, one challenge with the RPLENet approach is that it requires extensive training on datasets that are difficult to obtain (for example, images of the same scene captured under different lighting conditions).
“In the future, we plan to further explore weakly supervised and unsupervised learning methods to reduce the reliance on large amounts of labeled data,” says Lu. “This will accelerate the development of algorithms and help provide more efficient and cost-effective solutions in real-world applications.”