This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
A team of researchers in China has uncovered a way to help autonomous cars “see” better in the dark, boosting the vehicles’ driving abilities by more than 10 percent. The secret to the researchers’ success lies in a decades-old theory of how the human eye works.
One way for autonomous cars to navigate is to use a collection of cameras, each equipped with a special filter that discerns the polarization of incoming light. Polarization refers to the direction in which light waves oscillate as they propagate, and it can reveal a lot about the last object the light bounced off, including that object’s surface features and details.
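To make that concrete, here is a minimal sketch of how the four filter orientations commonly used in polarization sensors can be turned into per-pixel polarization maps via the standard Stokes-parameter formulas. The four-angle layout and the function names are illustrative assumptions; the article doesn’t specify the camera hardware.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images.

    i0, i45, i90, i135: intensity images captured behind polarizing
    filters oriented at 0, 45, 90, and 135 degrees (a common layout
    for division-of-focal-plane polarization sensors).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    return s0, s1, s2

def polarization_maps(i0, i45, i90, i135, eps=1e-6):
    """Per-pixel degree and angle of linear polarization."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # 0 = unpolarized, 1 = fully polarized
    aolp = 0.5 * np.arctan2(s2, s1)             # orientation of polarization, radians
    return dolp, aolp
```

These maps are where the extra scene information lives: surface material and geometry leave characteristic signatures in the degree and angle of polarization.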
However, while polarization filters provide autonomous vehicles with additional information about the objects around them, the filters come with pitfalls.
“While providing further information, this double filter design makes capturing photons at night more difficult,” says Yang Lu, a Ph.D. candidate at the University of Chinese Academy of Sciences in Beijing. “The result is that in low-light conditions, the image quality of a polarization camera drops dramatically, with detail and sharpness being more severely affected.”
To overcome this problem, Lu and his colleagues turned to a theory that attempts to explain why humans can discern colors relatively well under low-light conditions. Retinex theory holds that our visual system separates the light it receives into two components: reflectance, which captures the intrinsic properties of surfaces, and illumination, which captures the light falling on them. Importantly, even in low-light conditions, our eyes and brain compensate for changes in illumination well enough to discern colors.
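The decomposition at the heart of Retinex can be sketched in a few lines. Below is a classic single-scale Retinex example, not the team’s method: illumination is approximated by heavily blurring the image, and reflectance is what remains after dividing it out. The sigma value is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(image, sigma=30, eps=1e-6):
    """Split a grayscale image into illumination and reflectance.

    Retinex models an image as the pixelwise product I = R * L,
    where L (illumination) varies smoothly across the scene and
    R (reflectance) carries the scene's intrinsic detail.
    """
    # Approximate the smoothly varying illumination with a wide Gaussian blur.
    illumination = gaussian_filter(image.astype(np.float64), sigma=sigma)
    # Dividing out the illumination leaves an estimate of reflectance.
    reflectance = image / (illumination + eps)
    return reflectance, illumination

def retinex_enhance(image, sigma=30, eps=1e-6):
    """Classic single-scale Retinex enhancement in the log domain."""
    img = image.astype(np.float64) + eps
    # log(I) - log(L) = log(R): brightness variation cancels, detail remains.
    return np.log(img) - np.log(gaussian_filter(img, sigma=sigma) + eps)
```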
Lu’s team applied this concept to their autonomous-car navigation system, which processes the reflectance and illumination components of polarized light separately. One algorithm, trained on real-world pairs of the same scenes captured in bright and dark conditions, works like our own visual system to compensate for changes in brightness. A second algorithm processes the reflectance of the incoming light, removing background noise.
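The paper’s architecture isn’t reproduced here, but the division of labor Lu describes can be sketched as two stages: one that corrects illumination and one that denoises reflectance before the two are recombined. Everything below, including the gamma correction and the median-filter denoiser, is an illustrative assumption, not RPLENet itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def enhance_low_light(image, sigma=30, gamma=0.5, denoise_size=3, eps=1e-6):
    """Two-stage low-light enhancement in the spirit of a Retinex split.

    Stage 1 brightens the estimated illumination (the role of the
    brightness-compensation algorithm Lu describes); stage 2 denoises
    the estimated reflectance (the role of the second algorithm).
    Assumes a grayscale image with values in [0, 1].
    """
    img = image.astype(np.float64)
    # Decompose: smooth illumination estimate, residual reflectance.
    illumination = gaussian_filter(img, sigma=sigma) + eps
    reflectance = img / illumination

    # Stage 1: gamma-correct the illumination to lift dark regions
    # without blowing out already-bright ones.
    bright = np.clip(illumination / illumination.max(), 0, 1) ** gamma

    # Stage 2: suppress noise in the reflectance, where low-light
    # sensor noise concentrates.
    clean = median_filter(reflectance, size=denoise_size)

    # Recombine the two components into the enhanced image.
    return np.clip(bright * clean, 0, 1)
```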
[Photo caption: The researchers mounted cameras on cars to test their RPLENet model in the real world. Credit: Yang Lu]
Whereas conventional autonomous vehicles tend to process only the reflective properties of light, this dual approach offers better results. “In the end, we get a [more] clearly rendered image,” Lu says.
In a study published 8 August in IEEE Transactions on Intelligent Vehicles, the researchers put their new approach, called RPLENet, to the test.
First, the team conducted simulations using real-world data from dim environments to verify that their approach could yield better low-light imaging. Then they mounted a camera that relies on RPLENet on a car and tested it in a real nighttime scenario. The results show that the new approach can improve driving accuracy by about 10 percent.
The post “Self-Driving Cars Get Better at Driving in the Dark” by Michelle Hampson was published on 09/07/2024 by spectrum.ieee.org