Deep learning identifies dangerous roads from sound input
Mon 7 Dec 2015
Wet pavement causes 74% of all weather-related crashes in the United States each year, injuring 384,032 people and killing 4,789. Identifying when roads are dangerously wet, particularly in conditions which can obscure the fact, isn’t just a matter of public road safety in general: it is likely to be crucial in allowing new generations of self-driving cars to operate sensibly in ‘real world’ conditions without defaulting endlessly to inefficiently slow speeds, handovers to operators or crisis-parking.
Video-based detection of dangerously slippery road surfaces can be hampered by fog, poor light or other environmental conditions. Some systems have attempted to detect precipitation from the road reflections of other drivers’ headlights, using either fixed or on-board cameras; but the former are remote from the driver in danger, while the latter relies on the presence of other vehicles on the road and is reactive rather than proactive.
Now a new research paper entitled Detecting Road Surface Wetness from Audio: A Deep Learning Approach [PDF] proposes a system that determines road surface conditions from tyre audio. The problem has been approached before with an asphalt status classification system powered by a Support Vector Machine (SVM), but the range of surface types that system could handle was limited, and its results were hampered by false predictions triggered by unrelated input such as pebbles.
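An SVM classifier of this kind learns a decision boundary between ‘wet’ and ‘dry’ examples in a feature space derived from the audio. The sketch below is purely illustrative, not the system from the earlier work: it uses scikit-learn with synthetic three-band spectral-energy features (the feature values and separation are assumptions for the demo), reflecting the intuition that wet-road tyre noise carries more high-frequency energy.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors: mean spectral energy in three bands.
# Wet-road recordings are assumed to show more high-frequency energy.
dry = rng.normal(loc=[1.0, 0.5, 0.2], scale=0.1, size=(50, 3))
wet = rng.normal(loc=[1.0, 0.9, 0.7], scale=0.1, size=(50, 3))

X = np.vstack([dry, wet])
y = np.array([0] * 50 + [1] * 50)  # 0 = dry, 1 = wet

# Fit an RBF-kernel SVM and classify two held-out frames,
# one near each class centre.
clf = SVC(kernel="rbf").fit(X, y)
preds = clf.predict([[1.0, 0.5, 0.2], [1.0, 0.9, 0.7]])
print(preds)
```

The weakness the article notes — false predictions from unrelated sounds such as pebbles — shows up here as out-of-distribution feature vectors the SVM must still assign to one of its two classes.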
The researchers behind the new work, led by Irman Abdić and published through the Institute of Electrical and Electronics Engineers (IEEE), used recurrent neural networks to monitor audio from tyre-road contact in real-world conditions and at varying speeds around the Greater Boston area, using an inexpensive shotgun microphone affixed near the rear tyre of a 2014 Mercedes CLA. These initial tests reached an unweighted average recall (UAR) of 93%, a notable success rate. Because the audio monitoring continues even when the car is at rest, wetness can still be identified at a standstill, albeit from the sounds of other passing vehicles.
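Unweighted average recall is worth unpacking, since wet and dry road recordings are unlikely to be evenly represented in test data: UAR is the mean of each class’s recall, so every class counts equally regardless of how many samples it has. A minimal sketch of the metric (the toy labels are invented for illustration):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    however many samples it has."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

# Imbalanced toy test set: 8 "dry" frames, 2 "wet" frames.
y_true = ["dry"] * 8 + ["wet"] * 2
y_pred = ["dry"] * 8 + ["wet", "dry"]  # one wet frame misclassified

# Plain accuracy would be 9/10 = 0.9, flattering the classifier;
# UAR averages recalls of 1.0 (dry) and 0.5 (wet).
print(unweighted_average_recall(y_true, y_pred))  # → 0.75
```

This is why UAR is the customary metric in audio classification work with skewed class distributions: a model that simply always predicted ‘dry’ would score high accuracy but only 50% UAR.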
This study, like so much current machine learning research, seems to rely on the annotation and classification of existing infrastructure, rather than on the development of geo-neutral filters and algorithms that could function reliably in undocumented environments. The quality of the roads on which the tests were conducted was measured against the International Roughness Index (IRI), a standard measure of surface condition.
This study marks the first time that long short-term memory RNNs (LSTM-RNNs) have been applied to this problem. LSTM-RNNs have been used extensively in audio-based work, including phoneme identification, animal species identification and the detection of temporal structure in music.
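What makes an LSTM suited to streaming tyre audio is its gated memory cell, which lets it accumulate evidence across successive frames rather than judging each one in isolation. The forward pass of a single LSTM cell can be sketched in plain NumPy; the dimensions (13 inputs, echoing typical MFCC feature counts, and 8 hidden units) and the random weights are assumptions for illustration, not values from the paper.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gates are computed from the current input x
    and the previous hidden state h; c is the cell (memory) state."""
    z = W @ x + U @ h + b                  # all four gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    c_new = f * c + i * np.tanh(g)         # forget old memory, write new
    h_new = o * np.tanh(c_new)             # expose the gated memory
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 13, 8                        # e.g. 13 MFCC coefficients per frame
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for _ in range(20):                        # 20 consecutive frames of audio features
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)                             # final hidden state, which a wet/dry
                                           # classification layer would consume
```

In a real system the final hidden state (or the per-frame states) would feed a small classification layer emitting wet/dry probabilities, with the gates trained by backpropagation through time rather than left at random values as here.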