The Stack Archive

Using Google Street View data to guide autonomous vehicles in bad weather

Tue 31 Jan 2017

With as little as two feet marking the critical divide between a smooth journey and a collision, self-driving vehicles are challenged by neglected infrastructure, and the available solutions seem either prohibitively expensive or resigned to the idea that SDVs will never thrive away from the most popular and well-funded routes, or the urban centres which have invested in them.

There are 4.12 million miles of road in the United States alone, and the prospect of lining them all with IoT-based sensors to aid autonomous driving is remote. Even when they're not covered in snow or obscured by fog, dense rain or other meteorological impediments, many of them have perilously faded lane markings – a fact Tesla founder Elon Musk has criticised as 'crazy'.

65% of American roads are estimated to be in poor condition, a figure which can rise even higher at state level. Similar concerns exist in the UK, where local or secondary roads, away from major government investment, are widely perceived as deteriorated.

Researchers from the University of Minnesota have addressed the challenge by devising a system which uses Google Street View data – which is always captured in acceptable visual conditions – as an overlay on what the self-driving system can currently see from the same GPS coordinates as the historical data. By finding enough common points of reference, it's possible to generate an accurate lane-divide indicator – which can even be overlaid visually on the otherwise obscured view of the road.
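The paper does not spell out the matching algorithm in this article's excerpt, but the general approach – pairing feature points in the clear-weather historical frame with feature points in the current frame – can be sketched in plain C++. The `Feature` struct, descriptor distance and ratio-test threshold below are illustrative assumptions, not SafeDrive's actual implementation:

```cpp
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// A feature point: image position plus a small descriptor vector.
struct Feature {
    double x, y;
    std::vector<float> desc;
};

// Squared Euclidean distance between two descriptors of equal length.
static float descDist(const std::vector<float>& a, const std::vector<float>& b) {
    float d = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float t = a[i] - b[i];
        d += t * t;
    }
    return d;
}

// For each feature in the historical (clear-weather) frame, find its best
// match in the current frame, keeping it only if it clearly beats the
// second-best candidate (a ratio test). The surviving correspondences are
// what would anchor the lane-marking overlay onto the obscured view.
std::vector<std::pair<int, int>> matchFeatures(const std::vector<Feature>& hist,
                                               const std::vector<Feature>& cur,
                                               float ratio = 0.75f) {
    std::vector<std::pair<int, int>> matches;
    for (std::size_t i = 0; i < hist.size(); ++i) {
        float best = std::numeric_limits<float>::max(), second = best;
        int bestJ = -1;
        for (std::size_t j = 0; j < cur.size(); ++j) {
            float d = descDist(hist[i].desc, cur[j].desc);
            if (d < best) { second = best; best = d; bestJ = (int)j; }
            else if (d < second) { second = d; }
        }
        // Compare squared distances, so the ratio is squared too.
        if (bestJ >= 0 && best < ratio * ratio * second)
            matches.emplace_back((int)i, bestJ);
    }
    return matches;
}
```

In a real pipeline the descriptors would come from a detector such as SIFT or ORB; the brute-force search here simply makes the idea concrete.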

Faded markings in a New York City street are restored visually after SafeDrive confirms a GPS match for common features between historical data and the current obscured view


The system, called SafeDrive, must account for a number of variables in order to establish matches reliable enough for safe autonomous driving, and must additionally provide at least approximate lane individuation in cases where the GPS signal becomes noisy.
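One straightforward way to tolerate a noisy fix – and this is a sketch of the general technique, not a description of SafeDrive's internals – is to treat the reported position as a centre of uncertainty and retrieve every historical frame within some radius, letting the feature matcher decide between the candidates:

```cpp
#include <cmath>
#include <vector>

// A historical Street View frame tagged with its capture coordinates.
struct Frame {
    double lat, lon;
    int id;
};

// Great-circle distance in metres between two WGS84 coordinates (haversine).
double haversineM(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;                    // mean Earth radius, metres
    const double rad = 3.14159265358979323846 / 180.0;
    double dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

// Under a noisy fix, return every frame within radiusM of the reported
// position; the matcher can then score all of them instead of trusting
// a single coordinate.
std::vector<Frame> candidateFrames(const std::vector<Frame>& all,
                                   double lat, double lon, double radiusM) {
    std::vector<Frame> out;
    for (const Frame& f : all)
        if (haversineM(lat, lon, f.lat, f.lon) <= radiusM)
            out.push_back(f);
    return out;
}
```

The radius would be tuned to the receiver's reported accuracy; a linear scan suffices here, though a spatial index would be used at scale.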

Though SafeDrive currently operates via the Google Maps API, it is also capable of building up its own data set – an approach which would require either widespread adoption of the system, or contributions to a potential non-commercial mapping scheme under government auspices, which does not seem likely in the current economic climate.

The software is written in C++ and has been field-tested on a PC running Ubuntu 16.04 with an Intel Core i7 6700HQ. The researchers developed an Android-based application called DriveData to capture environmental information from a smartphone mounted to the windscreen interior – though they also believe exterior camera mounts are a viable option.


‘The process of extracting pixels with “common” visual content. The feature-based matching (in red lines) are used to choose the point features, and for each feature point, a square subwindow is extracted from the candidate image, centered on that feature point. Stitching together all these windows results in an image with most “uncommon” visual elements removed.’
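The subwindow step quoted above can be illustrated with a minimal sketch: given the matched feature points, keep only square windows centred on each point and blank the rest of the image. The `Image` struct and function name here are hypothetical stand-ins for whatever representation the researchers actually use:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// A grayscale image stored row-major.
struct Image {
    int w, h;
    std::vector<unsigned char> px; // size w * h
};

// Keep only square subwindows of side (2*r + 1) centred on the matched
// feature points, blanking everything else -- a simple stand-in for the
// stitching step that removes visual content the two frames don't share.
Image keepCommonWindows(const Image& src,
                        const std::vector<std::pair<int, int>>& points, int r) {
    Image out{src.w, src.h, std::vector<unsigned char>(src.px.size(), 0)};
    for (auto [cx, cy] : points)
        for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x)
                if (x >= 0 && x < src.w && y >= 0 && y < src.h)
                    out.px[y * src.w + x] = src.px[y * src.w + x];
    return out;
}
```

Overlapping windows simply copy the same pixels twice, so the result is the union of the "common" regions, as the caption describes.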

At the moment the researchers are working on the core viability of SafeDrive, and are not addressing issues of network connectivity or potentially limited access to live Street View data (or data from any other remote source). On pre-planned journeys it should be possible to download and 'bake' the necessary data into a planned route, with a reasonable buffer zone to accommodate unplanned digression and/or network loss.
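A pre-baking step of that kind might, for instance, select every map tile within the buffer distance of the planned route before departure. The grid model below is an illustrative assumption (planar coordinates in metres, square tiles), not anything described by the researchers:

```cpp
#include <cmath>
#include <set>
#include <utility>
#include <vector>

struct Pt { double x, y; }; // planar route coordinates, metres

// For a square grid of side tileM metres, collect every tile that
// intersects the axis-aligned buffer square around each route vertex,
// so an unplanned digression of up to bufferM still has cached data.
std::set<std::pair<int, int>> tilesForRoute(const std::vector<Pt>& route,
                                            double tileM, double bufferM) {
    std::set<std::pair<int, int>> tiles;
    for (const Pt& p : route) {
        int x0 = (int)std::floor((p.x - bufferM) / tileM);
        int x1 = (int)std::floor((p.x + bufferM) / tileM);
        int y0 = (int)std::floor((p.y - bufferM) / tileM);
        int y1 = (int)std::floor((p.y + bufferM) / tileM);
        for (int tx = x0; tx <= x1; ++tx)
            for (int ty = y0; ty <= y1; ++ty)
                tiles.insert({tx, ty});
    }
    return tiles;
}
```

The square buffer is deliberately conservative: it may fetch a few extra tiles at corners, but never misses one the vehicle could reach within the buffer distance.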

Google Maps' total data volume was estimated at 21 million terabytes six years ago and is likely to have grown significantly since then, and even a home-spun SafeDrive equivalent would require significant storage for a large section of, for instance, the United States. So the challenge of estimating lane markings must meet the critical need for low-latency computing, anticipate network issues and recognise the limits of its applicability in very adverse conditions.

The Minnesota researchers have been working with uncompressed data so far, and plan to address this in future iterations of the project, as well as improving and optimising the feature matching process.

