Scaling Data Quality with Computer Vision on Spatial Data

Recent advancements in natural language technologies, including generative capabilities for understanding and rendering natural language on demand, have become salient in many contemporary Artificial Intelligence applications. Nevertheless, computer vision applications such as object detection and image recognition are no less vital to the enterprise—or to the millions, if not billions, of consumers who rely on this technology daily for spatial mapping data on mobile devices.

Granted, the advanced machine learning algorithms supporting this use case are implemented on the backend and aren’t directly accessed by digital map users. However, they’re essential for facilitating data quality at scale, allowing Overture Maps Foundation, a purveyor of digital mapping data based on interoperable, open standards, to add nearly a billion buildings to the worldwide collection in its latest digital map dataset.

In December 2023, Overture increased the number of buildings mapped in its dataset to over two billion, due in no small part to buildings from Google’s Open Buildings. According to Marc Prioleau, Executive Director of Overture Maps Foundation, when consolidating building footprints at this scale across sources there’s “got to be machine learning [involved].”

Achieving this objective entails de-duplicating entities, a classic data quality problem. Aided by machine learning techniques, Overture Maps was able to take buildings gleaned from satellite imagery, disambiguate them, de-duplicate them, rank the results, then merge them with its existing collection of buildings and make them available to the public via open standards.
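
To make the de-duplication notion concrete, the toy sketch below groups candidate footprints whose centroids nearly coincide and keeps one record per group. The record fields, coordinates, and rounding tolerance are invented for illustration and are not Overture’s actual logic.

```python
from collections import defaultdict

# Toy illustration of de-duplication as a data quality step. The record
# fields, coordinates, and rounding tolerance are invented for this example.
candidates = [
    {"source": "satellite", "centroid": (37.77493, -122.41942)},
    {"source": "government", "centroid": (37.77494, -122.41941)},  # same building
    {"source": "satellite", "centroid": (37.80110, -122.45830)},   # different building
]

groups = defaultdict(list)
for record in candidates:
    # Round the coordinates so near-identical centroids land in the same bucket.
    key = (round(record["centroid"][0], 4), round(record["centroid"][1], 4))
    groups[key].append(record)

deduplicated = [members[0] for members in groups.values()]
print(len(candidates), "candidates ->", len(deduplicated), "distinct buildings")  # 3 -> 2
```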

Object Detection, Image Recognition

A fair number of the buildings available in Overture Maps’ latest digital mapping dataset were discerned via computer vision applied to satellite imagery. Several of the buildings contained in Google’s Open Buildings were detected and recognized with this technology; another supplier of buildings, Microsoft Building Footprints, utilized a similar approach. “Microsoft had all this satellite imagery,” Prioleau noted. “They applied Artificial Intelligence to it. The Artificial Intelligence looks at the pixels in the imagery and says that’s a road. That’s a field. These pixels are a building.”
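
Conceptually, that per-pixel labeling can be pictured with a toy example: a model assigns each pixel a class such as field, road, or building, and the contiguous building pixels become a footprint mask. The sketch below fakes the model output with a hand-written array; it is only an illustration, not the Microsoft or Google pipeline.

```python
import numpy as np

# Toy stand-in for a segmentation model's per-pixel class predictions over a
# small image patch: 0 = field, 1 = road, 2 = building. A real pipeline would
# get this grid from a trained model applied to satellite imagery.
CLASSES = {0: "field", 1: "road", 2: "building"}
predictions = np.array([
    [0, 0, 1, 0, 0],
    [0, 2, 1, 2, 2],
    [0, 2, 1, 2, 2],
    [0, 0, 1, 0, 0],
])

# A building mask is simply the set of pixels labeled "building"; vectorizing
# that mask into a polygon is what yields a footprint.
building_mask = predictions == 2
print("building pixels:", int(building_mask.sum()))
```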

These machine learning applications require detecting objects and recognizing them as the different features Prioleau enumerated. Other sources of data contained in Overture Maps’ latest dataset include maps of buildings that governments have made available, as well as maps ‘crowdsourced’ by individuals. For the buildings obtained from the satellite imagery that Microsoft and Google had, respectively, “Machine learning and Artificial Intelligence automatically created building footprints,” Prioleau said.

De-Duplication and Data Currency

Implementing data quality on these and the other sources is imperative for a bevy of reasons. Obviously, some of the buildings from these sources could’ve been the same, requiring de-duplication. In other instances, the data may have been unreliable or untrustworthy, particularly data contributed by individuals mapping their neighborhoods. Data currency is another factor, as buildings and objects could have changed since they were last mapped. “So, what we did is took all these sources, merged them, and then what you have to do is de-duplicate them,” Prioleau explained. “Because, it turns out you mapped the buildings in your city’s database that Microsoft also captured. So, we had to look at those and say, okay, who do we trust the most?”

Computer vision is integral to determining duplicate entities of buildings. “A building footprint looks like a little box,” Prioleau commented. “If the building’s a rectangle, it looks like a little square. So what you’ve got, let’s say in a case where all four datasets have that, is you have a variety of squares that kind of overlap. They’re not accurate enough where they completely match up, but the algorithms look at that and discern that all four of those representations of a building are the same building.”
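
One common way to formalize “all four of those overlapping squares are the same building” is intersection-over-union (IoU) on the footprint polygons. The sketch below uses the Shapely library and an invented 0.5 threshold purely as an illustration of the idea, not as Overture’s actual matching rule.

```python
from shapely.geometry import Polygon

def same_building(a: Polygon, b: Polygon, threshold: float = 0.5) -> bool:
    """Treat two footprints as the same building if their overlap (IoU) is high.
    The 0.5 threshold is an illustrative choice, not Overture's parameter."""
    intersection = a.intersection(b).area
    union = a.union(b).area
    return union > 0 and intersection / union >= threshold

# Two slightly offset rectangles from different sources describing one building.
footprint_a = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])
footprint_b = Polygon([(1, 0.5), (11, 0.5), (11, 6.5), (1, 6.5)])
print(same_building(footprint_a, footprint_b))  # True: the boxes mostly overlap
```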

Ranking and More

The de-duplication step is influenced by what Prioleau termed a probabilistic calculation for determining that specific images are of the same building. In this case, or others in which different sources have mapped the same building, Overture Maps is responsible for selecting the best or most accurate image—which also entails data quality. “It turns out we trust crowdsourced first, government second, Google third, and Microsoft fourth,” Prioleau commented. “That’s just the priority we did. That’s just based on generic metrics of the quality of the data.”
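
That source-priority rule is straightforward to express in code. The sketch below hard-codes the ordering Prioleau described (crowdsourced, then government, then Google, then Microsoft); the record structure is invented for the example.

```python
# Pick one representative per duplicate cluster using the source-trust order
# Prioleau described. The record fields are invented for this illustration.
TRUST_ORDER = ["crowdsourced", "government", "google", "microsoft"]

def pick_most_trusted(cluster):
    """Return the footprint from the most trusted source in a duplicate cluster."""
    return min(cluster, key=lambda record: TRUST_ORDER.index(record["source"]))

cluster = [
    {"source": "microsoft", "id": "ms-123"},
    {"source": "government", "id": "gov-77"},
    {"source": "google", "id": "goog-9"},
]
print(pick_most_trusted(cluster))  # keeps the government footprint
```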

However, there was still a review of the buildings on an individual basis, involving a ranking of the duplicate results, to determine which ones would actually be made publicly available via Overture Maps. “Once you’ve decided all those buildings are the same building, you choose the one that you judge to be the highest quality, the highest rank,” Prioleau mentioned. “Then you collapse them all into one building and assign it a stable identifier.”
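
Collapsing a cluster into the top-ranked footprint and attaching a stable identifier might look like the sketch below; the uuid4-based identifier and the record fields are illustrative placeholders, not the identifier scheme Overture actually uses.

```python
import uuid

def collapse_cluster(cluster, chosen):
    """Collapse duplicate footprints into one record with a stable identifier.
    uuid4 is only an illustrative stand-in for a real identifier scheme."""
    return {
        "id": str(uuid.uuid4()),                               # the stable identifier
        "geometry": chosen["geometry"],
        "source": chosen["source"],
        "merged_from": sorted(r["source"] for r in cluster),   # provenance of duplicates
    }

cluster = [
    {"source": "google", "geometry": "POLYGON((...))"},
    {"source": "crowdsourced", "geometry": "POLYGON((...))"},
]
building = collapse_cluster(cluster, chosen=cluster[1])  # crowdsourced ranked highest
print(building["id"], building["merged_from"])
```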

Ongoing Development 

There’s no paucity of headlines detailing the considerable gains natural language technologies have made of late. Nonetheless, computer vision remains an extremely valuable facet of advanced machine learning for the enterprise. Its utility for data quality is evident in the Overture Maps use case. This technology can produce similar boons for other facets of the ever-shifting data ecosystem.

About the Author

Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.
