Deep Learning on Aerial Imagery: What does it look like on a map?

Session details
Experience level: Intermediate
The pace at which satellite and aerial imagery is being made available raises the question: how do we get value out of this data when there are so many images that no team of humans could ever look at them all? Increasingly, computer vision and deep learning are being used to make sense of this vast source of data, both to create easier-to-consume derivative products and to bring human attention to images that algorithms flag as interesting. A key task in computer vision is semantic segmentation, which attempts to simultaneously answer the questions of what is in an image and where it is located. Deep learning has proven well suited to this task, and seeing the results on an interactive map can give a great sense of where the algorithms get it right, where they get it wrong, and where they amusingly have a tough time deciding (e.g., a large food truck: is it a car or a building?).

In this talk, I will describe how a research team at Azavea developed methods to train neural networks to perform semantic segmentation of high-resolution drone imagery over Potsdam, Germany, as part of a machine learning challenge run by ISPRS. I'll also show how we used GeoTrellis, a LocationTech project, to compare the results of different neural network architectures on a map and gain insights into how the networks performed.
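To make "what and where" concrete: a semantic segmentation network outputs a class probability vector for every pixel, so taking the argmax per pixel yields a label map the same size as the input tile. The sketch below is a minimal, hypothetical fully convolutional network in Keras; the layer sizes, tile size, and the build_fcn helper are illustrative assumptions, not the architecture the Azavea team used, though the six classes match the ISPRS Potsdam label set.

```python
# Hypothetical sketch (not the speakers' actual code): a minimal fully
# convolutional network for per-pixel classification of RGB image tiles,
# in the spirit of semantic segmentation on the ISPRS Potsdam dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# ISPRS Potsdam classes: impervious surface, building, low vegetation,
# tree, car, clutter/background.
NUM_CLASSES = 6

def build_fcn(input_shape=(256, 256, 3), num_classes=NUM_CLASSES):
    inputs = keras.Input(shape=input_shape)
    # Downsampling path: learn increasingly abstract features.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    # Upsampling path: recover spatial resolution for per-pixel output.
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    # One softmax probability vector per pixel: "what" and "where" at once.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return keras.Model(inputs, outputs)

model = build_fcn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# A random array stands in for a real 256x256 RGB drone-image chip.
tile = np.random.rand(1, 256, 256, 3).astype("float32")
pred = model.predict(tile)          # shape: (1, 256, 256, NUM_CLASSES)
label_map = pred.argmax(axis=-1)    # per-pixel class IDs, ready to render
```

Because each tile of drone imagery is georeferenced, a per-tile label map like this can be written back out as a raster and rendered on a web map, which is the kind of comparison a tool such as GeoTrellis makes possible.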
Schedule info
Session Time Slot(s): Wednesday, May 24, 2017 - 10:30 to 11:05
