With the abundance of remote sensing satellite imagery now available, a wide range of insights can be derived from it. One such use is determining whether land is used for agricultural or non-agricultural purposes.
In this talk, we’ll be looking at leveraging Sentinel-2 satellite imagery along with OpenStreetMap labels to classify land use as agricultural or non-agricultural. Sentinel-2 data has a 10-meter resolution in the RGB bands and is well-suited for land use classification. Using these two datasets, many different ML tasks can be performed, such as segmenting imagery into two classes (farmland and non-farmland) or the more challenging task of identifying the crop type cultivated on each field.
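To illustrate how OpenStreetMap data can serve as labels, a minimal sketch in plain Python: a patch is tagged agricultural when OSM farmland geometry covers enough of it. The `label_patch` helper and the 0.5 coverage threshold are illustrative assumptions, not details from the talk, and a real pipeline would first rasterize OSM polygons onto the 10-meter Sentinel-2 grid.

```python
# Hypothetical labeling sketch: assumes the fraction of a patch covered
# by OSM farmland polygons has already been computed upstream.

AGRI_THRESHOLD = 0.5  # assumed cutoff; tune against validation data

def label_patch(farmland_fraction: float) -> int:
    """Return 1 (agricultural) if OSM farmland covers enough of the patch,
    else 0 (non-agricultural)."""
    if not 0.0 <= farmland_fraction <= 1.0:
        raise ValueError("coverage must be a fraction in [0, 1]")
    return 1 if farmland_fraction >= AGRI_THRESHOLD else 0

labels = [label_patch(f) for f in (0.9, 0.2, 0.5)]
# labels == [1, 0, 1]
```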
For this talk, we’ll focus on Convolutional Neural Networks (CNNs) built with Apache MXNet to train deep learning models for land use classification. We’ll cover the different deep learning architectures considered for this use case, along with performance metrics for each.
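To make the CNN building block concrete, here is a minimal NumPy sketch of the core operation those architectures stack repeatedly: a valid-mode 2D convolution followed by a ReLU activation. This is an illustration of the operation itself, not the talk's MXNet code.

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in deep learning)
    over a single-channel image, followed by ReLU."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the window under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU keeps only positive responses

# A 3x3 vertical-edge kernel applied to a toy 5x5 "patch"
patch = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, 0.0, -1.0]] * 3)
features = conv2d_relu(patch, edge)  # shape (3, 3)
```

In a real network, MXNet's convolution layers apply many such learned kernels per layer across all input channels, with the weights fitted during training.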
We’ll use streaming pipelines built on Apache Flink for model training and inference. Developers will come away with a better understanding of how to analyze satellite imagery, along with the pros and cons of different deep learning architectures for land use classification.
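The streaming-inference pattern can be sketched framework-agnostically: a source of image patches flows through a map operator that applies the trained model to each element. This plain-Python stand-in only mirrors the shape of that pipeline (Flink's actual DataStream API differs, and `classify` is a hypothetical placeholder for the trained CNN):

```python
def classify(patch):
    """Hypothetical stand-in for the trained CNN: mean pixel value above
    a cutoff counts as agricultural (1), else non-agricultural (0)."""
    mean = sum(patch) / len(patch)
    return 1 if mean > 0.5 else 0

def streaming_inference(patch_stream, model):
    """Apply a model to an unbounded stream of patches one at a time,
    mirroring a map operator in a streaming pipeline."""
    for patch in patch_stream:
        yield model(patch)

predictions = list(streaming_inference([[0.9, 0.8], [0.1, 0.2]], classify))
# predictions == [1, 0]
```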