Upscale multispectral satellite images

By DEEP-Hybrid-DataCloud Consortium

Keywords: tensorflow, docker, deep learning, trainable, inference, pre-trained, api-v1

License: Apache 2.0


With the latest missions launched by the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA), equipped with the latest multi-spectral sensor technologies, we face an unprecedented amount of data with spatial and temporal resolutions never reached before. Exploring the potential of this data with state-of-the-art AI techniques such as Deep Learning could change the way we think about and protect our planet's resources.

This Docker container provides a plug-and-play tool to perform super-resolution on satellite imagery. It uses Deep Learning to provide a better-performing alternative to classical pansharpening (more details in [1]).

Minimum requirements

Working with satellite imagery is a memory-intensive task, so 16 GB of RAM is an absolute minimum. If you want to work with full images (rather than small patches), you will probably need on the order of 50 GB. If these memory requirements are not met, TensorFlow may fail with confusing shape errors.

The PREDICT method expects a compressed file (zip or tar) containing a complete satellite tile. Tiles differ for each satellite type and can be downloaded from the respective satellite's official repositories. Nevertheless, we provide some samples for each satellite so that users can test the module. The output is a GeoTIFF file with the super-resolved region.
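
For example, once the service is running (see "Run locally on your computer" below), a prediction request could be sent with curl. The /models/satsr/predict path and the data form field shown here are assumptions based on the DEEPaaS API V1 conventions, and the tile filename is only a placeholder; check the interactive API documentation at http://127.0.0.1:5000/ for the exact endpoint and parameters exposed by your version:

$ # Send a compressed satellite tile to the PREDICT endpoint (paths and names are illustrative)
$ curl -X POST "http://127.0.0.1:5000/models/satsr/predict" \
       -H "accept: application/json" \
       -F "data=@sentinel2_tile.zip"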

We currently support super-resolution for the following satellites:

Sentinel 2

Landsat 8

VIIRS

MODIS

More information on the supported satellites and processing levels, along with some demo images of super-resolution performed on non-training data, can be found here.

If you want to perform super-resolution on another satellite, go to the training section to see how you can easily add support for additional satellites. We are happy to accept PRs!

References

[1]: Lanaras, C., Bioucas-Dias, J., Galliani, S., Baltsavias, E., & Schindler, K. (2018). Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS Journal of Photogrammetry and Remote Sensing, 146, 305-319.

Run locally on your computer

Using Docker

You can run this module directly on your computer, assuming that you have Docker installed, by following these steps:

$ docker pull deephdc/deep-oc-satsr
$ docker run -ti -p 5000:5000 deephdc/deep-oc-satsr
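
If you prefer to keep the service running in the background, Docker's standard detached mode works as well (the container name satsr below is just an illustrative choice):

$ docker run -d --name satsr -p 5000:5000 deephdc/deep-oc-satsr
$ docker logs -f satsr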

Using udocker

If you do not have Docker available or you do not want to install it, you can use udocker within a Python virtualenv:

$ virtualenv udocker
$ source udocker/bin/activate
$ git clone https://github.com/indigo-dc/udocker
$ cd udocker
$ pip install .
$ udocker pull deephdc/deep-oc-satsr
$ udocker create deephdc/deep-oc-satsr
$ udocker run -p 5000:5000  deephdc/deep-oc-satsr

Once running, point your browser to http://127.0.0.1:5000/ and you will see the API documentation, where you can test the module functionality, as well as perform other actions (such as training).
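
As a quick command-line check that the service is up, you can also query the API directly; the /models path below follows the DEEPaaS API V1 convention and may differ in other versions:

$ curl http://127.0.0.1:5000/models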

For more information, refer to the user documentation.