Multi-task learning has been shown to be a promising strategy when models for related tasks can be trained jointly for mutual benefit. In our paper on “Multitask Learning for Human Settlement Extent Regression and Local Climate Zone Classification”, a nice joint venture of machine learning and remote sensing expertise, we have investigated whether the two tasks of human settlement extent prediction and local climate zone classification can be improved with multi-task learning. The paper is available open access here: Paper
Today, our paper on “Detection of Undocumented Building Constructions from Official Geodata Using a Convolutional Neural Network” has been published in MDPI’s Remote Sensing journal as part of the Special Issue Optical Remote Sensing Applications in Urban Areas. It’s a very nice demonstration of applied deep learning in remote sensing for a real-world task. Check out the paper here: Paper
SEN12MS has won the Open Data Impact Award by the Stifterverband, a German industry-sponsored organisation that seeks to address challenges in higher education, science and research! I am deeply honored and very pleased that an Earth observation dataset came out on top. The prize money will be used for further work on EO datasets with a focus on Sentinel data, deep learning, and data fusion. More information about the award can be found on the homepage of the innOsci initiative of the Stifterverband.
Another paper on cloud removal from optical satellite images has been published online (“early access”) today. This time, it’s based on a CycleGAN architecture. And the best thing is, it comes with the open access multi-sensor cloud removal dataset SEN12MS-CR, which is now finally out.
Check out the paper here: Paper
Download the dataset here: SEN12MS-CR dataset
I have accepted the invitation to join the Editorial Board of Frontiers in Remote Sensing as an Associate Editor for the area Data Fusion and Assimilation. Let’s see where this goes - the open review concept they have is certainly interesting!
It’s finally published (in ISPRS Journal) and paginated: Lloyd Hughes’ “opus magnum”, redefining the state-of-the-art in SAR-optical image matching. If you’re interested in this topic, which is highly relevant for image co-registration, stereogrammetry etc., check out the paper here: Paper
From this day on I am part of the Department of Geoinformatics of the Munich University of Applied Sciences as a full professor for applied geodesy and remote sensing. I am very happy and proud to have received this chance and will do my best to live up to expectations. My main task will be teaching remote sensing-related and basic classes (e.g. mathematics). In addition, I will now finally start my own research group on remote sensing, Earth observation, data fusion, and machine learning applied to these fields. Really looking forward to this!
Today, Lloyd Hughes defended his PhD thesis on “Deep Learning for Matching High-Resolution SAR and Optical Imagery”. He is the first PhD candidate for whom I served as main supervisor and first examiner. Thus, his great and innovative contribution to this non-trivial problem makes me particularly proud. Congratulations, Dr. Hughes!
Today, Chunping Qiu defended her PhD thesis on “Deep Learning for Multi-Scale Mapping of Urban Land Cover from Space”, and as her daily supervisor, I was part of the examination committee. Chunping really did an awesome job, has published numerous papers, and contributed to a big leap forward in urban mapping with modern remote sensing data and deep learning-based analysis techniques. Congratulations, Dr. Qiu!
I am extremely happy to finally announce the publication of our pioneering paper on cloud removal from Sentinel-2 imagery, called “Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion”. Unfortunately, the journal publication process took quite some time, so other groups have published similar approaches in the meantime. The uniqueness of our work is that we have trained and tested on globally sampled data with completely spatially disjoint training and test sets. So, if you’re interested in a study paving the way for a generically applicable method for the declouding of Sentinel-2 imagery, check out the paper here: Paper
From today on, our paper “Multi-level Feature Fusion-based CNN for Local Climate Zone Classification from Sentinel-2 Images: Benchmark Results on the So2Sat LCZ42 Dataset” is available online via the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. The paper makes several contributions: First, we propose to standardize the experimental setup of local climate zone (LCZ) classification approaches using the So2Sat LCZ42 dataset with pre-defined data splits to ensure actual comparability of methods. In addition, we propose a simple and lightweight model, Sen2LCZ-Net, for LCZ mapping, which is shown to achieve state-of-the-art results while requiring less computational effort than standard network architectures. The paper can be accessed here: Paper
I am happy to announce that the data used during IEEE-GRSS DFC2020 is now publicly available - with disclosed locations, geotiff information and high-resolution land cover annotations! You can get the data via standard download or AWS S3 at the IEEE Dataport page of the contest. Direct link (which will start the download of a 10 GB zip file): Download
Freshly available online in an open-access manner: My new article “Potential of Large-Scale Inland Water Body Mapping from Sentinel-1/2 Data on the Example of Bavaria’s Lakes and Rivers”, published in PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science. Nothing fancy, just a small study on how to use Google Earth Engine and Sentinel-1/Sentinel-2 data to map inland water bodies (i.e. lakes and rivers) fully automatically for a whole state or country: Paper
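The paper’s Google Earth Engine pipeline is not reproduced here, but a common building block of automatic water mapping from Sentinel-2 is thresholding the Normalized Difference Water Index (NDWI), computed from the green and near-infrared bands. The following is a minimal NumPy sketch on synthetic reflectance values; the function name and the zero threshold are illustrative choices, not taken from the paper:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Mark pixels as water where NDWI = (green - nir) / (green + nir)
    exceeds the threshold. Inputs are reflectance arrays of equal shape."""
    green = green.astype(float)
    nir = nir.astype(float)
    # Guard against division by zero for pixels with no signal in either band.
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
    return ndwi > threshold

# Synthetic 2x2 scene: water reflects more green than NIR, land the opposite.
green = np.array([[0.30, 0.05],
                  [0.25, 0.06]])
nir = np.array([[0.05, 0.30],
                [0.04, 0.35]])
print(ndwi_water_mask(green, nir))  # left column water, right column land
```

In practice this thresholding would be applied per pixel to cloud-free Sentinel-2 composites, which is exactly the kind of map-wide operation Google Earth Engine parallelizes in the cloud.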
Today, it became official: The Future Lab on Artificial Intelligence in Earth Observation (short: AI4EO) was approved by the German Federal Ministry of Education and Research and will receive just under 5 million euros of funding over the course of the next 3 years. Bringing together scientists from numerous countries, the Lab aims to create a highly visible network of research institutions and industry and to advance the fruitful combination of AI and Earth observation a big step forward. Press release (in German only, unfortunately).
I was appointed Chair of the Working Group “Benchmarking” (BEN) of the IEEE-GRSS Image Analysis and Data Fusion Technical Committee. Together with Seyed Ali Ahmadi from K. N. Toosi University of Technology in Tehran, Iran, I will contribute to the activities of the TC. If you want to contribute to benchmarks and datasets, get in touch!
The IEEE-GRSS Data Fusion Contest 2020 on weakly supervised land cover mapping is over and turned out to be a great success! Far more than 100 international teams have dared to tackle the difficult challenge, and about 40 teams have entered the final phase. Eventually, the winning teams were able to significantly outperform the baseline (i.e. an average accuracy of about 40%): In Track 1 (land cover classification with only low-resolution labels), a consortium from Georgia Tech, Yale, USC and Microsoft Research achieved a stunning 57.49%, while in Track 2 (land cover classification with low-resolution and high-resolution labels) a team from Ohio State University managed to score 61.42%. Among the other top-performing teams, we had participants from Wuhan University, the German Aerospace Center, and Xidian University, showcasing the high international impact of the contest. More info on the contest results can be found here.
Today, our paper “A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks” was published by ISPRS Journal of Photogrammetry and Remote Sensing. It proposes a simple yet effective fully convolutional network architecture, called Sen2HSE, for the fully automatic mapping of human settlement extent. The framework uses existing geodata as training labels and can be applied to extended geographic regions of interest. The paper can be found here: Paper
Today, our paper “Mapping the land cover of Africa at 10 m resolution from multi-source remote sensing data with Google Earth Engine” was published by MDPI Remote Sensing. We have used a land cover mapping scheme based on a simplified version of the local climate zone scheme to map the entire African continent in the cloud. The paper can be found here: Paper
Today, our paper “Fusing Multi-seasonal Sentinel-2 Imagery for Urban Land Cover Classification with Residual Convolutional Neural Networks” was published online by the IEEE Geoscience and Remote Sensing Letters. It investigates different approaches for the fusion of cloud-free Sentinel-2 images acquired in different seasons and how this fusion can help to improve the final classification accuracy for local climate zone mapping. The paper can be found here: Paper
Today, the IEEE-GRSS Data Fusion Contest 2020 has officially started. Its goal is to foster the development of machine learning methods for global land cover mapping from weak supervision and noisy samples. And the best thing is that it uses SEN12MS as training dataset! More info can be found on the web page of the IEEE-GRSS, or the IEEE DataPort, which hosts validation and test data. The corresponding article in IEEE Geoscience and Remote Sensing Magazine can be found here: Paper
The Thematic Session “Learning to Predict Land Cover from Bad Examples” proposed by Jan D. Wegner (ETHZ) and me for ISPRS Congress 2020 in Nice was accepted! If you are working on supervised learning methods that can deal with label noise or weak supervision, feel invited to submit a manuscript by 3 February 2020.
Our latest paper “Matching of TerraSAR-X Derived Ground Control Points to Optical Image Patches Using Deep Learning” was finally published by ISPRS Journal of Photogrammetry and Remote Sensing. The work is the result of a collaboration with Airbus Defence and Space and describes a deep learning approach for SAR-optical image matching. The paper can be found here: Paper
SEN12MS is now officially out! While our dataset, which is currently the largest available curated dataset for the development of machine learning and data fusion techniques in the remote sensing context, has already been out in the form of a pre-print since June 2019, today it was finally officially released through my oral presentation at the Munich Remote Sensing Symposium (MRSS). For more info, please see the following resources:
Today, Hossein Bagheri successfully defended his PhD thesis on “Fusion of Multi-sensor-derived Data for the 3D Reconstruction of Urban Scenes”. He was the first PhD student I have supervised from start to finish, and it was my first time serving as a reviewer on a PhD examination committee.
Our research proposal “AI4Sentinels – Deep Learning for the Enrichment of Sentinel Satellite Images” was accepted and will be funded by the German Federal Ministry for Economic Affairs and Energy for three years, starting from 1 August 2019. In the project, we will develop image-to-image translation algorithms based on deep learning and SAR-optical data fusion, which mitigate the cloud cover problem in Sentinel-2 images.