OpenEarthMap

Overview

OpenEarthMap Land Cover Mapping Few-Shot Challenge
Co-organized with the L3D-IVU CVPR 2024 Workshop


Submission of challenge papers is now open.
The challenge development phase (1st round) is now open!

About the Challenge

OpenEarthMap is a remote sensing (RS) image semantic segmentation benchmark dataset consisting of aerial and satellite images covering 97 regions from 44 countries across 6 continents, at a spatial resolution of 0.25–0.5 m ground sampling distance, for global high-resolution land cover mapping. It advances geographical diversity and annotation quality, enabling models to generalize worldwide. This challenge extends the original RS semantic segmentation task of the OpenEarthMap benchmark to a generalized few-shot semantic segmentation (GFSS) task in RS image understanding.
The challenge aims to evaluate and benchmark learning methods for few-shot semantic segmentation on the OpenEarthMap dataset to promote research in AI for social good. The motivation is to enable researchers to develop few-shot learning algorithms for high-resolution RS image semantic segmentation, which is a fundamental problem in various applications of RS image understanding, such as disaster response, urban planning, and natural resource management.
The challenge is part of the 3rd Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU), held in conjunction with the CVPR 2024 Conference. The scientific papers of the best submissions will be presented orally at the 3rd L3D-IVU Workshop @ CVPR 2024 and will also be published in the CVPR 2024 Workshops Proceedings.


Competition Phases

Phase 1 (Development Phase): Participants are provided with the training and validation portions of the OpenEarthMap few-shot dataset to train and validate their algorithms (see the dataset section for details of the training and validation sets). Participants can submit results on the validation set to the Codalab competition submission portal to get feedback on performance. The best submission from each account is displayed on the leaderboard. In parallel, participants must submit a challenge paper describing their proposed method to be eligible to enter Phase 2.
Phase 2 (Evaluation Phase): This is the final phase. Participants receive the testset of the OpenEarthMap few-shot dataset (see the dataset section for details) and submit their results on it to the Codalab competition submission portal within six days of the testset's release. After evaluation of the results, the top winners are announced.
Note that the top winners of this challenge are determined not only by their performance on the leaderboard but also by the novelty of the proposed method described in the manuscripts submitted in Phase 1. The manuscripts of the top winners will be included in the CVPR 2024 Workshops Proceedings.


Important Dates

All deadlines are strict; no extensions will be given (11:59 pm Pacific Time).

  • Feb 05, 2024: Opening challenge 1st round (development phase).
  • Mar 01, 2024: Opening paper submission.
  • Mar 22, 2024: Paper submission deadline.
  • Mar 23, 2024: Opening challenge 2nd round (evaluation phase).
  • Mar 29, 2024: Challenge submission deadline.
  • Apr 07, 2024: Paper notification to authors.
  • Apr 14, 2024: Camera-ready deadline.
  • Apr 21, 2024: Challenge winner announcement.

The L3D-IVU Workshop is on Tuesday, June 18, 2024.


Task & Metrics

The main task of the OpenEarthMap Few-Shot Challenge is 5-shot multi-class semantic segmentation, i.e., a generalized few-shot semantic segmentation (GFSS) task, with 4 novel classes and 7 base classes. Given a support set of images with their labels, participants predict segmentation maps of all images in a given query set (the labels of the query-set images are withheld and used for evaluation).
The support set consists of 20 image-label pairs: five examples for each of the 4 novel classes. The labels of the support-set images do not contain any of the base classes, and each set of five examples contains only one novel class (i.e., one novel class per set of five examples). The labels of the query-set images, however, contain both base classes and novel classes. Below are examples of novel classes in the support_set (first two columns) and base classes + novel classes in the query_set (last two columns).
[Figure: example images and labels from the few-shot dataset]
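The support-set layout described above can be sketched as a small data structure (a minimal illustration; the class names and file names below are hypothetical placeholders, only the counts come from the task description):

```python
# Hypothetical layout of the 20-image support set: five image-label
# pairs per novel class, each label containing only that one class.
NOVEL_CLASSES = ["novel_1", "novel_2", "novel_3", "novel_4"]  # placeholder names

support_set = {
    cls: [f"{cls}_example_{i}.tif" for i in range(1, 6)]  # 5 shots per class
    for cls in NOVEL_CLASSES
}

total_pairs = sum(len(examples) for examples in support_set.values())
print(total_pairs)  # -> 20
```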
In this challenge, the GFSS task proceeds in two phases: the development phase and the evaluation phase. In the development phase, participants are given training data for pre-training their backbone networks. They use the training data and a support set of 20 image-label pairs to predict the segmentation maps of 30 images in a given query set, which contains both base classes and novel classes. Participants submit their results to get performance feedback. Note: participants must submit their challenge paper to be eligible to enter the evaluation phase.
In the evaluation phase, participants use the training data and a different support set of 20 image-label pairs to predict the segmentation maps of 80 images in a given query set, which also contains both base classes and novel classes. Note that both the support set and the query set of the development phase differ from those of the evaluation phase: the base classes are the same, but the novel classes are different. Participants submit their results for final evaluation, and the best submission from each submission account is displayed on the leaderboard. See the challenge rules for the submission format.

For evaluation, we use the mean intersection-over-union (mIoU), the average of the IoUs of all target classes. Note that the mIoU is computed over the target classes only, excluding the background. Following the GFSS literature, we use the following three metrics:

  • base mIoU: the average of the IoUs of all target base classes in the query set.
  • novel mIoU: the average of the IoUs of all target novel classes in the query set.
  • average base-novel mIoU: the average of the base mIoU and the novel mIoU.

However, in this challenge more weight is placed on the novel mIoU. Thus, instead of using the average base-novel mIoU for performance evaluation and ranking, we use a weighted sum of base mIoU and novel mIoU, computed with weights of 0.4 and 0.6 for the base classes and the novel classes, respectively (0.4 * base mIoU + 0.6 * novel mIoU). The weights are derived from the state-of-the-art results presented in the GFSS baseline adopted in this challenge. Note that the top winners are determined not only by their performance on the leaderboard but also by the novelty of the proposed method described in their submitted manuscripts.
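The ranking score above can be computed from per-class IoUs as follows (a minimal sketch; the IoU values shown are hypothetical, only the 0.4/0.6 weighting and the 7/4 class counts come from the challenge description):

```python
def miou(ious):
    """Mean IoU: the average over a list of per-class IoU values."""
    return sum(ious) / len(ious)

def ranking_score(base_ious, novel_ious, w_base=0.4, w_novel=0.6):
    """Weighted sum of base mIoU and novel mIoU used for ranking."""
    return w_base * miou(base_ious) + w_novel * miou(novel_ious)

# Hypothetical per-class IoUs for the 7 base and 4 novel classes
base_ious = [0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50]   # base mIoU = 0.65
novel_ious = [0.40, 0.35, 0.30, 0.25]                     # novel mIoU = 0.325

print(ranking_score(base_ious, novel_ious))  # 0.4*0.65 + 0.6*0.325 = 0.455
```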


Rules of the Game

Participants must submit a single compressed `.zip` file containing the predicted segmentation maps of all images in a given query set to the Codalab competition submission portal. Each predicted segmentation map must be an `xxxx.png` file, where `xxxx` is the name of the corresponding image in the query set. For example, for the query image `accra_2.tif`, the predicted segmentation map should be named `accra_2.png`. Any other submission format will not be accepted.

To ensure fair evaluation and reproducible results, the following rules also apply in this challenge competition:

  • Submissions must not use any remote sensing dataset apart from the one distributed for this challenge.
  • All submissions are also required to submit a challenge paper describing their proposed methods.
  • Each team can have only one submission account.
  • A team cannot have more than five members. A member can only be part of one team.
  • Each team can make only 10 submissions per day in the development phase.
  • Each team can make a total of 10 submissions in the evaluation phase.
  • The best submission of a team in the evaluation phase is selected for the final evaluation and used to rank the team.
  • The top winners are determined by both the performance on the leaderboard and the novelty of their proposed method.
  • The top winners will be asked to submit their codes for cross-checking their results.
  • The top winners' submissions must exceed the baseline score of the challenge to be eligible for the challenge prize.
  • The organizers of the challenge are not allowed to participate.


The Dataset

The OpenEarthMap few-shot learning challenge dataset consists of 408 samples drawn from the original OpenEarthMap benchmark dataset for RS image semantic segmentation. The challenge dataset extends the original 8 semantic classes of the OpenEarthMap benchmark to 15 classes, which are split 7:4:4 into disjoint train_base_class, val_novel_class, and test_novel_class sets, respectively (i.e., train_base_class ∩ val_novel_class ∩ test_novel_class = ∅). See the challenge task for the purpose of each class split and the number of classes in each split.
The 408 samples are split into a trainset of 258, a valset of 50, and a testset of 100. The trainset is for pre-training a backbone network; it contains only the images and labels of the train_base_class split. The valset and the testset each consist of a support set and a query set, and contain the images and labels of the val_novel_class and the test_novel_class splits, respectively. A detailed description of the challenge dataset can be found here, where the dataset can also be downloaded. The README in the baseline code also explains how the challenge dataset can be used with the baseline model.


Baseline Model

A GFSS framework called distilled information maximization (DIaM), with a PSPNet architecture and an EfficientNet-B4 encoder from the PyTorch segmentation models library, is provided as the baseline model. The baseline code can be used as starter code for the challenge submission; to run it, follow the README instructions presented here. After running the code, an output folder `results` is created, containing a `preds` folder with the model's predicted segmentation maps and a `targets` folder with the corresponding targets. Per the rules above, only the `preds` folder, which contains the predicted segmentation maps in `.png` format, is required for the submission. Please feel free to contact the challenge organizers with any questions regarding the baseline code.


Awards & Prizes

Three (3) teams will be declared as winners: the 1st, 2nd, and 3rd ranked teams.
The winning teams will have the opportunity to present their papers orally at the 3rd L3D-IVU Workshop @ CVPR 2024 Conference.
The papers of the winning teams will be published in the CVPR 2024 Workshops Proceedings. Note that, to be included in the proceedings, a paper must be full-length (5–8 pages, excluding references) and not published at CVPR 2024.
The authors of the winning teams will be awarded certificates on the day of the 3rd L3D-IVU Workshop @ CVPR 2024 Conference.
The 1st winning team will also receive a prize of 1000 USD.


Challenge Paper Submission

As part of the challenge evaluation, participants are required to submit a 5–8 page paper (excluding references) in parallel with the challenge submissions during the 1st round. The challenge winners are determined both by performance on the leaderboard and by the novelty of the proposed method as detailed in the submitted manuscripts. Each manuscript should describe the problem addressed in the challenge (i.e., the generalized few-shot semantic segmentation task in RS image understanding), the proposed method, and the experimental results.
The papers will be peer-reviewed under a single-blind policy. The accepted papers of the top winners will be published in the CVPR 2024 Workshops Proceedings. Note that a paper must be full-length (5–8 pages, excluding references) and not published at CVPR 2024. The accepted papers will also be presented orally at the 3rd L3D-IVU Workshop @ CVPR 2024 Conference.

Submission guidelines:

  • Manuscripts are limited to a maximum of 8 pages (excluding references).
  • Manuscripts must conform to a single-blind review policy.
  • Manuscripts should follow the CVPR 2024 paper style. Download a modified version from here for the purpose of this challenge.
  • Authors must include the `Codalab account` used for the final phase of the challenge submissions.
  • Supplementary material is not allowed.
  • Submitted manuscripts will be rejected without review if they do not have accompanying challenge submissions, exceed the page limit, are not in the provided CVPR 2024 paper style, or violate the single-blind policy.

The key dates of the challenge paper submission are as follows:

  • Opening paper submission: 1st March 2024.
  • Paper submission deadline: 22nd March 2024, 11:59 pm Pacific Time.
  • Notification to authors: 7th April 2024.
  • Camera-ready deadline: 14th April 2024, 11:59 pm Pacific Time.

Manuscripts must be submitted online via the CMT submission system. To submit, select "ChallengeL3DIVUCVPR24" under "Create new submission" menu at https://cmt3.research.microsoft.com/L3DIVUCVPR2024/.


Organizers


Winners/Accepted Papers

TBA

 


Citation

For any scientific publication using this data, the following paper should be cited:

@InProceedings{Xia_2023_WACV,
    author    = {Xia, Junshi and Yokoya, Naoto and Adriano, Bruno and Broni-Bediako, Clifford},
    title     = {OpenEarthMap: A Benchmark Dataset for Global High-Resolution Land Cover Mapping},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {6254-6264}
}