Official round: Completed

LifeCLEF 2020 Bird - Monophone

Recognizing bird sounds in monophone soundscapes

Update: We opened the official evaluation round for post-deadline submissions. You are free to use the evaluation results as benchmark scores in a publication. Please refer to the discussion forum if you have any questions or remarks. For this post-deadline round, the working note paper deadline does not apply.

Note: Do not forget to read the Rules section on this page. Pressing the red Participate button leads you to a page where you have to agree with those rules. You will not be able to submit any results before agreeing with the rules.

Note: Before trying to submit results, read the Submission instructions section on this page.

Challenge description

Automated recognition of bird sounds in continuous soundscapes can be a transformative tool for ornithologists, conservation biologists and citizen scientists alike. Long-term monitoring of ecosystem health relies on robust detection of indicator species – and birds are particularly well suited for that use case. However, designing a reliable detection system is not a trivial task, and the shift in acoustic domains between high-quality example recordings (typically used for training a classifier) and low-quality soundscapes poses a significant challenge. The 2020 LifeCLEF bird recognition task for monophone recordings focuses on this use case. The goal is to design, train and apply an automated detection system that can reliably recognize bird sounds in diverse soundscape recordings.

Data

Training, validation, and test data are now available; you can find them under the “Resources” tab.

The training data consists of audio recordings of bird species from South America, North America, and Europe. This data is contributed by the Xeno-canto community, which provides more than 70,000 high-quality recordings covering 960 species for this year’s challenge. Each recording is accompanied by metadata containing information on the recording location, date and other high-level descriptions provided by the recordists.

The test data consists of 153 soundscapes recorded in Peru, the USA, and Germany. Each soundscape is ten minutes long and contains a large number of (overlapping) bird vocalizations.

A representative validation dataset will be provided to locally test the system performance before submitting. The validation data will also include the official evaluation script used to assess submissions.
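
Since predictions are expected for every 5-second interval of a soundscape (see the Submission instructions below), a typical pipeline first slices each recording into such windows. The following minimal sketch assumes the soundfile library and a hypothetical predict_chunk() classifier; it is an illustration, not part of the official tooling.

    # Minimal sketch (not part of the official kit): split a soundscape into
    # consecutive 5-second windows for per-interval prediction.
    # The file name and predict_chunk() are hypothetical placeholders.
    import soundfile as sf

    CHUNK_SECONDS = 5

    def iter_chunks(path):
        """Yield (start_second, end_second, samples) for each 5-second window."""
        audio, sr = sf.read(path)
        if audio.ndim > 1:                      # mix multi-channel audio down to mono
            audio = audio.mean(axis=1)
        step = CHUNK_SECONDS * sr
        for start in range(0, len(audio), step):
            start_s = start // sr
            yield start_s, start_s + CHUNK_SECONDS, audio[start:start + step]

    # for start_s, end_s, samples in iter_chunks("soundscape_example.wav"):
    #     scores = predict_chunk(samples)       # hypothetical classifier call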

Submission instructions

As soon as the submission is open, you will find a “Create Submission” button on this page (next to the tabs).

Before being allowed to submit your results, you first have to press the red Participate button, which leads you to a page where you have to accept the challenge's rules.

The goal of this challenge is to submit a ranked list of species predictions for each 5-second interval of each soundscape. These lists are limited to a maximum of 10 species per interval and have to include timestamps and confidence values. Each participating group is allowed to submit 10 runs.

Please refer to the "readme.txt" inside the test data zip file for more detailed information on the submission guidelines. Please do not hesitate to contact us if you encounter any difficulties while downloading or processing the data.
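
The authoritative run file format is described in the readme.txt mentioned above. Purely as an illustration of the general idea (keeping at most the 10 highest-scoring species per interval, together with timestamps and confidences), a sketch using a made-up column layout could look like this:

    # Illustration only: the real submission format is defined in the test data readme.txt.
    import numpy as np

    def top_k(scores, species_names, k=10):
        """Return the k highest-scoring (species, confidence) pairs, best first."""
        order = np.argsort(scores)[::-1][:k]
        return [(species_names[i], float(scores[i])) for i in order]

    def write_run(rows, path):
        """rows: iterable of (soundscape_id, start_s, end_s, species, confidence).
        The semicolon-separated layout below is a placeholder, not the official one."""
        with open(path, "w") as f:
            for soundscape_id, start_s, end_s, species, conf in rows:
                f.write(f"{soundscape_id};{start_s};{end_s};{species};{conf:.4f}\n")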

Evaluation criteria

For the sake of consistency and comparability, the metrics will be the same as in the previous edition. Since participants are required to submit ranked species lists, two popular ranking metrics will be employed: sample-wise mean average precision and class-wise mean average precision. We will assess overall system performance across all recording locations as well as individual performance for each site as part of our overview paper.
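
As a rough local sanity check, both metrics can be approximated from a binary ground-truth matrix and a score matrix (intervals by species). The official evaluation script shipped with the validation data remains the authoritative implementation; the sketch below, using scikit-learn, only illustrates the general idea.

    # Hedged sketch of the two ranking metrics; rows are 5-second intervals,
    # columns are species. The official evaluation script is authoritative.
    import numpy as np
    from sklearn.metrics import average_precision_score

    def classwise_map(y_true, y_score):
        """Mean of per-species average precision (species with at least one positive)."""
        aps = [average_precision_score(y_true[:, c], y_score[:, c])
               for c in range(y_true.shape[1]) if y_true[:, c].any()]
        return float(np.mean(aps))

    def samplewise_map(y_true, y_score):
        """Mean of per-interval average precision (intervals with at least one positive)."""
        aps = [average_precision_score(y_true[r], y_score[r])
               for r in range(y_true.shape[0]) if y_true[r].any()]
        return float(np.mean(aps))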

Rules

The 2020 LifeCLEF bird recognition challenge for monophone recordings will feature a set of rules that is somewhat different from previous editions:

  • Only the provided training data (audio and metadata) may be used to train a recognition system (this also excludes pre-trained models)
  • The validation data can be used to test the system locally but may not be used for training (this also applies to the test data)
  • Only single-model performance will be assessed and ensemble approaches are not allowed (condensing multiple models into a single model is allowed)
  • Participants are required to submit working notes that describe the approach

LifeCLEF lab is part of the Conference and Labs of the Evaluation Forum: CLEF 2020. CLEF 2020 consists of independent peer-reviewed workshops on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, and a set of benchmarking activities carried out in various labs designed to test different aspects of mono- and cross-language information retrieval systems. More details about the conference can be found here.

Submitting a working note with a full description of the methods used in each run is mandatory. Any run that cannot be reproduced from its description in the working notes may be removed from the official publication of the results. Working notes are published within CEUR-WS proceedings, resulting in the assignment of an individual DOI (URN) and indexing by many bibliography systems, including DBLP. According to the CEUR-WS policies, a light review of the working notes will be conducted by the LifeCLEF organizing committee to ensure quality. As an illustration, the LifeCLEF 2019 working notes (task overviews and participant working notes) can be found within the CLEF 2019 CEUR-WS proceedings.

Important

Participants of this challenge will automatically be registered at CLEF 2020. In order to be compliant with the CLEF registration requirements, please edit your profile by providing the following additional information:

  • First name
  • Last name
  • Affiliation
  • Address
  • City
  • Country

Regarding the username, please choose a name that represents your team.

This information will not be publicly visible and will be exclusively used to contact you and to send the registration data to CLEF, which is the main organizer of all CLEF labs.

Citations

Information will be posted after the challenge ends.

Prizes

Cloud credit

The winner of each challenge will be offered a cloud credit grant of 5,000 USD as part of Microsoft’s AI for Earth program.

Publication

LifeCLEF 2020 is an evaluation campaign that is being organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org). Selected contributions from the participants will be invited for publication the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

Contact us

Discussion Forum

Alternative channels

We strongly encourage you to use the public channels mentioned above for communication between the participants and the organizers. In extreme cases, if there are any queries or comments that you would like to make using a private communication channel, you can send us an email at:

  • stefan[dot]kahl[at]informatik[dot]tu-chemnitz[dot]de
  • wp[at]xeno-canto[dot]org
  • glotin[at]univ-tln[dot]fr
  • herve[dot]goeau[at]cirad[dot]fr
  • fabian-robert[dot]stoter[at]inria[dot]fr

More information

You can find additional information on the challenge here: https://www.imageclef.org/BirdCLEF2020
