Round 1: Completed

ImageCLEF 2019 VQA-Med

Note: Do not forget to read the Rules section on this page.

Motivation

With the increasing interest in artificial intelligence (AI) to support clinical decision making and improve patient engagement, opportunities to develop and apply algorithms for automated medical image interpretation are currently being explored. Patients can now access structured and unstructured data related to their health via patient portals, and this access motivates the need to help them better understand their conditions in light of the data available to them, including medical images.

The clinicians’ confidence in interpreting complex medical images can be significantly enhanced by a “second opinion” provided by an automated system. In addition, patients may be interested in the morphology/physiology and disease status of anatomical structures around a lesion that has already been well characterized by their healthcare providers, and they may not be willing to pay for a separate office or hospital visit just to address such questions. Although patients often turn to search engines (e.g., Google) to disambiguate complex terms or obtain answers to confusing aspects of a medical image, search results may be nonspecific, erroneous, misleading, or overwhelming in the sheer volume of information returned.

Challenge description

Visual Question Answering is an exciting problem that combines natural language processing and computer vision techniques. Inspired by the recent success of visual question answering in the general domain, we conducted a pilot task in ImageCLEF 2018 focused on visual question answering in the medical domain (VQA-Med 2018). Building on the success of the inaugural edition and the strong interest from both the computer vision and medical informatics communities, we are continuing the task this year with an enlarged, carefully curated dataset. As in last year's task, given a medical image accompanied by a clinically relevant question, participating systems are tasked with answering the question based on the visual image content.
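
For orientation only (this is not the organizers' reference method), a common baseline for such a task encodes the image with a convolutional network, encodes the question with a recurrent text encoder, fuses the two representations, and classifies over a fixed answer vocabulary. A minimal sketch in PyTorch, with all module choices and dimensions assumed purely for illustration:

    # Minimal VQA baseline sketch (illustrative only; architecture and sizes are assumptions).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class SimpleVQABaseline(nn.Module):
        def __init__(self, vocab_size, num_answers, embed_dim=300, hidden_dim=512):
            super().__init__()
            # Image encoder: a ResNet-18 with its final classification layer removed.
            cnn = models.resnet18()
            self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])  # -> (B, 512, 1, 1)
            # Question encoder: word embeddings followed by an LSTM.
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            # Fusion by concatenation, then classification over a fixed answer vocabulary.
            self.classifier = nn.Sequential(
                nn.Linear(512 + hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_answers),
            )

        def forward(self, images, question_tokens):
            img_feat = self.image_encoder(images).flatten(1)           # (B, 512)
            _, (h_n, _) = self.lstm(self.embedding(question_tokens))   # h_n: (1, B, hidden_dim)
            q_feat = h_n.squeeze(0)                                    # (B, hidden_dim)
            return self.classifier(torch.cat([img_feat, q_feat], dim=1))

Treating the answer as a class label works best for the short, categorical answers described below (modality, plane, organ system); free-text abnormality answers may call for a generative decoder instead.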

Data

The datasets include a training set of 3,200 medical images with 12,792 question-answer (QA) pairs, a validation set of 500 medical images with 2,000 QA pairs, and a test set of 500 medical images with 500 questions. To produce a focused set of questions for a meaningful task evaluation, the questions were generated in four categories: Modality, Plane, Organ System, and Abnormality. Please see the readme file in the crowdAI dataset section for more detailed information.
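
As a purely illustrative sketch, assuming the QA pairs are distributed as '|'-delimited text lines of the form image_id|question|answer (the dataset readme mentioned above is authoritative for the actual format and filenames), the training split could be loaded as follows:

    # Illustrative loader for the QA pairs; the '|'-delimited layout and the filename
    # are assumptions -- consult the dataset readme on crowdAI for the real format.
    import csv
    from collections import Counter

    def load_qa_pairs(path):
        """Read (image_id, question, answer) triples from a delimited text file."""
        pairs = []
        with open(path, encoding="utf-8") as f:
            for row in csv.reader(f, delimiter="|"):
                if len(row) == 3:
                    image_id, question, answer = (field.strip() for field in row)
                    pairs.append((image_id, question, answer))
        return pairs

    if __name__ == "__main__":
        train_pairs = load_qa_pairs("train_qa_pairs.txt")   # hypothetical filename
        print(f"{len(train_pairs)} QA pairs loaded")        # expected: 12,792 for training
        # Rough look at question phrasing by first word -- a heuristic glance,
        # not the official Modality/Plane/Organ System/Abnormality category labels.
        print(Counter(q.split()[0].lower() for _, q, _ in train_pairs).most_common(10))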

Submission instructions


As soon as submissions are open, you will find a “Create Submission” button on this page (just next to the tabs).


Further instructions on the submission format will be published soon.

Citations

Information will be posted after the challenge ends.

Evaluation criteria

More on the evaluation criteria will be published soon.

Resources

Contact us

We strongly encourage you to use the public channels mentioned above for communication between participants and organizers. In exceptional cases, if there are queries or comments that you would like to raise through a private communication channel, you can send us an email at:

  • Asma Ben Abacha <asma.benabacha(at)nih.gov>
  • Sadid A. Hasan <sadid.hasan(at)philips.com>
  • Vivek Datla <vivek.datla(at)philips.com>
  • Joey Liu <joey.liu(at)philips.com>
  • Dina Demner-Fushman <ddemner(at)mail.nih.gov>
  • Henning Müller <henning.mueller(at)hevs.ch>

More information

Prizes

ImageCLEF 2019 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org). Selected contributions from the participants will be invited for publication the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

Datasets License

Participants

Leaderboard

Rank  Participant   Score
01    Unknown User  0.624
02    Unknown User  0.620
02    Unknown User  0.620
04    Unknown User  0.616
05    Unknown User  0.616