
Problem Statements

MABe Task 1: Classical Classification
Detect multi-animal behaviors from a large, hand-labeled dataset.

MABe Task 2: Annotation Style Transfer
Adapt behavior detectors to capture individuals' annotation styles.

MABe Task 3: Learning New Behavior
Learn to recognize new behaviors from limited training examples.


πŸ† $600 AWS CREDITS | COMMUNITY CONTRIBUTION PRIZE

⏰ Deadline 10th July, 2021 | $4,800 AWS Credit Prize Pool 


πŸŽ‰  MULTI-AGENT BEHAVIOR CHALLENGE WINNERS

🕵️ Introduction

If we did not need to move, we would have no need of a nervous system; conversely, we cannot fully understand the nervous system without understanding the structure and control of behavior.

For decades, studying behavior meant spending hours staring at a screen, watching video of animals and painstakingly annotating their actions, one video frame at a time. Not only does this "annotation bottleneck" slow down research, but different annotators often converge on different "personal" definitions of the behaviors they study, meaning two labs studying behavior can never be certain they are investigating precisely the same thing.

How can we use machine learning to defeat the annotation bottleneck?

A partial solution has come in the form of computer vision: using pose estimation software like DeepLabCut, scientists can now track the postures of behaving animals, in the lab or in the wild. The remaining challenge is to close the gap between tracking and behavior: given just the output of an automated tracker, can we detect what actions animals are performing? Moreover, can we learn different annotation styles, or learn to detect different behaviors, without hours of training data to rely on?

The goal of this challenge is to determine how close we can come to fully automating the behavior annotation process, and where the biggest roadblocks in that process lie. The challenge is driven by a new, high-quality animal pose dataset, and each challenge task is modeled after real-world problems faced by behavioral neuroscientists.

Automating the annotation process would free scientists to focus on more meaningful work: larger, more ambitious studies could be run, and results from different laboratories could be compared with confidence. Making new annotation tools quick to train would also let scientists study a richer collection of behaviors, opening the door to discoveries that would otherwise have been overlooked.

🧠 Further reading about behavior analysis in neuroscience: To Decode the Brain, Scientists Automate the Study of Behavior.

💾 Dataset

We have assembled a large collection of mouse social behavior videos from our collaborators at Caltech. For each challenge task, teams are given frame-by-frame annotation data and animal pose estimates extracted from these videos, and tasked with predicting annotations from poses on held-out test data. To keep things simple, the raw videos will not be provided.

All videos use a standard resident-intruder assay format, in which a mouse in its home cage is presented with an unfamiliar intruder mouse, and the animals are allowed to freely interact for several minutes under experimenter supervision. Assays are filmed from above, and the poses of both mice are estimated in terms of seven anatomical keypoints using our Mouse Action Recognition System (MARS).
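
As a concrete illustration, the sketch below computes one classic social feature, the per-frame distance between the two animals' noses, from a pose array. The (n_frames, 2 mice, 7 keypoints, 2 coordinates) layout and the nose's keypoint index are assumptions made for this example; the actual file format is documented in the challenge starter kit.

```python
import numpy as np

# Assumed (hypothetical) layout: (n_frames, 2 mice, 7 keypoints, 2 xy).
# Keypoint index 0 is taken to be the nose; this is an assumption, not
# the official keypoint ordering.
NOSE = 0

def nose_to_nose_distance(keypoints: np.ndarray) -> np.ndarray:
    """Per-frame Euclidean distance between the two animals' noses."""
    resident = keypoints[:, 0, NOSE, :]  # (n_frames, 2)
    intruder = keypoints[:, 1, NOSE, :]  # (n_frames, 2)
    return np.linalg.norm(resident - intruder, axis=1)

# Stand-in data: a fake 1000-frame sequence in pixel coordinates.
poses = np.random.rand(1000, 2, 7, 2) * 640
dist = nose_to_nose_distance(poses)  # shape (1000,)
```

Features like this, computed over windows of frames, are the kind of input a behavior classifier can learn from.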

To reflect the fact that behavior video data is cheap to obtain compared to expert annotations, we also provide pose data from several hundred videos that have not been manually annotated for behavior. You are encouraged to explore unsupervised feature learning or behavior clustering on the unlabeled dataset to improve performance on the challenge tasks.
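
As a minimal sketch of that idea, the snippet below fits a PCA embedding to stand-in unlabeled pose frames; PCA is a deliberately simple placeholder, and an autoencoder or contrastive model trained on the same unlabeled pool could be dropped in instead. All shapes here are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical: unlabeled pose sequences stacked into one array of
# flattened frames, 2 mice * 7 keypoints * 2 coordinates = 28 values.
unlabeled_poses = np.random.rand(50_000, 28)

# Learn a low-dimensional frame embedding from the unlabeled pool.
embedder = PCA(n_components=10).fit(unlabeled_poses)

def embed(frames: np.ndarray) -> np.ndarray:
    """Project flattened pose frames into the learned feature space."""
    return embedder.transform(frames)
```

A supervised classifier trained on these embeddings then only has to learn the mapping from feature space to behavior labels, not the features themselves.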

📝 Tasks

Our competition has three tasks:

Task 1 - Classical Classification

Predict bouts of attack, mounting, and close investigation from hand-labeled examples. Training and test sets will be annotated by the same individual, annotator A.
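
One simple baseline consistent with this setup is to give each frame some temporal context and fit an off-the-shelf classifier. In the sketch below, the window radius, the 28-value flattened pose features, and the label coding are all illustrative assumptions, not the official baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windowed_features(frames: np.ndarray, radius: int = 5) -> np.ndarray:
    """Concatenate each frame's features with its temporal neighbors
    (edge-padded), so the classifier sees motion context, not a snapshot."""
    padded = np.pad(frames, ((radius, radius), (0, 0)), mode="edge")
    return np.stack(
        [padded[i : i + 2 * radius + 1].ravel() for i in range(len(frames))]
    )

# Hypothetical training data: flattened poses plus per-frame labels
# (0 = other, 1 = attack, 2 = mount, 3 = close investigation).
X = windowed_features(np.random.rand(2_000, 28))
y = np.random.randint(0, 4, size=2_000)

clf = RandomForestClassifier(n_estimators=200).fit(X, y)
frame_predictions = clf.predict(windowed_features(np.random.rand(500, 28)))
```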

Task 2 - Annotation Style Transfer

Different individuals have different rules for what they consider a behavior, particularly in the case of complex behaviors such as attack. Using the training data from annotator A above, as well as a small number of examples of the same behavior from annotator B, train a classifier that can capture annotator B's "annotation style".
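
One hedged way to approach this, sketched below, is to pretrain an incremental classifier on annotator A's plentiful labels and then continue training on annotator B's small sample, nudging the decision boundary toward B's style. The feature matrices, shapes, and number of adaptation passes are placeholders, and the "log_loss" loss name assumes a recent scikit-learn version.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical features: X_a/y_a from annotator A's large training set,
# X_b/y_b the small sample labeled by annotator B (binary: behavior or not).
X_a, y_a = np.random.rand(10_000, 308), np.random.randint(0, 2, 10_000)
X_b, y_b = np.random.rand(300, 308), np.random.randint(0, 2, 300)

clf = SGDClassifier(loss="log_loss", random_state=0)

# Pretrain on annotator A's plentiful labels...
clf.partial_fit(X_a, y_a, classes=np.array([0, 1]))

# ...then adapt with a few passes over annotator B's small sample.
for _ in range(5):
    clf.partial_fit(X_b, y_b)
```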

Task 3 - Learning New Behavior

To what extent can we take advantage of self-supervision, transfer learning, or few-shot learning techniques to reduce the training data demands of behavior classification? Using the training data from annotator A above, as well as a small number of examples from a new class of behavior X scored by annotator A, train a classifier that can now recognize behavior X.
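
One few-shot-friendly recipe this task invites: learn a representation on plentiful data, freeze it, and fit only a lightweight head on the few examples of behavior X, so the scarce labels are spent on the decision boundary rather than on feature learning. In the sketch below, PCA again stands in for the learned representation, and every array is a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical data: `pool` stands in for plentiful pose frames (e.g.
# annotator A's data); X_few/y_few are the handful of frames scored for
# the new behavior X (1 = behavior X present).
pool = np.random.rand(50_000, 28)
X_few, y_few = np.random.rand(100, 28), np.random.randint(0, 2, 100)

# Frozen representation learned on plentiful data, plus a small head
# fit only on the few labeled examples.
embedder = PCA(n_components=10).fit(pool)
head = LogisticRegression(max_iter=1000).fit(embedder.transform(X_few), y_few)

is_behavior_x = head.predict(embedder.transform(np.random.rand(50, 28)))
```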

πŸ† Prizes

The cash prize pool across the three tasks is $9,000 USD total (sponsored by Amazon and Northwestern).

For each task, the prize pool is as follows; prizes will be awarded for all three tasks:

  • 🥇 1st on the leaderboard: $1,500 USD
  • 🥈 2nd on the leaderboard: $1,000 USD
  • 🥉 3rd on the leaderboard: $500 USD

Additionally, Amazon is sponsoring $10,000 USD total of SageMaker credits! 😄

Please check out this post to see how to claim credits.


Leaderboard

01  benjamin_wild  5.000
02                 6.000
03  Dexter         14.000
04  sungbinchoi    16.000
05  TONY_22        17.000

Notebooks

Clustering of learned annotator embeddings, by benjamin_wild