Activity
Ratings Progression
Challenge Categories
Challenges Entered
Improve RAG with Real-World Benchmarks
Latest submissions
Small Object Detection and Classification
Latest submissions
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
| Status | Submission |
|---|---|
| graded | 218647 |
| graded | 218645 |
| graded | 218643 |
A benchmark for image-based food recognition
Latest submissions
| Status | Submission |
|---|---|
| graded | 181841 |
| graded | 181840 |
| failed | 181839 |
What data should you label to get the most value for your money?
Latest submissions
| Status | Submission |
|---|---|
| graded | 176460 |
| failed | 176448 |
| failed | 176371 |
Amazon KDD Cup 2022
Latest submissions
Machine Learning for detection of early onset of Alzheimers
Latest submissions
| Status | Submission |
|---|---|
| graded | 143694 |
| failed | 143583 |
| graded | 136222 |
A benchmark for image-based food recognition
Latest submissions
| Status | Submission |
|---|---|
| graded | 125259 |
| graded | 124096 |
| graded | 124095 |
Classify images of snake species from around the world
Latest submissions
Perform semantic segmentation on aerial images from monocular downward-facing drone
Latest submissions
| Status | Submission |
|---|---|
| graded | 218647 |
| graded | 218645 |
| graded | 218643 |
| Participant | Rating |
|---|---|
| cadabullos | 0 |
| saidinesh_pola | 0 |
Teams:
- Gaurav-Eric: ADDI Alzheimers Detection Challenge
- gs-sai: Scene Understanding for Autonomous Drone Delivery (SUADD'23)
- RAGnarok_Retrievers: Meta Comprehensive RAG Benchmark: KDD Cup 2024
Semantic Segmentation
SUADD'23 - Scene Understanding for Autonomous Drone Delivery
💬 Feedback and suggestion
Almost 2 years ago
I guess the leaderboard hyperlinks are invalid. A message on the page says "Leaderboard is not released yet", but I can see the leaderboard.
Check out this page: AIcrowd | Scene Understanding for Autonomous Drone Delivery (SUADD'23) | Leaderboards
Both leaderboard links point to the `mono-depth-perception` one.
👥 Looking for teammates?
Almost 2 years ago
Hey guys,
I am Gaurav Singhal, working as a data scientist in health tech. I have been working on image segmentation for 2 years, won the food-image-recognition challenge in 2021, and achieved #2 last year. I do not have a dedicated GPU except for Colab Pro and one shared server with a V100.
With this challenge, I am looking to expand my domain knowledge and, of course, would like to collaborate.
I have experience with CNN and Vision transformer-based methods for segmentation. I would like to improve the method further to establish new state-of-the-art in the process.
Here is my LinkedIn https://www.linkedin.com/in/gaurav-singhal93/. Ping me if you would like to collaborate.
Food Recognition Benchmark 2022
End of Round 2 ⏱️
Over 2 years ago
Thanks, @Mykola_Lavreniuk. I hope our techniques differ from each other, so we broaden our knowledge horizons. I would love to collaborate with you and your team if I am invited to co-author.
Congratulations to @Lab_zi and @Camaro
End of Round 2 ⏱️
Over 2 years ago
Great work, guys.
Thank you @mohanty, @shivam for making this challenge happen.
Thanks to everyone on the leaderboard; it was really fun this time. The previous week felt like a roller-coaster ride with all the score shuffling. I expect the final leaderboard will show very little variance in the scores and that the results won't shuffle by a huge gap.
Congratulations to @team_zi, and great work @saidinesh_pola, @nivedita_rufus and @unnikrishnan.r
Getting less mAP on MMdetection
Over 2 years ago
I guess I know what the top participants are using, but I cannot reveal that at the moment since the competition is still ongoing. What I can tell you is:
- I am highly confident that they are using MMDetection.
- You haven't tried the previous challenge's best solution. I know that because I won it, and something as simple as Mask R-CNN yielded slightly better performance than what you are getting now.
- Data augmentation and multi-scale training do make a significant improvement.
- The hyper-parameters well known from image classification tasks play a role in instance segmentation, but not a big one; this research area has its own hyper-parameters, which you will find in `test_config`.
- Regarding ensembling: yes, it is not working in this challenge because AP@50 is quite robust and naive ensembling will not be enough. You need additional post-processing steps to filter out false positives.
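As a rough illustration of that last point, here is a minimal sketch of confidence-based post-processing; the prediction structure, threshold, and per-image cap are hypothetical, not the challenge's actual format:

```python
def filter_predictions(predictions, score_threshold=0.5, max_per_image=100):
    """Drop low-confidence detections, which would otherwise count as
    false positives under a strict metric such as AP@50.

    `predictions` is assumed to be a list of dicts with a "score" key;
    the structure is illustrative only.
    """
    kept = [p for p in predictions if p["score"] >= score_threshold]
    # Keep at most `max_per_image` detections, highest scores first.
    kept.sort(key=lambda p: p["score"], reverse=True)
    return kept[:max_per_image]
```

In practice the threshold would be tuned on a held-out split, since AP@50 penalizes spurious boxes more than it rewards marginal ones.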
Local Run Produces Different AP
Over 2 years ago
My theory:
- The validation results you see in GitLab are just a model sanity check. I think they have nothing to do with what you see on your local machine; they just check whether your submission is worth assigning pods to evaluate.
- Your local validation results are overfitted for the very obvious reason that some ~950 images are the same as those in the train set. If you have already removed these, you will run into a class-imbalance problem, with some classes not present at all. I created a new validation set for myself, and I can say it's the best: the result I see on my local machine gives a ~+6.0% (positive variance) jump on the test score.
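To remove such train/validation overlap locally, one option is to hash the raw image bytes and drop exact matches; a minimal sketch, where the directory layout and file names are assumptions rather than the challenge's actual structure:

```python
import hashlib
from pathlib import Path


def file_digests(directory):
    """Map MD5 digest -> path for every file directly inside a directory."""
    return {
        hashlib.md5(p.read_bytes()).hexdigest(): p
        for p in Path(directory).iterdir()
        if p.is_file()
    }


def find_duplicates(train_dir, val_dir):
    """Return validation files whose bytes also appear in the train set."""
    train_hashes = set(file_digests(train_dir))
    return [p for h, p in file_digests(val_dir).items() if h in train_hashes]
```

Byte-exact hashing only catches identical copies; near-duplicates (re-encoded or resized images) would need perceptual hashing instead.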
Data Purchasing Challenge 2022
When will release round 2 baseline?
Almost 3 years ago
Regarding the baseline, I guess the AIcrowd team is working on it and it will be out soon (it's expected today). However, you can start your R&D with this notebook: AIcrowd | [First Baseline + Explainer] Getting Started With ROUND 2 | Posts.
Regarding the live score, I think it's live, but submissions have been failing a lot lately because of time-out issues (at least for me).
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Almost 3 years ago
EfficientNet-B0 tends to learn faster than the other members of its family. Since the dataset is smaller, with a batch size of 64, unfreezing all layers would work better for B0 than the same config on B4, and the quality of the purchase depends on it. I think you could run a small experiment with the best unlabelled images (batch size 64, 0.x LR, y epochs) and see whether B0 outperforms B4.
In any case, I don't think it matters. If all the evaluations run on the same configuration, then the performance will be equally good or bad for all participants. However, with the above experiment, I guess you and we would be able to see whether the purchase makes any sense.
Need Clarification for Round 2
Almost 3 years ago
tfriedel raised the same bug here (:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live! - #11 by tfriedel). I also think it should be `aggregated_dataset` instead of `training_dataset`, although it is in `local_evaluation.py`, which will not be part of any sort of evaluation.
[Resolution] Bugs With Getting Started Of Round 2
Almost 3 years ago
Yep, thanks.
Could you add the extract and rename script to the GitLab repo, or maybe change `local_evaluation.py` just like you did in Colab?
[Resolution] Bugs With Getting Started Of Round 2
Almost 3 years ago
This post just focuses on the Magic Box part. I haven't checked out the methods yet; I hope there's nothing left out there, but I'll check and report any inconsistencies.
You are right, the notebook uses the `public_` prefix in dataset declarations but the GitLab code doesn't. In any case, the spelling is still wrong; not that big an issue, but you may want to correct it.
[Resolution] Bugs With Getting Started Of Round 2
Almost 3 years ago
Some issues that I faced with the Getting Started of Round 2:
- The datasets download with the prefix `public_*`, but `local_evaluation.py` uses directories without `public_*`.
- Spelling mistake for the unlabelled dataset: currently it is `public_unlabeled.zip`, whereas it should be `public_unlabelled.zip`. You will see this once you download the dataset, not while listing it.
I may be wrong; if so, please correct me, @shivam @mohanty.
I have put together a Magic Box (based on the magic box from Round 1) that will make things easy and make the repository ready to use. Here are the actions it tries to achieve:
- Cloning the repository for Round 2
- Downloading the datasets for Round 2 and putting them in the relevant directories (abiding by the latest `local_evaluation.py` file)
- Renaming the dataset directories as per the latest `local_evaluation.py`
Magic Box for Colab
```python
# Guard so the setup only runs once per Colab session: first_run is
# undefined on the first execution, which raises a NameError.
try:
    import os
    if first_run and os.path.exists("/content/data-purchasing-challenge-2022-starter-kit/data/training"):
        first_run = False
except NameError:
    first_run = True

if first_run:
    %cd /content/
    !git clone http://gitlab.aicrowd.com/zew/data-purchasing-challenge-2022-starter-kit.git > /dev/null
    %cd data-purchasing-challenge-2022-starter-kit
    !aicrowd dataset list -c data-purchasing-challenge-2022 | grep -e 'v0.2'
    !aicrowd dataset download -c data-purchasing-challenge-2022 *-v0.2-rc4.zip
    !mkdir -p data/v0.2-rc4
    !mv *.zip data/v0.2-rc4 && cd data/v0.2-rc4 && echo "Extracting dataset" && ls *.zip | xargs -n1 unzip -oq
    # Rename the extracted directories to match local_evaluation.py
    !mv data/v0.2-rc4/public_debug data/v0.2-rc4/debug
    !mv data/v0.2-rc4/public_training data/v0.2-rc4/training
    !mv data/v0.2-rc4/public_unlabeled data/v0.2-rc4/unlabelled
    !mv data/v0.2-rc4/public_validation data/v0.2-rc4/validation
```
Magic Box for Local System
```bash
#!/bin/bash
git clone http://gitlab.aicrowd.com/zew/data-purchasing-challenge-2022-starter-kit.git
cd data-purchasing-challenge-2022-starter-kit
aicrowd dataset list -c data-purchasing-challenge-2022 | grep -e 'v0.2'
aicrowd dataset download -c data-purchasing-challenge-2022 *-v0.2-rc4.zip
mkdir -p data/v0.2-rc4
mv *.zip data/v0.2-rc4 && cd data/v0.2-rc4 && echo "Extracting dataset" && ls *.zip | xargs -n1 unzip -oq
# Rename the extracted directories to match local_evaluation.py
mv public_debug debug
mv public_training training
mv public_unlabeled unlabelled
mv public_validation validation
```
Put the above code in `magic_box.sh` and execute:

```bash
bash magic_box.sh
```
Please do let me know about any improvements or questions in the comments below; I would be glad to help.
Click on the ❤️ if this post was of any help.
0.9+ Baseline Solution for Part 1 of Challenge
Almost 3 years ago
Buying low-accuracy labels (`dents`) makes the most sense in this challenge; it sounds easy but is challenging. I have the exact same heuristic, with the addition of one more policy (don't judge by my score, it is only a baseline; I didn't submit the solution because I had too much on my plate).
Just to give some perspective, here are the confusion matrices of `dent_small` and `dent_large` respectively:

```
[[7327  392]
 [ 719 1562]]

[[8736  116]
 [ 328  820]]
```
The sequence is `tn, fp, fn, tp`. `fp + fn` for the dent classes is much higher than for the scratch classes.
One approach to deal with this could be a weighted loss function. I haven't implemented it, so I can't say how much it improves things.
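For illustration, the negative-to-positive ratio implied by the matrices above could serve as a per-class positive weight; the weighting scheme here is my assumption, not the post's (for example, PyTorch's `BCEWithLogitsLoss` accepts such a value via its `pos_weight` argument):

```python
def pos_weight(tn, fp, fn, tp):
    """Ratio of negative to positive samples for one class.

    Passing this as the positive-class weight in a binary loss is one
    common way to counter class imbalance; the exact scheme is only an
    illustration, not something the post prescribes.
    """
    return (tn + fp) / (fn + tp)


# Confusion matrices quoted above, flattened in (tn, fp, fn, tp) order.
dent_small = (7327, 392, 719, 1562)
dent_large = (8736, 116, 328, 820)

print(round(pos_weight(*dent_small), 2))
print(round(pos_weight(*dent_large), 2))
```

The rarer `dent_large` positives get a noticeably larger weight, which is the intended corrective effect.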
Error : no gpu
Almost 3 years ago
Your code must support CUDA, and you must put `"gpu": true` in `aicrowd.json`.
If you still have the problem, then only @shivam can help you.
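For reference, a minimal `aicrowd.json` with the GPU flag set might look like this; the other fields are illustrative placeholders, so check the starter kit for the exact schema:

```json
{
  "challenge_id": "data-purchasing-challenge-2022",
  "authors": ["your-aicrowd-username"],
  "gpu": true
}
```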
Brainstorming On Augmentations
Almost 3 years ago
Regarding 4, I didn't want to make any comment on your coding skills. It's good to follow good coding practices; they are always useful and make the code reusable, understandable, etc. At least for me, I try to write code that should not require much work to make it production-ready.
I didn't mean any offense.
Brainstorming On Augmentations
Almost 3 years ago
I tried your code and, although nobody asked, here are my 2 cents:
- I don't understand (if anybody does, please help) why you have written separate dataset classes. The dataset classes are self-sufficient on their own and are meant to be used the way they were created.
- You have completely neglected the pre-training phase. You are doing it in the purchase phase, which serves a different purpose altogether.
- I tried your augmentations and hyper-parameters but wasn't able to reproduce the results. I am using the provided dataset class and pre-training phase, not the way you have done it; maybe this is why I am not able to reproduce the results.
- Sorry to say this, but the code is really messy.
Learn How To Make First Baseline Model With 0.44+ Accuracy on LeaderBoard [🔥 Tutorial]
Almost 3 years ago
Thanks for the feedback. I have made the relevant changes.
Notebooks
- [First Baseline + Explainer] Getting Started With ROUND 2: boilerplate code abiding by the compute and purchase budget for getting started with Round 2. gaurav_singhal · Almost 3 years ago
- A New Baseline With 0.71 accuracy on LB: one small change in the previous baseline and you can get a new baseline with 0.71 accuracy on the LB. gaurav_singhal · Almost 3 years ago
- WANDB - Build better models faster with experiment tracking: track, compare, and visualize ML experiments with a few lines of code; baseline as an illustration. gaurav_singhal · Almost 3 years ago
- Create your baseline with 0.4+ on LB (Git Repo and Video): public Git repo and video to create a baseline that will get you 0.44+ accuracy on the leaderboard. gaurav_singhal · Almost 3 years ago
- Apply the right Normalisation transformation for your data: apply the right normalisation transformations to different datasets. gaurav_singhal · Almost 3 years ago
Meta's Segment Anything Model (SAM)
Over 1 year ago
The competition focuses on specialization; SAM is more of a generalized model. As of now, there are no guidelines on how to use it as a downstream model. Out of curiosity, have you already tried it in the competition?