Activity
Challenges Entered
A benchmark for image-based food recognition
Latest submissions: 67371 (graded), 67369 (graded), 67367 (graded)
simon_mezgec - Food Recognition Challenge
Round 2 Finished
Over 4 years ago
Since Round 2 of the competition ended, I just wanted to say a big thanks to the AIcrowd team, and @shivam in particular, for all the help throughout the competition - especially in the last couple of days. Every error and stuck submission was resolved in time, most of them very quickly, so thanks for that!
The competition itself is loads of fun. As someone who has worked in the food image recognition field for a good couple of years now, it's fantastic to see not just a benchmark for this field, but also a competition associated with it. This should help attract more interest, researchers and data scientists to the problem, which should in turn speed up progress.
Eager to see where this dataset and competition go in the future!
Submissions taking too long
Over 4 years ago
Thanks to both @shivam and @ashivani - my subsequent submissions went through without any problems.
Submissions taking too long
Over 4 years ago
My new submission (67274) is very slow as well - over one hour in the evaluation queue. There seems to be an issue with the submissions today.
Submissions taking too long
Over 4 years ago
@shivam - I think the submissions might be stuck again. My submission (67214) has been waiting in queue for evaluation for almost an hour now.
Round 2 End Time
Over 4 years ago
I was just wondering: when exactly does Round 2 end? That is, what date and time (maybe in CET or GMT)?
Submissions taking too long
Over 4 years ago
Ah, interesting - good catch!
Uploaded another submission (66451) and the image was built successfully without a hitch upon adding the mmcv version requirement like you suggested. Will re-upload the same model from submission 66390 later today.
Thanks a lot - really appreciate the quick fix for this!
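For anyone who hits the same image-build failure: the fix was pinning the mmcv version in the repository's requirements file. A minimal sketch - the version number below is purely a placeholder (the exact version to pin was suggested by @shivam and is not stated here), so substitute whatever the starter kit recommends:

```shell
# Pin mmcv so the submission's Docker image build installs a known-good version.
# "0.4.3" is a PLACEHOLDER version - use the one recommended for the starter kit.
echo "mmcv==0.4.3" >> requirements.txt

# Verify the pin is in place before committing and resubmitting.
grep "mmcv==" requirements.txt
```

After committing the updated requirements file and pushing a new tag, the image build should pick up the pinned version.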
Submissions taking too long
Over 4 years ago
@shivam - encountered a new error (submission 66390) and I think it's the server again.
By the way, sorry for pinging you here as well - didn’t know where you prefer it (here vs. GitLab).
Submissions taking too long
Over 4 years ago
Fantastic - I figured it was some kind of system-wide error related to Docker. Thanks for sorting it out!
Submissions taking too long
Over 4 years ago
@shivam Submission 65948 seems to have failed, but I don't think it should have (similar to my two submissions yesterday). Can you check it out?
Thanks!
Submissions taking too long
Over 4 years ago
Thanks @shivam!
My two submissions (65636 and 65637) got unstuck and finished successfully.
However, my new submission (65790) appears to also be stuck, so if you could get it unstuck as well, I would appreciate it.
Submissions taking too long
Over 4 years ago
Fifteen hours later, and the submissions are still waiting in the evaluation queue.
@shivam @mohanty - is there a server-side issue that’s causing the stuck submissions?
Submissions taking too long
Over 4 years ago
Are submissions currently stuck? My submission has been waiting in queue for evaluation for over an hour (usually it's a couple of minutes).
EDIT: the submission went through eventually.
EDIT 2: new submissions seem to be stuck again, this time frozen for over two hours so far.
New tag does not create an issue or evaluation
Over 4 years ago
My models (which are quite a lot bigger than 200 MB) get checked into LFS with no issues. No special commands needed - just have LFS set up and then git add the checkpoint of the model (the .pth file).
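In case it helps anyone setting this up for the first time, here is a rough sketch of what that looks like. Normally you would run git lfs track, which simply writes the tracking rule into .gitattributes for you; the sketch below writes that file by hand (equivalent, and it shows what the rule looks like), with the actual git/LFS commands as comments since they need a repository and the git-lfs tool installed:

```shell
# With git-lfs installed, the usual commands are:
#   git lfs install
#   git lfs track "*.pth"
# "git lfs track" just appends this rule to .gitattributes:
echo '*.pth filter=lfs diff=lfs merge=lfs -text' >> .gitattributes

# After that, large checkpoints are added like any other file:
#   git add .gitattributes model.pth
#   git commit -m "Add model checkpoint"
#   git push

# Sanity-check that the tracking rule is present:
grep "pth" .gitattributes
```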
Train using mmdetection and submit via Colab (Round 2)!
Over 4 years ago
Awesome - that was my assumption as well, but I figured it was also possible that the actual config file wouldn't be shared. Thanks for the quick answer - much appreciated!
Train using mmdetection and submit via Colab (Round 2)!
Over 4 years ago
Thanks for this - the try/except block for the training script was particularly useful.
I have one question about the baseline MMDetection submission (the food-round2 repo by @nikhil_rayaprolu): if it's not a secret, what settings were used to generate the baseline model? Is it correct to assume that it was trained with the provided training hyperparameters (type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001), the default learning policy, and no pre-training (load_from = None and resume_from = None), and that epoch 20 was taken as the final model?
The reason I'm asking is that I have been using similar settings to train the same architecture (except with pre-training) and have gotten considerably worse results after 30 epochs with the evaluator, so I would like to work out whether my settings are suboptimal or something else is wrong.
Step by step tutorials
Over 4 years ago
You can start with this baseline submission that's based on MMDetection, which is itself based on PyTorch:
Clone the above repository, change your username in the aicrowd.json file, and then follow the instructions here (from # Add AICrowd git remote endpoint onward):
This way, you should be able to upload a valid submission and get on the leaderboard with the baseline result (AP=0.478 and AR=0.687).
After that, you can simply swap the included .pth model with your own - just update the run.sh script with the correct model filename. Keep in mind that the baseline submission also includes scripts for training and evaluation, so you can use those to your advantage.
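The steps above, sketched as a shell session. The repository URL, remote path, tag name, and the aicrowd.json field names below are all illustrative placeholders (from memory, not from the starter kit), so check the repo's README and its actual aicrowd.json for the real values:

```shell
# 1. Clone the baseline repository (URL is a placeholder):
#   git clone https://gitlab.aicrowd.com/nikhil_rayaprolu/food-round2.git
#   cd food-round2

# 2. Put your username into aicrowd.json. The field names here are
#    ILLUSTRATIVE - keep whatever keys the starter kit's file actually uses.
cat > aicrowd.json <<'EOF'
{
  "challenge_id": "aicrowd-food-recognition-challenge",
  "grader_id": "aicrowd-food-recognition-challenge",
  "authors": ["your-aicrowd-username"]
}
EOF

# 3. Add the AIcrowd git remote endpoint, then push and tag a submission
#    (remote path and tag name are placeholders):
#   git remote add aicrowd git@gitlab.aicrowd.com:<username>/food-round2.git
#   git push aicrowd master
#   git tag submission-v0.1 && git push aicrowd submission-v0.1

# Sanity-check the edited file:
grep "authors" aicrowd.json
```

To swap in your own model afterwards, replace the .pth file (tracked via LFS) and point run.sh at the new filename before tagging the next submission.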
I ran into some problems while trying to use my own models with this baseline submission, but after cloning the repo again and changing only the model file, it worked again (the repo was updated recently).
I also don't see my repo updating after the latest submissions (it updated for the first couple of submissions in February, then stopped updating with every submission), but everything is uploaded successfully nonetheless. So just make sure you upload your model to LFS and push it, and you should be fine.
Best of luck!
Submissions not finishing
Over 4 years ago
Thanks for the reply - for some reason it didn't appear for me until now, which is why I posted my previous reply (which I've since deleted).
The model file I uploaded (model.pth) is based on the same architecture (HTC R-50-FPN) as the baseline submission. I used the same config file to train a new model - the only changes to the config file were adding the correct data paths and changing the training hyperparameters. The idea was to start from the baseline submission and experiment with parameters before moving on to other architectures, but it seems something went wrong, and I'm not sure what.
I tried debugging on my PC, but everything works correctly on my end (also the evaluation speed for the 418 validation images is the same as for the baseline model).
Round 2 Finished
Over 4 years ago
Thanks for the reply @mohanty!
I can imagine it’s quite a complex process on your end, and on the user side there’s a learning curve if you are not used to some of the tools, but it’s not too bad at all. Certainly the starter kit helps immensely in that regard, and the notebooks are also very useful.
I'm a big fan of this format (sharing data with the community and coming up with new best-in-class solutions together with it), and the competition is a great additional motivator to get engaged with these problems. As much as time allows, I intend to try my hand at future rounds (if there are future rounds, of course).
I’m fully willing (and eager) to participate in the community calls as well!