Challenges Entered
Sample-efficient reinforcement learning in Minecraft
Latest submissions
Status | Submission ID
---|---
failed | 25557
failed | 25556
failed | 25407
MeatMountain (NeurIPS 2019 : MineRL Competition)
Evaluation result says file too large?
About 5 years ago
Thanks for the log! But it is really strange… we didn't encounter any problem with this part locally… we will keep debugging it anyway. Thanks again.
Evaluation result says file too large?
About 5 years ago
We did consider this, yet the only "long file name" we reach in code is the record directory name of the human data, and this works fine on our local machine too.
Evaluation result says file too large?
About 5 years ago
AIcrowd Submission Received #25266
Please find it above. Thanks.
Evaluation result says file too large?
About 5 years ago
2019-11-25T14:04:59.69570723Z [Errno 27] File too large
2019-11-25T14:05:00.438503169Z Ending traning phase
This is the response from the evaluation, but our uploaded package is no more than 4 MB in total.
The checkpoint model file will be larger than 30 MB; isn't that acceptable?
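One way to catch this before uploading is to scan the submission directory for files above the limit. A minimal sketch, assuming the 15 MB per-file limit from the rules; the script and its default path are placeholders, not official tooling:

```python
import os

LIMIT_BYTES = 15 * 1024 * 1024  # assumed per-file limit from the competition rules

def oversized_files(root="."):
    """Yield (path, size) for every file under root larger than LIMIT_BYTES."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > LIMIT_BYTES:
                yield path, size

if __name__ == "__main__":
    for path, size in oversized_files():
        print(f"{path}: {size / 1e6:.1f} MB exceeds the limit")
```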
About the rule on pre-trained model
About 5 years ago
Hi all,
According to the rules, any file larger than 15 MB will be removed to prevent the use of pretrained models. So what if my pretrained model is much smaller, say only 1.5 MB? Is that acceptable?
The evaluation result does not match my local testing
About 5 years ago
Too bad.
Check my reply above and see if it helps.
The evaluation result does not match my local testing
About 5 years ago
As far as I can tell, the 4-steps-1-episode thing happens whenever something goes wrong inside the main script, while the system won't report any of it (which is frustrating indeed). So all I can do is delete parts of my code one by one and resubmit again and again until I find the critical part…
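One way to surface such silent failures is to wrap the entry point so the traceback is persisted before the process dies. A minimal sketch; `main()` and the `crash.log` path are placeholders, not starter-kit names:

```python
import traceback

def main():
    ...  # training / evaluation logic goes here

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Persist the traceback so a silent evaluator still leaves evidence behind.
        with open("crash.log", "w") as f:
            f.write(traceback.format_exc())
        raise
```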
The evaluation result does not match my local testing
About 5 years ago
Hey, I just solved mine. The issue is the TensorFlow version: by default the system installs TF 2.0, while my code runs on TF 1.13, and, unexpectedly, the system does not report any error about it. HTH.
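A cheap guard against this kind of silent mismatch is to assert the expected version at startup. A minimal sketch, assuming the code targets TF 1.13.x:

```python
import tensorflow as tf

# Fail fast with a readable message instead of dying silently later.
assert tf.__version__.startswith("1.13"), (
    f"Expected TensorFlow 1.13.x, got {tf.__version__}; pin it in requirements.txt"
)
```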
The evaluation result does not match my local testing
About 5 years ago
Hi all,
I have just submitted a pretrained version for evaluation, yet the result indicates it ran only 1 episode with 4 steps; however, when I ran evaluate_local.sh on my own machine, the output was 5 episodes of at least 5k steps each. The code files are completely identical. Does anyone have any ideas about this situation? Thanks.
About the pip install format
About 5 years ago
Just want to make sure:
If I would like to install a specific version of some package, say tensorflow 1.12, how should I format it in my requirements.txt?
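For reference, pip pins an exact version with `==` in requirements.txt; for example, assuming the 1.12.0 release is the one wanted:

```
tensorflow==1.12.0
```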
About the submission content
Over 5 years ago
Hi,
There are two things I want to clarify about the submission rules, both concerning Round 1:
- In the training phase, we are required to submit the trained model for evaluation, so do we need to submit a test.py for doing the inference? I didn't find any document describing the submitted model's input/output contract (see the sketch after this list).
- In the evaluation phase, we are required to submit the code for retraining. Should our code structure be EXACTLY like the one in the starter kit? Is it OK to include more scripts, in case my training logic is too much to be contained in train.py alone?
Thanks.
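For illustration, a minimal sketch of what a test.py inference loop might look like, assuming the gym-style MineRL API; the environment name, the `RandomAgent` placeholder, and the `act()` interface are assumptions, not the official contract:

```python
import gym
import minerl  # importing minerl registers the MineRL environments with gym

class RandomAgent:
    """Placeholder; replace with the trained model restored from the submitted checkpoint."""
    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, obs):
        return self.action_space.sample()

def run_episode(env, agent):
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        obs, reward, done, _ = env.step(agent.act(obs))
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    env = gym.make("MineRLObtainDiamond-v0")  # assumed environment name
    print("episode reward:", run_episode(env, RandomAgent(env.action_space)))
```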
Any way to completely terminate a submission?
About 5 years ago
We have submitted several incorrect versions, but it seems that closing the issue won't stop the processing of the submission. Yet the maximum number of parallel submissions is 3; is there any way to terminate them? Otherwise we have no time to try the correct ones.