the_raven_chaser

0 Followers, 0 Following

Location: CN

Badges: 2, 1, 1

Activity

[Activity heatmap not shown]

Ratings Progression

[Ratings progression chart not shown]

Challenge Categories

[Challenge categories chart not shown]

Challenges Entered

Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments

Latest submissions

graded 94646
failed 94180
graded 93850

Self-driving RL on DeepRacer cars - From simulation to real world

Latest submissions

No submissions made in this challenge.

Sample-efficient reinforcement learning in Minecraft

Latest submissions

No submissions made in this challenge.

Multi-agent RL in game environment. Train your Derklings, creatures with a neural network brain, to fight for you!

Latest submissions

No submissions made in this challenge.

Sample-efficient reinforcement learning in Minecraft

Latest submissions

No submissions made in this challenge.

Multi Agent Reinforcement Learning on Trains.

Latest submissions

No submissions made in this challenge.

Robots that learn to interact with the environment autonomously

Latest submissions

No submissions made in this challenge.
Participant Rating

  • May NeurIPS 2019 - Robot open-Ended Autonomous Learning
  • zero NeurIPS 2020: Procgen Competition

NeurIPS 2020: MineRL Competition

Error when downloading dataset

About 4 years ago

Hello,

I'm trying to download the MineRL dataset, but I keep receiving errors like the following:

ERROR:root:Error - HTTPSConnectionPool(host='minerl.s3.amazonaws.com', port=443): Read timed out.
ERROR:root:Header from download attempt of https://minerl.s3.amazonaws.com/v3/data_texture_0_low_res.tar: {'x-amz-id-2': 'N+3ju+t3ckRz+EbA36wQRlOXjuizVBvBJX1iab97sMxNjBbF1Z4tv0Q8knTtTLOWJQUN7UugMbE=', 'x-amz-request-id': '51D21B5C1E9E825D', 'Date': 'Wed, 18 Nov 2020 11:37:23 GMT', 'x-amz-replication-status': 'COMPLETED', 'Last-Modified': 'Tue, 16 Jun 2020 12:24:29 GMT', 'ETag': '"4bf36a445e836ff163b9d6e738d691f9-7758"', 'x-amz-version-id': 'eOnRrCN.6nfceKuVhqGSRL8aV5BlMBB1', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/x-tar', 'Content-Length': '65075773440', 'Server': 'AmazonS3'}
ERROR:minerl.data.download:IO error encountered when downloading - please try again
ERROR:minerl.data.download:None

Is there another way to download the dataset? BTW, I’m in China.
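
Manually resuming the download with HTTP Range requests is the only workaround I can think of so far. Below is a rough sketch of what I mean (the URL is the one from the log above; the chunk size, retry count, and function name are arbitrary choices of mine, and I have not confirmed this gets around the timeouts):

import os
import requests

URL = "https://minerl.s3.amazonaws.com/v3/data_texture_0_low_res.tar"  # taken from the error log above
DEST = "data_texture_0_low_res.tar"

def download_with_resume(url, dest, chunk_size=1 << 20, max_retries=10):
    # Ask S3 for the total size once, then keep requesting the remaining byte range
    # until the local file is complete, appending to whatever was already written.
    total = int(requests.head(url, timeout=30).headers["Content-Length"])
    for _ in range(max_retries):
        done = os.path.getsize(dest) if os.path.exists(dest) else 0
        if done >= total:
            return
        headers = {"Range": f"bytes={done}-"}  # resume from the last byte received
        try:
            with requests.get(url, headers=headers, stream=True, timeout=60) as r:
                r.raise_for_status()
                with open(dest, "ab") as f:
                    for chunk in r.iter_content(chunk_size=chunk_size):
                        f.write(chunk)
        except requests.RequestException:
            continue  # read timed out or connection dropped; retry from the new offset
    raise RuntimeError("download did not finish within the retry budget")

download_with_resume(URL, DEST)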

Is there any team that still has a seat?

About 4 years ago

Hi everyone.

I just finished the Procgen competition, where I ranked 3rd in round 2. I'd like to participate in MineRL too, but I don't have many resources for this competition: one GPU and $300+ in AWS credit. Also, I think working with people would be a lot more interesting than working alone, so I want to ask whether any team is still open to new members.

I would like to join a team that is active and eager to improve its agent, and I promise I will do my best for the rest of the competition.

NeurIPS 2020: Procgen Competition

Is it assumed knowledge that we should select one of the existing submissions for the final evaluation?

About 4 years ago

Hi @dipam_chakraborty

The final evaluation measures generalization, but I did not use any regularization such as batch normalization or data augmentation in my previous submissions. Also, in my latest few submissions I chose to experiment with a newly introduced hyperparameter instead of using the one that performed well on my local machine.
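
For reference, by data augmentation I mean something like randomly translating the observation frames. A minimal NumPy sketch (the pad size and function name are just illustrative, not anything I actually submitted):

import numpy as np

def random_translate(obs, pad=4):
    # Randomly shift a single HxWxC frame by up to `pad` pixels in each direction,
    # filling the exposed border with zeros.
    h, w, c = obs.shape
    padded = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=obs.dtype)
    padded[pad:pad + h, pad:pad + w] = obs
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]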

Is it assumed knowledge that we should select one of the existing submissions for the final evaluation?

About 4 years ago

Hi @vrv

Thank you for the response. Yes, I see that it was my mistake after thoroughly reviewing the overview page and the answer I linked before. However, we did not always follow these rules, right? For example, we used 6 public and 4 private test environments in the second round instead of the 4 and 1 described on the overview page. Also, that answer said we got to pick 3 submissions, but in the end we only pick one.

Maybe I should have asked this earlier instead of wishfully assuming a new submission would be viable. At this point I don't know which submission I should use since, as I said before, none of them was made for the final evaluation.

I posted this to see whether anyone else is facing a similar situation. If I'm the only one, I'll accept it.

Although some of the above may sound like complaining, that is not my intent. I've learned a lot during the competition and received a lot of help from you all. Thank you.

Is it assumed knowledge that we should select one of the existing submissions for the final evaluation?

About 4 years ago

Hi everyone,

I'm wondering if I'm the only one who just learned that we should select one of our previous submissions for the final evaluation. I cannot find any official statement about this, and the only clue I can find is this answer, which I had previously read without paying much attention to the word "existing". That was my mistake, but I humbly don't think an answer in the forum counts as a formal statement.

It's really frustrating to learn this at this point, as none of my previous solutions was prepared for the final evaluation. I thought the challenge was to find a good solution, but in the end I found myself trapped in a word game. I don't mean to complain, since I'm definitely responsible for the mistake above, but if anyone feels the same way, please say something. Maybe together we can make the game more interesting.

Is it possible to run the evaluation on all environments in the final week of round 2?

About 4 years ago

In the final week of round 2, is it possible to run the evaluation on all environments? To reduce the computation cost, maybe we can reduce the submission quota a bit.

Suggestion to switch from spot to on-demand

About 4 years ago

(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Number of environments in round 2

About 4 years ago

I humbly disagree with Feiyang. Not everyone gets that amount of computation resources to try out their ideas. A reasonable daily limit on submissions is helpful.

On the other hand, I agree with @jurgisp and @quang_tran that we should relax the 2-hour limit. I think it biases the competition toward on-policy algorithms, which can take advantage of large batches and therefore need fewer training iterations, whereas off-policy algorithms usually work with much smaller batch sizes and need more training iterations and more time to train.
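
To make that concrete, here is some back-of-the-envelope arithmetic; the batch sizes are made-up illustrative numbers, not anyone's actual configuration:

FRAMES = 8_000_000  # sample budget per environment

# With the same frame budget, a large-batch on-policy learner needs far fewer
# gradient updates than a small-batch off-policy learner, so the wall-clock
# cost under a fixed 2-hour limit is very different.
for name, batch_size in [("on-policy, large batch", 16_384), ("off-policy, small batch", 256)]:
    updates = FRAMES // batch_size
    print(f"{name}: ~{updates} updates for the same {FRAMES} frames")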

Round 2 is open for submissions 🚀

About 4 years ago

Hi @jyotish

What’s the use of the blind reward?

Running the evaluation worker during evaluations is now optional

Over 4 years ago

(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Running the evaluation worker during evaluations is now optional

Over 4 years ago

Hi @jyotish

How should I change train.py to disable the evaluation worker locally?
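
If it comes down to the standard RLlib options, my guess would be merging something like the following into the trainer config that train.py builds, but please correct me if the starter kit expects a different switch:

config = {
    "evaluation_interval": None,   # never schedule evaluation rollouts
    "evaluation_num_workers": 0,   # do not start a dedicated evaluation worker
}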

Inform the agent a new episode starts

Over 4 years ago

Is there a way to inform the agent that a new episode has started when defining the trainer using build_trainer?
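
The closest thing I have found so far is RLlib's callbacks hook, which fires at episode boundaries. A sketch of what I mean, assuming the class-based callbacks API (the exact import path depends on the Ray version bundled with the starter kit, and the class name is my own):

from ray.rllib.agents.callbacks import DefaultCallbacks

class EpisodeStartCallbacks(DefaultCallbacks):
    def on_episode_start(self, *, worker, base_env, policies, episode, **kwargs):
        # Called once at the start of every episode; stash a flag that the
        # policy or model can later read, e.g. via episode.user_data.
        episode.user_data["new_episode"] = True

# and then pass it through the trainer config, e.g. config["callbacks"] = EpisodeStartCallbacks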

TF2 is not enabled by default?

Over 4 years ago

I see. But why is it restricted to TF1.x even though I've set framework=tfe in the YAML file?

TF2 is not enabled by default?

Over 4 years ago

Adding the following code at this line of train.py:

from tensorflow.python import tf2
print(tf2.enabled())  # check whether TF2 behaviour is enabled
assert False  # stop immediately so the printout is easy to spot in the logs

prints False. Is there any way to enable TF2?
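
My current guess is that v2 behaviour gets disabled somewhere in the import chain before train.py runs, so the only workaround I can think of is to re-enable it early, before any graph is built. This is just a guess, not a confirmed fix:

import tensorflow as tf

# Re-enable TF2 behaviour (eager execution, resource variables) in case an earlier
# import disabled it; this has to run before RLlib builds any graph.
tf.compat.v1.enable_v2_behavior()

from tensorflow.python import tf2
print(tf2.enabled())  # should now print True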

About Ray trainer and workers

Over 4 years ago

This file may contain what you're looking for.

Rllib custom env

Over 4 years ago

Hi @jyotish

Could you please answer my above question?

If we use frame skipping, does the framework count the number of frames correctly?
For example, with frame_skip=2, the number of agent-environment interactions is 8e6 / 2 = 4e6 when using only 8M frames. If we use the standard configuration, which sets timesteps_total=8000000, will training stop at the right point?

Rllib custom env

Over 4 years ago

If we use frame skipping, does the framework count the number of frames correctly?

For example, with frame_skip=2, the number of agent-environment interactions is 8e6 / 2 = 4e6 when using only 8M frames. If we use the standard configuration, which sets timesteps_total=8000000, will training stop at the right point?
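
To spell out the arithmetic I have in mind (this assumes timesteps_total counts agent-environment interactions rather than raw frames, which is exactly the part I am unsure about):

FRAME_BUDGET = 8_000_000  # 8M frames allowed per environment
FRAME_SKIP = 2

agent_steps = FRAME_BUDGET // FRAME_SKIP  # 4,000,000 interactions if each step consumes 2 frames
print(agent_steps)

# If timesteps_total counts interactions, the default stop condition of
# timesteps_total = 8_000_000 would only trigger after 8M agent steps,
# i.e. 16M frames, overshooting the budget by 2x.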

Rllib custom env

Over 4 years ago

(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Unusually large tensor in the starter code

Over 4 years ago

Thank you so much and sorry for the late response.

the_raven_chaser has not provided any information yet.