Challenges Entered
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Using AI For Building's Energy Management
The first open autonomous racing challenge.
Latest submissions
Status | Submission ID
---|---
graded | 178630
graded | 178623
graded | 178619
Perform semantic segmentation on aerial images from a monocular downward-facing drone
Participant | Rating
---|---
stefan_podgorski | 0
james_bockman | 0
Learn-to-Race: Autonomous Racing Virtual Challenge
Clarification on input sensors during evaluation
Almost 3 years ago

Any updates here? Would be good to know before round 2 starts.
Clarification on input sensors during evaluation
Almost 3 years ago

After reading this thread I am still unclear about the availability of the ground truth segmentation masks during the "1 Hour" training period for round 2. It is clear they will not be available during the evaluation period.
After the code change for using multiple cameras, the line `self.check_for_allowed_sensors()` in evaluator.py throws an exception when trying to add them to the sim environment.
Access to these masks is important for anyone using a segmentation model.
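For anyone hitting the same exception, a hedged diagnostic sketch (not the evaluator's documented API; the `active_sensors` attribute name is an assumption):

```python
# Sketch only: wrap the failing call from evaluator.py to surface what was
# configured when the allow-list check fires. Attribute names are assumptions.
try:
    self.check_for_allowed_sensors()
except Exception as exc:
    print("Configured sensors:", getattr(self, "active_sensors", "unknown"))
    print("Sensor check failed:", exc)
    raise
```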
Clarification on input sensors during evaluation
Almost 3 years ago

Check that the sensors you want are enabled in the config.py file. See active_sensors and add the ones you want from the cameras dict in the Envconfig class:
```python
class SimulatorConfig(object):
    racetrack = "Thruxton"
    # Cameras listed here are attached to the sim environment.
    active_sensors = [
        "CameraFrontRGB",
    ]
    driver_params = {
        "DriverAPIClass": "VApiUdp",
        "DriverAPI_UDP_SendAddress": "0.0.0.0",
    }
    # Shared settings for the camera(s) above.
    camera_params = {
        "Format": "ColorBGR8",
        "FOVAngle": 90,
        "Width": 512,
        "Height": 384,
        "bAutoAdvertise": True,
    }
    vehicle_params = False
```
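If you want more than the front RGB camera, a second entry in the same list is all it takes; a minimal sketch, where "CameraFrontSegm" is an assumed key name, so use whichever keys your cameras dict in the Envconfig class actually defines:

```python
# Sketch: enabling an additional camera. "CameraFrontSegm" is an assumed key;
# check the cameras dict in the Envconfig class for the real names.
active_sensors = [
    "CameraFrontRGB",
    "CameraFrontSegm",
]
```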
Hope this is helpful.
Need your Inputs for improving competition
Almost 3 years ago

Is there a way to view or play back submitted evaluations? It would be a great asset to be able to review these so that irregular behavior can be diagnosed; I understand it cannot be done for round 2. I have noticed a large discrepancy between scores, performance, and agent behavior in a local simulator versus the evaluation results used for grading, even after reducing the frame rate to match the evaluation server.
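Until playback exists, a minimal local workaround is to record your own episodes for review; this sketch assumes `env` and `agent` are your local simulator environment and policy, and that the observation exposes the front RGB frame under a "CameraFrontRGB" key (all three names are assumptions about your setup):

```python
import cv2  # OpenCV, used here only to write frames to a video file

# Record one local episode to episode.mp4 for frame-by-frame review.
writer = cv2.VideoWriter(
    "episode.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (512, 384)
)
obs = env.reset()
done = False
while not done:
    action = agent.select_action(obs)      # hypothetical policy call
    obs, reward, done, info = env.step(action)
    frame = obs["CameraFrontRGB"]          # assumed observation key, 512x384 RGB
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))  # VideoWriter expects BGR
writer.release()
```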
KeyError: 'success_rate'
Almost 3 years ago

Hi @jyotish,

Could you please have a look at this problem? The error below occurred today for my submission, and the agent likely completed the entire course.
```
2022-02-04 07:46:05.823 | INFO    | __main__:run_evaluation:81 - Starting evaluation on Thruxton racetrack
2022-02-04 07:46:09.866 | INFO    | aicrowd_gym.clients.base_oracle_client:register_agent:210 - Registering agent with oracle...
2022-02-04 07:46:09.868 | SUCCESS | aicrowd_gym.clients.base_oracle_client:register_agent:226 - Registered agent with oracle
/home/miniconda/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3440: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/miniconda/lib/python3.9/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
```
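For what it's worth, those two RuntimeWarnings are exactly what numpy emits when averaging an empty array, which returns nan; if the scorer aggregates per-episode results that way, the "success_rate" key may never get written, producing the KeyError. A minimal sketch of that failure mode (the metrics-dict shape is an assumption about the scorer, not its actual code):

```python
import numpy as np

episode_successes = []             # no per-episode results were collected
rate = np.mean(episode_successes)  # RuntimeWarning: Mean of empty slice -> nan

metrics = {}
if len(episode_successes) > 0:     # guard before writing the key
    metrics["success_rate"] = float(rate)

# A bare metrics["success_rate"] would raise KeyError here; .get() is safe.
print(metrics.get("success_rate"))
```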
Ground truth segmentation image in training phase seems to be invalid
Over 2 years ago

I would also check the number of images. Obviously for a single camera it should be 2, and you would expect the segmentation mask to be at index 1; it might pay to check whether 6 exist, as in the multi-camera setup.
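A quick sanity check along those lines; this sketch assumes the environment hands back the camera images as an indexable sequence in the observation (the exact observation layout is an assumption):

```python
import numpy as np

# Sketch: count the returned images and inspect the assumed mask index.
obs, reward, done, info = env.step(action)
images = obs  # or obs["images"], depending on your wrapper
print("image count:", len(images))   # expect 2 (single camera) or 6 (multi-camera)

seg_mask = np.asarray(images[1])     # assumed segmentation-mask index
print("mask shape:", seg_mask.shape)
print("mask values:", np.unique(seg_mask))  # a valid mask holds discrete class IDs
```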