
Challenges Entered
- Generate Synchronised & Contextually Accurate Videos
- Improve RAG with Real-World Benchmarks
- Revolutionise E-Commerce with LLM!
- Revolutionising Interior Design with AI
- Multi-Agent Dynamics & Mixed-Motive Cooperation
- Advanced Building Control & Grid-Resilience
- Specialize and Bargain in Brave New Worlds
- Trick Large Language Models
- Shopping Session Dataset
- Understand semantic segmentation and monocular depth estimation from downward-facing drone images
- Audio Source Separation using AI
- Identify user photos in the marketplace
- A benchmark for image-based food recognition
- Using AI For Building's Energy Management
- Learning From Human-Feedback
- What data should you label to get the most value for your money?
- Interactive embodied agents for Human-AI collaboration
- Specialize and Bargain in Brave New Worlds
- Amazon KDD Cup 2022
- Behavioral Representation Learning from Animal Poses
- Airborne Object Tracking Challenge
- ASCII-rendered single-player dungeon crawl game
- 5 Puzzles 21 Days. Can you solve it all?
- Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments
- 5 Puzzles 21 Days. Can you solve it all?
- Self-driving RL on DeepRacer cars - From simulation to real world
- 3D Seismic Image Interpretation by Machine Learning
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- Multi-Agent Reinforcement Learning on Trains
- A dataset and open-ended challenge for music recommendation research
- A benchmark for image-based food recognition
- Sample-efficient reinforcement learning in Minecraft
- 5 Puzzles, 3 Weeks. Can you solve them all?
- Multi-agent RL in game environment. Train your Derklings, creatures with a neural network brain, to fight for you!
- Predicting smell of molecular compounds
- Find all the aircraft!
- 5 Problems 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles, 3 Weeks | Can you solve them all?
- Grouping/Sorting players into their respective teams
- 5 Problems 15 Days. Can you solve it all?
- 5 Problems 15 Days. Can you solve it all?
- Predict Heart Disease
- 5 Problems 3 Weeks. Can you solve them all?
- Remove Smoke from Image
- Classify Rotation of F1 Cars
- Can you classify Research Papers into different categories?
- Can you dock a spacecraft to ISS?
- Multi-Agent Reinforcement Learning on Trains
- Multi-Class Object Detection on Road Scene Images
- Localization, SLAM, Place Recognition, Visual Navigation, Loop Closure Detection
- Localization, SLAM, Place Recognition
- Detect Mask From Faces
- Identify Words from silent video inputs
- A Challenge on Continual Learning using Real-World Imagery
- Music source separation of an audio signal into separate tracks for vocals, bass, drums, and other
- Amazon KDD Cup 2023
- Amazon KDD Cup 2023
- Make Informed Decisions with Shopping Knowledge
- Generate Videos with Temporal and Semantic Audio Sync
- Improve RAG with Real-World Benchmarks | KDD Cup 2025
Team | Challenge
---|---
powerpuff | AI Blitz X
teamux | NeurIPS 2021 - The NetHack Challenge
tempteam | NeurIPS 2022 IGLU Challenge
testing | Sound Demixing Challenge 2023
grogu | HackAPrompt 2023
apollo11 | MosquitoAlert Challenge 2023
testteam | Commonsense Persona-Grounded Dialogue Challenge 2023
temp-team | Generative Interior Design Challenge 2024
Brick by Brick 2024

Brick by Brick Challenge 2024 - Winner Announcement & Their Solutions
11 days ago
Hello everyone,
The Brick by Brick Challenge 2024 focused on automating sensor metadata labelling in smart buildings using time-series data. It brought together 313+ participants from 40+ countries, resulting in an incredible 1,870+ submissions!
Let's meet the winners and see how they tackled this problem!
MEET THE WINNERS
1st Place: Team yddm
Prize: 5,000 AUD + 2,500 AUD Travel Grant
Members: @mike_q (Chengfeng Qiu), @zoe1113 (Jiahui Zhou), @leo2025vv (Yongfeng Liao)
Approach:
Team yddm developed a hierarchical feature extraction framework to capture the complex temporal dynamics in time-series building data. They transformed the multi-label classification task into a 91-class single-label problem, simplifying the process while maintaining high precision. The team used CatBoost, a gradient-boosting algorithm specifically designed for categorical data, which proved highly effective in managing the intricacies of time-series data. Their approach supported energy-efficient building management and contributed to the standardisation of data formats for practical deployment. The model achieved exceptional performance, winning first place in the Brick by Brick 2024 competition, demonstrating its potential for real-world applications in sustainable building operations.
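To make the label-concatenation idea concrete, here is a minimal sketch (not the team's actual code) in which every observed combination of binary labels becomes one class id and a CatBoost classifier is trained on that single-label target; the data and all hyperparameters below are placeholders.

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))               # placeholder feature matrix
y_multi = rng.integers(0, 2, size=(200, 4))  # placeholder binary multi-label matrix

def multilabel_to_single(y):
    """Map each unique label combination to a single class id (91 combinations in the challenge)."""
    keys = ["".join(map(str, row)) for row in y.astype(int)]
    classes = {k: i for i, k in enumerate(sorted(set(keys)))}
    return np.array([classes[k] for k in keys]), classes

y_single, combo_index = multilabel_to_single(y_multi)

# Gradient-boosted trees on the collapsed single-label target.
model = CatBoostClassifier(iterations=200, depth=6, verbose=0)
model.fit(X, y_single)
print(model.predict(X[:3]).ravel())
```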
Background:
Chengfeng Qiu works at NetEase and is an expert AI researcher, having won top prizes at NIPS, ICDM, and WWW. His expertise includes multimodal data, time-series analysis, and large language models, particularly in risk control algorithm development.
2nd Place: Team xiaobaiLan
Prize: 3,000 AUD + 2,000 AUD Travel Grant
Members: @xiaobaiLan (Meilan Xu), @20N (Zheng Wen), @wangbing106 (Bing Wang)
Approach:
The team applied multi-domain feature engineering, hierarchical probability calibration, and model optimisation to classify 94 sub-classes from irregular time-series data. The raw sensor data was segmented into daily intervals, and 78 features were extracted across statistical, temporal, spectral, and peak domains. The multi-label task was converted into a 91-class problem using label concatenation. A 64th-root probability calibration method was used to preserve label hierarchy and reduce extreme probabilities. Temporal difference features were the most discriminative.
The model, built with XGBoost, achieved macro F1 scores between 0.7366 and 0.7391 in validation. It recorded a macro F1 score of 0.6127 on the public leaderboard and an overall score of 0.544 across both public and private test sets. Prediction remapping (-1 to 0.1, 1 to 0.9) was applied to align with competition metrics. Training and inference were completed within 14 hours. The approach supports scalable, real-world building energy management and enables fine-grained equipment status classification for sustainable operations.
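As an illustration only, the snippet below sketches two of the post-processing ideas mentioned above: the 64th-root probability calibration and the remapping of hard predictions from {-1, 1} to {0.1, 0.9}. The exact formulation the team used is not shown here, so the function names and the clipping step are assumptions.

```python
import numpy as np

def root_calibrate(p, root=64):
    """Nth-root calibration: compresses probabilities toward 1.0 while preserving their order,
    damping the extremely small values mentioned above."""
    return np.power(np.clip(p, 1e-12, 1.0), 1.0 / root)

def remap_hard_predictions(pred):
    """Map hard predictions in {-1, 1} to soft scores {0.1, 0.9} for the competition metric."""
    return np.where(pred == 1, 0.9, 0.1)

probs = np.array([1e-6, 0.01, 0.5, 0.99])
print(root_calibrate(probs))                             # roughly [0.81, 0.93, 0.99, 1.0]
print(remap_hard_predictions(np.array([-1, 1, 1, -1])))  # [0.1 0.9 0.9 0.1]
```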
Background:
Meilan Xu is a database system administrator interested in data mining competitions across diverse industries.
3rd Place (1): Team NaiveBaes
Prize: 1,000 AUD + 1,500 AUD Travel Grant
Members: @joe_zhao7 (Haokai Zhao), @jonasmacken (Jonas Macken), @leocd (Leo Dinendra), @RuiyuanY (Ruiyuan Yang), @xenonhe (Yaqing He)
Approach:
NaiveBaes used feature engineering, hierarchical label structuring, data augmentation, and ensemble learning. They extracted features from statistical, temporal, and spectral domains. The multi-label task was divided into five tiers of multi-class classification to leverage label dependencies, with outputs from higher tiers used as inputs for lower tiers.
Data augmentation expanded the training set by 28 times and the test set by 2 to 3 times. The models, Random Forest and XGBoost, were trained using K-Fold cross-validation. Final predictions were aggregated using soft voting to balance precision and recall.
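For readers unfamiliar with soft voting, here is a small, hypothetical sketch of a Random Forest + XGBoost ensemble evaluated with cross-validated macro F1; the synthetic data and parameters stand in for the team's engineered features and tuned settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for the statistical/temporal/spectral features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
    ],
    voting="soft",  # average predicted class probabilities instead of hard votes
)

# 5-fold cross-validation scored with macro F1, as in the challenge metric.
print(cross_val_score(ensemble, X, y, cv=5, scoring="f1_macro").mean())
```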
Background:
The team consists of postgraduate students from UNSW Sydney with backgrounds in AI, data science, and engineering. They collaborated through the university's AI Study Club.
3rd Place (2): Team bram
Prize: 1,000 AUD + 1,500 AUD Travel Grant
Member: @bram (Bram Steenwinckel)
Approach:
Bram converted the multi-label task into a multiclass classification problem by identifying 91 unique label combinations. He extracted 337 features from statistical, frequency, and temporal domains, including resampling the sensor data at different intervals to capture finer details. These features were reduced to the 40 most influential ones through feature selection techniques. He used an ensemble extra-trees classifier, which builds multiple decision trees with added randomness, to improve both the accuracy and stability of predictions. This well-structured approach not only delivered a strong performance on the competition leaderboard but also provided a clear pathway for integrating automated classification into smarter, energy-efficient building management systems.
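A minimal sketch of the select-then-ensemble idea, assuming scikit-learn's SelectFromModel for picking the top 40 features and an extra-trees classifier on top; the synthetic data and parameter choices are illustrative, not Bram's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 337 engineered features.
X, y = make_classification(n_samples=500, n_features=337, n_informative=40,
                           random_state=0)

# Keep the 40 most influential features, then fit an extremely randomised trees ensemble.
selector = SelectFromModel(
    ExtraTreesClassifier(n_estimators=100, random_state=0),
    threshold=-float("inf"), max_features=40,
)
model = make_pipeline(selector, ExtraTreesClassifier(n_estimators=500, random_state=0))
model.fit(X, y)
print(model.score(X, y))
```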
Background:
Bram is a postdoctoral researcher at IDLab (Ghent University - imec), focusing on hybrid AI and the Semantic Web. He applies academic research to practical use cases in healthcare and predictive maintenance and is a Kaggle Competition Master.
3rd Place (3): Team chan_jun_hao
Prize: 1,000 AUD + 1,500 AUD Travel Grant
Member: @chan_jun_hao (Chan Jun Hao)
Approach:
Chan Jun Hao developed a multi-label time-series classification framework for building metadata labelling, with an emphasis on ratio- and correlation-based features that generalise well across buildings. His strategy involved:
- Minimal data cleaning, removing only abrupt drops to zero beyond a Z-score threshold, and treating missing values as informative signals by retaining them to reflect offline periods (a brief sketch of this step follows the list);
- Custom label-tuned class weights, using a tailored scaling function that slightly boosts minority labels without overly penalising major ones, to balance rare vs. common labels while maintaining both precision and recall;
- Relative feature focus, favouring ratio- and correlation-based features over absolute values to avoid generalisation issues due to varying building baselines. This approach demonstrated strong generalisation across buildings in the dataset.
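As referenced in the first bullet, here is a hypothetical sketch of that minimal-cleaning step, assuming "abrupt drops to zero" means zero readings whose Z-score exceeds a chosen threshold; the helper name and threshold value are illustrative only.

```python
import numpy as np
import pandas as pd

def clean_series(s: pd.Series, z_thresh: float = 3.0) -> pd.Series:
    """Mask zero readings that deviate more than z_thresh standard deviations from the mean.
    Existing NaNs are deliberately kept, since offline periods are treated as informative."""
    z = (s - s.mean()) / s.std()
    abrupt_zero = (s == 0) & (z.abs() > z_thresh)
    return s.mask(abrupt_zero)

sensor = pd.Series([21.3, 21.1, 21.2, 21.4, 21.0, 21.3, 0.0, 21.2, np.nan, 21.1])
print(clean_series(sensor, z_thresh=2.5))  # the lone 0.0 is masked; the NaN is retained
```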
Background:
With a background in architecture and current work in AI, Chan Jun Hao brought a unique blend of domain expertise to the challenge, approaching it with both technical depth and real-world insight.
A big thank you to all participants for your incredible efforts in the **Brick by Brick Challenge 2024**!
Have feedback? Let us know: https://forms.gle/K7qh3pt1WBDyW5sEA
Join a new exciting challenge: AIcrowd | Challenges

Feedback & Suggestions
19 days ago
Hello @thomas_tartiere,
We are currently gathering the necessary information and will share a snippet about the winner and their approach in the official announcement post and email.

The Web Conf Announcement
About 1 month ago
Hi @kaushik_gopalan,
Relaying the response from the organisers: They have received your submission on EasyChair and are currently reviewing it as a priority. If accepted, full registration will be required, even for remote participation. If your team is not among the top five, the registration fee will not be covered by the travel grant.
The organisers are currently preparing submissions for the camera-ready deadline and will share the review and notification with you soon. They extend their thanks and appreciation for your submission.

The Web Conf Announcement
About 1 month ago
Hello, sorry, it's too late to make a submission now. Today is the last day to submit the camera-ready version of the paper.

The Web Conf Announcement
About 1 month ago
Hello @chan_jun_hao,
Yes, winning teams can present virtually at the conference. However, in the case of a team, only one person can register and present the paper. Please note that a full conference registration is still required. I hope this answers your query.

Tentative Winner Announcement
About 2 months ago
Hi @leocd,
The final announcement is expected in the first week of March.
Based on our experience, the rankings will remain unchanged unless there is evidence of malpractice, misuse of the dataset, or other violations. As you mentioned, we are currently validating the submission code and methods and checking for any use of external datasets.

Tentative Winner Announcement
About 2 months ago
Hello everyone,
Over the past few months, the Brick by Brick Challenge 2024 brought together 313+ participants from 40+ countries, resulting in an incredible 1,870+ submissions!
A huge thank you for participating in this challenge!
In the meantime, we're excited to share the tentative winners of the challenge. Please note that this is not an official announcement; the results are subject to change following a due diligence review of the submissions.
Rank | Team Name | Members
---|---|---
1 | yddm | @mike_q, @zoe1113, @leo2025vv
2 | xiaobaiLan | @xiaobaiLan, @20N, @wangbing106
3 | NaiveBaes | @leocd, @jonasmacken, @joe_zhao7, @xenonhe, @RuiyuanY
3 | bram | @bram
3 | chan_jun_hao | @chan_jun_hao
We'll be sharing more details about the winning teams and their solutions very soon, so stay tuned! Make sure you're subscribed to AIcrowd emails and following us on Twitter and LinkedIn to be the first to know.
Got 2 minutes? Help us improve our challenges by filling out our feedback survey: https://forms.gle/K7qh3pt1WBDyW5sEA

The Web Conf Announcement
About 2 months ago
Hello @leocd,
The travel grant covers one registration only - yes, only for one presenter, essentially for the paper to be registered. If they're remote, I guess they can all dial in together via the Zoom link. If they attend in person, they will receive the whole travel grant and can decide how to use or share it.

The Web Conf Announcement
About 2 months ago
Hello @kaushik_gopalan,
Yes, the session will be hybrid. The paper can be presented online via Zoom. However, it still needs to be registered at the pre-conference registration rate (full ACM member/non-member rate). The virtual rate is only valid for attendance, not for paper registration.

The Web Conf Announcement
About 2 months ago
- Travel Grant: For the top 5 winners to claim the travel grants, you need to submit the documentation of your solutions to The Web Conf 2025 and present it in person.
- Every submission must be accompanied by at least one full (non-student) pre-conference registration. The early bird non-ACM member price is AUD 950, which is much less than the amount covered by the travel grant. Registration | International World Wide Web Conference 2025 (WWW2025 | The Web Conf 2025)
- If you are among the top 5 winners but cannot attend the conference, you can still submit. A partial amount of the travel grant will be granted to cover the registration cost.
- Everyone whose best submission is above the baselines is eligible and encouraged to submit to The Web Conf as well.
The submission link: Log in to EasyChair for WWW25 - Companion.
To submit your entry, please follow these steps:
- Go to the provided link: Log in to EasyChair for WWW25 - Companion.
- Click on "New Submission."
- Select "www25-Competition: Brick-by-Brick" from the dropdown menu.
- Follow the prompts to complete your submission.
Page limit: Maximum of 5 pages, including references.

Challenge Announcement
About 2 months ago
Let me check with the organisers to confirm if they have received the request and can access the repository. Just to be extra sure, could you resend the invite? Thanks!

Challenge Announcement
About 2 months agoPlease Read
-
Submission Requirements
- All valid submissions must include accompanying documentation and code, which must be made publicly available under the Apache 2.0 license as per the competition rules. These will be hosted on the competition GitHub page: Competition GitHub.
- Additionally, participants are strongly encouraged to publish their documentation on ArXiv. More information about ArXiv can be found here, and submission guidelines are available here.
-
Workshop at The Web Conference 2025
- We encourage all participants to submit their work to and attend our workshop at The Web Conference 2025. Participation is at their own cost, except for the top 5 participants, who will receive travel grants.
-
βLessons Learnedβ Paper
- We plan to write a summary paper highlighting key findings and innovations from this competition. This paper will cite participant documentation if published in the workshop or on ArXiv.
Submission Documentation Requirement
As part of the challenge, we request all participants to document their submissions and methods with a formal write-up.
Winners are required to submit their documentation to The Web Conference 2025. However, all participants are strongly encouraged to share their approaches with the broader community.
To ensure consistency, please use the provided template for your submission documentation.
- Upload your completed PDF to the Google Form, including your name and AIcrowd username, no later than 3rd February 2025, 23:05 AoE.
How to Submit
Please submit your documentation and solution code via the following Google Form: https://forms.gle/WsM8d2XBFzXmpdkXA.
24-Hour Grace Period for Documentation & Repo Adjustments
Q: Can we re-upload and adjust our repository and documentation during the grace period using the same Google link?
Yes, you can update your submission via the same Google link, as it allows you to edit your response.
Q: Do we need to submit something before the deadline, even if it's just an empty repository and documentation?
Before the deadline, participants must submit at least an abstract (a short paragraph) describing, at a high level, the approach being used.
Important: You will be disqualified from prize eligibility if you fail to submit documentation and do not open-source your code after the grace period.
How to Grant Access to Your GitHub/GitLab Repository
Please grant repository access to the following emails:
- arian.prabowo@unsw.edu.au
- sneha@aicrowd.com
Additional Clarifications on Submission Expectations
To ensure fair participation and transparency:
- By the challenge deadline, you should submit at least a brief abstract outlining your approach.
- The grace period is meant only for refining documentation and repository details, not for making a first submission.
- If you fail to submit any documentation and do not open-source your code within the grace period, you will not be eligible for prize money.
If you have any further questions, feel free to ask. Good luck with your submissions!

Important: Submission Documentation Guidelines
About 2 months ago
1. Clarification on the Challenge End Date
The challenge officially ends on 3rd February 2025, 23:05 AoE. This means that the AIcrowd website will stop accepting submissions after 4th February 2025, 11:05 UTC. The deadline is now reflected on the challenge page.
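For reference, the stated conversion checks out (AoE is UTC-12), for example with Python's standard library:

```python
from datetime import datetime, timedelta, timezone

aoe = timezone(timedelta(hours=-12))                 # Anywhere on Earth (AoE) is UTC-12
deadline = datetime(2025, 2, 3, 23, 5, tzinfo=aoe)   # 3rd February 2025, 23:05 AoE
print(deadline.astimezone(timezone.utc))             # 2025-02-04 11:05:00+00:00
```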
2. 24-Hour Grace Period for Documentation & Repo Adjustments
Q: Can we re-upload and adjust our repository and documentation during the grace period using the same Google link?
Yes, you can update your submission via the same Google link, as it allows you to edit your response.
Q: Do we need to submit something before the deadline, even if it's just an empty repository and documentation?
Before the deadline, participants are required to submit at least an abstract (a short paragraph) describing, at a high level, the approach being used.
Important: You will be disqualified from prize eligibility if you do not submit any documentation and fail to open-source your code after the grace period.
3. How to Grant Access to Your GitHub/GitLab Repository
Please grant repository access to the following emails:
- arian.prabowo@unsw.edu.au
- sneha@aicrowd.com
4. Can We Use GitHub Instead of GitLab?
Our standard practice is to use GitLab for submissions.
Additional Clarification on Submission Expectations
To ensure fair participation and transparency:
- By the challenge deadline, you should submit at least a brief abstract outlining your approach.
- The grace period is meant only for refining documentation and repository details, not for making a first submission.
- If you fail to submit any documentation and do not open-source your code within the grace period, you will not be eligible for prize money.
Hope this clears up any confusion! Let us know if you have further questions.
All the best!
Sounding Video Generation (SVG) Challenge 2024

Potential bug with the evaluation metrics for Track 1
14 days ago
We are reviewing it and will get back to you.

Feedback & Suggestions
26 days ago
Here's a response from the organiser:
- The answer is av_align. We'd like to refer you to the following challenge rules for details.
AIcrowd | Sounding Video Generation (SVG) Challenge 2024 | Challenge_rules
The following explanation is an excerpt from the challenge rules page.
We use the AV-Align as the main metric for ranking and the CAVP score as the secondary metric to break ties. The other four metrics are used to exclude entries that provide low-quality data from the ranking. Specifically, if the score of the submitted model does not exceed the threshold value in any one of these four metrics, the model is excluded from the ranking. The threshold is set as follows: 2.0 for FAD, 900 for FVD, 0.25 for LanguageBind text-audio score, and 0.12 for LanguageBind text-video score.
- We might have misunderstood your question, but let us clarify what we meant. As explained in the challenge rules,
The top entries in the final leaderboard will be assessed by human evaluation, and the award winning teams will be selected based only on the results of this subjective evaluation.
We plan to assess the top 10 models chosen according to av_align. In the 2nd phase, you can submit multiple systems. However, even if you occupy the leaderboard from 1st to 10th place in the end, we will assess only your top-1 model and pick out 9 models from other participants, so that we can reward as many participants as possible.
Hope this answers your questions.
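To make the quoted rule concrete, here is a hypothetical sketch of the exclude-then-rank logic. The direction of each gate (FAD and FVD treated as lower-is-better, the two LanguageBind scores as higher-is-better) and all function and field names are assumptions for illustration, not the organisers' code.

```python
from typing import Dict, List

# Quality gates from the challenge rules; the better/worse direction per metric is assumed here.
THRESHOLDS = {
    "fad": (2.0, "upper"),             # Fréchet Audio Distance: assumed lower is better
    "fvd": (900.0, "upper"),           # Fréchet Video Distance: assumed lower is better
    "lb_text_audio": (0.25, "lower"),  # LanguageBind text-audio score: assumed higher is better
    "lb_text_video": (0.12, "lower"),  # LanguageBind text-video score: assumed higher is better
}

def passes_gates(entry: Dict[str, float]) -> bool:
    """An entry is excluded if it fails any one of the four quality thresholds."""
    for metric, (limit, kind) in THRESHOLDS.items():
        if kind == "upper" and entry[metric] > limit:
            return False
        if kind == "lower" and entry[metric] < limit:
            return False
    return True

def rank_entries(entries: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Rank surviving entries by AV-Align (primary), breaking ties with the CAVP score."""
    kept = [e for e in entries if passes_gates(e)]
    return sorted(kept, key=lambda e: (e["av_align"], e["cavp"]), reverse=True)

leaderboard = rank_entries([
    {"name": "a", "fad": 1.5, "fvd": 800, "lb_text_audio": 0.30, "lb_text_video": 0.20,
     "av_align": 0.42, "cavp": 0.61},
    {"name": "b", "fad": 3.0, "fvd": 700, "lb_text_audio": 0.30, "lb_text_video": 0.20,
     "av_align": 0.50, "cavp": 0.70},  # excluded: FAD is above the assumed 2.0 gate
])
print([e["name"] for e in leaderboard])  # ['a']
```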

Feedback & Suggestions
27 days ago
Hello, can you please specify the track for your first query?
Edit: As for your second query, at the end of the competition (subjective evaluation), the best system for each participant is used, even if three systems from the same participant are ranked 1st-3rd.

Can not submit to AIcrowd on Gitlab
28 days ago
Can you please share the submission ID and the track where you are facing this issue?
Thanks
Temporal Alignment Track

Is there a timeout limit for submission?
17 days ago
There is a per-sample prediction timeout of 120 seconds.

Stucking in Submission
28 days ago
Hi Obito,
Could you please share the submission ID? I can see that your submissions are being routinely graded for the Temporal Track.
Feedback & Suggestions
11 days ago
Hello Thomas,
You can find details on their approach here: Brick by Brick Challenge 2024 - Winner Announcement & Their Solutions