Challenge Unboxed📦: AI Blitz⚡9
👋🏼Welcome readers to Challenge Unboxed: AI Blitz⚡9!
Through this series, we dive deeper into some of the heuristics and methods used by winners of the challenge in the spotlight. It seeks to introduce you to new tools and techniques that will help you expand your Machine Learning horizons.
In this segment, we're shining the spotlight on AI Blitz⚡9: Hello NLP. The platform's first all-NLP Blitz challenge nudged participants to bring in some fresh approaches. Keep reading to learn more.
About the Challenge
The triumph of the human race can be largely credited to our ability to communicate and the connections we form because of it. The same holds for the success of AI. Through AI Blitz⚡9: Hello NLP, our first all-NLP Blitz, we wanted to explore and expand this AI territory.
The challenge consisted of five meticulously designed NLP puzzles, aimed at taking participants on a learning journey through some of the essential problems faced in the industry.
Let us look at the puzzles that the participants tackled like champions!
- Emotion Detection
- Research Paper Classification
- De-shuffling Text
- NLP Feature Engineering
- Sound Prediction
Winning Heuristics
The challenge saw some very interesting approaches from participants. In this blog, we break down the approach of Community Contributor winner Falak Shah for the Sound Prediction puzzle. We also explore the solution of the other Community Contributor winner, Sean Ben Hur, for the NLP Feature Engineering puzzle.
📣Autocorrect with DeepSpeech for Sound Prediction
Speech-to-text is a vital technology that benefits the differently-abled, and because of its utility this assistive tool is constantly being improved and perfected. Blitz 9 took a different approach to the classic speech-to-text problem in its Sound Prediction puzzle: given a sound clip as input, participants had to output only the spoken numbers, as text.
Keeping the Starter Kit for the puzzle in mind, Falak Shah came up with an interesting solution that placed him high on the leaderboard. The starter kit introduced a baseline that used Mozilla's DeepSpeech, a speech-to-text engine built on a model from a paper out of Andrew Ng's Baidu Research lab.
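For context, a baseline transcription with the deepspeech Python package looks roughly like the sketch below. The model and scorer filenames here are assumptions; the actual starter kit would point at its own downloaded files.

```python
import wave

import numpy as np
from deepspeech import Model

# Filenames are assumptions; substitute whatever the starter kit downloads.
model = Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, 16-bit mono PCM audio.
with wave.open("clip.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# Raw prediction; near-misses like "tree" instead of "three" can appear.
print(model.stt(audio))
```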
Falak's solution introduced a simple trick: running autocorrect on the text predictions produced by the DeepSpeech model. Autocorrect provides a cheap, effective way of snapping near-miss predicted words to the nearest valid number word. This automatically increases the number of correctly predicted numbers, giving Falak a better shot at a higher leaderboard score.
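In spirit, the trick looks something like this toy sketch, which snaps each predicted word to the closest-spelled number word using Python's standard difflib. This is only an illustration of the idea, not Falak's actual code; the word list and cutoff are arbitrary choices.

```python
import difflib

NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]

def snap_to_numbers(prediction: str) -> str:
    """Replace each word with its closest-spelled number word, if any."""
    corrected = []
    for word in prediction.split():
        match = difflib.get_close_matches(word, NUMBER_WORDS, n=1, cutoff=0.6)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(snap_to_numbers("seven tree nine"))  # -> "seven three nine"
```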
For the autocorrect step, Falak utilizes the Context Spell Checker provided by John Snow Labs' Spark NLP. Context Spell Checker is a module that not only generates likely candidates for a word's correct spelling but also takes the word's context into account by looking at the preceding and following words.
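Based on Spark NLP's documented usage, wiring the pretrained checker into a pipeline might look roughly like this; the pretrained model name `spellcheck_dl` and the toy input are assumptions, not taken from Falak's submission.

```python
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, ContextSpellCheckerModel

spark = sparknlp.start()

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

# Pretrained contextual spell checker; "spellcheck_dl" is the assumed
# English model name from the John Snow Labs models hub.
checker = (ContextSpellCheckerModel.pretrained("spellcheck_dl", "en")
           .setInputCols(["token"])
           .setOutputCol("corrected"))

pipeline = Pipeline(stages=[document, tokenizer, checker])

data = spark.createDataFrame([["seven tree nine"]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.selectExpr("corrected.result").show(truncate=False)
```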
The solution helped Falak climb to 4th place in the puzzle, with only a handful of lines added to the starter kit! Can you think of a similar way to build upon our Starter Kits?
TF-IDF for Feature Engineering
Feature Engineering is an integral part of training NLP models. The process involves using domain knowledge of the data to identify relevant features and craft stronger features from them that work in the model's favour. This stems from the hypothesis that a data-driven approach will fetch better results than a model-driven one.
Sean Ben Hur, in his Community Contribution winning solution, shows us some incredible ways of tackling feature engineering. He uses TF-IDF as the vectorizer instead of one-hot encoding. The vectorizer plays an integral role in NLP: it maps words or phrases from the vocabulary to corresponding vectors of real numbers, which are then used for tasks such as word prediction and measuring word similarity and semantics.
TF-IDF, or Term Frequency-Inverse Document Frequency, is a statistical assessment of a word's relevance to a document in a collection of documents. It is calculated by multiplying the number of times a word appears in a document (term frequency) by the word's inverse document frequency across the set of documents. You can check out this video to learn more about inverse document frequency.
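As a rough illustration (not Sean's exact notebook), here is how a TF-IDF vectorizer typically slots into a text-classification pipeline with scikit-learn. The toy corpus, labels, and classifier are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus and labels; the actual challenge data differs.
texts = [
    "attention layers improve neural translation quality",
    "convolutional networks excel at image recognition",
    "transformers dominate modern language modelling",
    "residual connections help train deep vision models",
]
labels = [1, 0, 1, 0]  # e.g. 1 = NLP paper, 0 = vision paper

# TfidfVectorizer weights each term by tf * idf; scikit-learn's smoothed
# default is idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["bert embeddings for language tasks"]))
```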
The vectorizer proved successful, landing among the top 3 scores on the Feature Engineering leaderboard with an F1 score of 0.803! The solution made Sean's notebook stand out. See the full implementation over here.
Feeling motivated to put these methods into practice? How about checking out the other beginner-friendly NLP challenges available on our platform?
Let us know what you want to read next in the comments, or tweet it to @AIcrowdHQ!🔥