2 Followers · 0 Following
santiactis

Location: AR

Badges: 5 · 3 · 1

Activity

[Contribution heatmap (Nov–Nov) omitted]

Ratings Progression

[Chart omitted]

Challenge Categories

[Chart omitted]

Challenges Entered

What data should you label to get the most value for your money?

Latest submissions: graded 179037, graded 179028, graded 179027

Latest submissions

No submissions made in this challenge.

Machine Learning for detection of early onset of Alzheimers

Latest submissions: graded 140406, graded 140404, graded 140400

3D Seismic Image Interpretation by Machine Learning

Latest submissions: graded 93328, graded 90382, graded 89726

Predicting smell of molecular compounds

Latest submissions

No submissions made in this challenge.

5 PROBLEMS 3 WEEKS. CAN YOU SOLVE THEM ALL?

Latest submissions: graded 93328, graded 90382, graded 89726
Participant ratings: roderick_perez (0), oduguwa_damilola (0)

Teams: santiactis has not joined any teams yet.

Data Purchasing Challenge 2022

[Announcement] Leaderboard Winners

Over 2 years ago

Congratulations, everyone! Same as @tfriedel, I'm really curious about the solutions; this challenge was quite unpredictable!

[Utils] Final Stage model compilation

Over 2 years ago

Hi all! I have compiled all 5 models from the final stage of the challenge. You can paste the code chunks presented in the notebook into the corresponding .py files and evaluate all pipelines in one run.

Hope you find it useful for testing your strategies. Please, let me know if you have any comments or doubts, and don’t forget to leave a like :heart: !

[Discussion] Experiences so far on Phase 2?

Over 2 years ago

Hi all!

How have your experiences been with the second part of the challenge?

On my side, the highlight has been fighting the standard deviation of the results: I have seen F1-score deviations of up to 0.04 for identical experiments.

Some approaches I have taken throughout the challenge have included:

  • Variations of heuristic approaches from the Baseline: no big gains beyond the Baseline's maximum value; most of the experiments involved changing the percentage of budget to be used.
  • Heuristic approaches from the Baseline + my own heuristics: most of these took into account co-occurrence between labels and the weights/counts of each label in the training set.
  • Training loss value for each image + image embeddings + PCA: this sounded promising at first, but local results didn't add much value; still doing some further testing.
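The third idea (per-image training loss + embeddings + PCA) can be sketched roughly as follows. This is a toy numpy version under my own assumptions: the 0.1 diversity weight, the random data, and the function names are purely illustrative, not the strategy actually used in the challenge.

```python
import numpy as np

def pca_reduce(embeddings, n_components=2):
    """Project embeddings onto their top principal components via SVD."""
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def rank_for_labelling(losses, embeddings, n_components=2):
    """Rank unlabelled images: high training loss first, with PCA-reduced
    embeddings used only as a crude diversity signal (toy heuristic)."""
    reduced = pca_reduce(embeddings, n_components)
    # Distance from the embedding centroid as a rough diversity proxy.
    diversity = np.linalg.norm(reduced - reduced.mean(axis=0), axis=1)
    score = losses + 0.1 * diversity  # weighting is illustrative
    return np.argsort(-score)  # highest score first

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))   # hypothetical image embeddings
loss = rng.random(100)             # hypothetical per-image training losses
order = rank_for_labelling(loss, emb)
```

The returned `order` is simply a priority list over the unlabelled pool; the budget then decides how far down the list you buy labels.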

I have also noticed that one could lean heavily on golden-seed hunting, but with all the measures being taken, it doesn't look like it makes much sense to follow that path.

Have you been focusing mainly on heuristic approaches + active learning, or did you follow some interesting paths in data valuation, reinforcement learning, or other novel approaches?

Looking forward to hearing about your experiences!

Baseline Released :gift:

Over 2 years ago

This is great! Going to try plugging in some of my ideas (just when I was about to post my low tier baseline :sweat_smile:).

Image Similarity using Embeddings & Co-occurrence

Over 2 years ago

Hi all!

I have shared a new notebook with a co-occurrence analysis over the labels for all the datasets. I have also made a brief analysis of image similarity using embeddings and PCA.

Have you used any of these ideas? Did you find them useful for your final score?
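The label co-occurrence analysis mentioned above boils down to a single matrix product over the multi-label matrix; the toy data below is illustrative, not data from the challenge.

```python
import numpy as np

# Toy multi-label matrix: rows = images, columns = labels (1 = present).
Y = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
])

# Co-occurrence matrix: entry (i, j) counts images carrying both label i
# and label j; the diagonal gives per-label counts.
cooc = Y.T @ Y
```

From `cooc` you can read off, e.g., that labels 0 and 1 appear together on two images while labels 0 and 2 never co-occur, which is the kind of structure a purchase heuristic can exploit.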

[Resources] Time To Focus On Purchase Phase Now

Over 2 years ago

Most of the methods I've been looking to implement rely on probabilities (great resources by @leocd to get going if you haven't implemented any, btw).

But the other day, looking at the raw images, I got the sense that many of them look very alike. Maybe an image-similarity detection technique would help avoid labelling two (or more) images that score highly for selection but are actually almost the same image.

Still far from reaching that level of implementation, but maybe someone finds it doable/useful.
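One minimal version of that near-duplicate filter, assuming per-image embedding vectors are available, is a greedy cosine-similarity pass over the candidates in priority order. The 0.95 threshold, the tiny 2-D embeddings, and the function name are all my own illustrative choices.

```python
import numpy as np

def filter_near_duplicates(embeddings, candidate_order, threshold=0.95):
    """Walk candidates in priority order, skipping any image whose
    embedding is too similar (cosine) to one already selected."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected = []
    for idx in candidate_order:
        if all(normed[idx] @ normed[j] < threshold for j in selected):
            selected.append(idx)
    return selected

emb = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [1.0, 0.001],   # almost identical to image 0
    [-1.0, 0.0],
])
picked = filter_near_duplicates(emb, candidate_order=[0, 1, 2, 3, 4])
```

Image 3 is dropped because it duplicates image 0, so the label budget is not spent twice on essentially the same picture.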

Brainstorming On Augmentations

Over 2 years ago

I was thinking about option 3 today: I have always been training on top of the already-trained model while adding more probability to the random augmentations. I will try re-training from the pre-trained weights and see how it goes.

Brainstorming On Augmentations

Over 2 years ago

Not sure I understood: do you mean you are directly feeding pre-processed images (i.e. enhanced images), or the other way around?

Brainstorming On Augmentations

Over 2 years ago

NNs tend to learn most non-linear transformations, so generally (with the right amount of epochs and data) they should be able to learn most of the transformations used in image processing. But I agree that, from an empirical point of view, one would expect things like sharpening to help the model detect scratches more easily.
Another thing to note is that most of us are using pre-trained models, so there is surely something to be said about how much that changes the effect of the kind of pre-processing we are talking about.

Brainstorming On Augmentations

Over 2 years ago

Yes, I think there might be some added value in doing some pre-processing. I also remember @sergeytsimfer giving a detailed explanation of why most seismic attributes (basically image pre-processing) added no value, because the NN should be able to learn that kind of transformation from the images alone. So I'd prefer to go with the common augmentations and, if the runtime allows, maybe add some pre-processing to test what happens.

Brainstorming On Augmentations

Over 2 years ago

On augmentations I haven't gone beyond the things you described. I tried to think of the universe of possible images for this kind of problem and, probably the same as you, concluded that things like Crop don't seem to be the best approach.

I haven't been working with changing the colors of the images yet (say RandomContrast, RandomBrightness). Do you think they add to the model's generalization? Couldn't it be that the channels (i.e. colors) add information for the model? (Took the title of the thread very seriously here, though, lol.)
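For reference, a RandomBrightness/RandomContrast-style transform boils down to something like the following numpy sketch; the ranges and the helper name are hypothetical illustrations, not any specific library's API.

```python
import numpy as np

def random_brightness_contrast(img, rng, max_brightness=0.2, max_contrast=0.2):
    """Apply a random brightness shift and contrast scale to a float image
    in [0, 1], clipping back into range (toy augmentation)."""
    brightness = rng.uniform(-max_brightness, max_brightness)
    contrast = 1.0 + rng.uniform(-max_contrast, max_contrast)
    mean = img.mean()
    # Contrast scales deviations from the mean; brightness shifts everything.
    out = (img - mean) * contrast + mean + brightness
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 0.5)  # flat gray test image
aug = random_brightness_contrast(img, rng)
```

Whether such color jitter helps here depends on exactly the question raised above: if the channels carry defect-relevant information, aggressive color changes could hurt rather than regularize.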

[Explainer + Baseline] Baseline with +0.84 on LB

Over 2 years ago

This is great! Thanks for the advice! I was having the issue of not being able to load the pre-trained weights and found a solution, clearly not the most elegant one :grin:

[Explainer + Baseline] Baseline with +0.84 on LB

Almost 3 years ago

Hi all! I have added a new submission for the Community Prize, including a Baseline that achieves +0.84 on the LB.

Hope you find it useful!

Kudos to @gaurav_singhal for his initial Baseline that helped me get through creating a submission.

Learn How To Make Your First Submission [🎥 Tutorial]

Almost 3 years ago

Yesterday I was able to set it up myself; I really wish I had dug further into the discussion to find this :grin:.

Baseline submission

Almost 3 years ago

Great addition! Thanks for this; for those of us who are not too experienced, it is always useful to see other people's implementations.

What did you get so far?

Almost 3 years ago

Same here, still trying to figure out how to plug something without breaking everything!

ADDI Alzheimers Detection Challenge

Community Contribution Prize 🎮 Gaming consoles 🚁 Drones 🥽 VR Headsets

Over 3 years ago

Hello there!

Amazing challenge as usual! A question regarding the submissions: do they need to be created in Python or R? E.g., is it possible to create a Power BI report, Tableau dashboard, etc., that helps with the EDA? Long story short, is it mandatory to also create these Community Challenge submissions in the Aridhia Workbench?

Thanks!

Seismic Facies Identification Challenge

[Explainer]: Seismic Facies Identification Starter Pack

About 4 years ago

Hi there!
The notebook has been updated; its end-to-end solution yields an F1-score of approximately 0.555 and an accuracy of approximately 0.787.

Of course this could be enhanced, but keep in mind that the objective of this notebook is to explain the process of an image segmentation problem and to keep the reader focused on what is being done!

Hope some of you find it useful! Don’t forget to like this topic!

Colab Link

[Explainer] Need extra features? Different input approach? Try Seismic Attributes!

About 4 years ago

I am currently considering implementing coherency with a certain window so that the reflector packages (similarly behaving reflectors) get bunched up into big blank regions on the image. This would clearly miss the sand channels, which have only small developments in the seismic cube, but it should enhance the definition of the big chunks of facies.

I've seen @sergeytsimfer's Colab (amazing work, btw) on how the NN should be able to detect and learn the 'augmentation' provided by seismic attributes, but does this really hold when you're changing the image radically? Does it hold when you are stacking these different 'augmentations' in different channels? I've seen a small improvement when using seismic attributes, but I can't really say it is a confirmed improvement, since I didn't keep working on them (trying to get the model right first). I believe this is a nice discussion to have! I'd love to see your work on coherency, @sergeytsimfer.

@leocd are you using an image approach or an ndarray approach? (Maybe a 'too soon' question, though, lol)
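Stacking attributes into different channels, as discussed, can be sketched as below. The squared-amplitude "energy" and vertical gradient used here are simplistic stand-ins of my own for real seismic attributes like coherency, and the shapes are arbitrary.

```python
import numpy as np

def stack_attribute_channels(section, attributes):
    """Stack a 2-D seismic amplitude section with derived attribute maps
    (each the same H x W shape) into a multi-channel input array."""
    channels = [section] + list(attributes)
    return np.stack(channels, axis=0)  # shape: (C, H, W)

section = np.random.default_rng(0).normal(size=(64, 64))
# Hypothetical attributes: squared amplitude and vertical gradient.
energy = section ** 2
grad = np.gradient(section, axis=0)
x = stack_attribute_channels(section, [energy, grad])
```

The network then sees the attributes exactly like extra color channels, which is why the question of whether it could have learned them from the raw amplitudes alone is the crux of the debate above.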

