This is our wrap-up post for the SIIM-ISIC Melanoma Classification Kaggle competition. The competition ran for 3 months and drew 3,000+ teams competing for a prize pool of $30,000.
In this competition, we (Kharpann) were responsible for building a Machine Learning algorithm that could aid dermatologists in predicting whether or not a patient’s mole was indicative of melanoma (skin cancer). Our efforts were aimed at diagnosing early-stage cancer by building a human-aiding tool trained on thousands of past patients’ skin-mole images along with their diagnosis reports.
And, yes, you read that right – we lost our chance to win the $30,000 by a difference of 0.005. Ouch!
| Private Leaderboard Score of Winner (Rank: 1) | Private Leaderboard Score of Kharpann (Rank: 32) |
| --- | --- |
| 0.949 | 0.944 |
But knowing that a ‘leaderboard shakeup’ was imminent, and how much such a score difference means in real life, it was quite a relief to even get a silver medal for the competition.
What is a ‘Kaggle leaderboard shakeup’?
Kaggle (acquired by Google) is a common place for data science aspirants and experts to compete with each other. The platform hosts multiple Machine Learning competitions, and whoever scores highest on a competition’s leaderboard is crowned its winner.
However, there is a catch: Kaggle maintains a public and a private leaderboard for each competition. The two leaderboards score a participating team’s Machine Learning model on different subsets of the test data, and the public leaderboard uses only a fraction of the data reserved for the private leaderboard.
During the competition, participants can only view the public leaderboard, and most tune their model’s performance by trying to rank higher on it. Once the competition ends, the private leaderboard is revealed, showing the actual ranking of the participants. The competition winner is decided by the private leaderboard ranking.
A ‘leaderboard shakeup’ on Kaggle happens when the top teams on the public leaderboard are nowhere to be found near the top of the private leaderboard: their models did great on the public test data but performed poorly on the private test data. As you can guess, this is quite hard to digest for the participants.
How bad was the leaderboard shakeup on the SIIM-ISIC Melanoma Classification competition?
Although all of us saw it coming, the shakeup was quite bad!
The 1st-ranked participant on the public leaderboard ended up at rank 279 on the private leaderboard. Conversely, the eventual 1st-ranked participant on the private leaderboard had been at rank 885 on the public leaderboard. That is a huge swing, and an entirely new article would be needed to explain how it happened.
To learn more, you can read this discussion about how the 1st-ranked participant on the public leaderboard knew he was going to get bad results on the private leaderboard.
What about us? We were at rank 132 on the public leaderboard and we ended up at rank 32 on the private leaderboard. That was quite a positive leap for us.
Are we bummed out about not winning due to the small difference?
Not at all. Although the difference in model score between the winners and us was just 0.005, it also meant that we were misdiagnosing somewhere between 1 and 10 more patients than the winners, putting those patients at risk of losing their lives.
Furthermore, we got a silver medal, and we’re more than happy with the result considering that this was our first full-length Kaggle competition. We’ve participated on Kaggle in the past, but we always joined halfway through a competition. We also had the opportunity to collaborate with a talented data scientist who had just graduated from IIT Bhubaneswar, and it was a learning experience for all of us.
A short summary of our approach
For anyone who is interested, here’s a short summary of the approach we took to build our Machine Learning model for this competition:
- We collected images from past competitions as well as the present one, creating a dataset of 150+ GB. All of these images were converted into TFRecords (see the serialization sketch after this list).
- We used image augmentation methods such as shearing, rotation, and saturation jitter to diversify the training set (sketched below).
- We used an ensemble of EfficientNet models as well as metadata models, trained for hours on TPUs (see the model sketch below).
- For the final submission, we relied on our diverse ensemble of models rather than the single model that scored highest on the public leaderboard (see the averaging sketch below).
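To make the TFRecord step concrete, here is a minimal sketch of how an image and its label can be serialized into a `tf.train.Example`. The feature names (`image`, `image_name`, `target`) and the sharding helper are illustrative assumptions, not our exact pipeline:

```python
import tensorflow as tf

def make_example(jpeg_bytes, image_name, target):
    # Pack one JPEG-encoded image, its ID, and its binary label
    # into a tf.train.Example for TFRecord storage.
    feature = {
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        "image_name": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_name.encode()])),
        "target": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[target])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

def write_shard(records, path):
    # records: iterable of (jpeg_bytes, image_name, target) tuples.
    # Writing many small shards rather than one huge file keeps a
    # TPU input pipeline fed in parallel.
    with tf.io.TFRecordWriter(path) as writer:
        for jpeg_bytes, name, target in records:
            writer.write(
                make_example(jpeg_bytes, name, target).SerializeToString())
```

Storing the dataset this way lets `tf.data` stream the 150+ GB of images straight from disk (or a cloud bucket) without loading everything into memory.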
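For the augmentation step, here is a hedged sketch using `tf.image` ops. The jitter ranges are illustrative rather than our tuned values, and arbitrary-angle rotation or shearing would need an extra affine transform, which we omit here:

```python
import tensorflow as tf

def augment(image):
    # Assumes a float image scaled to [0, 1].
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_saturation(image, 0.8, 1.2)  # saturation jitter
    image = tf.image.random_brightness(image, 0.1)
    # Random 90-degree rotation; k is drawn from {0, 1, 2, 3}.
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)
    image = tf.image.rot90(image, k=k)
    return tf.clip_by_value(image, 0.0, 1.0)
```

Mapped over the training `tf.data.Dataset`, this makes every epoch see a slightly different version of each mole image.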
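One member of the CNN ensemble might look like the sketch below: an EfficientNet backbone with a sigmoid head for the binary melanoma target. The B3 variant, head design, and optimizer are assumptions for illustration (the metadata models are not shown), and the TPU setup at the end is standard TF 2.x boilerplate:

```python
import tensorflow as tf

def build_model(img_size=256):
    # EfficientNetB3 backbone (variant assumed) with a binary head.
    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights="imagenet",
        input_shape=(img_size, img_size, 3))
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# To train on a TPU, the model is built inside a TPUStrategy scope:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = build_model()
```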
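Finally, the ensembling itself can be as simple as averaging each model’s predicted probabilities; any per-model weighting is a tuning choice we don’t reproduce here:

```python
import numpy as np

def ensemble_predictions(prob_arrays):
    # prob_arrays: list of 1-D arrays, one per model, each holding
    # P(melanoma) for every test image. An unweighted mean is a
    # robust default when the members are diverse.
    return np.mean(np.stack(prob_arrays, axis=0), axis=0)
```

Averaging over diverse members is part of why we trusted the ensemble over the single best public-leaderboard model for the final submission.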
Some parting words
This competition has been a true indicator of how powerful Machine Learning algorithms can be in saving real lives. We also thank all the participating Kaggle members for being such a supportive and sharing community. We now have a model that diagnoses skin cancer accurately and reliably, scoring 0.944 on the private test set.
Do you want to learn Python, Data Science, and Machine Learning while getting certified? Here are some best-selling Datacamp courses that we recommend you enroll in:
- Introduction to Python (Free Course) - 1,000,000+ students already enrolled!
- Introduction to Data Science in Python- 400,000+ students already enrolled!
- Introduction to TensorFlow for Deep Learning with Python - 90,000+ students already enrolled!
- Data Science and Machine Learning Bootcamp with R - 70,000+ students already enrolled!