ICDM Winner's Interview: 3rd place, Roberto Diaz


This summer, the ICDM 2015 conference sponsored a competition focused on making individual user connections across multiple digital devices. Top teams were invited to submit a paper for presentation at an ICDM workshop.

Roberto Diaz, competing as team "CookieMonster", took 3rd place. In this blog, he shares how he became a Kaggle addict, what he values in a competition, and most importantly, details on his approach to this unique dataset. Congrats to Roberto for achieving his goal of becoming a top 100 Kaggle user!

407 players on 340 teams competed in ICDM 2015: Drawbridge Cross-Device Connections

The Basics

What was your background prior to entering this challenge?

In addition to being a Kaggle addict, I am a researcher at Treelogic working in the machine learning area. In parallel, I work on my PhD thesis at the University Carlos III de Madrid, focused on the parallelization of Kernel Methods.

Roberto's Kaggle profile

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

I didn't have any knowledge about this domain. The topic is quite new and I couldn't find any papers related to this problem, most probably because there are no public datasets.

How did you get started competing on Kaggle?

I started with the first Facebook competition a long time ago. A friend of mine was taking part in the challenge and he encouraged me to compete. That caught my curiosity, so I went to the challenge's forum and read a post describing a solution that scored quite well on the leaderboard, and I thought, "I think I can do better than that". In the end I finished 9th on the leaderboard.

For my second challenge (the EMC Israel Data Science Challenge) I was on a team with my PhD mates. We finished 3rd and received a prize.

After that it was too late for me, I had become an addict.

What made you decide to enter this competition?

The things I value most in a challenge are:

  • A domain unknown to me: it is the best way to learn how to work with a different kind of data.
  • The need to preprocess and extract features from raw data to build the dataset: it gives you the chance to use your intuition and imagination.

This challenge looked very interesting to me because both conditions were met.

Let's Get Technical

What preprocessing and supervised learning methods did you use?

In this challenge we had a list of devices and a list of cookies, and we had to determine which cookies belonged to the person using each device.

The most important part was the feature extraction procedure: the features had to contain information about the relationship between devices and cookies (for example, the number of IP addresses visited by each of them and by both of them).
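To make the idea concrete, here is a minimal sketch of how such relational features could be computed with pandas. The table names, columns, and toy values are my own illustration, not the competition's actual schema.

```python
import pandas as pd

# Hypothetical inputs: one row per (device_id, ip) and per (cookie_id, ip) observation.
device_ip = pd.DataFrame({"device_id": ["d1", "d1", "d2"],
                          "ip": ["ip_a", "ip_b", "ip_a"]})
cookie_ip = pd.DataFrame({"cookie_id": ["c1", "c1", "c2"],
                          "ip": ["ip_a", "ip_c", "ip_b"]})

# Number of distinct IPs seen by each device and by each cookie.
n_ips_device = (device_ip.groupby("device_id")["ip"].nunique()
                .rename("n_ips_device").reset_index())
n_ips_cookie = (cookie_ip.groupby("cookie_id")["ip"].nunique()
                .rename("n_ips_cookie").reset_index())

# Number of IPs shared by a device/cookie pair: join the two tables on the IP.
pairs = device_ip.merge(cookie_ip, on="ip")
n_shared_ips = (pairs.groupby(["device_id", "cookie_id"])["ip"]
                     .nunique()
                     .rename("n_shared_ips")
                     .reset_index())

# Assemble a feature table for every candidate pair that shares at least one IP.
features = (n_shared_ips
            .merge(n_ips_device, on="device_id")
            .merge(n_ips_cookie, on="cookie_id"))
print(features)
```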

Once I had the features, I tried both simple and complex supervised machine learning algorithms (my winning methodology was a semi-supervised learning procedure using gradient boosting + bagging), and the score increased from 0.865 to 0.88.
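As a rough illustration of the gradient boosting + bagging part, the sketch below bags several XGBoost models trained on bootstrap resamples and averages their predicted probabilities. The data, hyperparameters, and number of bags are placeholders, not the values used in the winning solution.

```python
import numpy as np
import xgboost as xgb

# Toy data standing in for the device/cookie pair features (X) and labels (y):
# y = 1 if the candidate cookie really belongs to the device's user, else 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Bagging: train several boosted models on bootstrap resamples and average
# their predicted probabilities.
models = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
    clf = xgb.XGBClassifier(n_estimators=300, max_depth=6,
                            learning_rate=0.1, random_state=seed)
    clf.fit(X[idx], y[idx])
    models.append(clf)

# Averaged logistic outputs serve as the pair score used later for ranking candidates.
scores = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
```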

What was your most important insight into the data?

A key part of the solution was the initial selection of candidates and the post processing:

  • Initial selection: It was not possible to create a training set containing every combination of devices and cookies due to their high number. In order to reduce the initial complexity of the problem and to create an affordable dataset, some basic rules were created to obtain an initial reduced set of candidate cookies for every device. The rules are based on the IP addresses that both device and cookie have in common and how frequent those IPs are in other devices and cookies.
  • Supervised Learning: Every pattern in the training and test set represents a device/candidate cookie pair obtained in the previous step and contains information about the device (Operating System (OS), Country, ...), the cookie (Cookie Browser Version, Cookie Computer OS, ...) and the relation between them (number of IP addresses shared by both device and cookie, number of other cookies with the same handle as the cookie, ...).
  • Post Processing: If the initial selection of candidates did not find a candidate with enough likelihood (logistic output of the classifier), we chose a new set of candidate cookies by selecting every cookie that shares an IP address with the device and scored them using the classifier.

The initial selection of candidates reduces the complexity of the problem, and the post-processing step recovers most of the device/cookie pairs missed by that initial selection strategy.
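The following sketch illustrates that two-stage logic: a rule-based initial candidate set built from shared, not-too-frequent IPs, followed by a wider fallback search when no candidate scores high enough. The data structures, rules, and thresholds here are simplified stand-ins, not the actual ones from the winning solution.

```python
# Illustrative sketch of the two-stage candidate logic described above.

def initial_candidates(device, device_ips, cookie_ips, ip_freq, max_freq=30):
    """Cookies sharing at least one IP with the device, keeping only IPs that
    are not too common (rare IPs are more informative)."""
    rare_ips = {ip for ip in device_ips[device] if ip_freq[ip] <= max_freq}
    return {c for c, ips in cookie_ips.items() if ips & rare_ips}

def predict_cookies(device, device_ips, cookie_ips, ip_freq, score, threshold=0.5):
    """score(device, cookie) stands in for the trained classifier's logistic output."""
    candidates = initial_candidates(device, device_ips, cookie_ips, ip_freq)
    scored = {c: score(device, c) for c in candidates}
    if not scored or max(scored.values()) < threshold:
        # Post-processing fallback: widen the search to every cookie sharing
        # any IP with the device and score those instead.
        candidates = {c for c, ips in cookie_ips.items() if ips & device_ips[device]}
        scored = {c: score(device, c) for c in candidates}
    confident = [c for c, s in scored.items() if s >= threshold]
    return confident or ([max(scored, key=scored.get)] if scored else [])
```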

Were you surprised by any of your findings?

Yes. When I sorted the scores obtained by the classifier for every candidate, I saw that if the first score is high and the second is very low, it is extremely likely that the first cookie belongs to the device. I made use of this information to create a semi-supervised learning procedure, updating some features in the training set and retraining the algorithm with this new information to improve the results.
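A sketch of how such confident first-candidate matches could be selected for the semi-supervised step is shown below; the write-up does not give the exact thresholds, so the ones here are purely illustrative.

```python
def confident_matches(scores_per_device, first_min=0.9, second_max=0.1):
    """Pick (device, cookie) pairs trusted enough to act as pseudo-labels:
    the top candidate scores high while the runner-up scores very low.
    scores_per_device: dict device -> list of (cookie, score)."""
    trusted = {}
    for device, scored in scores_per_device.items():
        ranked = sorted(scored, key=lambda cs: cs[1], reverse=True)
        if ranked and ranked[0][1] > first_min and \
           (len(ranked) == 1 or ranked[1][1] < second_max):
            trusted[device] = ranked[0][0]
    return trusted

# These trusted pairs can then be used to update features in the training set
# and the model retrained, as described above.
```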

This picture shows the F0.5 score and the percentage of devices that fulfill the condition when we match each device with its first candidate cookie whenever the second candidate scores below a threshold:
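For reference, the F0.5 score weights precision more heavily than recall; below is a minimal sketch of how it can be computed for one device's predicted cookie set (the example sets are hypothetical).

```python
def f_beta(true_set, pred_set, beta=0.5):
    """F_beta between the true and predicted cookie sets for one device;
    beta = 0.5 weights precision more than recall."""
    tp = len(true_set & pred_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(true_set)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Example: one correct cookie and one wrong one predicted for a device.
print(f_beta({"c1"}, {"c1", "c2"}))  # ~0.556
```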

Which tools did you use?

The solution was implemented in Python and uses the external software XGBoost.

The Python libraries used were:

How did you spend your time on this competition?

I spent about 20% of my time on feature engineering, 10% on the supervised learning part, and 70% eagerly awaiting the results.

What was the run time for both training and prediction of your winning solution?

Too much: the training procedure takes around 9 hours using 12 cores.

The prediction procedure takes around 30 minutes, since it is necessary to extract some features from the relational database.

Words of Wisdom

What have you taken away from this competition?

I was trying to reach a place in the top 100 of the global user rankings, and I finally got it.

Regarding the challenge:

  • I have learned how useful it is to save intermediate results so you don't have to repeat the full training procedure just to change the last steps of the algorithm (see the sketch after this list).
  • I will present a paper with my approach to the problem at the ICDM 2015 workshop dedicated to the challenge.
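As an example of the first point, a small caching helper along these lines (my own illustration, using joblib; the file name and builder function are placeholders) avoids recomputing expensive intermediate results:

```python
import os
from joblib import dump, load

def cached(path, compute):
    """Load a previously saved intermediate result, or compute and save it.
    Lets you change the last steps of the pipeline without re-running the
    expensive earlier ones."""
    if os.path.exists(path):
        return load(path)
    result = compute()
    dump(result, path)
    return result

# e.g. features = cached("features.joblib", build_feature_table)
```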

Do you have any advice for those just getting started in data science?

"All hope abandon, ye who enter here".

No, seriously: at the beginning you may feel frustrated because it is a difficult area, but you are in the right place if:

  • You love statistics more than other software engineers do.
  • You love software engineering more than other statisticians do.

Bio

Roberto Diaz is a researcher in the R&D department of Treelogic, a Spanish SME focused on Machine Learning, Computer Vision, and Big Data that takes part in many EU Research and Innovation programmes. In parallel, he works on his PhD thesis at the University Carlos III de Madrid, focused on the parallelization of Kernel Methods.
