Liberty Mutual Property Inspection, Winner's Interview: Qingchen Wang

The hugely popular Liberty Mutual Group: Property Inspection Prediction competition wrapped up on August 28, 2015 with Qingchen Wang at the top of a crowded leaderboard. A total of 2,362 players on 2,236 teams competed to predict how many hazards a property inspector would count during a home inspection.

This blog outlines Qingchen’s approach, and how a relative newbie to Kaggle competitions learned from the community and ultimately took first place.

The Basics

What was your background prior to entering this challenge?

I did my bachelor’s in computer science. After working for a few months at EA Sports as a software engineer I felt the strong need to learn statistics and machine learning as the problems that interested me the most were about predicting things algorithmically. Since then I’ve earned master’s degrees in machine learning and business and I’ve just started a PhD in marketing analytics.

Qingchen’s profile on Kaggle

How did you get started competing on Kaggle?

I had an applied machine learning course during my master’s at UCL, and the course project was to compete in the Heritage Health Prize. Although at the time I didn’t really know what I was doing, it was still a very enjoyable experience. I’ve competed briefly in other competitions since, but this was the first time I’ve been able to take part in a competition from start to finish, and it turned out to be quite a rewarding experience.

What made you decide to enter this competition?

I was in a period of unemployment so I decided to work on data science competitions full-time until I found something else to do. I actually wanted to do the Caterpillar competition at first but decided to give this one a quick go since the data didn’t require any preprocessing to start. My early submissions were not very good so I became determined to improve and ended up spending the whole time doing this.

What made this competition so rewarding was how much I learned. As more or less a Kaggle newbie, I spent the whole two months trying and learning new things. I hadn’t known about methods like gradient boosted trees, or tricks like stacking/blending and the variety of ways to handle categorical variables. At the same time, it was probably the intuition that I developed through previous education that set my model apart from some of the other competitors, so I was able to validate my existing knowledge as well.

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

I have zero prior experience or domain knowledge for this competition. It’s interesting because during the middle of the competition I hit a wall, and a number of the top-10 ranked competitors had worked in the insurance industry, so I thought maybe they had some domain knowledge which gave them an advantage. It turned out not to be the case. As far as data science competitions go, I think this one was rather straightforward.

Histogram of all fields in the dataset with labels. Script by competition participant, Rajiv Shah

Let’s Get Technical

What preprocessing and supervised learning methods did you use?

I used only XGBoost (I tried other methods, but none of them performed well enough to end up in my ensemble). The key to my result was a binary transformation of the hazard scores, which turned the regression problem into a set of classification problems. I noticed through the forum thread that some other people also tried this method, but it seems they didn’t go far enough with the binary transformation, as that was the best-performing part of my ensemble.
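The interview doesn’t give the winner’s exact code, but the binary-transformation idea can be sketched roughly as follows: each integer hazard count is recoded into a set of 0/1 labels, one per threshold k (did the hazard reach k or not?), each threshold gets its own classifier, and the predicted probabilities are summed back into an expected hazard count. The helper names and thresholds here are illustrative assumptions, not the winning solution.

```python
# Sketch (my reconstruction, not the winner's code) of turning a count
# regression target into a set of binary classification targets.

def to_binary_targets(hazards, max_k):
    """Recode integer hazard counts into per-threshold 0/1 labels:
    label k is 1 when Hazard >= k."""
    return {k: [1 if h >= k else 0 for h in hazards]
            for k in range(1, max_k + 1)}

def expected_hazard(prob_by_threshold):
    """Combine per-threshold probabilities P(Hazard >= k) back into an
    expected hazard count: E[Hazard] = sum over k of P(Hazard >= k)."""
    return sum(prob_by_threshold.values())

targets = to_binary_targets([1, 3, 2], max_k=3)
# targets[2] == [0, 1, 1]: only the 2nd and 3rd properties reach hazard 2.
score = expected_hazard({1: 1.0, 2: 0.6, 3: 0.2})  # expected count 1.8
```

In a full solution, one classifier (e.g. an XGBoost binary model) would be trained per threshold on these labels, and the summed probabilities used as the ranking score.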

I also played with different encodings of categorical variables and interactions, nothing sophisticated, just the standard tricks that many others have used.
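Two of those standard tricks can be sketched in a few lines; these helpers are illustrative assumptions (the interview doesn’t specify which encodings were used): ordinal label encoding, and an interaction feature built by pairing two categorical columns.

```python
# Minimal sketches of two common categorical-variable tricks
# (illustrative helpers, not the winner's actual preprocessing).

def label_encode(values):
    """Map each distinct category to a small integer, in first-seen order."""
    mapping = {}
    return [mapping.setdefault(v, len(mapping)) for v in values]

def interact(col_a, col_b):
    """Build an interaction feature by concatenating two categorical
    columns; the result can then be label-encoded like any other column."""
    return [f"{a}_{b}" for a, b in zip(col_a, col_b)]

codes = label_encode(["B", "A", "B", "C"])  # [0, 1, 0, 2]
pairs = interact(["B", "A"], ["X", "Y"])    # ["B_X", "A_Y"]
```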

Were you surprised by any of your findings?

I’m surprised by how poor our prediction accuracies were. This seemed like a problem that was well suited for data science algorithms and it was both disappointing and exciting to see such high prediction errors. I guess that’s the difference between real life and the toy examples in courses.

Which tools did you use?

I only used XGBoost. It’s really been a learning experience for me, as I entered this competition having no idea what gradient boosted trees were. After throwing random forests at the problem and getting nowhere near the top of the leaderboard, I installed XGBoost and worked really hard on tuning its parameters.

XGBoost fans, or those new to boosting: check out this great blog by Jeremy Kun on the math behind boosting and why it doesn’t overfit.

How did you spend your time on this competition?

Since the variables were anonymous there wasn’t much feature engineering to be done. Instead I treated feature engineering as just another parameter to tune and spent all of my time tuning parameters. My final solution was an ensemble of different specifications so there were a lot of parameters to tune.
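The "tune everything" approach described above amounts to enumerating candidate settings, scoring each on held-out data, and keeping the best. A minimal brute-force sketch, with a placeholder scoring function and grid values that are assumptions rather than the winner’s actual search space:

```python
import itertools

# Hedged sketch of exhaustive parameter tuning: try every combination in
# a small grid and keep the one with the lowest validation loss.

def grid_search(grid, score_fn):
    """Return (best_score, best_params), where score_fn maps a parameter
    dict to a validation loss (lower is better)."""
    names = list(grid)
    best = None
    for combo in itertools.product(*grid.values()):
        params = dict(zip(names, combo))
        s = score_fn(params)
        if best is None or s < best[0]:
            best = (s, params)
    return best

# Toy grid and scoring function standing in for real XGBoost training.
grid = {"max_depth": [4, 7, 10], "eta": [0.01, 0.05]}
score, params = grid_search(grid, lambda p: abs(p["max_depth"] - 7) + p["eta"])
# params == {"max_depth": 7, "eta": 0.01}
```

In practice `score_fn` would train a model with the given parameters and return a cross-validated loss, and the winning ensemble would repeat this search for each model specification.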

What was the run time for both training and prediction of your winning solution?

The combination of training and prediction of my winning solution takes about 2 hours on my personal laptop (2.2 GHz Intel i7 processor).

Words of Wisdom

What have you taken away from this competition?

One thing that I learned, which I’ve always overlooked before, is that parameter tuning really goes a long way in performance improvements. While in absolute terms it may not be much, in terms of leaderboard improvement it can be of great value. Of course, without the community and the public scripts I wouldn’t have won and may still not know about gradient boosted trees, so a big thanks to all of the people who shared their ideas and code. I learned so much from both sources, so it’s been a worthwhile experience.

Click through to an animated view of the community’s leaderboard progression over time, and the influence of benchmark code sharing. Script by competition participant, inversion

Do you have any advice for those just getting started in data science?

For those who don’t already have an established field, I strongly endorse education. All of my data science experience and expertise came from courses taken during my bachelor’s and master’s degrees. I believe that without already having been so well educated in machine learning I wouldn’t have been able to adapt so quickly to the new methods used in practice and the tricks that people have talked about.

There are now a number of very good education programs in data science, which I suggest everyone who wants to start in data science look into. For those who already have their own established fields and are doing data science on the side, I think their own approaches could be very useful when combined with the standard machine learning methods. It’s always important to think outside the box, and it’s all the more rewarding when you bring in your own ideas and get them to work.

Finally, don’t be afraid to hit walls and grind through long periods of trying out ideas that don’t work. A failed idea gets you one step closer to a successful idea, and a string of failed ideas often leads to ideas that work down the road. Throughout this competition I tried every idea I thought of, and only a few worked. It was a combination of patience, curiosity, and optimism that got me through these two months. The same applies to learning the technical aspects of machine learning and data science. I still remember the pain that my classmates and I endured in the machine learning courses.

Just for Fun

If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?

I’m a sports junkie, so I’d love to see some competitions on sports analytics. It’s a shame that I missed the one on March Madness predictions earlier this year. Maybe one day I’ll really run a competition on this stuff.

Editor’s note: March Machine Learning Mania is an annual competition so you can catch it again in 2016!

What is your dream job?

My dream job is to lead a data science team, preferably in an industry that’s full of new and interesting prediction problems. I’d be just as happy as a data scientist, but it’s always nice to have greater responsibilities.

Bio

Qingchen Wang is a PhD student in marketing analytics at the Amsterdam Business School, VU Amsterdam, and ORTEC. His interests are in applications of machine learning methods to complex real world problems in all domains. He has a bachelor’s degree in computer science and biology from the University of British Columbia, a master’s degree in machine learning from University College London, and a master’s degree in business administration from INSEAD. In his free time Qingchen competes in data science competitions and reads about sports.
