Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Africa Soil Property Prediction Challenge | Simple R/svm/e1071 solution for top 100 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Africa Soil Property Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This may be a little premature, but...</p>
<p>The following relatively simple svm approach may be of interest to some people who, like me, are relatively new to Kaggle. A variant of it placed me 83rd but the private leaderboard suggests it could have been closer to top 20 if I hadn't 'tuned' it further in the final week.</p>
<p>The CV and public leaderboard scores were both close to 0.40. I'll return with the CV code once I know which of my 15 submissions placed where (I'm assuming we learn that at some stage?).</p>
<p>My thanks to BreakfastPirate, Abhishek and all for their valuable forum posts. I'll be looking out for further advice on how people selected their best models for the final entries.</p>
<p>R code:</p>
<p><code>setwd("~/Documents/#RStudio/Africa Soil")</code></p>
<p><code>library(e1071); library(prospectr)</code></p>
<p><code># Data preparation; first derivative of half of spectra via SG (all the 'good stuff' seems to be in that half)</code></p>
<p><code>myData <- read.csv("training.csv", header = TRUE)</code></p>
<p><code>sg21 <- savitzkyGolay(myData[ , c(1800:3579)], p = 3, w = 21, m = 1)</code></p>
<p><code>sg21 <- data.frame(sg21)</code></p>
<p><code># Final models (after much CV exploration of parameters)</code></p>
<p><code>svm.Ca <- svm(sg21, myData$Ca, cost = 50, gam = 0.00005, ep = 0.01)</code></p>
<p><code>svm.SOC <- svm(sg21, myData$SOC, cost = 40, gam = 0.00005, ep = 0.05)</code></p>
<p><code>svm.pH <- svm(sg21, myData$pH, cost = 30, gam = 0.0001, ep = 0.15)</code></p>
<p><code>svm.P <- svm(sg21, myData$P, cost = 10, gam = 0.0001, ep = 0.1)</code></p>
<p><code>svm.Sand <- svm(sg21, myData$Sand, cost = 5, gam = 0.0001, ep = 0.12)</code></p>
<p><code># Read test data and make predictions</code></p>
<p><code>testX <- read.csv("sorted_test.csv", header = TRUE)</code></p>
<p><code>test.sg21 <- data.frame(savitzkyGolay(testX[, c(1800:3579)], p = 3, w = 21, m = 1))</code></p>
<p><code>sub <- read.csv("sub Oct-04 sg21.csv", header = TRUE)</code></p>
<p><code>sub$Ca <- predict(svm.Ca, test.sg21)</code></p>
<p><code>sub$SOC <- predict(svm.SOC, test.sg21)</code></p>
<p><code>sub$pH <- predict(svm.pH, test.sg21)</code></p>
<p><code>sub$P <- predict(svm.P, test.sg21)</code></p>
<p><code>sub$Sand <- predict(svm.Sand, test.sg21)</code></p>
<p><code>write.csv(sub, file = "sub xxxxx.csv", row.names = FALSE)</code></p> |
Africa Soil Property Prediction Challenge | Solutions sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Africa Soil Property Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners! This competition was pretty interesting. Here is our approach:</p>
<p>Before merging with Abhishek, I used two models: an R svm with spectral features and an R gbm with derivative and spatial features. I used grid search to select features for the svm, for each target separately. Gbm did not give a good result on its own, but it gave a good improvement in the ensemble with the svm. One more thing I noticed: multiplying the predictions by a constant improves the score. The only problem was that this constant was 1.02 on CV and 1.08 on the LB. In the end I chose one submission with constant 1.02 and one with 1.08.</p>
<p>After merging, I got a neural network model from Abhishek, and it gave another improvement on CV and LB. It turned out that on CV the neural network was the best model (though on the LB the svm was better).</p>
<p>In principle, my aim was to make the model simple and robust (2nd on public, 7th on private, which is very good for this kind of competition); that's why I used Landscapes (thanks to BreakfastPirate for it!) for the CV score calculations.</p> |
Airbus Ship Detection Challenge | 10th MaskRCNN without ensemble and TTA solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners! Since most teams used U-Net-based solutions, I think others may be interested in a Mask R-CNN based solution.</p>
<p>My pipeline is ordinary: Unet101 with 384*384 + MaskRCNN.
No TTA and no ensembling, because they all dropped the score on the public LB (but were actually better on the private LB).</p>
<h2>What Works</h2>
<ol>
<li><p>I implemented <a href="https://arxiv.org/abs/1604.03540">Online-Hard-Example-Mining</a>, selecting the top 128 RoIs for training (a minimal sketch follows this list).</p></li>
<li><p>Multi-scale training: since small objects are hard to detect, I resize the images randomly from 1200 to 2000.</p></li>
<li>I used <a href="https://arxiv.org/abs/1704.04503">softnms</a> for post-processing. I used mask overlap instead of box overlap to rescore each instance, to avoid dealing with the rotated-box problem.</li>
<li>MaskRCNN is prone to overfitting (the lesson learned from DSB2018). Data augmentation with MotionBlur, GaussNoise, ShiftScale, Rotate, CLAHE, RandomBrightness, RandomContrast ...</li>
<li>Enlarged the mask crop from 28 to 56, using dice loss + BCE loss.</li>
</ol>
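<p>For readers new to OHEM, here is a minimal PyTorch-style sketch of the top-k hard example selection mentioned in item 1. It assumes per-RoI losses have already been computed with an unreduced criterion; the function name and the reduction are illustrative, and the original paper selects hard examples in a read-only forward pass rather than by simply keeping the k largest losses.</p>
<pre><code>import torch

def ohem_select(per_roi_loss: torch.Tensor, top_k: int = 128) -> torch.Tensor:
    """Keep only the top-k hardest RoIs (largest loss) and average their loss."""
    k = min(top_k, per_roi_loss.numel())
    hard_losses, _ = torch.topk(per_roi_loss, k)  # hardest RoIs first
    return hard_losses.mean()                     # gradients flow only through these

# usage (with F = torch.nn.functional):
# loss = ohem_select(F.cross_entropy(logits, targets, reduction="none"), top_k=128)
</code></pre>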
<h2>Doesn't Work</h2>
<ol>
<li>Adding a stride-2 feature map in the FPN and a size-16 anchor. It improved local CV to 0.61 but gave a worse public LB and private LB.</li>
<li><a href="https://arxiv.org/abs/1712.00726">Cascade-RCNN</a>: no improvement.</li>
<li>Trying to turn the mask predictions into boxes as post-processing.</li>
</ol>
<p>I have implemented TTA and checkpoint ensembling (see eval.py), but they all resulted in a worse public LB. It turns out they are better on the private LB (best 0.853, TTA with scale range(1200, 1400 ... 2000)).
The final score (0.851) is based on a single model without TTA. </p>
<p>I hope other teams using detection-based solutions can share their experience too! I wonder how rotated boxes would perform.</p>
<p>The un-cleaned code is <a href="https://github.com/tkuanlun350/Kaggle_Ship_Detection_2018">here</a>. (will be cleaned after my recent deadline ...)</p>
<p>Thanks ! I will make it more user friendly as soon as possible. </p>
<p>Parameter tuning is painful in MaskRCNN :(
What is your inference size in MaskRCNN? Detection-based solutions always suffer on small objects. Using 2000 makes it impossible to use a batch size > 1 (I only have 1 GPU).</p> |
Airbus Ship Detection Challenge | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The best way is to use a full-size image, but the training could be too slow. The size of the crop is a compromise between training speed, batch size, GPU memory and the amount of information the model gets to train on. Even if a crop contains only part of a ship, it is still useful for training, as the model trains on a pixel basis and there are also original images containing only parts of ships. If a crop has no ships, it is still good for the model to train on the negative case, to avoid false positives, but it is important to provide enough varied data for the model and to balance positive and negative cases. This is why in some cases I calculate the centers of ships and crop around them, so that most crops contain at least parts of ships.</p>
<p>The label image is also cropped by given coordinates and the loss is calculated for crops as well.</p>
<p>First of all congratulations to the winners!</p>
<p><strong>Data</strong></p>
<p>The main challenge of this competition, from my point of view, was the very unbalanced data for the ship/no-ship and split/no-split cases, and the very different types of images in terms of distance, quality, etc.</p>
<p>For training I took only images with ships from the training set and created two types of labels for them, to prepare for separating close ships:</p>
<ul>
<li>full ship contours</li>
<li>only separation line</li>
</ul>
<p>One channel is for the ship body, and one for the contour or split line.</p>
<p>Examples:
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/421773/10682/c_edec35a72.png" alt="enter image description here">
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/421773/10683/c_52554d6ee.png" alt="enter image description here">
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/421773/10680/s_edec35a72.png" alt="enter image description here">
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/421773/10681/s_52554d6ee.png" alt="enter image description here"></p>
<p>In the beginning I used only split masks, but there were too few training images where splitting was needed, and the model did not train well. In the final models I used only contour labels, but a better idea might have been to use both in different channels to improve the quality of the splitting. </p>
<p><strong>Model</strong></p>
<p><strong>Classification (ship/no ship)</strong></p>
<p>The first segmentation results were not so good, as there were a lot of false positives, mostly clouds and waves on empty images. As there were many more empty images than images with ships, I decided to train a simple classifier for the first phase and use the segmentation model only on images with ships in the second phase.</p>
<p><strong>Segmentation</strong></p>
<p>Encoder: Resnet34, se_resnext50</p>
<p>Decoder: hypercolumn, scSE, classification-based attention</p>
<p><strong>Training image augmentation</strong></p>
<p>From imgaug I used: flips(horizontal and vertical),
PerspectiveTransform, CropAndPad, Affine(scale, translate_percent, rotate, shear),
One of (ContrastNormalization, Color(Multiply, Grayscale))
One of (GaussianBlur, AverageBlur, MedianBlur, BilateralBlur, AdditiveGaussianNoise, ElasticTransformation)</p>
<p><strong>Training</strong></p>
<p>Optimizer: Adam </p>
<p>Loss function: Lovasz(elu+1)</p>
<p>Step 1: 30-60 epochs (with reduce-on-plateau).
Step 2: 10-20 epochs of fine-tuning with a smaller LR.</p>
<p>9/10 images were used for training, 1/10 for validation. </p>
<p>StratifiedKFold was used to split the folds by ship size and by the presence of close ships.</p>
<p>The training was run on 224x224 crops. Every batch was a mix of random crops around the centers of ships and totally random crops.</p>
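<p>As an illustration of that sampling, here is a minimal numpy sketch of the two crop types; the 224 crop size comes from the description above, while the function names and the way a ship center is picked are assumptions rather than my exact code.</p>
<pre><code>import numpy as np

def random_crop(img, mask, size=224, center=None, rng=np.random):
    """Crop a size x size window; if `center` is given, crop around that point."""
    h, w = mask.shape[:2]
    if center is None:                               # totally random crop
        y0 = rng.randint(0, h - size + 1)
        x0 = rng.randint(0, w - size + 1)
    else:                                            # crop around a ship center, clipped to the image
        cy, cx = center
        y0 = int(np.clip(cy - size // 2, 0, h - size))
        x0 = int(np.clip(cx - size // 2, 0, w - size))
    return img[y0:y0 + size, x0:x0 + size], mask[y0:y0 + size, x0:x0 + size]

def random_ship_center(mask, rng=np.random):
    """Pick a random ship pixel from a binary mask to center a crop on."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None                                  # empty image: fall back to a random crop
    i = rng.randint(len(ys))
    return ys[i], xs[i]
</code></pre>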
<p>The validation was done on full size images.</p>
<p><strong>TTA</strong></p>
<p>Original image + vflip+hflip</p>
<p><strong>Postprocessing</strong></p>
<p>The watershed was used to split the labels</p>
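<p>I don't show the exact implementation here, but a minimal, commonly used marker-based watershed split with recent scipy/skimage looks roughly like the sketch below; it assumes a binary ship mask, and the min_distance value is an arbitrary illustration.</p>
<pre><code>import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_instances(binary_mask):
    """Split touching ships in a binary mask into separately labelled instances."""
    mask = binary_mask.astype(bool)
    distance = ndimage.distance_transform_edt(mask)
    # local maxima of the distance map act as one marker per ship
    peaks = peak_local_max(distance, labels=mask.astype(int), min_distance=5)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
</code></pre>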
<p><strong>Final ensemble</strong></p>
<p>The local validation score for the se_resnext50-based models was much better than for Resnet34, but on the public LB it was vice versa. It looks like the stronger se_resnext50 encoder simply overfitted on this data for me.</p>
<p>The final score is a simple average of 3 x Resnet34 and 1 x se_resnext50.</p>
<p><strong>Hardware</strong></p>
<p>1 x 1080Ti</p>
<p><strong>Software</strong></p>
<p>Before the leak break I was using Keras, but during the TGS Salt moved to pytorch. Final models are only pytorch.</p>
<p><strong>Did not work for me</strong></p>
<p>In the postprocessing I tried to make the labels “more rectangular”, but it only made the score worse.</p>
<p><strong>Should try</strong></p>
<p>Other, heavier encoders, like dpn or densenet, with more augmentation against overfitting. </p>
<p>I will try to clean up and share the code in the next days.</p> |
Airbus Ship Detection Challenge | 14th place solution: data and metric comprehension | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The div factor sets the ratio between the maximum and the starting learning rate in a training cycle. With a max_lr of 2e-3, a div factor of 100 means the lr moves from 2e-5 up to 2e-3, while 25 gives 8e-5 up to 2e-3. It's a small difference, and frankly those are just my starting-point div factors for frozen and unfrozen training. That's the beauty of 1cycle in my experience: it makes the lr hyperparameter choice less fragile than it might otherwise be.</p>
<p>Ensembling simply means combining multiple predictions, for instance those made with different models. You can take the average of many predictions, take weighted averages, take the median, or employ various voting strategies. Different ensembling techniques work in different circumstances. </p>
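<p>To make that concrete, here is a tiny numpy sketch of three of those schemes over per-image ship probabilities; the numbers, the weights and the 0.5 voting threshold are made up purely for illustration.</p>
<pre><code>import numpy as np

# ship probabilities from three hypothetical models for four images
preds = np.array([
    [0.91, 0.10, 0.55, 0.03],   # model A
    [0.85, 0.20, 0.40, 0.07],   # model B
    [0.95, 0.05, 0.70, 0.02],   # model C
])

simple_avg   = preds.mean(axis=0)
weighted_avg = np.average(preds, axis=0, weights=[0.5, 0.2, 0.3])
median_pred  = np.median(preds, axis=0)
majority     = ((preds > 0.5).sum(axis=0) >= 2).astype(int)  # majority vote at threshold 0.5
</code></pre>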
<p>My team and I are happy to achieve a 14th place. Congratulations to everyone else in medal positions! And many thanks to my team mates <a href="https://www.kaggle.com/radek1">radek</a> and <a href="https://www.kaggle.com/keremt">kerem</a> for inspirational collaboration.</p>
<p>This is my first attempt at a Kaggle competition with tiers/points. 12 months ago I had not heard of kaggle or pytorch, deep learning was a mystery, and python a snake, so it feels gratifying to do well. For that, I must thank both <em>kaggle</em> and <em>fast.ai</em> for creating communities and tools that allow autodidacts such as myself with a single GPU to progress so rapidly, and sponsors like Airbus for being so brave as to be open with data. I give back by explaining our solution ...</p>
<p><strong>The TL;DR; Solution</strong></p>
<p>The solution is based on the idea of</p>
<ul>
<li>careful data preparation (read: exploit data issues)</li>
<li>understanding the balance between false positive rate and segmentation score (read: exploit the metric)</li>
<li>local validation (read: trust local results not public scores)</li>
<li>likelihood of the public leaderboard and private leaderboard differing in score-affecting ways (read: exploit LB carving)</li>
<li>using dead simple out of the box models until the above had been exhausted</li>
</ul>
<p><strong>Data Prep</strong></p>
<p>I was glad to learn the train data wasn't being re-released as the overlaps and leaks within it created opportunity. I created mosaics slightly differently to others, by using the train-masks rather than the train-images, which possibly made it much simpler and perhaps even better as these smaller ship-dense mosaics, separated by sea, could be stratified more homogeneously. I couldn't find tools to do this, I just used python hash values of the masks and re-assembled tiles by position. A 768px window was slid over mosaics in 256px steps to create a 50k image train set, striped into 5 folds.</p>
<p><strong>Binary Classifier</strong></p>
<p>A simple 256 resnet34 classifier, best used at the end not the start of a prediction pipeline.</p>
<p><strong>Segmentation</strong></p>
<p>A rudimentary resnet34 encoded unet, direct from <a href="https://www.fast.ai/2018/10/02/fastai-ai/">fast.ai v1</a>, which uses Leslie Smith's <a href="https://arxiv.org/abs/1803.09820">1cycle policy</a> for learning. Training directly on 768px for 4 frozen then 32+ unfrozen epochs, small batch size of 4. Dihedral augmentation with some limited brightness/contrast augmentation in training.</p>
<p>I used a single GPU with only 8GB memory use, albeit on a 2080 ti with mixed precision training (thanks again to fast.ai v1) and 16 hour training runs. With this and the good data, single folds of 0.849 were achieved. Evidently a 5-fold CV didn't help any on the private LB despite big public LB differences (.729-.741).</p>
<p>Our team ensembled this with resnet18 and resnext50-se models for a 0.001-0.002 boost. Our one regret is that we didn't explore these ensembles further, due to public LB disappointment and lack of time for local validation examination, but I am confident higher scores could have been achieved. Our best ensemble submission would have scored 11th if selected.</p>
<p>We used dice loss, focal loss, BCE loss, and mixed versions, and do not consider the choice of loss function to have been significant.</p>
<p><strong>Post processing</strong></p>
<p>The intuition here was that the severe penalty (0 score) for labelling a ship where there is none meant it was necessary to sieve the results based on ship size, f2 score for that size, number of ships, circularity, whether the blob was on an edge, and classifier confidence for the image so as to avoid the penalty. </p>
<p>And that the difference in the ship-laden public LB and the ship-barren private LB created an opportunity, albeit a simple algebraic one yet one that seems to have passed many people by.</p>
<p>Ships <100px had f2 scores of just .2-.3 in validation, meaning 1 false positive image in every 4-5 would wipe out any gain, whereas ships > 5000px had an f2 of .7 and are unlikely to trigger a false positive. So in local validation we took the ~2,900 ship images that the segmenter said existed and used a crude heuristic to sieve out ~500 ships: small ships unless the binary classifier was very confident (.99+), and medium ships unless it was somewhat confident (.95+). We only submitted predictions for 2,400, i.e. about 2/3 of the 3,700 images with ships (from the .765 private LB).</p>
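<p>A minimal sketch of that sieve is below. The confidence thresholds (.99/.95) and the rough size bands (<100px, >5000px) come from the paragraph above; the exact mapping of the medium band and the function shape are an assumption, not our exact code.</p>
<pre><code>def keep_ship(ship_area_px, image_ship_prob):
    """Decide whether a predicted ship instance survives the sieve."""
    if ship_area_px < 100:          # small ships: keep only if the classifier is very confident
        return image_ship_prob >= 0.99
    if ship_area_px < 5000:         # medium ships: keep only if the classifier is somewhat confident
        return image_ship_prob >= 0.95
    return True                     # large ships: low false-positive risk, always keep
</code></pre>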
<p>While the curious data situation and puzzling LB split created an opportunity and helped us achieve a good result in this case, I plea for well provisioned data and more representative LBs going forward, if only to preserve sanity.</p>
<p><strong>Attempted but didn't seem to help</strong></p>
<ul>
<li>More complex models didn't seem to help as much as understanding the data, the metric, and local validation results did.</li>
<li>We trained a 'coast and structures' classifier </li>
<li>We trained a segmentation model for 20-100px ships</li>
<li>A <a href="http://www.itfind.or.kr/Report01/200302/IITA/IITA-2015-037/IITA-2015-037.pdf">Circle Frequency Filter</a> gave cool results on images, but didn't have time to include in a model and to refine ship widths in post processing </li>
<li>Fitting BB's, going so far as to try IOU measured BBs over blobs.</li>
</ul>
<p><strong>Things we didn't have time to explore enough that may have helped</strong></p>
<ul>
<li>Varying the pixel selection threshold away from 0.5 for different sized ships</li>
<li>Ensembling strategies</li>
<li>Stacking ship characteristics with segmentation predictions</li>
<li>Other mask strategies e.g. trimaps</li>
</ul>
<p><strong>What wasn't attempted</strong></p>
<ul>
<li>More test time augmentation</li>
<li>Breaking apart conjoined ships</li>
<li>A 'wake detector' that may have helped binary classification of very small sub 20px ships.</li>
<li>Probing to see a ship-size makeup of the public LB</li>
</ul>
<p><strong>fast.ai</strong></p>
<p>As an addendum, I can't sing the praises of fast.ai enough. It really does make deep learning 'uncool' and accelerates experimentation around problem solving, not model making. No need to be a programming whiz. No need to subscribe to paid DL services. Indeed I don't need to make any code public, as the training needed to get 14th place was as simple as:</p>
<pre><code>#prep data
#define unet learner
learn = get_learner(data)
lr=2e-3
learn.freeze_to(1)
learn.fit_one_cycle(4, lr, div_factor=100, pct_start=.3)
learn.unfreeze()
learn.fit_one_cycle(32, [lr/64,lr/8,lr], div_factor=25, pct_start=.3)
</code></pre> |
Airbus Ship Detection Challenge | 1st place solution for Algorithm Speed Prize | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone!
I've spent a couple of nights checking the timings.</p>
<p><strong>TL; DR</strong> - We measured time incorrectly. Wrapping the code with .time() calls is not enough.
I have assembled a kernel that should explain what happened:</p>
<p><a href="https://www.kaggle.com/marvelousninja/pytorch-synchronization-issue-1-0">https://www.kaggle.com/marvelousninja/pytorch-synchronization-issue-1-0</a></p>
<p>For the kernel with SE-ResNet50, it resulted in approximately 120 seconds gain.
With synchronization enabled, it brings the time to 5.5 minutes.</p>
<p>Considering the nature of the bug, I suggested using a different kernel as my best submission.
With synchronization enabled, it has a time of 3.96 minutes, which is still enough to win.
It uses ResNet-18 as a classifier on 224x224 images and LinkNet for segmentation on 768x768 images.</p>
<p>As a show of good sportsmanship, I'm ready to publish this kernel.
Of course, if <a href="/inversion">@inversion</a> and <a href="/jeffaudi">@jeffaudi</a> approve this.</p>
<p>Ok, here is the kernel!
<a href="https://www.kaggle.com/marvelousninja/kaggle-airbus-3-96-minutes">https://www.kaggle.com/marvelousninja/kaggle-airbus-3-96-minutes</a></p>
<p>But according to this benchmark: <a href="https://arxiv.org/pdf/1810.00736.pdf">https://arxiv.org/pdf/1810.00736.pdf</a>
SE-ResNet50 is much faster than VGG16. It also correlates with my experiments.</p>
<p>You can also try to reproduce the results. I used the implementation from here:
<a href="https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/senet.py">https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/senet.py</a></p>
<p>Yes, I included the time to copy tensors from CPU to GPU.
However, reading images from the disk was ignored.</p>
<p>Hi there!
Here's a quick breakdown of the 1st place solution:</p>
<ol>
<li>PyTorch.</li>
<li>SE-ResNet50 as a classifier: took around 30 seconds to infer all images.</li>
<li>LinkNet as a segmentation network: took around 140 seconds to infer all positively classified images.</li>
<li>Extracting ship instances from the binary mask via scipy.ndimage, ignoring instances with a small area (less than 80px): took around 40 seconds (a minimal sketch follows this list).</li>
<li>Didn't use TTA.</li>
</ol>
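<p>Here is a minimal sketch of that instance-extraction step with scipy.ndimage, assuming a thresholded binary mask; the 80px cut-off is from the list above, the function name is illustrative.</p>
<pre><code>import numpy as np
from scipy import ndimage

def extract_instances(binary_mask, min_area=80):
    """Label connected components and drop instances smaller than min_area pixels."""
    labeled, n = ndimage.label(binary_mask)
    instances = []
    for i in range(1, n + 1):
        instance = labeled == i
        if instance.sum() >= min_area:      # ignore tiny blobs, likely noise
            instances.append(instance)
    return instances
</code></pre>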
<p>Initial profiling showed that I/O was the slowest part of the pipeline, so I spent at least a week trying to optimize it. However, after some rule clarifications, it was clear that I/O should be ignored completely, so I switched to optimizing the networks. After a bunch of experiments, I ended up with a slightly modified LinkNet, which was at least 10x faster than my model from the first stage of the competition.</p>
<h1>Key insights:</h1>
<ol>
<li>Adapt to the kernel. The GPU kernel has 2 very slow CPU cores. I won 1 minute of inference time just by moving image normalization to the GPU (a minimal sketch follows this list).</li>
<li>Classifier speed and accuracy are critical: it saves a lot of time, since the segmentation network is much slower.</li>
<li>Predicting ship borders and/or using watershed requires too much post-processing.</li>
</ol>
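<p>A minimal PyTorch sketch of that first insight, normalising on the GPU instead of the slow CPU cores, is below; the mean/std values are the usual ImageNet statistics and are an assumption rather than my exact numbers.</p>
<pre><code>import torch

# assumed ImageNet statistics, shaped (1, 3, 1, 1) so they broadcast over an NCHW batch
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD  = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def normalize_on_gpu(batch_uint8: torch.Tensor, device="cuda") -> torch.Tensor:
    """Move the raw uint8 NCHW batch to the GPU first, then scale and normalise there."""
    x = batch_uint8.to(device, non_blocking=True).float() / 255.0
    return (x - MEAN.to(device)) / STD.to(device)
</code></pre>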
<h1>Training the classifier:</h1>
<ol>
<li>Train and predict on resized 224x224 images.</li>
<li>Nesterov SGD with LR 0.001, batch size 16, weight decay 1e-3, momentum 0.9.</li>
<li>BCE loss.</li>
<li>Augmentations: rotate90, flip, random brightness, gamma, bunch of blurs.</li>
<li>Use location-based stratification and split on 5 folds.</li>
<li>Balance the dataset (roughly 40/60 ratio between images with ships/without ships)</li>
</ol>
<h1>Training the segmentation network:</h1>
<ol>
<li>Train in 3 stages: on 256x256 crops containing ships, then finetune on 384x384, and finally on 512x512. Inference on full-sized 768x768 images.</li>
<li>Augmentations: rotate90 and flip. LinkNet had trouble converging with heavy augmentations.</li>
<li>Lovasz hinge loss with ELU + 1 trick.</li>
<li>Drop last residual connection.</li>
<li>Use location-based stratification and split on 5 folds.</li>
<li>Balance the dataset (roughly 40/60 ratio between images with ships/without ships)</li>
</ol>
<h1>Some of the failed experiments:</h1>
<ol>
<li>FP16. K80 GPUs do not seem to support half-precision very well.</li>
<li>Separable convolution. Got marginal speed improvement on LinkNet, but the model couldn't reach good F2 score.</li>
<li>Simplifying larger network (U-Net based FPN with ResNet34). I used this model during the first stage of the competition. Couldn't get inference speed below 15 minutes.</li>
<li>Simpler classifier networks. I tried smaller ResNets, but couldn't get them to the same level of accuracy. What they gained in speed was then lost to false positives during segmentation.</li>
<li>Resizing images on the GPU. The resizing itself was quicker on the GPU, but the transfer of large images from RAM to GPU was super slow.</li>
<li>Replacing MaxPool layers with a strided Conv. Again, marginal speed improvement, but pretty low F2 score.</li>
<li>More aggressive pooling inside the network and upsampling at the final layer.</li>
</ol>
<h1>If I had more time I would:</h1>
<ol>
<li>Prune the network.</li>
<li>Try to compete on CPU kernel, but with quantization and stuff. It would require using OpenVINO or PyTorch Glow.</li>
<li>Reimplement scipy.ndimage on GPU. It took around 40 seconds of overall time.</li>
</ol> |
Airbus Ship Detection Challenge | 6th Place Solution (41st in the Public LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's just one of the semi-supervised learning methods. You can read more about it e.g. here (a minimal sketch of the idea follows the links): </p>
<ol>
<li><a href="http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf">http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf</a></li>
<li><a href="https://datawhatnow.com/pseudo-labeling-semi-supervised-learning/">https://datawhatnow.com/pseudo-labeling-semi-supervised-learning/</a></li>
</ol>
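<p>For illustration, a minimal scikit-learn sketch of the pseudo-labeling idea, assuming a generic classifier with predict_proba; the model choice and the 0.9 confidence threshold are arbitrary stand-ins, not what we actually used.</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_fit(X_train, y_train, X_unlabeled, threshold=0.9):
    """Train, pseudo-label the confident unlabeled samples, then retrain on the union."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    proba = model.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= threshold                 # keep only confident predictions
    pseudo_y = model.classes_[proba.argmax(axis=1)][confident]

    X_all = np.vstack([X_train, X_unlabeled[confident]])
    y_all = np.concatenate([y_train, pseudo_y])
    model.fit(X_all, y_all)                                    # retrain on real + pseudo labels
    return model
</code></pre>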
<p>First of all, I'd like to thank my teammates <a href="https://www.kaggle.com/zfturbo">ZFTurbo</a> and <a href="https://www.kaggle.com/nicksergievskiy">Nick Sergievskiy</a> for the nice teamwork and their great effort!</p>
<p>We entered the competition and merged into a team pretty late, so we didn't have enough time to run many experiments. Another problem was validation. While our local validation score was growing, the public LB score remained the same, at about 0.738, throughout the last week. The best reward for this struggle was the move from 41st place on the public LB to 6th place on the private LB.</p>
<p>There have been already shared lots of great methods and ideas from the top teams. So, here is our brief solution outline.</p>
<h2>Local Validation</h2>
<p>We created a 5-fold validation without a leak. As already mentioned, there were inconsistencies between the local and public LB score movements. However, we tried to trust only the local validation.</p>
<h2>Models</h2>
<ol>
<li>Classification (empty vs non-empty images). InceptionResNetV2, trained on 299x299 by <a href="https://www.kaggle.com/zfturbo">ZFTurbo</a></li>
<li>Semantic segmentation: ResNet34 + U-Net by me. Trained on 256x256 random crops, prediction on a full-size 768x768</li>
<li>Semantic segmentation: ResNet152 + U-Net by <a href="https://www.kaggle.com/zfturbo">ZFTurbo</a>. Trained on 224x224 random crops, prediction with a sliding window</li>
<li>Instance segmentation: ResNet18/ResNet50 + Mask R-CNN trained on 1000x1000 by <a href="https://www.kaggle.com/nicksergievskiy">Nick Sergievskiy</a></li>
</ol>
<h2>Sampling</h2>
<p>We used different percentages of non-empty / empty images in the batch. It was 50/50 for the ResNet34 U-Net, and even 90/10 for the ResNet152 U-Net. The latter therefore generated lots of false-positive ships, and the role of the classifier was pretty crucial.</p>
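<p>A minimal numpy sketch of drawing a batch with such a fixed non-empty/empty ratio is below; the ratios come from the paragraph above, while the batch size and index lists are illustrative assumptions.</p>
<pre><code>import numpy as np

def sample_batch(nonempty_idx, empty_idx, batch_size=16, nonempty_frac=0.9, rng=np.random):
    """Draw one batch of image indices with a fixed non-empty/empty ratio (e.g. 90/10)."""
    n_pos = int(round(batch_size * nonempty_frac))
    pos = rng.choice(nonempty_idx, size=n_pos, replace=False)
    neg = rng.choice(empty_idx, size=batch_size - n_pos, replace=False)
    batch = np.concatenate([pos, neg])
    rng.shuffle(batch)
    return batch
</code></pre>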
<h2>Performance</h2>
<ol>
<li>Ensemble of 4 models + TTA for ResNet34 + U-Net. Private LB: 0.846 -> 0.848 with Classification (deleting masks for confident empty images)</li>
<li>Ensemble of 3 models + TTA for ResNet152 + U-Net. Private LB: 0.775 -> 0.848 with Classification</li>
<li>Ensemble of 3 models + TTA for Mask R-CNN. Private LB: 0.843 -> 0.850 with Classification</li>
</ol>
<h2>Final Ensemble</h2>
<p>As a final ensemble we used:</p>
<ol>
<li>Geometric mean of 7 U-Net models including one model with pseudolabels and one 2nd level model on OOF predictions. Denote predictions of this ensemble as <strong>unet_mask</strong></li>
<li>Ensemble of 3 Mask R-CNN models</li>
</ol>
<p>For the Mask R-CNN ensemble <a href="https://www.kaggle.com/nicksergievskiy">Nick Sergievskiy</a> chose two thresholds: thr_high and thr_mid. They gave the most confident predictions (<strong>rcnn_mask_high</strong>) and just confident predictions (<strong>rcnn_mask_mid</strong>). Further, <strong>rcnn_mask_high</strong> had the highest priority and replaced <strong>unet_mask</strong> objects; <strong>rcnn_mask_mid</strong> were added only if there was an intersection with <strong>unet_mask</strong> objects.</p>
<p>The architecture used is fully convolutional. So, the network input could be of arbitrary size. You can read about fully convolutional networks here: <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf">https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf</a></p>
<p>It's hard to tell which one is better. 50:50 mimics the distribution of empty images in the test set. And 90:10 finds almost all the ships, but has lots of False Positives. So, it needs a good emptiness Classifier on top.</p> |
Airbus Ship Detection Challenge | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You mean "An ensemble of fastai resnet34 and resnext50" ?</p>
<p>For each image, resnet34 has a predicted probability of being a ship; resnext50 also has a predicted probability of being a ship. Taking a weighted average of the two would give the ensemble's output.</p>
<p>Congratulations to the winners, especially Victor and Selim of topcoders, whose 2018 DSB winning <a href="https://github.com/selimsef/dsb2018_topcoders">solution</a> we adopted for this competition.</p>
<h2>Finding overlap</h2>
<p>Due to the overlapping issue, we wrote an algorithm to find all the overlapping pairs of images in the train set, then ran the <a href="https://leetcode.com/problems/friend-circles/description/">friend circles</a> algorithm to find all 57877 groups of images. Images in each group are supposedly from the same acquisition, and images from different groups do not overlap.</p>
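<p>For readers unfamiliar with the friend-circles idea, here is a minimal union-find sketch that groups images given precomputed overlapping pairs; the pair finding itself (image matching) is not shown, and the function names are illustrative.</p>
<pre><code>def group_images(image_ids, overlapping_pairs):
    """Union-find: merge images connected by overlap into groups ("friend circles")."""
    parent = {img: img for img in image_ids}

    def find(x):                        # root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in overlapping_pairs:      # union the two ends of every overlapping pair
        parent[find(a)] = find(b)

    groups = {}
    for img in image_ids:
        groups.setdefault(find(img), []).append(img)
    return list(groups.values())
</code></pre>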
<p>We then did the train/val split to have 2920 groups (10442 images) in the val set, while making sure train and val had the same distribution of number of ships per image. We did this split before test_v2 was released and used it for the majority of the competition, but switched to a balanced 4-fold towards the end, which has a distribution (in ship number and ship size) closer to the test set.</p>
<h2>Binary classification model</h2>
<p>An ensemble of fastai resnet34 and resnext50 models trained on our split based on Iafoss’s <a href="https://www.kaggle.com/iafoss/fine-tuning-resnet34-on-ship-detection/notebook">kernel</a>.</p>
<h2>Labels</h2>
<p>We used topcoder’s dsb2018 winning solution code to generate a contour (i.e. divider) layer and used 3-channel labels: ship, contour and background. Contour layers are generated to help the model separate close ships. An example:
<img src="https://i.imgur.com/KanhOpD.png" alt="label"></p>
<h2>Model</h2>
<p>The best model is a Unet densenet169 implemented in Keras by topcoders. We also tried resnet152, resnet101, and inception resnet v2.</p>
<h2>Loss</h2>
<p>Double head loss (i.e. half binary cross entropy, half dice loss) for ship layer. For contour layer, we tried both double head loss and only dice loss since the pixels are too imbalanced.</p>
<h2>Input size</h2>
<p>512 and 768</p>
<h2>Thresholds</h2>
<ol>
<li>Best classifier threshold is 0.5–0.7 depending on the model. Images below this threshold have empty prediction in submission; images above this threshold go to Unet model which may still end up having empty predictions from Unet. </li>
<li>Unet pixelwise threshold
We noticed that for all our models, best pixelwise threshold on local validation (usually around 0.5) is never the best on public LB. After some simulations and LB probing, we highly suspected discrepancy between train and test label standards: test set masks are more “tight”, i.e. higher threshold on test set (0.6-0.8 depending on model) works out better on public LB. This turned out to be the case just for the public LB.</li>
</ol>
<h2>Training techniques</h2>
<p>All the standard ones: data augmentation, cyclic learning rates, TTA</p>
<h2>Postprocessing (mostly using skimage)</h2>
<ul>
<li>Algo to separate "weakly connected" masks, with erosion and watershed. We start from the intuition that masks are generally rectangular, and when the model-predicted ship boundaries are 'weak', they touch along the longer side of the rectangles. To separate this case, we first create a thin rectangular structuring element that is aligned with the major axis orientation of the original mask. Applying erosion with this structuring element helps separate the boundary. Since erosion removes pixels from the original mask, we try to make up for this by applying a watershed algorithm on top of the erosion. Example:
<img src="https://i.imgur.com/rYRXeLQ.jpg" alt="weakly"></li>
<li>to cut corners (probing the possibility that test set labels may be made as actual ship shape instead of rectangles) </li>
<li>to “rectanglize” unet output masks by finding the minimal bounding box and shrinking it to the same size</li>
</ul>
<h2>Final models</h2>
<p>We selected the best one in local validation and the best on the public LB. The best on the public LB severely overfitted, as happened to quite a few teams. The best one in local validation turned out to place 9th. </p>
<h2>Other things tried</h2>
<ul>
<li>Ensembling different models by averaging predicted probability matrices – did not work well </li>
<li>Combining different models based on number of ships per image and ship size (some models work better for certain images or ships) – may overfit public LB</li>
<li>MaskRcnn – much worse than unet</li>
<li>Adding ships predicted in one model to the results of another model – slightly improved but too complicated</li>
</ul>
<h2>Team and hardwares</h2>
<p>We are full time data scientists and coworkers. We don’t do image tasks at our job. This is the first deep learning competition for most of the team members. We teamed up very early. We have 6 Nvidia 1080Ti or equivalent and 2 smaller ones</p>
<h2>Questions for other top teams</h2>
<ul>
<li>How do you deal with competitions like this where the public LB is too small and delta between local CV and public LB too unstable? Do you always faithfully trust your local CV no matter what? </li>
<li>What if the delta is because of difference in distribution (or even labeling standards as discussed <a href="https://www.kaggle.com/c/airbus-ship-detection/discussion/70221">here</a>) between train and test sets? </li>
<li>I noticed bestfitting submitted many times, so there must be some value in the public LB?</li>
</ul>
<p>We haven't cleaned up the code yet, but you can check out topcoder's implementation for a similar task for a previous competition: <a href="https://github.com/selimsef/dsb2018_topcoders/blob/master/victor/create_masks.py">https://github.com/selimsef/dsb2018_topcoders/blob/master/victor/create_masks.py</a></p>
<p>I don't understand your second question.</p>
<p>Thanks Ray</p>
<p>Thanks Terence. Hope you have better luck with private LB next time!</p> |
Airbus Ship Detection Challenge | Do not trust the LB, trust your CV. (5th/8th in public/private LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank u! </p>
<p>haha, let's doodle!</p>
<p>Thank u again!</p>
<p>Congratulations to all the winners in this competition. Congratulate and thank my teammate Tom, earhian ,sheep and CHAN, It's memorable to team with you all.
Do not trust the LB, trust your CV. This is what I learned in this competition.</p>
<p><strong>Our Model Design</strong>
1. binary classifier on ship/no ship (all images) -> resnet34, se-resnext50 based model
2. unet models (ship images only)
3. unet models with deep supervision (all images)
4. mask rcnn (ship images only)</p>
<p>Our final submission combines results of different models above, details will be updated and some of our code will be on <a href="https://github.com/SeuTao/Kaggle_Airbus2018_8th_code">https://github.com/SeuTao/Kaggle_Airbus2018_8th_code</a></p>
<p>PS: It's really lucky for us to survive the huge shakeup.</p>
<p>Actually, pseudo-labeling worked in this competition (single model 0.853 on private).</p>
<p>Thanks!</p>
<p>I started training the model on 384 resized images with Adam, then fine-tuned on 768. </p>
<p>There are about 12 GPUs in our team.</p>
<p>Thank u!</p> |
Airbus Ship Detection Challenge | Few lessons learned (4th place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to all who have been a part of this competition - our second segmentation story in a row and finally a gold for me and <a href="/ddanevskyi">@ddanevskyi</a>, <a href="/vshmyhlo">@vshmyhlo</a>. We didn't stop training our models till the last hour, and that's a nice story about how we were stuck around top-30 on public but kept challenging and improving our models till the end.</p>
<p>Data science is more about understanding the task and therefore building a proper validation. Keep calm when watching the public leaderboard - 12% of a highly unbalanced dataset gives no real picture. The story was not only about detecting ships but even more about fighting false positives (wave glare etc). We found having a good ship/no-ship classifier to be much more important for this competition than having a very well performing segmentation model, and that's not a hard part to obtain.</p>
<p>About validation there is not much to say - folds based on this shiny <a href="https://www.kaggle.com/manuscrits/create-a-validation-dataset-correcting-the-leak/notebook#x">kernel</a> (cheers <a href="/manuscrits">@manuscrits</a>!). Clear, no leaks, evaluated on ship images only with a small amount of no-ship images added (around 9k images in total). We had all kinds of U-Nets tuned from the TGS competition, so we trained shallow/deep encoders separately and blended 4 of them as a final model. We did a really nice grid search over U-Net based architectures. </p>
<p>Few words about lessons learned:</p>
<ul>
<li><p>Check your U-Net concatenations - if you want to predict small objects then try to avoid poolings/strides>1 where they can be avoided. Also size matters - we found that a missing central unet layer (< x / 32) led to worse predictions for huge ships (up to 300px in length).</p></li>
<li><p>Deep networks do not always lead to better results unless you tune and train them forever. Pick the best encoder to fit your particular resources, unet architecture, finetuning etc. Our best validation model was resnet34 (oh yes) trained 300+ epochs on cropped images. </p></li>
<li><p>Finetuning on the full-size images might boost your performance significantly if tuned properly (from 0.490 to 0.520 on the local ships-only validation, as an example).</p></li>
<li><p>Part of the success was an accurate, intelligent 256 cropping.</p></li>
<li><p>Some losses lead to better shapes of boats but force ships to be more homogeneous, which makes them harder to split. BCE-based losses will give you worse shapes but more chances to split them.</p></li>
</ul>
<p>There is also more to say about which additional channels might help you fit instance segmentation and split ships, but we didn't achieve our best there - unfortunately we didn't finalize our solutions for boundary predictions (probably due to the dice-based loss). Looking forward to reading about that from other teams.</p>
<p>Thanks my talented team. Thanks the best community <a href="http://ods.ai">ods.ai</a> for a wonderful sense of humor!</p> |
Airbus Ship Detection Challenge | Oriented SSD (solution without segmentation) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Airbus Ship Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I implemented an Oriented SSD for this competition. Though the LB score is not so good (226th), this model has interesting characteristics. For example, it ensures that the predicted shape must be a "rectangle".</p>
<p>Code: <a href="https://github.com/toshi-k/kaggle-airbus-ship-detection-challenge">https://github.com/toshi-k/kaggle-airbus-ship-detection-challenge</a>
<img src="https://raw.githubusercontent.com/toshi-k/kaggle-airbus-ship-detection-challenge/master/img/diagram.png" alt="diagram"></p>
<p>Hi Jeff,</p>
<p>Thank you for hosting an interesting competition.
I thought this competition was a chance to try Rotatable Bounding Boxes, because most target masks were rectangles.
When you can't use segmentation for some reason, or want to estimate the coordinates of ships directly, please use my solution.</p>
<p>Hi Iafoss,</p>
<p>I wondered the same thing as you. The masks from U-Net have rounded corners and look like bananas 🍌. One of the reasons may be the evaluation metric. In this competition, the metric (average F2 score) evaluates pixel-wise overlap, so the exact shape of the masks is not so important.</p> |
Allstate Purchase Prediction Challenge | Solution sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Allstate Purchase Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I found it in the data exploration phase. I did a cross-product of state vs each A, B, C, D, E, F, G's final value to get counts. I saw some states had zeros for certain product values. So I looked to see if anyone in those states had those values for their last quote plan. In most cases no one did, but for GA/FL there were a few.</p>
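<p>As a minimal pandas sketch of that exploration step (pandas and the file name are assumptions for illustration, not necessarily what I actually used), assuming one row per customer's final purchased plan with a 'state' column and the option columns A-G:</p>
<pre><code>import pandas as pd

# placeholder file: one row per customer's final purchased plan
purchases = pd.read_csv("final_purchases.csv")

for option in ["A", "B", "C", "D", "E", "F", "G"]:
    counts = pd.crosstab(purchases["state"], purchases[option])
    # rows containing a zero reveal option values that never occur in that state,
    # e.g. C=1 or D=1 in Georgia, G=2 in Florida
    print(option)
    print(counts[(counts == 0).any(axis=1)])
</code></pre>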
<p>And congrats Prazaci team for your surprise jump from 11th to 1st!</p>
<p>I think one key to this competition was choosing what *not* to change from the baseline. We didn’t change B or E at all. We only changed a small percentage of ACDF – usually only ones with shopping_pt=2. Before teaming up about a week ago, Alessandro had a top 10 score only changing G. We basically used his G and my ACDF. I think our solution only had about 2,500 rows that differed from the baseline (so less than 5% difference).</p>
<p>Something else that helped was finding the “Georgia/Florida tricks.” No customers in Georgia had C=1 or D=1 in their final purchase. But some customers had C=1 or D=1 as part of their last quoted plan in the test set. Changing these to 2 gave improvement. Similarly, no customers in Florida had G=2 in their final purchase. Did anyone find any other situations like these?</p>
<p>In addition to the base features, features I found useful were the A, B, C, D, E, F, and G from the previous shopping_pt. Also cost change from the previous shopping_pt.</p>
<p>I used GBM to predict individual ACDF values. Something that made this challenge difficult was that customers who could be safely predicted to change one product also had a high propensity to change multiple products – and getting multiple changes correct for one customer was difficult. So the customers that were easiest to predict for individual products turned out to be difficult to predict overall.</p>
<p>Thank you to Allstate and Kaggle for a fun competition!</p> |
Amazon.com - Employee Access Challenge | How I modified Miroslaw's code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Amazon.com - Employee Access Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I thank again Miroslaw for posting his great code.</p>
<p>Not only was it very well-performing code, but it gave us all a fantastic opportunity to learn and test how to work with Python. Thank you again.</p>
<p>I slightly modified Miroslaw's code, making it perform a bit better, and I would like to share it with other Kagglers, so it can be an opportunity to further improve our abilities and knowledge when working with Python on similar problems.</p>
<p>My results were entirely based on such a code.</p>
<p>Here are the changes I applied:</p>
<p><br>1) An option to start from a set of predictors<br>2) An option to immediately compute the final solution without further feature selection<br>3) Multiprocessing: automatically chooses the best number of jobs for maximum computation speed<br>4) Introduced a small_change variable fixing the minimum change in a model to be acceptable, in order to avoid overfitting<br>5) Features with fewer than 3 cases are clustered together in a rare group<br>6) After inserting a new variable it checks whether the already present variables are still meaningful to keep in the model (pruning)<br>7) As for cross-validation, it fixes test_size=.15 and uses the median, not the mean, to average the cross-validation results<br>8) It prints out only significant model changes, the history of the model, and the best C value<br>9) Randomized start, final CV score saved with the filename</p> |
Amazon.com - Employee Access Challenge | My modifactions to the Miroslaw's code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Amazon.com - Employee Access Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to everybody participating in this competition and forum, especially Paul Duan and Miroslaw Horbal for sharing their code. My solution was based on Miroslaw's code. I put the code on github:</p>
<p> </p>
<p>https://github.com/piokuc/amazon_eac</p>
<p> </p>
<p>I don't have time now to explain the little modifications I made, but if someone has a specific question regarding it I will be happy to answer.</p>
<p>Again, many thanks to everybody, it was an amazing learning experience for me, I really feel like I learned a lot in a short time.</p>
<p> </p> |
Amazon.com - Employee Access Challenge | Winning solution code and methodology | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Amazon.com - Employee Access Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,</p>
<p>I just released the code for our solution on Github. It is published under a MIT License:</p>
<p><a href="https://github.com/pyduan/amazonaccess">https://github.com/pyduan/amazonaccess</a></p>
<p>I did some refactoring but more may be needed. In particular, I have pretty much left Ben's code as is (in BSMan/) and only made minor modifications to files that I have integrated into my own code but haven't written myself (in external/). I'll do some additional cleaning if I have time.</p>
<p>As of now, the code that is run when launching classifier.py is my model (which includes some of Ben's features) -- to obtain the final blend, one must run Ben's code independently and then merge the predictions (which can be done by slightly modifying combine.py, which was written ad hoc and which I haven't generalized yet). As you can see, the skeleton is basically the same as the one I used in the starter code I published.</p>
<p>Also, special thanks to Miroslaw, whose code I shamelessly copied to create some of my datasets.</p>
<p> </p>
<p><strong>Methodology</strong>:</p>
<p>The final prediction is a weighted average of my code (2/3) and Ben's (1/3).</p>
<p> </p>
<p><span style="text-decoration: underline">Benjamin Solecki</span></p>
<p>Ben's approach is itself a blend of a logistic model and a mixture of tree-based models. He explained his approach for the logistic model in more detail <a href="../../c/amazon-employee-access-challenge/forums/t/5258/how-i-modified-miroslaw-s-code/28026#post28026">here</a>. As for the tree-based models, it is a combination of Random Forests, GBMs, and Extremely Randomized Trees that are grown using features based on counts and frequencies (e.g. number of resources managed by a given manager, etc.). I'll let him explain his approach further if he wishes to -- we merged comparatively late in the game (2 weeks before the deadline) so I would risk misrepresenting his train of thoughts.</p>
<p> </p>
<p><span style="text-decoration: underline">Paul Duan</span></p>
<p>As for mine, it was mainly driven by the fact that the structure of the dataset contained all categorical variables, with a large number of categories and some rare features. This meant that stability was a high priority, i.e. models should be relatively robust to changes in the composition of the dataset. I believe this is what in the end helped our score drop less than the other top competitors' from the public leaderboard to the private one.</p>
<p>As such:</p>
<p>- I trusted my CV score above all else, which was obtained by repeating an 80/20 train/test split 10 times. I would then select the models not only based on raw score, but also on the standard deviation between the 10 folds.</p>
<p>I made no attempt at fixing the discrepancy between CV and leaderboard scores that was due to the fact that all categories in the test set appeared in the train set, which was not necessarily the case when doing a CV split. The reasoning being that the best model would need to be resistant to a change in the split.</p>
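<p>A minimal scikit-learn sketch of that repeated 80/20 validation scheme is below; the estimator, the synthetic data and the AUC scorer are illustrative stand-ins rather than my actual setup.</p>
<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in data

model = LogisticRegression(max_iter=1000)
cv = StratifiedShuffleSplit(n_splits=10, test_size=0.2, random_state=0)    # 10 repeated 80/20 splits
scores = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)

# judge a model by both its mean AUC and its stability across the 10 splits
print(f"AUC {scores.mean():.4f} +/- {scores.std():.4f}")
</code></pre>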
<p>- I removed role 1 and role 2 from the original columns, as their effect was too strong and seemed to cause too much variance; I suspect that the algorithms weren't too good at dealing with unknown role IDs (they were the ones with the fewest number of categories, so the models tended to trust them too much)</p>
<p>- I spent comparatively very little time on feature selection (which was a big subject in this competition, judging by the forums), as they would be highly dependent on the makeup of the dataset. I did, however, reuse some of Miroslaw's code to generate three different datasets built by using greedy feature selection with different seeds. This was not critical to the raw performance of the blend but did help diversifying it/reducing the variance.</p>
<p>- I considered feature <em>extraction</em> to be much more important than feature <em>selection</em></p>
<p>- my classifier consists of a large combination of models (~15 currently) that are each either using a different algorithm or a different feature set. The top 3-5 models are probably enough to reach .92+ on the private leaderboard, but I found adding additional variants to the datasets helped minimize variance.</p>
<p>I then combined them by computing their predictions using cross-validation and feeding those predictions to a second model (stacking). When training the second model, I also added meta-features (number of times this resource/manager/etc. appeared in the training set, etc.), the idea being to try to determine dynamically which model should be trusted the most (some perform better when the manager is unknown, etc.).</p>
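<p>For illustration, here is a minimal scikit-learn sketch of stacking with meta-features; the base models, the 5-fold setup and the 2-D meta-feature arrays are assumptions, not the exact blend described above.</p>
<pre><code>import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def stack(X, y, meta, X_test, meta_test):
    """Level 1: out-of-fold predictions of base models; level 2: logistic regression
    on those predictions plus meta-features (2-D array, e.g. category counts)."""
    base_models = [
        RandomForestClassifier(n_estimators=200, random_state=0),
        GradientBoostingClassifier(random_state=0),
    ]
    # out-of-fold probabilities so the level-2 model never sees leaked predictions
    oof = np.column_stack([
        cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
        for m in base_models
    ])
    level2 = LogisticRegression(max_iter=1000)
    level2.fit(np.column_stack([oof, meta]), y)

    test_preds = np.column_stack([
        m.fit(X, y).predict_proba(X_test)[:, 1] for m in base_models
    ])
    return level2.predict_proba(np.column_stack([test_preds, meta_test]))[:, 1]
</code></pre>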
<p> Each dataset consists of what I called base data sets (combination of the original columns) and extracted feature sets.</p>
<p>The extracted feature sets are based on cross-tabulating all categories and looking at the counts manager/resource/etc X had been associated with role/department/etc Y, and so on (in feature_extraction.py, this is what I called the pivot tables; the feature sets are lists of lambda functions that extract the relevant cell in the table).</p>
<p> </p>
<p><strong>Usage</strong> </p>
<p>To run the code, you'll need Python, numpy/scipy and pandas installed. Pandas is not really necessary for the most part, but it is used in some of the external code; I may rewrite them to remove this dependency when I have the time.</p>
<p>The usage is described on the Github page (to create a submission made up of a simple average of the models in my blend, simply run python classifier.py -v -f testsubmission and it should show up in your submissions/ folder). This should net you a private leaderboard score of .92330. To get the full .92360, you'll need to also run BSMan's code in the BSMan/ folder and then combine all predictions according to a 2/3 1/3 split, where BSMan's final prediction is itself a 2/3 1/3 combination of his logistic model and ensemble models.</p>
<p>Runtime for my models should be ~30 minutes per fold, considerably less if you remove some models (simply comment out some strings in the model list, especially the GBC ones), and considerably more if you enable stacking as this will require cross-validated predictions on the training set to be computed as well.</p>
<p>Everything is cached so you'll never have to compute the same thing twice -- it also means that you should have some disk space available (the cache may grow to a couple gigabytes).</p>
<p>Paul</p>
<p>I see. What is happening is that your OS is using another dtype for integer arrays than int64 -- replace the condition with:</p>
<p><code>if issubclass(Xg.dtype.type, np.integer):</code></p>
<p>(I updated the code on Github)</p>
<p>This should be more general. I had added this condition so that it wouldn't try consolidating numeric datasets (like the "effects" one).</p>
<p>I added the triples back (I had removed them since I wasn't using them). Not too sure about the tuples_cf thing though -- it generates fine for me. Is it the only _cf that doesn't generate for you?</p>
<p>[quote=Haru;28129]</p>
<p>Hint for Windows. In Paul's code all "open()" corresponding to pickle call must have parameter 'wb' for write or 'rb' for read.</p>
<p>[/quote]</p>
<p>Good catch. I updated the flags in the code. Has anyone tested it on Windows? I have only run it on UNIX systems so far.</p> |
American Epilepsy Society Seizure Prediction Challenge | ESAI-CEU-UCH solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Epilepsy Society Seizure Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, our solution is totally ready and functional. We have found some problems during the code revision, and a new v1.0 of the solution is prepared under the branch name "v1.0" in github.</p>
<p>The repository is available here:</p>
<p>- https://github.com/ESAI-CEU-UCH/kaggle-epilepsy</p>
<p>And the report and README, in the same repository:</p>
<p>- https://github.com/ESAI-CEU-UCH/kaggle-epilepsy/raw/master/report.pdf</p>
<p>- https://github.com/ESAI-CEU-UCH/kaggle-epilepsy/blob/master/README.md</p>
<p>Regards</p> |
American Epilepsy Society Seizure Prediction Challenge | My code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Epilepsy Society Seizure Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>https://github.com/MichaelHills/seizure-prediction</p>
<p>I'm still doing some final testing to make sure everything works OK, but it's more or less there. It may be lacking in documentation in some places but I'll be more than happy to answer any questions.</p> |
American Epilepsy Society Seizure Prediction Challenge | Our Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Epilepsy Society Seizure Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Summary:</p>
<p>Our winning submission was a weighted average of three separate models: a Generalized Linear Model regression with Lasso or elastic net regularization (via MATLAB's lassoglm function), a Random Forest (via MATLAB's TreeBagger implementation), and a bagged set of linear Support Vector Machines (via Python's scikit-learn toolkit).</p>
<p>Before merging as a team, we developed different feature sets for our models, but both sets were a combination of time- and frequency-domain information. For more detailed information, see our detailed report located under our GitHub repo here:</p>
<p>https://github.com/drewabbot/kaggle-seizure-prediction/blob/master/report.md</p> |
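<p>A minimal sketch of the weighted-average step described above (the weights below are placeholders, not the team's actual values):</p>
<pre><code>import numpy as np

def blend(preds, weights):
    # preds: list of per-clip probability arrays, one per model
    weights = np.asarray(weights, dtype=float)
    return np.average(np.column_stack(preds), axis=1, weights=weights / weights.sum())

# p_glm, p_rf, p_svm = ...  # predictions from the three models
# final = blend([p_glm, p_rf, p_svm], weights=[1.0, 1.0, 1.0])
</code></pre>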
American Epilepsy Society Seizure Prediction Challenge | Our solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Epilepsy Society Seizure Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=Andy;58798]</p>
<p>I still stress that for a real-life practical application all this test data based manipulation is complete non-sense. The real performance is 80.8%, all the rest is fake boost. However, to win some money when it is allowed why not :)</p>
<p>[/quote]</p>
<p>Hi, Andy</p>
<p> You are totally right, the competition rules and winning criteria don't reflect a real-life medical device. But anyway, the basic gap between this competition and a real medical device is huge.</p>
<p>A real medical device would be based on a wearable device with fewer channels, on humans who are functioning at home.</p>
<p>This competition is a "machine learning" one, and it demonstrates the ability of the competitors to deal with certain rules and data. I believe that competitors who did well here have a good chance to perform well and find the right algorithms and "tricks" while developing a real medical device, or even while solving similar problems in other fields.</p>
<p> Nir</p>
<p>Hi, my name is Nir Kalkstein, and I represented Medial-Research in the competition (with tips from my colleagues). <br>Medial-Research is a private research institute that set its goal to develop medical applications that can help people, based on big data and machine learning (ML) methods.<br>An example of a product that resulted from research performed by Medial-Research is a personalized scoring solution for colorectal cancer risk, based on analysis of simple, standard blood count results and demographic data. http://medialcs.com/<br>This competition was my first opportunity to work on EEG signals in general and the epilepsy problem domain in particular. All the code that I developed for this competition was developed from scratch, written in native C, in a Visual C++ environment, using my general experience in ML and signal processing.<br>Within a relatively short time after I joined the competition (I joined almost a month after it started), I realized that this is a "very small data" big data competition. For example, for dog 1 there are only 4 independent cases of seizures. <br>Since this was the case, my intuition was that the algorithm should be heavily regularized against overfitting, and that KNN or tree-based algorithms couldn't work well enough. <br>I chose to use my "special", highly regularized, iterative LS Ensemble, and made a one-out LS Ensemble environment for finding the features and improving the algorithm. The fact that the algorithm was developed by me from scratch in native C enabled me to put many “tricks” and regularizations into it, and to use as many features as I wanted (something like 900-1500 features, depending on the number of channels).<br>The features that I found were based on:<br>- General energy average and energy STDV over time (15-second bands), for each channel.<br>- FFT: log of energy in different frequency bands for each channel.<br>- FFT: log of energy, and total energy (without log), in different frequency bands for the average of all the channels.<br>- Correlation of energy in frequency bands between channels.<br>- SQRT or POW of each feature, in order to exploit nonlinear behavior of each feature.<br>One more “trick” was the post-processing (I believe that I could have won the competition even without this) - I found pairs of 10-minute segments that are most likely to be together in time, and generally I took a MAX of each pair's scores.<br>We are really in a great dilemma as to whether to apply for the prize. I am very proud to have won this competition, but the concept of the generic part of our algorithms, which was implemented by me from scratch for this competition, is currently being used by some of our work and may be used in the future in other medical applications, so we may eventually decide not to take the prize money (maybe it could be used for future competitions), in order not to publish the exact source code. We still have 8 days to decide :)<br>Thank you all, it was a great competition, I really enjoyed it, and this competition made me and the team at Medial-Research think about how we may continue this research in order to make a real medical device that might help many people around the world.<br>Nir Kalkstein</p>
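<p>A rough sketch of the frequency-band features described in the post above: log energy per band per channel, plus between-channel correlations of those band energies. The band edges and sampling rate below are illustrative, not the author's exact choices.</p>
<pre><code>import numpy as np

def band_log_energy(segment, fs, bands=((0.5, 4), (4, 8), (8, 12), (12, 30), (30, 70))):
    # segment: array of shape (n_channels, n_samples)
    freqs = np.fft.rfftfreq(segment.shape[1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(segment, axis=1)) ** 2
    feats = []
    for lo, hi in bands:
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(spec[:, idx].sum(axis=1) + 1e-12))
    return np.column_stack(feats)  # (n_channels, n_bands)

# correlation of band energies between channels (computed over shorter windows in practice)
# band_corr = np.corrcoef(band_log_energy(segment, fs=400))
</code></pre>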
American Epilepsy Society Seizure Prediction Challenge | Swedish Institute of Computer Science solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Epilepsy Society Seizure Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello all,</p>
<p>better late than never we are releasing the code for our solution, which
got us a top-30 finish.</p>
<p>A few specific contributions we made might prove useful, such as new CV rules for sklearn that allowed us to split segments in a way that reduced over-fitting, and our parallelized data input/output methods.</p>
<p>You can find the code here: <a href="https://github.com/sics-lm/kaggle-seizure-prediction">https://github.com/sics-lm/kaggle-seizure-prediction</a></p>
<p>Regards, <br>
Theodore</p> |
American Express - Default Prediction | (66th) My Reflections On This Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to all winners and medalists! </p>
<p>I have been reflecting on what I should have done better after my team fell 40+ places on the private leaderboard. Confusingly, looking at the private scores of my team’s submissions, I haven’t found any strategy/approach of mine that would have deterministically outperformed the others on the private leaderboard. Here’s my list of thoughts. </p>
<ol>
<li><code>Single Model</code> vs <code>Model Ensemble</code>: Most of the top scoring submissions are ensembles, but there are some single models on the top scoring list as well. I will still opt for an ensemble next time though.</li>
<li><code>Heavy Ensemble</code> vs <code>Light/Simple Ensemble</code>: No obvious pattern found. Many light ensemble models (1-layer stacking with logistic regression on just a few models) outperform heavy ensembles and vice versa. </li>
<li><code>Specific Ensemble Strategy</code>: Nothing seems to always work. For each strategy, we have many similar ensembles. They can range from top 30 to maybe 200th with similar CV and public LB.</li>
<li><code>CV</code> vs <code>LB</code> vs <code>?</code>: We have quite a few private 808 subs and none of them are best CV or best LB. I can easily find a similar model with a better CV and LB and scores a lot worse on the private LB. </li>
<li><code>NN/GBDT Ensemble</code> vs <code>Pure-GBDT</code>: Again no conclusive observation here. Including NN in the ensemble doesn’t seem to deterministically make performance better or worse in my case. </li>
</ol>
<p>Sadly no immediate takeaways from my own solutions ☹️☹️☹️. </p>
<p><strong>Things I learned post-competition that worked well for others:</strong></p>
<ol>
<li><a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347641" target="_blank">Knowledge Distillation</a> from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </li>
<li>Capturing time-series dynamics within each column <br>
<a href="https://www.kaggle.com/code/pavelvod/27-place-sequentialencoder?scriptVersionId=104154431" target="_blank">SequentialEncoder</a> from <a href="https://www.kaggle.com/pavelvod" target="_blank">@pavelvod</a> and <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347651" target="_blank">Mini-LSTMs</a> from <a href="https://www.kaggle.com/fritzcremer" target="_blank">@fritzcremer</a></li>
<li><a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347688" target="_blank">Ensemble of multiple TabNets</a> from <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> </li>
</ol>
<p><strong>Hope you can share your experience here regarding what consistently worked great for you or your takeaways.</strong></p> |
American Express - Default Prediction | 10th Place Solution: XGB with Autoregressive RNN features | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>What a competition! I really enjoyed it and only wish I could have found more time. First of all, I would like to thank Raddar <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>, Martin <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>, and many others who generously shared code and datasets! The public solutions are of amazing quality, and they also determined my game plan: <strong>create something original and blend it with the best public solution.</strong> My solution is based on RAPIDS cudf for dataframe processing, XGB for training, and pytorch lightning for feature extraction.</p>
<p>There are two vital observations of this dataset:</p>
<ul>
<li>the big test data which is from the future is available.</li>
<li>short sequences (<13) are culprits of bad performance.</li>
</ul>
<p>Let's start with the latter observation: <strong>the sequence length of each customer's profile</strong> plays a critical role in model performance:</p>
<pre><code>import cudf
path = '/raid/amex'
train = cudf.read_parquet(f'{path}/train.parquet',columns=['customer_ID'])
trainl = cudf.read_csv(f'{path}/train_labels.csv')
train = train.merge(trainl,on='customer_ID',how='left')
train['seq_len'] = train.groupby('customer_ID')['target'].transform('count')
train = train.drop_duplicates('customer_ID',keep='last')
train.groupby('seq_len').agg({'target':['mean','count']}).sort_index(ascending=False)
</code></pre>
<p>output:</p>
<pre><code> target
mean count
seq_len
13 0.231788 386034
12 0.389344 10623
11 0.446737 5961
10 0.462282 6721
9 0.450164 6411
8 0.447300 6110
7 0.418430 5198
6 0.387670 5515
5 0.392635 4671
4 0.416221 4673
3 0.358602 5778
2 0.318465 6098
1 0.335742 5120
</code></pre>
<p>It is obvious that <strong>sequence length 13</strong> is the most common but also has a significantly lower mean default rate. At first glance, I thought it meant shorter sequences are easier to predict since they have more positive samples, but I was quickly proven wrong when checking my cross-validation results:</p>
<pre><code>Fold 0 amex 0.7990 logloss 0.2144
Fold 0 L13 amex 0.8214 logloss 0.1928
Fold 0 Other amex 0.6724 logloss 0.3289
</code></pre>
<p>The 1st line is the overall score. The 2nd line is the score of sequences of length 13 and the 3rd line is the score of all the remaining sequences. Apparently, shorter sequences have a much worse score than the full sequences of length 13. This also hints at how the short sequences were truncated: the more recent profiles are deleted, which could explain the big degradation of the score, because more recent profiles have more predictive power in general. For example, over 13 consecutive months (M1~M13), say sequence A is of length 13 and sequence B is of length 8:</p>
<pre><code> M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1
B 1 1 1 1 1 1 1 1 1 0 0 0 0 0
</code></pre>
<p><br>
where <code>1</code> means features exist and <code>0</code> means features missing. Of course, there is another possibility:</p>
<pre><code> M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1
B 0 0 0 0 0 1 1 1 1 1 1 1 1 1
</code></pre>
<p><br>
We can actually find out which one is more plausible by <code>unstacking</code> the dataframe in the above two ways and running xgboost on each, respectively. As expected, the former has a better CV score, which indicates it is likely how the truncation of short sequences was done.</p>
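<p>A rough sketch of that check, assuming statements are sorted by date within each customer (right-aligned puts the last statement in slot 13, left-aligned starts every customer at slot 1); the helper below is illustrative, not the exact code used:</p>
<pre><code>import pandas as pd

def unstack_aligned(df, features, right_align=True):
    df = df.sort_values(['customer_ID', 'S_2']).copy()
    pos = df.groupby('customer_ID').cumcount()
    n = df.groupby('customer_ID')['S_2'].transform('count')
    df['slot'] = (13 - n + pos + 1) if right_align else (pos + 1)
    wide = df.pivot(index='customer_ID', columns='slot', values=features)
    wide.columns = [f'{f}_M_{s}' for f, s in wide.columns]
    return wide

# train the same XGB on unstack_aligned(df, feats, True) and
# unstack_aligned(df, feats, False), then compare CV scores
</code></pre>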
<p>If we could somehow predict the missing profiles of sequence <code>B</code>, the life of the downstream XGB models would be made much easier. An intuitive choice is to generate the missing profiles using a one-dimension auto-regressive RNN. Basically, we want to predict the features of the next month based on the feature values of the current month and all previous months. And when we have the prediction for the next month, we can use it as part of the input and predict again, and so on and so forth. This is also where the availability of the big test data really shines. Since we are predicting features, not <code>target</code>, we can train our models using both <code>train</code> and <code>test</code> data. The RNN structure is very simple: just one GRU layer and some FC layers. The RNN performance is pretty decent. In terms of RMSE over all 178 numerical features, the GRU achieves validation <code>RMSE 0.019</code>. For simplicity, all features are log-transformed and <code>fillna(0)</code>. You might wonder how good <code>RMSE 0.019</code> is. We can simply compare it with the naive baseline: just repeat the last available month. For example, if I'm asked to predict the features of M2, the naive baseline just outputs the features of M1. The RMSE of this naive baseline is <code>0.03</code>, so our RNN actually learns something and could be useful.</p>
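<p>A minimal PyTorch sketch of such a feature-forecasting RNN (one GRU layer plus a small head, trained so the output at step t predicts the features at step t+1); layer sizes are illustrative, not the exact architecture used:</p>
<pre><code>import torch
import torch.nn as nn

class FeatureForecaster(nn.Module):
    def __init__(self, n_features=188, hidden=256):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_features))

    def forward(self, x):          # x: (batch, seq_len, n_features)
        h, _ = self.gru(x)
        return self.head(h)        # prediction for the next step at every position

# training target is the input shifted by one month:
# x, y = batch[:, :-1, :], batch[:, 1:, :]
# loss = nn.functional.mse_loss(model(x), y)
</code></pre>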
<p>The rest is straightforward: after predicting the missing months, every sequence is of length 13, so I just unstack the dataframe to increase the number of features 13x. For example, instead of having one feature <code>P_2</code> for the last month, we now have 13 features <code>P_2_M_1</code> to <code>P_2_M_13</code>. These are the most useful features I created. For downstream classifiers I only use XGB so that it is <em>not similar</em> to <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">the great LGB DART notebook</a>. By varying RNN hyperparameters, XGB hyperparameters and different combinations of features, I ended up with 7 XGB models, whose ensemble is 0.7993 CV and 0.799 public LB. Averaging it with the best public solution, and with extraordinary luck, my final submission ended up in the gold zone.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F100236%2Facf9b55d595c8c030770291e3dd69844%2FWechatIMG5474.png?generation=1661401124767084&alt=media" alt=""></p>
<p>A final thought: my best auto-regressive features were generated 1 hour before the deadline. I'm very happy it worked!</p>
<p>Another reason I stopped at 14th is I found that to improve overall AMEX metric, it is more important to make the poor better instead of making the rich richer.</p>
<pre><code>Fold 0 amex 0.7990 logloss 0.2144
Fold 0 L13 amex 0.8214 logloss 0.1928
Fold 0 Other amex 0.6724 logloss 0.3289
</code></pre>
<p>Many times I found <code>L13 amex</code> is improved with the new features but <code>Other amex</code> with short sequences are worse or not improved, and the overall <code>amex</code> is not improved. If the metric is <code>log loss</code>, I would spend more time optimizing <code>L13</code>.</p>
<p>Yes, I did. I only tried to predict the 14th statement of customers with 13 statements and it does help. Maybe predicting more is better. I stopped because the prediction of customers with 13 statements is already very accurate, above 0.82 AMEX metric. So I focused more on shorter sequences.</p>
<p>Thank you for the question. The short answer is I ignored the gaps in the sequence. I think what's important is if the more recent profiles of a customer are available, not what's missing in the past. Please also refer to my response to Tonghui Li <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347668#1913603" target="_blank">here</a>.</p>
<p>Thank you for the comment! Yes, I think you are right and I wasn't clear in my post. If we align two sequences with the date <code>S_2</code> like Feb/2018, it is the 2nd scenario as you suggested. But if we consider aligning two sequences relative to when the defaults actually happened, it might become the 1st scenario. </p>
<p>For example, if <code>M1</code> to <code>M13</code> represent <code>Feb/2018</code> to <code>Mar/2019</code>, it is aligned as the 2nd case. </p>
<pre><code> M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1
B 0 0 0 0 0 1 1 1 1 1 1 1 1 1
</code></pre>
<p><br>
so when do the defaults happen? My assumption is default of <code>B</code> happened sometime after default of <code>A</code>, something like this:</p>
<pre><code> M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 M15 M16 M17 M18
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1 x
B 0 0 0 0 0 1 1 1 1 1 1 1 1 1 x
</code></pre>
<p><br>
where <code>x</code> indicates default. The point is sequence B is harder to predict which indicates its more recent profiles are missing. So if we align the default dates <code>x</code>, it looks like scenario 1 in my original post:</p>
<pre><code> M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 M15 M16 M17 M18
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1 x
B 1 1 1 1 1 1 1 1 1 0 0 0 0 0 x
</code></pre>
<p>Of course, this is all my hypothesis and by no means a precise alignment since the true default dates are unknown. However, the idea might be roughly correct and useful. Eventually, it all comes down to what the new feature should mean. <code>P_2_M_13</code> could mean the value of <code>P_2</code> in <code>Mar/2019</code> or it could mean the value of <code>P_2</code> in the most recent month relative to when the default happens. I think the downstream XGB learns better with the latter.</p>
<p>Hi Chris, these are great questions! I'll come back to this and give more details. The short answers are:</p>
<ul>
<li>I only trained one RNN model of length <code>8</code> to generate future profiles. The input data are truncated or padded to make length 8 and they are right aligned.</li>
<li>No, I don't give a flag to xgb to indicate the features are generated or real. But this is an interesting idea. might work.</li>
<li>Yes, I used simple k-fold. but I found very late in the competition that a stratified k-fold on sequence-length might be better.</li>
<li>Yes, I used xgb early stopping but very large <code>early_stopping_rounds</code>.</li>
</ul>
<p>yes, I tried both. Keeping both predicted 14th and actual 13th (aka last) works best with xgb. Last minus predicted last doesn't add value.</p>
<p>Thank you for the question!</p>
<blockquote>
<p>After generating the features, did you also predict the target for these new records, or what value for target did you use?</p>
</blockquote>
<p>I just used their original targets. The purpose is to generate more recent features with respect to the original targets.</p>
<blockquote>
<p>Does the RNN predicts all the features at once?</p>
</blockquote>
<p>Yes. please also refer to my answer <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347668#1914873" target="_blank">here</a></p>
<p>To be precise, RNN maps an input <code>[B,S,N]</code> tensor to an output <code>[B,S,N]</code> tensor, where <code>B</code> is the batch size, <code>S</code> is the sequence length and <code>N</code> is the number of all numerical features. The output tensor is one step ahead of the input tensor (output is for the future). In inference, we only take the last step of the output tensor and attach it to the input tensor and predict again. </p>
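<p>Under those shape assumptions, the autoregressive rollout can be sketched as:</p>
<pre><code>import torch

@torch.no_grad()
def roll_out(model, x, target_len=13):
    # x: (batch, seq_len, n_features) with seq_len < target_len
    while x.shape[1] < target_len:
        next_step = model(x)[:, -1:, :]   # keep only the last predicted step
        x = torch.cat([x, next_step], dim=1)
    return x
</code></pre>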
<p>Yes the RNN predicts all features at once. Both input and output are 8*188 but the output is one time step ahead.</p>
<p>I forgot to thank <a href="https://www.kaggle.com/pyagoubi" target="_blank">@pyagoubi</a> Art Vandelay for his great kernel: <a href="https://www.kaggle.com/code/pyagoubi/amex-eda-evolvement-of-numeric-features-over-time" target="_blank">https://www.kaggle.com/code/pyagoubi/amex-eda-evolvement-of-numeric-features-over-time</a><br>
here is a screenshot of his drawing:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F100236%2F3364ddfbcdfea257e2c047971aaaf0c2%2FScreen%20Shot%202022-08-26%20at%201.24.11%20PM.png?generation=1661534676162068&alt=media" alt=""><br>
The pattern shows a clear and predictable trend, which motivates me to go down this path. </p> |
American Express - Default Prediction | 11th Place Solution (LightGBM with meta features) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you !</p>
<p>No, we have not experimented with such a meta feature.<br>
The only default information given is "will they default within 120 days of the latest statement", so I feel it is difficult to predict whether they will default in the following month.</p>
<p>Thank you comment !</p>
<blockquote>
<p>What Features did you remove from the train?</p>
</blockquote>
<p>The deleted features are "R_1, D59, S_11, B_29".</p>
<p>Thank you!<br>
Please refer to the code in the comment addressed to chris.</p>
<p>Thank you!</p>
<p>The reasons why I think meta feature is effective are as follows.</p>
<ul>
<li>When aggregating by customer_ID, some information is inevitably lost.</li>
<li>By assigning a target to the train data before aggregation, we can calculate the "default probability of a given customer given only one month of data".</li>
<li>The meta feature calculated in this way has a new property of "default probability at the time of each month," and by aggregating it, the part that could not be captured by the existing features can be compensated (e.g., how the default probability changed).</li>
</ul>
<p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <br>
Thank you comment!</p>
<p>The OOF includes up to 13 predictions per customer (one per statement), created as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930293%2Fff38c51bc4fe862e1f9ff919af07a536%2Foof_pred.png?generation=1661433784065595&alt=media" alt=""></p>
<p>OOF prediction is aggregated as follows.</p>
<pre><code>def make_meta_features(df):
n_months_list = [3, 6, 13]
df_list = []
for n_month in n_months_list:
df_period = df[df["n_month_from_latest"] <= n_month-1] # min n_month=0
df_period = df_period.groupby("customer_ID")["prediction"].agg(['mean', 'median', 'std', 'min', 'max'])
df_period.columns = [f"meta_{col}_latest{n_month}" for col in df_period.columns]
df_list.append(df_period)
df_periods = pd.concat(df_list, axis=1)
df_periods.reset_index(inplace = True)
df_periods["meta_last"] = df.groupby("customer_ID")["prediction"].agg(['last']).values
# make slope (≒rate)
df_periods["meta_slope_last_latest3"] = df_periods["meta_last"] / df_periods["meta_mean_latest3"]
df_periods["meta_slope_latest3_latest6"] = df_periods["meta_mean_latest3"] / df_periods["meta_mean_latest6"]
df_periods["meta_slope_latest6_latest13"] = df_periods["meta_mean_latest6"] / df_periods["meta_mean_latest13"]
# make meta pred
df_periods["meta_pred_latest3"] = df_periods["meta_last"] * df_periods["meta_slope_last_latest3"]
df_periods["meta_pred_latest6"] = df_periods["meta_last"] * df_periods["meta_slope_latest3_latest6"]
df_periods["meta_pred_latest13"] = df_periods["meta_last"] * df_periods["meta_slope_latest6_latest13"]
return df_periods
</code></pre>
<blockquote>
<p>And then add the result as a new column in train dataframe to be used when training another model?</p>
</blockquote>
<p>Yes. A new model was created by adding meta features to the original features.</p>
<p>Thank you for comment !</p>
<blockquote>
<p>The only difference is that I used the oof prediction as 13 numeric features directly and you did some aggregation.</p>
</blockquote>
<p>It did not occur to me to use the 13 numeric features directly. That seems to work too.</p>
<p>※Previously titled 12th Place Solution. It is now 11th due to the fixed ranking.</p>
<p>Thank you to everyone who participated in the competition and to everyone involved in organizing it.<br>
I learned a lot through this competition.</p>
<h2>Score & Result</h2>
<ul>
<li>My best submission<ul>
<li>Local CV:0.79922</li>
<li>Public: 0.80088</li>
<li>Private: 0.80852</li></ul></li>
<li>Results<ul>
<li>Public:6th → Private 12th</li></ul></li>
</ul>
<h2>Feature Engineering</h2>
<ul>
<li>Base features are from public notebook<ul>
<li><a href="https://www.kaggle.com/code/thedevastator/amex-features-the-best-of-both-worlds" target="_blank">https://www.kaggle.com/code/thedevastator/amex-features-the-best-of-both-worlds</a></li></ul></li>
<li>Delete some features<ul>
<li>Round up features</li>
<li>Duplicate features (ex. groupby counts for all category features)</li></ul></li>
<li>Add some features<ul>
<li>Features aggregated by time period <ul>
<li>min, max, mean, std for the last 3 and last 6 months</li></ul></li>
<li>Rate and diff features in time series<ul>
<li>ex. last - latest_3month_mean, last_3month_mean / last_6month_mean</li></ul></li>
<li>Null count features</li>
<li>Date features</li>
<li><strong>Meta features (most important features!)</strong><ul>
<li>how to make<ol>
<li>Train_labels are assigned to train data (before aggregation by cid) and train model.</li>
<li>Make oof prediction for train data.</li>
<li>Aggregate oof prediction by time period.</li></ol></li>
<li>Using this feature, I reached 0.800 PublicLB from 0.799 PublicLB in single model.</li>
<li>Referring to the DSB2019's 2nd place solution method.<ul>
<li><a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/127388" target="_blank">https://www.kaggle.com/c/data-science-bowl-2019/discussion/127388</a></li></ul></li></ul></li></ul></li>
</ul>
<h2>Validation strategy</h2>
<ul>
<li>Use StratifiedKFold (k=5).</li>
<li>I think Public LB is more important than local CV to measure Private's performance.<ul>
<li>Data size is about the same for train and public.</li>
<li>In terms of time, public data is closer to private data than train data.</li>
<li>Adversarial validation also showed that the train/private distributions are farther apart than the public/private ones (a sketch of this check follows this list).<ul>
<li>train/private:AUC 0.99</li>
<li>public/private:AUC 0.82</li></ul></li></ul></li>
<li>While focusing on publicLB, we also looked at local CV to determine if there was any improvement.<ul>
<li>Also checked local logloss because amex_metric was not stable.</li>
<li>It was hard to find a few digits of publicLB.</li></ul></li>
</ul>
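<p>The adversarial validation referenced in the list above can be sketched as follows: label train rows 0 and test rows 1, fit a classifier, and read the AUC as a measure of how distinguishable the two sets are (the model choice here is only illustrative).</p>
<pre><code>import lightgbm as lgb
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def adversarial_auc(train_df, test_df, features):
    X = pd.concat([train_df[features], test_df[features]], ignore_index=True)
    y = np.r_[np.zeros(len(train_df)), np.ones(len(test_df))]
    clf = lgb.LGBMClassifier(n_estimators=200)
    pred = cross_val_predict(clf, X, y, cv=3, method='predict_proba')[:, 1]
    return roc_auc_score(y, pred)
</code></pre>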
<h2>Model</h2>
<ul>
<li>LightGBM<ul>
<li>Use dart.</li>
<li>Hyperparameters are the same as in the base notebook.<ul>
<li><a href="https://www.kaggle.com/code/thedevastator/amex-features-the-best-of-both-worlds" target="_blank">https://www.kaggle.com/code/thedevastator/amex-features-the-best-of-both-worlds</a></li></ul></li>
<li>Get best amex metric model (use callback)<ul>
<li><a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/332575" target="_blank">https://www.kaggle.com/competitions/amex-default-prediction/discussion/332575</a> </li></ul></li></ul></li>
</ul>
<h2>Feature Selection</h2>
<ul>
<li>Adversarial validation<ul>
<li>Delete features of high importance in train/private adversarial validation.<ul>
<li>Drop_features: R_1, D59, S_11, B_29</li>
<li>After the change, the AUC was 0.8.</li></ul></li></ul></li>
<li>Null importance<ul>
<li>Keep features whose actual importance is larger than their mean importance with a shuffled target (see the sketch after this list).</li>
<li>Before:4300 features → After:1300 features</li></ul></li>
</ul>
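<p>A hedged sketch of the null-importance selection described in the list above: compare each feature's gain importance on the real target with its mean importance over a few target-shuffled runs, and keep only the features that beat their own null mean.</p>
<pre><code>import lightgbm as lgb
import numpy as np
import pandas as pd

def null_importance_selection(X, y, n_null_runs=5):
    def gain_importance(target):
        model = lgb.LGBMClassifier(n_estimators=300)
        model.fit(X, target)
        return pd.Series(model.booster_.feature_importance('gain'), index=X.columns)

    actual = gain_importance(y)
    nulls = pd.concat([gain_importance(np.random.permutation(y)) for _ in range(n_null_runs)], axis=1)
    return actual[actual > nulls.mean(axis=1)].index.tolist()
</code></pre>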
<h2>Ensemble</h2>
<ul>
<li>Use 3 LightGBM models and rank ensemble (weighted average).</li>
<li>Each model use different feature set.<ul>
<li>Model_1: Not use meta features.</li>
<li>Model_2: Use meta features & large features (not use null importance feature selection).</li>
<li>Model_3: Use meta features & small features (use null importance feature selection).</li></ul></li>
<li>Ensemble weight<ul>
<li>model_1:model_2:model_3 = 4:4:2</li>
<li>weight decided while looking at public LB</li></ul></li>
</ul>
<h2>Select Submission</h2>
<ul>
<li>I chose two sub's: <ol>
<li>BestLB sub</li>
<li>Sub with risk of time-series changes in features.<ul>
<li>Features that were not important in adversarial validation are not used. </li></ul></li></ol></li>
<li>The second model was the best in privateLB.</li>
<li>Perhaps the trend of some features changed over time. adversarial validation was very helpful.</li>
</ul>
<p>Thank you.</p>
<p><a href="https://www.kaggle.com/leewook" target="_blank">@leewook</a> <br>
Yes.<br>
I used LightGBM for oof prediction to create meta features.</p>
<p>The assignment of labels to train data is performed with code like this<br>
<code>
train = pd.read_csv("train_data.csv") # original train data
train_label = pd.read_csv("train_labels.csv")
train.merge(train_label, how="left", on=["customer_ID"])
</code></p>
<p>The creation of meta features in the test data is done by predicting models trained on the train data.</p>
<p>The creation of meta features in the test data is done by predicting models trained on the train data.<br>
A model is created by assigning train_label to the train data before aggregating by customer ID.</p>
<p><a href="https://www.kaggle.com/susnato" target="_blank">@susnato</a> <br>
The adversarial validation I performed was train/private.<br>
After removing "R_1, D59, S_11, B_29" the AUC dropped to about 0.8 with simple gbdt.<br>
I thought that removing many features would have more disadvantages, so I removed only the 4 features that had the greatest impact.<br>
The optimal number of features to be removed may be different because we did not conduct multiple experiments.</p>
<p><a href="https://www.kaggle.com/delai50" target="_blank">@delai50</a> <br>
The round-up features were removed because they were considered unimportant: they were very highly correlated with the pre-rounding values. Due to tight memory, we removed the less important features.</p>
<p>As you said, I created 4300 features first and then narrowed them down with null importance. Local CV was about 0.0005 better, but private score was worse. I think it made sense because the score got better when I put them in the ensemble.</p>
<p><a href="https://www.kaggle.com/alexbruskin" target="_blank">@alexbruskin</a><br>
I am not sure if this is the intent of the question, but I will answer.<br>
The meta feature is not created for all categorical features.<br>
To create the oofpred for the meta feature, we also use about 190 features that existed in train, in addition to n_month_from_latest.<br>
Please refer to the 5th solution for an easy-to-understand illustration.<br>
<a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/348097" target="_blank">https://www.kaggle.com/competitions/amex-default-prediction/discussion/348097</a></p>
<p><a href="https://www.kaggle.com/xxxxyyyy80008" target="_blank">@xxxxyyyy80008</a> <br>
The adversarial validation was done to see the difference between train and private distributions and for feature selection.<br>
The code is similar to the following notebook. (although the notebook is private and public)<br>
<a href="https://www.kaggle.com/code/zakopur0/adversarial-validation-private-vs-public" target="_blank">https://www.kaggle.com/code/zakopur0/adversarial-validation-private-vs-public</a></p> |
American Express - Default Prediction | 12th Place Gold (1/2) - lgbm+xgboost+FCN+Transformer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Gracias Santiago</p>
<p>Thank you Amex and Kaggle for hosting! Very fun competition with a lot of insights. </p>
<h2>Solution</h2>
<p>This is the first of two posts of our ensemble <a href="https://www.kaggle.com/gandagorn" target="_blank">@gandagorn</a> . My leg of the ensemble consisted of taking different preprocessing methods and passing them through different architectures (Thank you <a href="https://www.kaggle.com/jiweiliu" target="_blank">@jiweiliu</a> <a href="https://www.kaggle.com/lucasmorin" target="_blank">@lucasmorin</a> <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> ! for your excellent notebooks) <br>
second part is <a href="url" target="_blank">https://www.kaggle.com/competitions/amex-default-prediction/discussion/348058</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4900136%2F7a02721f85aa7042adf4202cb9e7c624%2Fensemble.png?generation=1661418572999162&alt=media" alt=""></p>
<h2>Preprocessing</h2>
<ol>
<li>Used the <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> integer features and raw features and then created separate pre-processing pipelines including standard aggregations (min, max means, last, diff) </li>
<li>Added top 50 PCA Features. </li>
<li>Globally ranked all the features and built products of features that shared the same trend. This boosted all the models. </li>
<li>Used embedded layers for all categorical features. </li>
</ol>
<h2>PostProcessing</h2>
<p>Sequentially removed features according to split importance. </p>
<h2>Models</h2>
<h3>LGBM with Focal Loss + logloss</h3>
<p>Standard LGBM trained with dart, with a one-sided focal loss added to the log-loss. The one-sided focal loss was added with the objective of better classifying the positive class.</p>
<h3>XGBoost</h3>
<p>Nothing special.</p>
<h3>FCN</h3>
<p>Fully connected architecture using the same features that were used in the boosted trees. 20 folds CV.</p>
<h3>FCN + Transformer Encoder</h3>
<p>Probably the most interesting model: an NN with two inputs, tabular features (from the previous preprocessing) and the full time series of ranked raw features. The training process was as follows:</p>
<ol>
<li>Freeze Transformer weights and train the FCN architecture for ~30 epochs to produce 1d output [0,1]</li>
<li>Freeze FCN weights and train the Transformer for ~30 epochs to produce 1d output [0,1]</li>
<li>Freeze Transformer weights and train the FCN architecture for ~30 epochs to produce 1d output [0,1]</li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4900136%2F31f1f8ded2cba706ad095ed7595847fc%2Farchitecture.png?generation=1661419863567301&alt=media" alt=""></p>
<p>For the Transformer, I used the standard TransformerEncoderBlock with 8 heads and 2 layers</p>
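<p>A rough PyTorch sketch of this two-branch setup and the alternating freeze schedule (sizes and layer counts are illustrative, not the exact architecture used):</p>
<pre><code>import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    def __init__(self, n_tab, n_seq_feats, d_model=128):
        super().__init__()
        self.fcn = nn.Sequential(nn.Linear(n_tab, 512), nn.ReLU(),
                                 nn.Linear(512, d_model), nn.ReLU())
        self.embed = nn.Linear(n_seq_feats, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Sequential(nn.Linear(2 * d_model, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x_tab, x_seq):
        t = self.encoder(self.embed(x_seq)).mean(dim=1)  # pool over the time steps
        return self.head(torch.cat([self.fcn(x_tab), t], dim=1))

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

# phase 1: set_trainable(model.encoder, False)  -> train ~30 epochs (FCN branch only)
# phase 2: set_trainable(model.encoder, True); set_trainable(model.fcn, False)  -> train ~30 epochs
# phase 3: freeze the encoder again and fine-tune the FCN branch
</code></pre>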
<p>Maybe next time! </p>
<p>I am afraid I cant because I use custom wrappers for the models that are not public so will take me looooong to re-write the code :/</p>
<p>No only min,max,mean last, and diff last-first</p>
<p>Hi Raimondo, no I did not</p>
<p>Hi Gaurav, its on private wrappers so I cant share them </p>
<p>Hi </p>
<ol>
<li>All of them.</li>
<li>I considered all consumers.</li>
<li>FCN and all my DL approaches perform ~.002 worse than other methods.</li>
<li>Yes.</li>
<li>(CV like in 3) Correct as you described it. </li>
</ol> |
American Express - Default Prediction | 135th place solution : +1483 shake-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Great question! I didn't check and compare all of my experiments, but I guess that ensembling NN & tree-based models (various types of models) is most important in my case.</p>
<p>When ensembling, similar types of models don't give a significant boost beyond some level, but mixing various types of models gives an improvement.</p>
<p>Thank you!</p>
<p>Hello everyone!</p>
<p>First, thank you Amex for hosting a fun competition! Also, congratulations to all the winners!</p>
<h1>TL;DR</h1>
<p>I couldn't spend lots of time on the competition (only made 30 submissions :(). At the same time, the competition metric is kinda noisy and we also expected a shake-up/down (not planet-scale, but in some cases). So, my strategy focused on protecting against a shake-down as much as possible (instead of building new features).</p>
<h1>Overview</h1>
<p>My strategy is <code>building various datasets, folds, seeds, models</code>. I'll explain them one by one.</p>
<h2>Data (Pre-Processing)</h2>
<p>My base dataset is based on the raddar's dataset (huge thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>). Also, most of the pre-processing logic can be found in the <code>Code</code> section.</p>
<p>The differences are </p>
<ol>
<li>using more lagging features (to 3 months)</li>
<li>not just using a single dataset, but multiple datasets (I just added features incrementally) for variety.<ul>
<li>A dataset</li>
<li>B dataset = A dataset + (features)</li>
<li>C dataset = B dataset + (another features)</li></ul></li>
</ol>
<p>I didn't check the exact effectiveness of using these datasets across multiple models; however, they seemed to have a positive effect when ensembling in my experiments.</p>
<h2>Model</h2>
<p>I built 6 models (3 GBDT, 3 NN) to secure variety and robustness. Also, a few models (LightGBM, CatBoost) are trained on multiple seeds (1, 42, 1337) with the same training recipe. Lastly, some models are trained with 10 or 20 folds.</p>
<ul>
<li>Xgboost</li>
<li>CatBoost</li>
<li>LightGBM (w/ dart, w/o dart)</li>
<li>5-layers NN</li>
<li>stacked bi-GRU</li>
<li>Transformer</li>
</ul>
<p>Here's the best CV by model (sorry, no LB/PB scores: I rarely submitted single models).</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td>bi-GRU</td>
<td>0.787006</td>
<td></td>
</tr>
<tr>
<td>Transformer</td>
<td>0.785647</td>
<td></td>
</tr>
<tr>
<td>NN</td>
<td>0.789874</td>
<td></td>
</tr>
<tr>
<td>Xgboost</td>
<td>0.795940</td>
<td>only using the given(?) cat features as <code>cat_features</code></td>
</tr>
<tr>
<td>CatBoost</td>
<td>0.797058</td>
<td>using all <code>np.int8</code> features as <code>cat_features</code></td>
</tr>
<tr>
<td>LighGBM</td>
<td>0.798410</td>
<td>w/ dart</td>
</tr>
</tbody>
</table>
<p>The CV score of a single neural network model isn't good. Nevertheless, when ensembling, it works well with the tree-based models.</p>
<h2>Blend (Ensemble)</h2>
<p>Inspired by the <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/329103" target="_blank">log-odds</a> discussion, I found that a weighted ensemble on log-odds probabilities is better than a normal weighted ensemble (I tuned the weights with the <code>Optuna</code> library based on the OOF). One difference: I used <code>log10</code> rather than <code>ln</code>. In my experiments, it's better to optimize the weights with <code>log10</code>, although it brings only a little boost (a 4th-digit difference).</p>
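<p>A minimal sketch of that weight search, assuming the OOF predictions are already loaded and an <code>amex_metric(y_true, y_pred)</code> helper is defined elsewhere (array in, score out):</p>
<pre><code>import numpy as np
import optuna

def log_odds(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log10(p / (1 - p))

def tune_weights(oof_preds, y_true, n_trials=200):
    # oof_preds: list of arrays, one per model
    def objective(trial):
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_preds))])
        blend = np.average([log_odds(p) for p in oof_preds], axis=0, weights=w + 1e-12)
        return amex_metric(y_true, blend)   # the metric only needs the rank ordering
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>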
<p>I ensembled about 50 models, and there's no post-processing logic.</p>
<h1>Summary</h1>
<p>The final score is</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ensemble</td>
<td><code>0.8009</code></td>
<td><code>0.7992</code></td>
<td><code>0.8075</code></td>
</tr>
</tbody>
</table>
<p>On the last day of the competition, I selected roughly my 1600th-place Public LB solution (my best CV solution). Luckily, <code>Trust CV score</code> wins again :) (Actually, my best CV is also my best LB, and when the CV score increases, the LB score increases, so there's little difference between best CV & best LB in my case.)</p>
<p>After the competition, I checked the correlation among the scores (CV vs Private LB, CV vs Public LB). Then I found the CV score is more correlated with the Private LB than the Public LB in my case.</p>
<h2>Works</h2>
<ul>
<li>blending various models (gbtm + nn), even if there're huge CV gaps <ul>
<li>e.g. nn 0.790, lgbm 0.798</li></ul></li>
<li>(maybe) various datasets, models, seeds bring a robust prediction I guess</li>
</ul>
<h2>Didn't work</h2>
<ul>
<li>pseudo labeling (w/ hard label)<ul>
<li>maybe a <code>soft-label</code>, or a <code>hard label</code> with a stricter threshold, could work, I guess.</li>
<li>deeper NN models<ul>
<li>5-layers nn is enough</li></ul></li>
<li>num of folds doesn't matter (5 folds are enough)<ul>
<li>there's no significant difference between 5 folds vs 20 folds</li></ul></li>
<li>rank weighted ensemble</li>
</ul>
<p>I hope this could help you :) Thank you! </p>
American Express - Default Prediction | 13th Place Gold Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to thank kaggle and the staff for hosting such a great competition. We really miss interesting tabular challenges these days.</p>
<h1>1. Summary</h1>
<p>Our solution is based on extensive data cleaning plus multi-model weighted-average and stacked ensembles using ranked probs. With the extensive data cleaning, our single-model CV was boosted by 0.0004~0.0008 compared to the public Raddar dataset. The final solution (PVT 0.80842/PUB 0.80105), which shook up into the gold zone, is the average of 3 ensemble models:</p>
<ul>
<li>LGBM stack with 61 models</li>
<li>CMA(Covariance Matrix Adaptation) weighted average with 54 models</li>
<li>Weighted average with 55 models</li>
</ul>
<h1>2. Extensive data cleaning</h1>
<p>Starting from Raddar’s (thanks <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>) preprocessed dataset, some other features were cleaned, added and modified in order to build a second version of the dataset. The idea here was to clean the data even more and add some diversity to the ensemble. Some clusters of clients were added based on the pattern of missing variables for each set of features: B, D, P, R and S. Also, some continuous variables showed a curious noise pattern: it looks like uniform noise was added in the (0, 0.01] range of some variables. Take a look at the B_11 feature distribution below. Noise was clearly added just in the 0.01 range.</p>
<p><img src="https://raw.githubusercontent.com/chumajin/AMEX/main/giba2.jpg" alt=""></p>
<p>We spent some time trying to figure out a way to remove that noise and found some combinations of filters using other features that worked pretty well. For example, the B_11 feature can be cleaned by using a filter on B_1: taking the indices where B_1 is in the 0-0.01 range and inverting the sign of B_11 at those indices, the new histogram becomes:</p>
<p><img src="https://raw.githubusercontent.com/chumajin/AMEX/main/giba3.jpg" alt=""></p>
<p>This kind of cleaning was made with some other features like: B_1, B_5, B_7, B_11, B_15, B_17, B_18, B_21, B_23, B_24, B_26, B_27, B_29, B_36, B_37, D_58, D_60, D_69, D_71, D_102, D_133, D_144, R_1, R_6, S_16, S_17, S_19, S_22 and S_27. Extra feature cleaning helped to boost GBDT scores and added some diversity when stacking with other models.</p>
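<p>A hedged sketch of the mechanism for the B_11 example above (the exact filters differ per feature, so this only illustrates the idea):</p>
<pre><code>import pandas as pd

def clean_b11(df: pd.DataFrame) -> pd.DataFrame:
    # rows where B_1 falls in the (0, 0.01] "noise band" get the sign of B_11 flipped
    noise_idx = (df['B_1'] > 0) & (df['B_1'] <= 0.01)
    df.loc[noise_idx, 'B_11'] = -df.loc[noise_idx, 'B_11']
    return df
</code></pre>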
<h2>2.1 Example of boosting cv using cleaning data</h2>
<ul>
<li><p>LGBM : 0.7976 → 0.7983 ( +0.0007 )</p></li>
<li><p>XGB : 0.7978 → 0.7986 ( + 0.0008 )</p></li>
<li><p>CatBoost : 0.7964 → 0.7968 ( + 0.0004 ) </p>
<p>※ clean data plus minor change </p>
<ul>
<li>LGBM 5kfold → 15kfold (+0.0013)</li>
<li>CatBoost longer earlystop (+0.002 )</li></ul></li>
</ul>
<h1>3. Single model</h1>
<h2>3.1 Features and modeling</h2>
<p>Basically, we used the features and the models of public notebooks.<br>
Thank you for <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a>, <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>, <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a></p>
<p>Note: some of the 1st level models don’t include features (B_29, R_1, D_59) from adversarial validation.</p>
<h2>3.2 Representative best single model in each model type</h2>
<table>
<thead>
<tr>
<th>model</th>
<th>features</th>
<th>kfold</th>
<th>cv</th>
<th>public lb</th>
<th>private lb</th>
<th>comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>LGBM</td>
<td>1784</td>
<td>15</td>
<td>0.7991</td>
<td>0.79961</td>
<td>0.80679</td>
<td>dart with early stop</td>
</tr>
<tr>
<td>XGB</td>
<td>3413</td>
<td>5</td>
<td>0.7986</td>
<td>0.79873</td>
<td>0.80662</td>
<td>pyramid <a href="https://www.kaggle.com/code/roberthatch/xgboost-pyramid-cv-0-7968" target="_blank">REF</a></td>
</tr>
<tr>
<td>CATBoost</td>
<td>2290</td>
<td>5</td>
<td>0.7968</td>
<td>0.79832</td>
<td>0.80629</td>
<td></td>
</tr>
<tr>
<td>MLP</td>
<td>446</td>
<td>5</td>
<td>0.7919</td>
<td>0.79102</td>
<td>0.79993</td>
<td>KD using LGBM oof prediction</td>
</tr>
<tr>
<td>GRU</td>
<td>188</td>
<td>5</td>
<td>0.7948</td>
<td>0.79418</td>
<td>0.80363</td>
<td>KD using LGBM oof prediction</td>
</tr>
<tr>
<td>Transformer</td>
<td>188</td>
<td>5</td>
<td>0.7932</td>
<td>0.79498</td>
<td>0.80379</td>
<td>KD using LGBM oof prediction</td>
</tr>
</tbody>
</table>
<h1>4. Ensemble</h1>
<p>We used two methods for ensembling with ranked probs. One is LGBM stacking, the other is CMA-ES (Covariance Matrix Adaptation Evolution Strategy)<br>
<a href="https://www.scm.com/doc/params/python/optimizers/cmaes.html" target="_blank">REF</a></p>
<p>The final submission is the average of the following 3 ensemble models (cases 1~3):</p>
<table>
<thead>
<tr>
<th>Case-Method</th>
<th>CV</th>
<th>public</th>
<th>private</th>
<th>#GBDT</th>
<th>#1/2dcnn</th>
<th>#MLP</th>
<th>#TCN</th>
<th>#GRU</th>
<th>#Transf</th>
</tr>
</thead>
<tbody>
<tr>
<td>1-LGBM stacking</td>
<td>0.8010</td>
<td>0.80084</td>
<td>0.80882</td>
<td>48</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>2- cma</td>
<td>0.8032</td>
<td>0.80086</td>
<td>0.80794</td>
<td>41</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>3- cma</td>
<td>0.8028</td>
<td>0.80068</td>
<td>0.8079</td>
<td>49</td>
<td>0</td>
<td>3</td>
<td>0</td>
<td>3</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>As you can see, the public LB is almost the same across the three, but on the private LB the LGBM stacking is much better. We would have been hard pressed to get the gold medal without LGBM stacking.</p>
<h2>4.1. Final sub (ensemble results of case 1 ~ 3)</h2>
<p>The ensemble result of the above case 1-3 is our final submission.<br>
public lb 0.80105, private lb 0.80842 (13th).</p>
<p>※ Note that LGBM stacking alone (0.80882) would have been enough to rank 6th.</p>
<h1>5. Relationship of cv and lb</h1>
<p>For reference, we share the relationship between CV and LB. If you ignore the CMA points in the plot, the LGBM stacking is, to some extent, a clean extension of the single models' CV/LB straight line (maybe CMA is overfitting…).</p>
<p><img src="https://raw.githubusercontent.com/chumajin/AMEX/main/AMEX_cv_lb.jpg" alt=""></p>
<h1>6. Other tips that worked well</h1>
<ul>
<li>knowledge distillation</li>
<li>longer early stop (3000~10000)</li>
<li>large kfold</li>
<li>full training</li>
<li>pseudo labeling</li>
</ul>
<h1>7. Didn't work well</h1>
<ul>
<li>post process</li>
<li>using dow average data</li>
<li>changing weights of samples in LGBM</li>
<li>adding focal loss to LGBM</li>
<li>optimizing 0.4G+0.6D, 0.8G+0.2D, etc</li>
</ul>
<h1>8. Late submission result</h1>
<p>After the end of the competition we realized that we had 2 ensembles with PVT LB > 0.80920 (3rd-4th place); however, they were not selected due to lower CV scores.</p>
<p>1) avg rank of 3 stack models (XGB+LGB+Catboost) - PVT 0.80928, PUB 0.80067, CV 0.8008</p>
<p>2) LGB stack with 61 models + meta features (top 50 eng. features) - PVT 0.80926, PUB 0.80072, CV 0.80097</p>
<h1>9. Team organization</h1>
<p>We used github for code storage, wandb for experiments tracking and kaggle datasets for OOF storage and sharing with the team.</p>
<p>Team members: <br>
<a href="https://www.kaggle.com/titericz" target="_blank">https://www.kaggle.com/titericz</a><br>
<a href="https://www.kaggle.com/yamsam" target="_blank">https://www.kaggle.com/yamsam</a><br>
<a href="https://www.kaggle.com/imeintanis" target="_blank">https://www.kaggle.com/imeintanis</a><br>
<a href="https://www.kaggle.com/chumajin" target="_blank">https://www.kaggle.com/chumajin</a><br>
<a href="https://www.kaggle.com/kurokurob" target="_blank">https://www.kaggle.com/kurokurob</a></p>
<p>I tried training models using pseudo labels and in general it got worse Public but better Private scores. So I didn't use it. </p>
American Express - Default Prediction | 14th Place Gold – NN Transformer using LGBM Knowledge Distillation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/gogogopp" target="_blank">@gogogopp</a> You'll get Kaggle competition master soon! I checked out your achievements. You are doing fantastic solo. you got 2 solo silvers and 1 solo bronze. That is very difficult. Great job!</p>
<p>To become Kaggle competition master, you will need 1 gold. You should consider teaming up in your next competition. That will help you get gold because you can ensemble your model with your teammates' models. For Kaggle competition master, you don't need a solo gold.</p>
<p>Yes, exactly. It's very simple. It's like pseudo labels. Just save your OOF and test predictions. Then train your new model on the predicted probabilities from the OOF and test preds using cross entropy. The cross entropy loss works when the targets are continuous predictions between 0 and 1 (i.e. the targets don't need to be zeros and ones). </p>
<p>Imagine you have two models, model A and model B. First train model A. Then makes predictions with model A on some dataset. Next train model B using the probabilities from model A and cross entropy (on that dataset). Then model B has learned knowledge distillation from model A.</p>
<p>Afterward, you can either stop there or fine tune model B on more data. In this competition, I further trained on train data targets which are zeros and ones.</p>
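<p>A minimal TensorFlow sketch of this teacher-student step is below. Everything here (the toy <code>make_student</code> model, <code>x_unlabeled</code>, <code>x_train</code>, <code>y_train</code>) is a placeholder for illustration only, not code from the actual solution; the point is simply that <code>BinaryCrossentropy</code> accepts soft targets between 0 and 1.</p>
<pre><code>import numpy as np
import tensorflow as tf

# toy stand-ins: in the real solution the teacher is the trained LGBM and
# x_unlabeled is the concatenation of train (OOF) and test feature rows
x_unlabeled = np.random.rand(1000, 188).astype("float32")
x_train = np.random.rand(500, 188).astype("float32")
y_train = np.random.randint(0, 2, 500).astype("float32")

def make_student():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(188,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy())
    return model

teacher = make_student()   # pretend this is the already-trained teacher (model A)
student = make_student()   # the student (model B)

# 1) the teacher labels the data with soft probabilities (no rounding to 0/1)
soft_targets = teacher.predict(x_unlabeled, batch_size=256)

# 2) the student trains on those soft targets - this is the distillation step
student.fit(x_unlabeled, soft_targets, epochs=2, batch_size=256, verbose=0)

# 3) optionally fine tune the student on the real hard labels afterwards
student.fit(x_train, y_train, epochs=2, batch_size=256, verbose=0)
</code></pre>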
<p>Congratulations Gaurav and team! I forget to mention that I added a GRU layer after my transformer before the final classification layers. Both transformer and GRU helped. I will update my post.</p>
<p>Thank you Amex for sharing your data and hosting a fun tabular competition. Thank you Kaggle. Thank you Kagglers for sharing many helpful discussions and notebooks. Thank you Raddar and Martin for your contributions.</p>
<h1>Solution Overview</h1>
<p>My solution is a 50%/50% ensemble of LGBM and NN Transformer. The LGBM is based on Martin’s amazing public LGBM <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">here</a> and the NN Transformer is based on my public Transformer <a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-790" target="_blank">here</a>. </p>
<p>The secret sauce is how we train the Transformer. We first use knowledge distillation from our trained LGBM before fine tuning with the train targets. Furthermore, both train and test data are used for knowledge distillation which helps the Transformer learn the test data distribution.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/summary.png" alt=""></p>
<h1>NN Transformer Training</h1>
<p>My public notebook transformer has 2 layers with skip connections (to make training easy). When using knowledge distillation, we can train a deeper transformer successfully. My final solution uses a 4 layer transformer without skip connections. We also added a GRU layer after transformer blocks and before final classification layers.</p>
<p>Training is done using 4 cycles of a cosine learning schedule. In the first cold-start cosine cycle, we pretrain (i.e. Knowledge Distillation) the Transformer using concatenated rows of both LGBM OOF preds and LGBM test preds and leave the probabilities between 0 and 1 (i.e. soft labels). During the second cosine cycle, we use a warm start, reduce the learning rate and train with the hard (0 or 1) train targets. For the third and fourth cycle, we repeat cycles one and two.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/schedule.png" alt=""></p>
<h1>Model Performance</h1>
<p>When creating our submission.csv file from our two models, we can use the normal <strong>K-Fold</strong> LGBM OOF preds and normal LGBM test preds. So making a submission is fast and easy. Additionally, we average 5 seeds per model (and slight model variations) for improved performance.</p>
<p>To tune our two models and compute optimal hyperparameters, we need a leak free reliable CV score. Leak-free CV score is created using <strong>Nested K-Fold</strong> CV. We divide each of 10 outer folds into 10 inner folds. We then train 100 models using GBT. Then each 1 of 10 outer folds has its unique OOF preds and unique test preds. These individualized OOF and test preds are created using only train targets from within the corresponding outer fold train data.</p>
<p>When computing leak free CV score, we find that our NN Transformer has <strong>CV 0.798 / LB 0.799</strong>, our LGBM has <strong>CV 0.799 / LB 0.799</strong> and our 50%/50% ensemble has <strong>CV 0.800 / LB 0.801</strong>.</p>
<h1>Fast Experimentation</h1>
<p>Fast experimentation was done using GPU. Thank you <a href="https://www.nvidia.com/en-us/" target="_blank">Nvidia</a> for providing me compute resources for this competition. Experiments were accelerated using 4xV100 32GB.</p>
<p>Feature engineering was explored using <a href="https://rapids.ai/" target="_blank">RAPIDS cuDF</a> which performs operations like dataframe groupby aggregation on GPU 10-100x faster than using CPU. Many GBT experimental models were trained and evaluated using fast GPU XGB. With 1xV100 GPU, XGB can train 100 models for Nested 10 in 10 K-Fold (i.e. 100 models) on full data in only 2 hours. </p>
<p>Feature selection was performed using both XGB feature importance and permutation importance. Using <a href="https://docs.rapids.ai/api/cuml/stable/api.html#forest-inferencing" target="_blank">RAPIDS FIL</a>, we can perform permutation importance where we randomly shuffle each feature column 10 times for each of 10 folds (and average 100 results) in blazing speed! </p>
<p>For each of 1000s of feature columns, we infer 100 times. This is a total of 100,000s of model inferences where each model is 1000s of individual trees! Using <a href="https://docs.rapids.ai/api/cuml/stable/api.html#forest-inferencing" target="_blank">RAPIDS FIL</a>, we can perform this quickly! Note that we can even take an existing CPU LGBM Dart model and convert it into a GPU <a href="https://docs.rapids.ai/api/cuml/stable/api.html#forest-inferencing" target="_blank">RAPIDS FIL</a> inference model and perform permutation importance on existing LGBM Dart models at blazing speed!</p>
<p>After saving an LGBM Dart model (with LGBM's save feature, not joblib or pickle), we can use</p>
<pre><code>import cuml
model = cuml.ForestInference.load('model.txt', model_type='lightgbm')
val_pred = model.predict(x_val)
</code></pre>
<p>And <code>x_val</code> can be a RAPIDS cuDF dataframe on GPU. The documentation for FIL is <a href="https://docs.rapids.ai/api/cuml/stable/api.html#forest-inferencing" target="_blank">here</a></p>
<p>Hi Tonghu, Congratulations to you and team on your great finish!</p>
<p><code>RAPIDS FIL</code> is a library that speeds up inference. The idea is that you can train your forest wtih XGB or LGBM or Sklearn Random Forest etc. Then we load the saved model with <code>RAPIDS FIL</code> and RAPIDS FIL will infer 10x-100x faster than using XGB, LGBM, Sklearn Random Forest.</p>
<p>This helps if a company deploys a model in production. And in this competition, it helps us perform permutation importance. To compute permutation importance, we must repeatedly infer a model over and over. If we load a saved LGBM Dart on CPU then inferring over and over will take minutes or hours. If we load a saved LGBM Dart using RAPIDS FIL onto GPU, then we can infer over and over in seconds!</p>
<p>Congrats <a href="https://www.kaggle.com/gogogopp" target="_blank">@gogogopp</a> and <a href="https://www.kaggle.com/jacksonyou" target="_blank">@jacksonyou</a> for achieving Silver! You guys did great!</p>
<p>Yes ensembling with GRU and/or Transformer adds a lot of diversity and helps public and private LB. </p>
<p>Below is my updated model architecture with GRU layer added: (also embeddings improved from <code>(10,4)</code> to <code>(10,8)</code> and skip connections removed):</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# NOTE: TransformerBlock, feat_dim, embed_dim, num_heads, ff_dim, dropout_rate
# and num_blocks are defined elsewhere in the notebook (see the public transformer notebook)
def build_model():
# INPUT - FIRST 11 COLUMNS ARE CAT, NEXT 177 ARE NUMERIC
inp = layers.Input(shape=(13,188))
embeddings = []
for k in range(11):
emb = layers.Embedding(10,8)
embeddings.append( emb(inp[:,:,k]) )
x = layers.Concatenate()([inp[:,:,11:]]+embeddings)
# "EMBEDDING LAYER"
x = layers.Dense(feat_dim)(x)
# TRANSFORMER BLOCKS
for k in range(num_blocks):
transformer_block = TransformerBlock(embed_dim, feat_dim, num_heads, ff_dim, dropout_rate)
x = transformer_block(x)
# REGRESSION HEAD
x = tf.keras.layers.GRU(units=128, return_sequences=False)(x)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inp, outputs=outputs)
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
loss = tf.keras.losses.BinaryCrossentropy()
model.compile(loss=loss, optimizer = opt)
return model
</code></pre>
<p>Hi Joe, congratulations to you and team for achieving 24th out of 5000 teams. That's great.</p>
<p>I prepared data exactly like I did in my public notebook (with new targets of test preds and oof preds). I also tried many alternatives like adding padding correctly in time. (My public notebook adds all pads in front of the sequence for customers with missing statements). I also tried different ways to represent NA and padding. I also tried adding position embedding. I did lots of things but the data representation didn't make much difference.</p>
<p>I only made 4 changes to my public notebook</p>
<ul>
<li><code>emb = layers.Embedding(10,4)</code> changed to <code>(10,8)</code></li>
<li>after transformer blocks added <code>x = tf.keras.layers.GRU(units=128, return_sequences=False)(x)</code> before dense layers</li>
<li>for submission.csv, average multiple random seed trained transformers with 2, 3, 4 blocks and remove skipped connections</li>
<li>trained for multiple cosine cycles with varying learning rates.</li>
</ul>
<p>The big help was pretraining with test predictions. This allowed us to use all the future test data features that Amex gave us and helped the model train its attention and other weights. This boosted the CV score from 0.790 to 0.798.</p>
<p>(It's like NLP unsupervised pretraining. Even without labels, just training a transformer on lots of text helps the model understand language. And in this comp, our transformer will learn more about credit cards by seeing all the test features even without correct test labels.)</p>
<p><a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> Below is my opinion, it might not be fully correct.</p>
<p>Pseudo labeling is the process of assigning labels to unlabeled data (so that we can train with and extract info from the unlabeled data). Generally pseudo labels are converted into hard targets, i.e. 0's and 1's and only confident predictions are used. For example, after making predictions on unlabeled data we keep all samples with confident predictions less than 0.05 or greater than 0.95. Then round the predictions to 0 and 1. This allows us to perform supervised training on the previously unlabeled data which gives the model the benefit of using the unlabeled data's feature values.</p>
<p>Knowledge distillation is the process of transferring one model's (or ensemble's) learning to another model (frequently used to transfer the performance of a complicated ensemble into a simple single model for purpose of efficient production inference). To facilitate the transfer, we can use any data whether it was originally labeled or not labeled. The original labels are discarded and the teacher model predicts new labels. These predicted labels are not converted to 0's and 1's but rather kept as probability values between 0 and 1. This allows for maximum transfer of the teacher's knowledge to the student model.</p>
<p>We can also perform hybrids of the two. Like keeping pseudo labels soft. Then we simultaneously transfer knowledge from a teacher to a student and extract info from unlabeled data's features. My solution here is probably a combination of both.</p>
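<p>A short numpy sketch of the hard-pseudo-label recipe described above (the 0.05 / 0.95 thresholds and <code>test_preds</code> are placeholders):</p>
<pre><code>import numpy as np

test_preds = np.random.rand(10_000)        # stand-in for a model's test predictions

# hard pseudo labeling: keep only confident rows and round them to 0/1
confident = (test_preds < 0.05) | (test_preds > 0.95)
pseudo_y = np.round(test_preds[confident])
# x_pseudo = x_test[confident]  -> concatenate with the labeled train set

# soft variant (closer to knowledge distillation): keep every row and keep the
# probabilities as-is instead of rounding
soft_y = test_preds.copy()
</code></pre>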
<p>Cool idea <a href="https://www.kaggle.com/bjjiang" target="_blank">@bjjiang</a> </p>
<p>Thanks <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> !</p>
<p>Congratulations Nischay on your fantastic solo performance.</p>
<p>My NN Transformer has a better private LB than LGBM; below are the stats:</p>
<ul>
<li>NN Transformer, leak-free-CV 0.7980, Public LB 0.7988, Private LB 0.8076</li>
<li>LGBM, CV 0.7990, Public LB 0.7992, Private LB 0.8070</li>
</ul>
<p>===== Below is more info =====</p>
<p>The NN Transformer CV is accurate and leak free because the LGBM is trained using 10 (inner) folds within 10 (outer) folds (i.e. Nested K-Fold). The outer folds are typical K-Fold. When training outer fold 1 we use train targets from folds 2 thru 10. We then split this group of 9 folds into its own 10 folds. For each of these 10 inner folds we train 1 model. These inner models never see the targets from validation outer fold 1. Using these inner models, we create OOF for outer fold 1 which gives us leak free predictions for outer folds 2-10. We also use these inner models to predict test preds. (Next we train 10 inner folds for outer fold 2, etc etc. In total we train 100 models)</p>
<p>This gives us a unique <code>lstm_OOF_outer_fold_1.csv</code> and <code>lstm_Test_preds_outer_fold_1.csv</code> for each of 10 outer folds for a total of 20 CSV. When we train fold 1 of our NN Transformer, we use these two CSV. When we train fold 2 of our NN Transformer we use <code>lstm_OOF_outer_fold_2.csv</code> and <code>lstm_Test_preds_outer_fold_2.csv</code> etc etc. This prevents CV leaks.</p>
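<p>Schematically, the nested scheme looks like the sketch below (5x5 instead of 10x10 to keep it short; <code>X</code>, <code>y</code>, <code>X_test</code> and <code>fit_gbt</code> are placeholders for the real arrays and training routine):</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold

outer = KFold(n_splits=5, shuffle=True, random_state=42)
for outer_fold, (tr_idx, va_idx) in enumerate(outer.split(X)):
    X_tr, y_tr = X[tr_idx], y[tr_idx]        # only this outer fold's train data
    oof = np.zeros(len(X))                    # leak-free preds for these train rows
    test_pred = np.zeros(len(X_test))
    inner = KFold(n_splits=5, shuffle=True, random_state=outer_fold)
    for in_tr, in_va in inner.split(X_tr):
        model = fit_gbt(X_tr[in_tr], y_tr[in_tr])        # never sees outer val targets
        oof[tr_idx[in_va]] = model.predict(X_tr[in_va])  # OOF preds, leak free
        test_pred += model.predict(X_test) / inner.n_splits
    # these become the meta features used when training NN fold `outer_fold`
    np.save(f"oof_outer_fold_{outer_fold}.npy", oof)
    np.save(f"test_preds_outer_fold_{outer_fold}.npy", test_pred)
</code></pre>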
<p>Feature selection/creation did not help me nor improve Martin's LGBM model. I explored adding many new groupby, diff, and product features. I also located hundreds of features that could be dropped without affecting CV. </p>
<p>But in the end, my final submission is Martin's model as is with his features (none added nor removed). I trained his model locally with 10 folds and various seeds. My solution's LB boost comes from the NN Transformer.</p>
<p>Knowledge distillation is the transfer of knowledge from 1 model to another model. First we train an LGBM. Next we perform teacher student. We use the predictions (both OOF and test preds) of LGBM to train an NN Transformer. Finally we finetune NN Transformer on the true train targets.</p>
<p>Thanks Lucas. Thanks for your helpful notebooks and discussions.</p>
<p>Yes, the code is the same. It appears that all teams' models did <code>+0.007</code> better in private LB than public LB.</p>
<p>Yes <a href="https://www.kaggle.com/susnato" target="_blank">@susnato</a> , we feed test data directly into the training pipeline (and use test preds from LGBM as labels). Note that we leave the test preds as is (as continuous numbers between 0 and 1) without converting them to 0's and 1's. This is a combination of both "pseudo labeling" and "knowledge distillation". You can google these two terms to learn more.</p>
<p>Knowledge distillation helps the Transformer learn more accurately and quickly (by gaining the knowledge that LGBM has already learned but storing it in a new transformer way). And pseudo labeling allows us to access all the information in the unlabeled test data. Namely the features which are 1 million time series of length 13 from the 188 features. This is very useful information that Kaggle provided us!</p>
<p>Transformers are hard to train from scratch. The model needs to learn weights for its self attention and weights for all its layers. When we train with soft targets from another trained model, it provides more information than just using the original train hard targets.</p>
<p>The original hard targets are 0's and 1's. But the other model's soft targets are 0.1, 0.2, 0.3, …, 0.9, 1.0 and all numbers in between. This contains much more information for the NN Transformer to learn with. So the knowledge distillation from LGBM helps the NN learn. Furthermore, NN is fundamentally different from LGBM, so once the knowledge is transferred, it will be "represented differently" and add its own "personality" to the predictions and make it effective in ensemble.</p>
<p>The biggest reason why this approach worked so well is that it utilizes the 11 million rows of unlabeled test data. Even though we don't have labels for test data, we still have 1 million time series of length 13 for the different features in test data. This teaches our models how the features change over time and helps the model predict what the features will be in the future. If we never use test data, we never gain access to these 1 million time series with all its information.</p>
<p>Take feature <code>P_2</code> for example. This is like a customer's credit score. If we knew this value for 18 months into the future after the last credit card statement, we could probably predict every customer's default perfectly. What better way to guess what <code>P_2</code> will be in the future than watching the 1 million time series of length 13 in the test data to learn how it changes over time. And using that to train our model.</p>
<p>Now that I think about it, I think I tried to remove too many features. I tried removing a few hundred. I didn't consider statistical significance. I just checked, only 35 out of 1366 features are statistically significantly bad. I will try training a new LGBM Dart with these 35 removed.</p>
<p>The reason many Kagglers had trouble with permutation importance is because you need to infer all 10 folds and you need to shuffle each column multiple times (like 10) and average. Therefore each feature needs to be inferred 100 times. If you tried using CPU this would take literally a month or more. </p>
<p>If you do not infer each column 100 times, then you will only see random noise results because the metric has so much variance. When we do infer 100 times, then each average metric score has standard deviation around <code>0.00003</code> which means that if we observe an increase in metric score of <code>0.0001</code> then we are 99% confident that feature is bad. There are only 35 features that statistically significantly hurt the model.</p>
<p>The orange line is the baseline metric score of 0.7977. The model is 10 folds of public notebook LGBM Dart by Martin. The dataframe images show the best 10 and worst 10 features. The last column <code>z</code> shows the statistical z-test score. If <code>z<-3</code> that means we are 99% confident this feature helps. If <code>z>3</code>, we are 99% confident this feature is bad.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/feat1.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/feat2.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/df_h.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2022/df_t.png" alt=""></p>
<p>For permutation importance, i just wrote a simple for-loop. Iterate over all the features. Then inside the for-loop, randomly shuffle the chosen feature, infer model with RAPIDS FIL, compute metric. For each feature, i would shuffle the column with 10 different seeds for each of 10 folds. I would infer all 100 times and average the result. Then save the result into a dataframe.</p>
<p>Also I kept everything on GPU. The dataframe to infer is on cuDF GPU, the predictions are kept on GPU, and the metric is computed on GPU. And the result is stored on GPU. The entire for-loop executes entirely on GPU and was unbelievably fast, like a few seconds for each feature. (If I remember correctly, I think using CPU LGBM to infer took minutes per feature.)</p>
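<p>A schematic version of that GPU permutation-importance loop is shown below; <code>amex_metric</code>, <code>x_val</code> (a cuDF dataframe) and <code>y_val</code> are placeholders for the real metric and validation data, and in practice this is repeated for each of the 10 folds.</p>
<pre><code>import cudf
import cuml

model = cuml.ForestInference.load('model.txt', model_type='lightgbm')
base_score = amex_metric(y_val, model.predict(x_val))

importance = {}
for col in x_val.columns:
    saved = x_val[col].copy()
    scores = []
    for seed in range(10):                    # 10 shuffles of this feature
        x_val[col] = saved.sample(frac=1.0, random_state=seed).values
        scores.append(amex_metric(y_val, model.predict(x_val)))
    x_val[col] = saved                        # restore the original column
    importance[col] = base_score - sum(scores) / len(scores)   # > 0 means the feature helps
</code></pre>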
<p>Permutation importance worked well. It allowed me to drop hundreds of features without affecting CV LB but it never boosted the CV nor LB. So in the end, I just kept all the features anyway. In other comps, I have seen CV LB boosts with permutation importance feature selection but not here.</p>
<p>The power of ensemble is using diverse models. Using both GBM and NN makes a great high performing ensemble!</p>
<ol>
<li><p>Knowledge distillation is commonly used to accelerate production in real life model deployment. i.e. we can convert a many model ensemble into a single simple model. In competitions, it is often helpful to distill one model (or ensemble) into another model that is diverse from the first because the new model will store the information differently and add boost when added to an ensemble. Furthermore when used in combination with unlabeled data it is powerful. (Then it is a combination of pseudo label and knowledge distillation).</p></li>
<li><p>Trial and error. I first found the best schedule using just train data (without pretrain). Next I trained using this schedule with pretrain and this same schedule using finetune. Next I started adjusting the finetune schedule to maximize CV</p></li>
<li><p>I did not train with 100% data. For each model, i trained and predicted 10 fold models and averaged them. Then I changed the seed and changed some architecture like perhaps changing layers from 3 to 4. I trained another 10 fold models and averaged those new 10 folds. Lastly I took all my averages and averaged those equally. Then this was my "NN model". Next I did this procedure with my "LGBM model". And then 50% 50% those two.</p></li>
</ol>
<p>Thanks KhanhVD!</p>
<p>Thank you, you are right. My rank is 14th now. I'll update the title soon.</p>
<p>Thanks Athar!</p>
<p>The following notebook <a href="https://www.kaggle.com/code/cdeotte/lstm-feature-importance" target="_blank">here</a> shows how to do permutation importance with a NN. This isn't the exact code to create the dataframe above, but its the foundation.</p>
<p>NN is an abbreviation for neural network. It refers to transformers, CNN (convolution neural networks), RNN (recursive neural networks like LSTM and GRU), MLP (multi-layer perceptron), etc</p>
<p>People also use NN to refer to DL (i.e. deep learning solutions)</p>
<p>I stacked the two models. I used the predictions from LGBM as inputs to NN</p> |
American Express - Default Prediction | 15th Place Solution Meta features ,FE, DART, CAT, XG , Tabnet , MLP , ensemble 😊 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks first to Kaggle for hosting this interesting competition. I was personally interested in this one as it aligns with the financial domain, which I spend the majority of my time working in. We had a great team with <a href="https://www.kaggle.com/sirius81" target="_blank">@sirius81</a> <a href="https://www.kaggle.com/liji11" target="_blank">@liji11</a> <a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> and <a href="https://www.kaggle.com/hanzhou0315" target="_blank">@hanzhou0315</a>, who each brought their unique skills to the competition. A big thank you to all of them.</p>
<h1>Feature Engineering</h1>
<p>Our feature engineering structure was influenced by this great notebook <a href="https://www.kaggle.com/code/jiweiliu/rapids-cudf-feature-engineering-xgb" target="_blank">rapids-cudf-feature-engineering-xgb</a> from <a href="https://www.kaggle.com/jiweiliu" target="_blank">@jiweiliu</a> </p>
<h2>Meta Features</h2>
<p>Our meta features were the differentiators that boosted our ensembles. These are similar to, and well described in, the <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347786" target="_blank">12th Place Solution</a>, and a big thanks to <a href="https://www.kaggle.com/sirius81" target="_blank">@sirius81</a> for coming up with them. As Sirius also mentioned in that post, we used them as 13 numerical features per CID after flattening.</p>
<h2>For models without meta features</h2>
<ul>
<li>Great features suggested by Ragnar helped, such as last-mean features and last-min/max features for some columns; we also added diff features with lags 1 and 2. After-pay features were helpful as well.</li>
<li>We did the usual aggregates (std, mean, min, max) and, for categoricals, (nunique, count, first, mean). We also tried MAD (mean absolute deviation); it helped our cv but the Public LB was a bit lower, so we didn't include the MAD-based model in our final submissions, though in hindsight it could have helped.</li>
<li>We also added percentage-change features, which compute the percentage change of the numerical features between statements; we thought they would work better than a plain numeric diff, and they did help in some models (see the sketch after this list).</li>
<li>One feature set present in most of our good models was keeping the last, first and middle statement of each customer, as we hypothesized that, together with the aggregated features, this would cover most of the variation for a customer.</li>
<li>Another feature we tried was calculating spend/balance or spend_sum/balance_sum ratios. It helped in some models but not most.</li>
<li>Trimming non-impactful features also helped reduce the number of features to the 1k-2k range.</li>
</ul>
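<p>A minimal pandas sketch of the aggregate and percentage-change features mentioned in the list above; <code>df</code> (the statement-level frame) and <code>num_cols</code> are placeholders, and the exact aggregations differed per model.</p>
<pre><code>import pandas as pd

# per-customer aggregates over all statements
agg = df.groupby('customer_ID')[num_cols].agg(['mean', 'std', 'min', 'max', 'last'])
agg.columns = ['_'.join(c) for c in agg.columns]

# percentage change between consecutive statements, keeping the last value per customer
pct = (df.groupby('customer_ID')[num_cols]
         .pct_change()
         .groupby(df['customer_ID'])
         .last()
         .add_suffix('_pct_change_last'))

# "last minus mean" style interactions suggested in the public notebooks
for c in num_cols:
    agg[f'{c}_last_minus_mean'] = agg[f'{c}_last'] - agg[f'{c}_mean']

features = agg.join(pct)
</code></pre>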
<h2>Hybrid approach (meta+aggs) :</h2>
<p>We also took a shot at mixing the flattened meta features with aggregates, and it helped our models, especially LGBM, CAT, XG and Tabnet.<br>
Because of the meta features this usually converged faster (fewer rounds with early stopping in XG), so we had to use a <strong>lower LR</strong> for this approach.</p>
<h1>Models</h1>
<p>All models were trained on meta features and/or engineered aggregates. NN inputs were, for the most part, scaled with GaussianScaler.</p>
<p><strong>LGBM (Max cv 0.79932 Private 0.80731)</strong>: Like most, this was based on the notebook by <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>. We made small tweaks to the hyperparams but most were similar. As noted in the discussions, lowering the LR certainly helped in the case of DART, as did adding our meta features (around <em>0.0075 LR</em>).<br>
<strong>XG (Max CV 0.7984 Priv. 0.80687)</strong>: XG was based on <a href="https://www.kaggle.com/jiweiliu" target="_blank">@jiweiliu</a>'s great notebook.<br>
<strong>CAT (Max CV 0.7972)</strong>: We tuned CatBoost a bit with the params below, and it helped us a lot.</p>
<pre><code>from catboost import CatBoostClassifier

# CFG.seed comes from the notebook's config object
model = CatBoostClassifier(
                     random_state=CFG.seed,
bootstrap_type='Bernoulli',
task_type="GPU",
devices='0:1',
use_best_model = True,
iterations = 11500,
num_leaves =64,
subsample = 0.74,
grow_policy = 'Lossguide',
depth = 9)
</code></pre>
<p><strong>Tabnet (MAX CV 0.793133 Private 0.80447)</strong>: Tabnet used mostly standard params and gave better results with meta features.<br>
<strong>MLP (Private 0.80109)</strong>: As with tabnet, we tried models both with cat embeddings and without (using one-hot encoding). <strong>GaussianScaler</strong> also helped with scaling the numerical features for the NN and tabnet.<br>
<strong>AutoML (MAX CV 0.79714)</strong>: <a href="https://www.kaggle.com/liji11" target="_blank">@liji11</a> tried AutoML, which helped our ensemble.<br>
<strong>Tabformer (MAX CV 0.79507 Private 0.80480)</strong>: This was mostly based on the public <a href="https://www.kaggle.com/code/gauravbrills/tabtransformer-training" target="_blank">notebook</a> with more heads, as per the paper. We noticed it was better than tabnet on the private LB, but we didn't choose it as it was not working well in the ensemble 😑. Hyperparams below:</p>
<pre><code>NUM_TRANSFORMER_BLOCKS = 6 # Number of transformer blocks. 6 paper recommends
NUM_HEADS = 8 # Number of attention heads. 8 Heads paper recommends
EMBEDDING_DIMS = 16  # Embedding dimensions of the categorical features (tried 10; check 16, 32)
DROPOUT_RATE = 0.1
MLP_HIDDEN_UNITS_FACTORS = [
4,
2,
] # MLP hidden layer units, as factors of the number of inputs. =>(4,2)
NUM_MLP_BLOCKS = 4 # Number of MLP blocks in the baseline model. Paper 4
MLP_ACTIVATION = keras.activations.selu
</code></pre>
<p><strong>TCN MLP</strong>: a TCN over the 13-statement sequences; not chosen (max CV 0.78).<br>
It goes like this, inspired by the amex paper:</p>
<pre><code># imports assumed: tensorflow as tf; Dense, Dropout, BatchNormalization from
# tensorflow.keras.layers; and TCN from the keras-tcn package (from tcn import TCN)
 embeddings = []
for k in range(11):
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[cat_cols[k]]
#print(f"cat {cat_cols[k]} index {k} len {len(vocabulary)}")
emb = tf.keras.layers.Embedding(len(vocabulary),EMBEDDING_DIMS)
embeddings.append(emb(inputs[:,:,k]))
in_ = tf.keras.layers.Concatenate()([inputs[:,:,11:]]+embeddings)
activation = 'swish'
l1 = 1e-7
l2 = 4e-4
reg = 4e-4
# SIMPLE Wavenet TCN BACKBONE
_x = BatchNormalization()(in_)
x = TCN(nb_filters=256, kernel_size=4,return_sequences=False, dropout_rate=0.0, dilations=[2 ** i for i in range(9)])(_x)
#x = TCN(nb_filters=128, kernel_size=3,return_sequences=False,use_layer_norm=True,dropout_rate=0.05,dilations=[2 ** i for i in range(7)])(x)
x0 = Dense(128,
kernel_regularizer=tf.keras.regularizers.L1L2(l1=l1,l2=l2),
# activity_regularizer=tf.keras.regularizers.L1L2(l1=l1,l2=l2),
activation=activation,
)(x)
x0 = Dropout(0.1)(x0)
... MORE OF MLP ....
x_output = Dense(1,
activation='sigmoid',
)(x)
</code></pre>
<h1>Ensemble techniques</h1>
<p>All ensembles were done on log odds, as we found this worked better when ensembling NN models. We had also tried a rank-based ensemble, but that didn't work well when ensembling with NNs.</p>
<p>We tried a bunch of meta-ensemble techniques with our many models. The ones that worked were <a href="https://www.kaggle.com/code/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">forward selection</a> by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, Optuna weights and ElasticNet on top of the oof preds. We also tried stacking and voting techniques but they weren't very successful.</p>
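<p>For reference, a minimal sketch of blending on log odds (the model names and weights are placeholders; the weights themselves came from forward selection or Optuna as described above):</p>
<pre><code>import numpy as np

def logit(p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def ensemble_log_odds(preds, weights):
    z = sum(w * logit(p) for w, p in zip(weights, preds))
    return 1 / (1 + np.exp(-z))          # back to probability space

# blend = ensemble_log_odds([pred_lgbm, pred_xgb, pred_mlp], [0.5, 0.3, 0.2])
</code></pre>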
<h1>What did not work or DID ?</h1>
<ul>
<li><a href="https://www.kaggle.com/code/gauravbrills/tabtransformer-training" target="_blank">Tabformer</a> : We finally made tabformer reach 0.795 cv but it was not part of our final submission. This I will say <em>did work</em> but did not ensemble well though later we realized was our best Private LB scoring NN .</li>
<li>TCN wavenet : We tried to use 13 customer history sequences in a tcn wavenet + MLP implementation but results were only around 0.78 ish so we dropped the idea .</li>
<li>Saint and widedeep models : We also gave a stab on wide deep package to try out <a href="https://arxiv.org/abs/2106.01342" target="_blank">SAINT</a> but were not able to get far with that . reference <a href="https://github.com/jrzaurin/pytorch-widedeep" target="_blank">pytorch-widedeep</a></li>
<li>Some features as described above and some dropped based on permutation or zero feature importance .</li>
<li>We tried multi-seed ensembles for all our good models. This somehow did not give us good results at the end compared to sticking with seed 42.</li>
<li>Sirius also tried Target encoding at the end but we had really very little time to check on these in ensemble .</li>
<li>We also tried GAN techniques to impute missing values but dropped them at the end .</li>
</ul>
<p>Finally, luckily we selected a good enough final submission for the 🏅 (there was, as usual, a lot of confusion; thanks <a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> for voting for this 😄), though it seems some submissions with a lower LB were better, so we should have trusted cv a bit more :)</p>
<p><strong>✅ Best selected submission Private LB 0.80838 ( Optuna weighted models , xg, tabnet.mlp,lgbm,cat and automl)</strong><br>
<strong>📮 Best submission : Private LB 0.80852 ( Optuna xg,cat,tabnet,mlp,automl and lgbm)</strong></p>
<p>Thanks for reading 😄</p>
<p>Yess 🙌 <a href="https://www.kaggle.com/liji11" target="_blank">@liji11</a> </p>
<p>Yes <a href="https://www.kaggle.com/sirius81" target="_blank">@sirius81</a> your features were the 🪄 for us .. Thanks 👍</p> |
American Express - Default Prediction | 18th Place Gold | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>This competition got me my first (solo) gold medal, so I am sure you can imagine how happy I am and how much I enjoyed it. Big thanks to the organizers and the kaggle team!</p>
<p>My work was also based on other people's great effort, so big thanks and shoutout to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> and <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a> I will mention all their contributions that I used further.</p>
<p>I built a 3 stage model - 39 stage one base models, 2 stage two ensemble models and stage 3 is a simple average of the 2 stage two models.</p>
<h2>First Stage - 39 base models</h2>
<p>I used the same CV strategy for every model, <code>fold = argsort(customer_id)%5</code>; for every model I generated out-of-fold predictions and test predictions averaged across folds.</p>
<h3>1. lightgbm with <a href="https://www.kaggle.com/datasets/raddar/amex-data-integer-dtypes-parquet-format" target="_blank">dataset</a> from <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a></h3>
<ul>
<li>computed simple features, fractions of "first / last" feature value and "mean / last" value when features are nonzero in train and test, else difference of these combinations</li>
<li>simple hand tuning of hyper-parameters</li>
<li>13 models altogether</li>
</ul>
<h3>2. <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">lightgbm</a> from <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a></h3>
<ul>
<li>8 models altogether</li>
<li>1 original model</li>
<li>5 models with tuned learning rates</li>
<li>2 scaled models: original_num_trees * N, original_learning_rate/N for N in [2,4]</li>
</ul>
<h3>3. <a href="https://www.kaggle.com/code/thedevastator/amex-bruteforce-feature-engineering" target="_blank">lightgbm</a> from <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a></h3>
<ul>
<li>3 models altogether </li>
<li>1 original model</li>
<li>2 scaled models: original_num_trees * N, original_learning_rate/N for N in [2,4]</li>
</ul>
<h3>4. my custom CNN implementation with custom dataset</h3>
<ul>
<li>13 models altogether</li>
<li>different architectures (number of filters and convolution layers)</li>
</ul>
<h3>5. <a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-790" target="_blank">transformer</a> from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></h3>
<ul>
<li>only 1 original model</li>
</ul>
<h3>6. gaussian naive bayes</h3>
<ul>
<li>only 1 model, using same dataset as 1. lightgbm</li>
</ul>
<h2>Second Stage - 2 MLPRegressors</h2>
<p>I tried different ensembles of different groups of base models. These were my findings:</p>
<p>Average of 3 models of the same parameters scaled with factor n=1,2,4 for the public lightgbms (stage 1, models 3 and 4) worked pretty well.</p>
<p>Average of "few better models" and their ensemble showed better CV, but LB improvements didn't correspond.</p>
<p>This led me to build ensembles using all the models I built, even Naive Bayes with CV 0.55.</p>
<p>I tried different approaches - ElasticNet, BayesianRidge, LogisticRegression, my custom non-negative linear model (weights are either zero or positive, max 1, add up to 1), KNN, Lightgbm/XGboost and MLPRegressor.</p>
<p>I found out MLPRegressor works the best, so I ran a random grid search over 100 models: randomly choose the number of hidden layers from 1 to 3, and for each layer randomly select the number of neurons, up to 100.</p>
<p>Since MLPRegressors are non-linear, I tried not only the single best models, but also averages of the few best MLPRegressors.</p>
<p>My final 2 second stage models were 2 MLPRegressors, both with 2 hidden layers: the first one had 52 and 94 neurons in its hidden layers, the second one had 10 and 20 neurons.</p>
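<p>A schematic version of that random search (with placeholder data names <code>oof_preds</code> and <code>y</code>; the real search scored each candidate with the same 5-fold scheme as the base models):</p>
<pre><code>import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

candidates = []
for _ in range(100):
    n_layers = rng.integers(1, 4)                                      # 1 to 3 hidden layers
    sizes = tuple(int(rng.integers(1, 101)) for _ in range(n_layers))  # up to 100 neurons each
    candidates.append(MLPRegressor(hidden_layer_sizes=sizes, max_iter=500,
                                   early_stopping=True, random_state=0))

# each candidate is then fit on the stacked base-model OOF predictions and scored
# out-of-fold, e.g. model.fit(oof_preds[train_idx], y[train_idx])
</code></pre>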
<h2>Third Stage - simple average</h2>
<p>I averaged the outputs of the second stage models.</p>
<p>I saw a discrepancy between my CV and LB (higher CV had a lower LB score), so I computed CV two ways - the mean of the 5 scores of each split: <em>CV1</em>, and a single out-of-fold score, where probabilities were min/max scaled per fold: <em>CV2</em>.</p>
<p>Firstly I selected the best LB submission with CV1 0.79861 and CV2 0.79871, which was the better of the two final submissions, with a 0.80833 private and 0.80074 public score.</p>
<p>For the second submission I conservatively selected the submission with the highest and most similar CV computed both ways, with CV1 0.79928 and CV2 0.79924, which scored 0.80814 on private and 0.8004 on public LB.</p>
<p>My best submission, which I didn't select, had CV1 0.79947 and CV2 0.79871 and scored 0.80849 on private and 0.80060 on public; it was exactly the same as the described solution, with one more MLPRegressor with a single hidden layer of size 54.</p>
American Express - Default Prediction | 19th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to AMEX for hosting this competition!<br>
I just want to give a quick overview of the stuff I did in this competition.</p>
<h1>Main concepts</h1>
<p>I used an NN based models and LGBM based models for this competition. </p>
<h4>Pre-processing</h4>
<p>Because of some features with very spread out distributions, I figured that log-transforming several features for the NN models might make sense. I log-transformed a good fraction of the features. I decided to transform or not by gut feeling after looking at the histogram of each feature.</p>
<p>I noticed 2 features that behaved weirdly in private data, so I removed them for all NN models.<br>
Those features were <strong>D_59</strong> and <strong>D_86</strong>; <strong>D_59</strong> even had a different distribution in the private test data than in the public test data.</p>
<h4>Mini-LSTMs</h4>
<p>For both model types, I constructed new features by training a simple LSTM on every single feature (13x1 input dimension). Those models still predict the original target.<br>
This results in 189 different models which generate probabilities given the sequence of only a single feature.</p>
<h4>LGBMs</h4>
<p>For the LGBM based model, I used the great <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">notebook</a> by <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> as a starting point. I added the previously generated features from the LSTM models and tweaked a few hyperparameters. I also did one training with all data.</p>
<h4>NN</h4>
<p>For the NN, I first pre-trained a transformer model on all data to impute randomly masked values. I then embedded this pre-trained model into an NN which also used several other features (e.g. the last sequence element) and trained it. This improved the NN performance quite a bit. I also passed the Mini-LSTM model features into the NN.</p>
<h4>Pseudo-labeling</h4>
<p>I also trained the previous NN on pseudo labeled test data (labeled by all other previous models), which gave this NN a score of 0.799 on public LB. This was however difficult to validate, since the pseudo labels created a heavy leak. Therefore I validated this model only on the public LB data. This helped to further increase my score.</p>
<h4>Final model</h4>
<p>In the end I ensembled the NNs and LGBMs with roughly 30/70 (NN/LGBM) weight.<br>
I also included this <a href="https://www.kaggle.com/code/swimmy/tuffline-amex-anotherfeaturelgbm" target="_blank">public submission</a> in the ensemble.</p>
<h1>Things that did not work</h1>
<p>Or at least things that didn't help much.</p>
<p>I tried using pre-trained autoencoders, ELECTRA style pretraining, Tabnet, CNN, LSTM, XGB, other losses (pearson, mse), and some smaller stuff…<br>
I tried <strong>a lot</strong> of things with some of these but it just didn't really help. I actually used embeddings generated by the ELECTRA style pre-trained transformer model for my NN, but I think this only contributed to a minor improvement.</p>
<p>Anyway, that's basically it. I am sure all of it took 100-200 hours, and I feel like there is still a lot of room for improvement (e.g. I did not test enough features for LGBMs, and could still train more models since I only use around 25 single models in the final version). I focused a lot on playing with NNs in this competition, but a lot of stuff just wouldn't work.</p>
<p>Oh I just used tensorflow to build my own model and made several adaptations to fit the task. The model needed to predict for each position if it is a real sample, or generated by the generator. I used this approach since I felt like dealing with the vastly different and strange distributions of features might be difficult with something like an autoencoder.</p>
<p>Oh that's interesting, I though about an LGBM approach as well but thought LSTM might outperform them, because of the sequence structure and relatively low dimensionality of the inputs. For sharing, I am not sure how feasible it is. It would be quite a mess since so many files are involved and I worked locally on my computer.</p>
<p>Yes, so I thought about a way to generate embeddings by pre-training on all data. I was inspired by the training process of Google's language model ELECTRA. I used a generator and a discriminator. The generator learned to produce sequence elements (e.g. the entire last statement) and swapped them with the actual data. The discriminator needed to decide which of the 13 sequence elements had been created by the generator (e.g. 5, 7, 10, 11 have been produced by the generator). This produced embeddings in the last hidden layer that had quite a decent predictive power (though way too weak to use on its own), so I experimented with merging those features with the NN and the LGBM. With the NN I saw a minor improvement and with the LGBM no improvement (this was probably due to noise since each training went a bit differently anyway). For the NN it also could have been noise but I just kept it in.</p>
<p>And actually I did not add the features themselves, but rather I trained another NN to produce a single probability from those embeddings, and fed this single feature to the NN. Overall, my pipeline feels a bit complicated and things like these could probably be removed.</p>
<p>Sure, no problem :) Some things probably look a bit confusing.<br>
For example, lines like these<br>
<code>dense = layers.Multiply()([dense, tf.tile(tf.expand_dims(input_mask, -1), (1, 1, transformer_dim))])</code><br>
are there to pad the sequence with 0s when there are less then 13 statements.<br>
And these 2 lines</p>
<pre><code>tile = layers.Lambda(lambda x: tf.tile(tf.expand_dims(x, -1), (1, 1, 13)))
mask = layers.Multiply()([tile(input_mask), tf.transpose(tile(input_mask), perm=(0, 2, 1))])
</code></pre>
<p>are there to create the attention mask (again, for dealing with shorter sequences).</p>
<p>This is the entire model for imputing:</p>
<p>input_x1 - numericals<br>
input_x1_nan - binary mask of missing values<br>
input_x2 - categoricals<br>
input_mask - mask for the sequence length (usually a value of 13)</p>
<pre><code>def get_model(seed=0):
tf.random.set_seed(seed)
input_x1 = Input((max_len, dim_f1), dtype=tf.float32)
input_x1_nan = Input((max_len, dim_f1), dtype=tf.float16)
input_x2 = Input((max_len, dim_f2), dtype=tf.int32)
input_mask = Input(max_len)
oh0 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 0]))(input_x2[:, :, 0])
oh1 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 1]))(input_x2[:, :, 1])
oh2 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 2]))(input_x2[:, :, 2])
oh3 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 3]))(input_x2[:, :, 3])
oh4 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 4]))(input_x2[:, :, 4])
oh5 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 5]))(input_x2[:, :, 5])
oh6 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 6]))(input_x2[:, :, 6])
oh7 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 7]))(input_x2[:, :, 7])
oh8 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 8]))(input_x2[:, :, 8])
oh9 = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[ 9]))(input_x2[:, :, 9])
ohA = layers.Lambda(lambda x : tf.one_hot(x, depth=cat_counts[10]))(input_x2[:, :, 10])
tile = layers.Lambda(lambda x: tf.tile(tf.expand_dims(x, -1), (1, 1, 13)))
mask = layers.Multiply()([tile(input_mask), tf.transpose(tile(input_mask), perm=(0, 2, 1))])
dense_nan = layers.Dense(transformer_dim)(input_x1_nan)
dense_nan = dense_nan + positional_embedding_tf
nan_attention = layers.MultiHeadAttention(num_heads=4, key_dim=32, value_dim=32, output_shape=transformer_dim)(dense_nan, dense_nan, attention_mask=mask)
nan_attention = tf.cast(nan_attention, tf.float16) * tf.cast(tf.expand_dims(input_mask, -1), tf.float16)
nan_attention = layers.Dense(dim_f1)(nan_attention)
oh_features = layers.Concatenate(axis=2)([oh0, oh1, oh2, oh3, oh4, oh5, oh6, oh7, oh8, oh9, ohA])
features = layers.Concatenate(axis=2)([tf.clip_by_value(input_x1 * tf.cast(1-input_x1_nan, tf.float32) + nan_attention, -clip_val, clip_val), oh_features])
dense = layers.Dense(1024, activation="elu")(features)
dense = layers.Dense(transformer_dim, activation="elu")(dense)
dense = layers.Multiply()([dense, tf.tile(tf.expand_dims(input_mask, -1), (1, 1, transformer_dim))])
dense = dense + positional_embedding_tf
attention = layers.MultiHeadAttention(num_heads=4, key_dim=32, value_dim=32, output_shape=transformer_dim)(dense, dense, attention_mask=mask)
attention = layers.Multiply()([attention, tf.tile(tf.expand_dims(input_mask, -1), (1, 1, transformer_dim))])
add = layers.Add()([dense, attention])
norm = layers.LayerNormalization(axis=2)(add)
dense = layers.Dense(512, activation="elu")(norm)
dense = layers.Dense(256, activation="elu")(dense)
output = layers.Dense(dim_f1)(dense)
model = Model(inputs=[input_x1, input_x1_nan, input_x2, input_mask], outputs=output)
model_emb = Model(inputs=[input_x1, input_x1_nan, input_x2, input_mask], outputs=norm)
return model, model_emb
</code></pre>
<p>Basically, I first convert the categoricals to one hot vectors. I set all the masked values to 0 and have an attention layer that uses the binary mask as input, and adds the results to the features (the idea being that the model can do some pre-processing depending on what and how many values need to be imputed). Then I pass the features into a dense layer and add a positional encoding (taken from the Attention Is All You Need paper). Then I have the actual attention and predict all values and use a custom loss which only counts the masked positions. Also, I only impute the non-categorical values since I found the categorical values hard to deal with. </p>
<p>In the final model, I cut the pre-trained model off after the norm (therefore the function returns 2 models) and add some other features.</p>
<p>Yes, that is correct! They are also very small to reduce overfitting.</p>
<p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>! I just noticed by reading your post that I slipped into Gold :D I hope it stays that way (not sure what happens if one of the leaderboard removals gets reversed?).</p>
<p>About the mini-lstm, the idea was that I still predict the original target, but the model only uses one feature as input each time. But the other idea might also be interesting since it allows training on all the data.</p>
<p>This is the code for the model that handles single numerical features:</p>
<pre><code>def get_small_model(use_activation, seed):
tf.random.set_seed(seed)
input_x1 = Input(max_len, dtype=tf.float32)
input_x1_nans = Input(max_len, dtype=tf.float32)
input_mask = Input(max_len)
lag = input_x1[:, 1:] - input_x1[:, :-1]
lag = tf.concat([lag[:, :1], lag], axis=1) * input_mask # Pad the lag features to length 13
fmin = tf.reduce_min(input_x1 + 1000 * (1 - input_mask), axis=1, keepdims=True) # Min of the feature (+1000 on the padded part to ignore it)
fmax = tf.reduce_max(input_x1 - 1000 * (1 - input_mask), axis=1, keepdims=True) # Max of the feature (-1000 on the padded part to ignore it)
fstd = tf.sqrt(tf.math.reduce_sum(tf.square(input_x1), axis=1, keepdims=True) / tf.math.reduce_sum(input_mask, axis=1, keepdims=True)) # Std of the feature
fmean = tf.math.reduce_sum(input_x1, axis=1, keepdims=True) / tf.math.reduce_sum(input_mask, axis=1, keepdims=True) # Mean of the feature
fmean_nans = tf.math.reduce_sum(input_x1_nans, axis=1, keepdims=True) / tf.math.reduce_sum(input_mask, axis=1, keepdims=True) # Fraction of nan values
dense = layers.Dense(16, activation="swish")(tf.stack([input_x1, lag, input_mask], axis=2)) # Pass features through dense
lstm = layers.LSTM(10)(dense)
concat = layers.Concatenate()([lag[:, -1:], input_x1[:, -1:], fmin, fmax, fstd, fmean, fmean_nans, lstm])
dense = layers.Dense(8, activation="swish")(concat)
output = layers.Dense(1, activation=use_activation)(dense)
model = Model(inputs=[input_x1, input_x1_nans, input_mask], outputs=output)
return model
</code></pre>
<p>and the model that handles categorical features:</p>
<pre><code>def get_small_model_cat(use_activation, seed):
tf.random.set_seed(seed)
input_x1 = Input(13, dtype=tf.int32)
input_mask = Input(max_len)
oh = layers.Lambda(lambda x : tf.one_hot(x, depth=20))(input_x1)
lstm = layers.LSTM(10)(oh)
dense = layers.Dense(8, activation="swish")(lstm)
output = layers.Dense(1, activation=use_activation)(dense)
model = Model(inputs=[input_x1, input_mask], outputs=output)
return model
</code></pre>
<p>Since the 179 training-runs (one for each feature) took quite a long time, I only trained with one validation fold as a hold-out set and took some precautions against overfitting (very small model, max 16 epochs). This way I was also able to get quite a nice overview of the single-feature predictive power:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F9fa502558e06c614ae2135703b47612b%2Fsingle%20feature%20predicitveness.png?generation=1661792977828903&alt=media" alt=""></p>
<p>Each dot is a feature and the colors correspond to the different features groups:</p>
<p>S - red<br>
P - green<br>
D - blue<br>
B - purple<br>
R - black</p>
<p>Further optimizing the mini-lstms might help but along with my pre-trained impute model, I think they gave me the largest boost.</p>
<p>Yes, thanks :D I couldn't believe it ^^</p> |
American Express - Default Prediction | 1st solution(update github code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I run all on my local machine. Maybe I will share code at github.</p>
<p>128g RAM+3090</p>
<p>Yes, I tried feature selection, but it didn't work for public. So I didn't use it for final submission.</p>
<p>About 6000</p>
<p>This is my first time being a solo winner; I must say there was luck involved in winning the competition. </p>
<p>My best result is a heavy ensemble with LGB and NN.</p>
<ol>
<li><p>Data<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1098915%2F7fe1db545b78d6ded19e76a3c8497052%2Fdata.jpg?generation=1661566215137056&alt=media" alt=""></p></li>
<li><p>Model</p></li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1098915%2F3c5fa34a54082283ce20f7b373d36524%2Fmodel.jpg?generation=1661562190912958&alt=media" alt=""></p>
<p>For the NN model, all missing values were filled with fillna(0), and nn.utils.rnn.pack_padded_sequence was used for padding.</p>
<p>I'd have to look for my early-stage model; sorry for the small font size in the figure.</p>
<p>update: I release a clean code at <a href="https://github.com/jxzly/Kaggle-American-Express-Default-Prediction-1st-solution" target="_blank">https://github.com/jxzly/Kaggle-American-Express-Default-Prediction-1st-solution</a>.<br>
note: You may not be able to reproduce the best result due to random fluctuations.</p>
<p>Sorry for my mistake, it's user-based rank.</p>
<p>I release a clean code at <a href="https://github.com/jxzly/Kaggle-American-Express-Default-Prediction-1st-solution" target="_blank">https://github.com/jxzly/Kaggle-American-Express-Default-Prediction-1st-solution</a></p>
<p>best single model: 0.799</p>
<p>Yes. My assumption is that the more recent the data, the more weight it should carry.</p>
<p>Maybe, hah</p>
<p>ensemble with probs not ranked</p>
<p>Weights is not very important in private. But it also is divine luck.</p>
<ol>
<li><p>df.groupby('cid')[num_features].rank(pct=True), df.groupby('year-month')[num_features].rank(pct=True)</p></li>
<li><p>nth value - (n-1)th value, i.e. the difference between consecutive statement values (see the sketch after this list)</p></li>
<li><p>the training set is train_data.merge(train_y,how='left',on='cid')</p></li>
<li><p>GreedyBins is an operation in LGB. <br>
<a href="https://blog.katastros.com/a?ID=01800-4e3a4f7c-6981-40af-b4dd-3224074d705a" target="_blank">https://blog.katastros.com/a?ID=01800-4e3a4f7c-6981-40af-b4dd-3224074d705a</a></p></li>
<li><p>Features were selected when they boosted both cv and lb.</p></li>
</ol>
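<p>A small pandas sketch of the rank and lag-difference features from items 1 and 2 (column names such as <code>S_2</code>, <code>cid</code> and <code>num_features</code> follow the thread and are assumptions about the exact dataframe layout):</p>
<pre><code>import pandas as pd

df['year-month'] = pd.to_datetime(df['S_2']).dt.to_period('M').astype(str)

# rank of each numeric feature within the customer and within the statement month
rank_by_customer = df.groupby('cid')[num_features].rank(pct=True).add_suffix('_rank_cid')
rank_by_month = df.groupby('year-month')[num_features].rank(pct=True).add_suffix('_rank_month')

# "nth value - (n-1)th value": difference between consecutive statements per customer
diff_features = df.groupby('cid')[num_features].diff().add_suffix('_diff1')
</code></pre>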
<p>My assumption is that more recent data deserves more weight, so I picked the last 3/6 rows and chose features by cv and lb.</p>
<p>It's very beneficial for the score because of the diversity it adds.</p>
<p>Maybe you need VPN.</p>
<p>I have tuned learning_rate, max_depth, num_leaves, bagging_fraction, feature_fraction, min_data_in_leaf, max_bin, min_data_in_bin, lambda_l1, lambda_l2 for LGB and layer_num, hidden_size for NN. I usually tune to boost cv, but in this competition also the public score.</p>
<p>I have a big machine. You can reduce memory by using float16/32 and int8/16.</p>
<ol>
<li>The history features may boost 0.005-0.01.</li>
<li>Yes, it's a part of my solution.</li>
</ol>
<p>The mean stands for the probability of one category; the std is maybe unnecessary.</p>
<p>This is mainly to try to summarize the features using the model, which is different from hand-made features.</p>
<p>I used the public score to determine the best weights.</p> |
American Express - Default Prediction | 2061 place overview (!) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost we would like to thank Raddar for his cleaned and compressed dataset <a href="https://www.kaggle.com/datasets/raddar/amex-data-integer-dtypes-parquet-format" target="_blank">[1]</a> (which we used) and Martin for his two magnificent DART notebooks <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7963" target="_blank">[2]</a><a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">[3]</a>. We finally used a stacking of XGBoost, LightGBM and TabNet. When blending our solution with that of Martin we obtained a Private score of 0.80761, and playing the hypothetical <em>what if</em> game this would have landed around 138th on the LB. However, although acutely aware that our solution was not competitive, we still went ahead with selecting only our own work, and followed the sage advice of Raddar in submitting our best CV, and (our best CV + our best LB)/2 works. Doing this left our CV score on the wrong side of the CV of the work published by Martin, and consequently way down the LB, but with no regrets whatsoever. </p>
<p>Things perhaps of mention:<br>
Feature engineering: We spent an inordinate amount of time with the features in conjunction with the 13GB GPU memory limit on kaggle, and with each new feature sadly eventually having to sacrifice in the end around 30 other features (team-mate <a href="https://www.kaggle.com/danielhanchen" target="_blank">@danielhanchen</a> helped with that process) to stay within the notebook memory. In retrospect we should really have heavily sub-sampled the training data (if it wasn't for the noisy <em>D</em> component in the metric needing lots of data that would definitely be the way to go) in order to keep as many of these features as possible as almost all of them added just a little something (we did try the random under-sampling <a href="https://imbalanced-ensemble.readthedocs.io/en/latest/api/ensemble/_autosummary/imbalanced_ensemble.ensemble.RUSBoostClassifier.html" target="_blank">RUSBoostClassifier</a> from <a href="https://imbalanced-ensemble.readthedocs.io/en/latest/index.html" target="_blank">imbalanced-ensemble</a>, but it did not work well).<br>
As well as a lot of further cleaning to remove outliers <em>etc</em>, things of particular note were:</p>
<ul>
<li>The features <code>S_8</code> and <code>S_13</code> seemed reminiscent of some sort of FICO credit rating scores. With that in mind we created an additional two new binary features, with a split (<code>np.where</code>) chosen by fitting just these features alone with a RandomForest 'stump', with the idea of a 'good' or 'bad' creditor in mind (a minimal sketch of this follows after the list).</li>
<li>As well as generating the usual aggregated features for all of the (up to) 13 statements, we found that also using such aggregations just over the last 3 statements, and the last 6 statements, proved to produce some very informative features.</li>
<li>We found that the feature <code>S_3/S_7</code> helped.</li>
<li>We found that the average of the three features <code>D_115</code>, <code>D_118</code> and <code>D_119</code> also helped a little</li>
<li>On the float features we used the <code>round(2)</code> trick suggested by Jiwei Liu <a href="https://www.kaggle.com/code/jiweiliu/amex-catboost-rounding-trick" target="_blank">[4]</a></li>
<li>Our final number of features after aggregations was 1145</li>
</ul>
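<p>As a rough illustration of the stump idea above (not the code actually used; a single <code>DecisionTreeClassifier</code> stands in for the RandomForest 'stump', and the column/target names are assumptions):</p>
<pre><code># hedged sketch: find a single split threshold on S_8, then binarize with np.where
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def stump_threshold(df: pd.DataFrame, col: str, target: str) -> float:
    X = df[[col]].fillna(df[col].median())          # depth-1 tree needs non-NaN input
    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    stump.fit(X, df[target])
    return float(stump.tree_.threshold[0])          # threshold of the root split

# hypothetical usage:
# thr = stump_threshold(train, "S_8", "target")
# train["S_8_good_creditor"] = np.where(train["S_8"] > thr, 1, 0)
</code></pre>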
<p>Estimators:</p>
<ul>
<li>We started with also using CatBoost, but in the end we found the CV scores were not on par with those of XGBoost and LGBM</li>
<li>We used DART with LGBM thanks to the work of Martin (but not with XGBoost; incredibly slow!)</li>
<li>Our only novelty with XGBoost was using the <code>num_parallel_tree=7</code> parameter, which slowed down the training by around 7 times, but made for a smoother training curve and a CV with a lower standard deviation (see the parameter sketch after this list).</li>
<li>As per the magnificent notebook by Chris Deotte <a href="https://www.kaggle.com/code/cdeotte/xgboost-starter-0-79391" target="_blank">[5]</a> we used surprisingly shallow estimators (<code>max_depth=4</code>), as going deeper didn't really seem to help</li>
<li>Combined multiple runs of each estimator, each using different CV splitting seeds</li>
<li>Although TabNet alone did not have a spectacular CV score despite the best efforts of my team-mate <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> , it helped every ensemble it touched.</li>
</ul>
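<p>A hedged sketch of an XGBoost configuration in the spirit of the notes above (shallow trees plus <code>num_parallel_tree</code>); every value other than <code>max_depth=4</code> and <code>num_parallel_tree=7</code> is an assumption, not our exact setup:</p>
<pre><code>import xgboost as xgb

params = {
    "objective": "binary:logistic",
    "eval_metric": "logloss",
    "tree_method": "gpu_hist",     # assumed GPU training
    "max_depth": 4,                # surprisingly shallow trees, as noted above
    "num_parallel_tree": 7,        # 7 trees per boosting round -> smoother training curve
    "learning_rate": 0.03,         # assumed
    "subsample": 0.8,              # assumed
    "colsample_bytree": 0.6,       # assumed
}
# dtrain / dvalid would be xgb.DMatrix objects built from the aggregated features
# booster = xgb.train(params, dtrain, num_boost_round=5000,
#                     evals=[(dvalid, "valid")], early_stopping_rounds=200)
</code></pre>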
<p>The OOF predictions of these estimators were then calibrated and then fed into a stacking ensemble.</p>
<p>We were also aware that the distributions of some of the features in the test data were significantly different from those in the training data (some wonderful notebooks on this were made by Pavel Vodolazov <a href="https://www.kaggle.com/code/pavelvod/amex-eda-revealing-time-patterns-of-features" target="_blank">[6]</a><a href="https://www.kaggle.com/code/pavelvod/amex-eda-even-more-insane-time-patterns-revealed" target="_blank">[7]</a>). In view of the fact that tree based estimators usually cannot extrapolate, we tried out the <a href="https://github.com/cerlymarco/linear-tree" target="_blank"><code>linear-tree</code></a> estimator, and also the <code>gblinear</code> booster in XGBoost, but neither was used in the end; they may well have eventually performed better on the test data for the features that had covariate shifts, but there was little way of telling beforehand. </p>
<p>All in all a very enjoyable competition and many thanks to AmEx for providing us with tabular data to play with, and kaggle for the computational resources!</p>
<p>All the best,<br>
carl</p>
<p>Dear <a href="https://www.kaggle.com/pabuoro" target="_blank">@pabuoro</a> </p>
<p>Thanks! Despite the very modest result I think that, just as with science, it is useful sometimes to read about what didn't make the cut as well as the wonderful work that did!</p>
<p>All the best,<br>
carl</p>
<p>Dear <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> </p>
<p>That is really interesting to know! </p>
<p>I did vaguely think about paying for some cloud time; apart from big ones like <code>P_2</code>, most of the features contributed very little but <em>something</em>, and all together they do make a difference, so I didn't enjoy dropping any of them (even though the <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/331131" target="_blank">permutation importance work of AmbrosM</a> indicated that some of the features were decidedly unhelpful, and Chris Deotte found that <a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347641#1915259" target="_blank">"<em>There are only 35 features [of his 1366 features] that are statistically significant hurt the model.</em>"</a>), and would have liked to keep them all and let each estimator do its work. I can't remember exactly where, but I remember one winner back in the golden age of kaggle tabular competitions saying that in general he never dropped any features.</p>
<p>So resource availability and/or management was a big factor in this competition. There was an interesting topic posted a little back by James Trotman <a href="https://www.kaggle.com/discussions/general/322995" target="_blank">"Incoming Feature: Paid User Plans?"</a> which led to some thought provoking questions about the long term future regarding the size of kaggle competitions w.r.t. kaggle resources.</p>
<p>All the best,<br>
carl</p>
<p>Dear <a href="https://www.kaggle.com/danielhanchen" target="_blank">@danielhanchen</a> </p>
<p>It was wonderful having you onboard! Working together as a team is so much greater than the sum of all its parts; meeting new collaborators and learning together is one of the greatest rewards there is on kaggle, or anywhere else for that matter! </p>
<p>Un gran abrazo,<br>
carl</p>
<p>Dear <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> </p>
<p>One thing I would really like to see in the future would be kaggle tabular competitions (!) that move away from monolithic <code>csv</code> files and perhaps provide a connection to a database via PySpark. For example, <a href="https://databricks.com/blog/2021/10/04/pandas-api-on-upcoming-apache-spark-3-2.html" target="_blank">as of Spark 3.2 one can use pandas</a> via <code>import pyspark.pandas as ps</code> without the need for UDFs. This would also provide an environment where people could practice their SQL skills (something that all employers want) and, thanks to lazy evaluation, it may not be so taxing on notebook CPU/GPU…</p>
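<p>A minimal illustration of the pandas API on Spark mentioned above (Spark 3.2+); the file path and column names are placeholders:</p>
<pre><code>import pyspark.pandas as ps

psdf = ps.read_parquet("train.parquet")            # loaded lazily by Spark
agg = psdf.groupby("customer_ID")["P_2"].mean()    # familiar pandas-style syntax, executed by Spark
print(agg.head())
</code></pre>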
<p>All the best,<br>
carl</p> |
American Express - Default Prediction | 21st Solution and Code Sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A huge huge thank you to <a href="https://www.kaggle.com/andrew60909" target="_blank">@andrew60909</a> and <a href="https://www.kaggle.com/ryotak12" target="_blank">@ryotak12</a> for an excellent team experience. We learned a lot and had fun, and I appreciate their patience with my extended, largely unsuccessful efforts to make a transformer work really well 😂. Thanks also to Amex and Kaggle for hosting this interesting competition and to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> and everyone else who shared resources that made this competition much more accessible. </p>
<p>I've made my personal code public here: <a href="https://github.com/JEddy92/amex_default_kaggle" target="_blank">https://github.com/JEddy92/amex_default_kaggle</a>. I tried to create a reasonably general structure for division of FE and modeling, with reusable functionality for training and logging results across a variety of models. There is still some clean up to do and it's not as polished as would be ideal!</p>
<p>Feel free to ask us anything if you have questions about our approach!</p>
<h2>Solution Summary</h2>
<p>We used logistic regression to ensemble ~60 sets of predictions with diversity obtained from different models, feature sets, and data views. Models included the usual mix of gradient boosters (LGBM, LGBM dart, XGBoost, CatBoost), MLPs, TabNet, and Transformers with a few special tricks. Data views included aggregated data, fully flattened data (13 raw features per customer), sequential data (Transformer), and augmented data (shifted statements forward by 1 and appended to original). Very detailed feature information to follow -- they are split across our 3 different team members' different models, but this aims to be a complete compilation of what we all used. </p>
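<p>A hedged sketch of the logistic-regression ensembling step (the log-odds transform below is an illustration choice, not necessarily our exact recipe):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def lr_stack(oof_preds, y_train, test_preds):
    """oof_preds / test_preds: (n_rows, n_models) arrays of base-model predictions."""
    eps = 1e-6
    logit = lambda p: np.log(np.clip(p, eps, 1 - eps) / (1 - np.clip(p, eps, 1 - eps)))
    lr = LogisticRegression(C=1.0, max_iter=1000)
    lr.fit(logit(oof_preds), y_train)               # fit on out-of-fold predictions only
    return lr.predict_proba(logit(test_preds))[:, 1]
</code></pre>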
<hr>
<h2>Datasets & Notation</h2>
<ul>
<li>Everything is derived from Raddar's dataset</li>
</ul>
<p>\(\mathcal{D}_{fulltrain}\) : Original full size train-set (5531451 rows) </p>
<p>\(\mathcal{D}_{train}\) : Train-set (458913 rows) </p>
<p>\(\mathcal{D}_{fulltest}\) : Original full size test-set </p>
<p>\(\mathcal{D}_{test}\) : Test-set </p>
<hr>
<h2>Primary Features</h2>
<p>The train-sets with the best CV score for each of us are generated by \(\mathcal{D}_{fulltrain}\) with these features: </p>
<ul>
<li><p><strong>Numeric</strong> </p>
<ul>
<li>Mean, std, max, min, first, last (a minimal pandas sketch of these aggregates follows after this feature list)</li>
<li>Last-mean, last/mean, last/first, last/std, max/min, linear-weighted mean </li>
<li>Similar aggregates as above but limited to last 3, 5, etc. statements</li>
<li>Recent diff features and aggregates of all diffs </li>
<li>Monthly-based ranking</li>
<li>Multiplication and Division: create all possible pairs for all features, except for categorical features and features highly correlated with other features. Multiplication and division is done after scaling with MinMaxScaler(0-1) and clipping outliers</li>
<li>Null aggregates (including starting streak vs. at random)</li>
<li>Date features (e.g. stats on customer's statement gaps)</li></ul></li>
<li><p><strong>Trend</strong> : Oldest month as 1 and the newest month as 13, take the average over the following periods </p>
<ul>
<li>mean(13,12) - mean(11,10)</li>
<li>mean(13,12,11) - mean(10,9,8)</li>
<li>mean(13,12,11) - mean(3,2,1)</li>
<li>mean(13,12,11,10,9,8) - mean(7,6,5,4,3,2)</li></ul></li>
<li><p><strong>Category</strong> </p>
<ul>
<li>first, last </li>
<li>entropy: Shannon entropy on frequency table </li>
<li>Nunique: Number of unique category values</li>
<li>Counts of each value in each category (both raw counts and tf-idf counts)</li>
<li><code>P_2</code> mean (across train and test) of each category</li>
<li>SVD factorization over time of each category</li>
<li>freq1name : the number of most frequent category </li>
<li>freq1ratio : the number of most frequent category / group size </li>
<li>freq_last1name : the number of least frequent category </li>
<li>freq_last1ratio : the number of least frequent category / group size </li></ul></li>
<li><p><strong>PCA</strong></p>
<ul>
<li>(N, 13*raw_features) → (N, 64)</li></ul></li>
<li><p><strong>KNN-based target encoding methods</strong></p>
<ul>
<li>We selected the nearest 500 points for each sample with Euclidean distance (using 2 groups of the most important features, each with 14 features: one with the last statement of each customer, another with the mean of all statements), and calculated the average of the target as features</li>
<li>Follow the above method, using nearest 500 points but applying average with weight by the distance matrix</li>
<li>Aggregate the distance matrix, by mean, max, min, std</li>
<li>We also calculated the cosine similarity of each customer's past N months of data in one dimension.(N=1,3,6,13) The average of the 500 nearest neighbor customers' targets is used as the feature. This feature was binned because the distribution is somewhat different between Train and Test</li></ul></li>
<li><p><strong>Nested model</strong></p>
<ul>
<li><p>We added the labels to \(\mathcal{D}_{fulltrain}\), then trained a LGBM model. The purpose of doing this is to capture what kinds of records, behavior and attributes will cause default. Instead of aggregating features into 458913 rows, this method can let the model learn some extra information from a more "base" level. After that, for each <code>customer_ID</code> we will have a "predicted target" that has the same length as the number of records in each <code>customer_ID</code>. </p></li>
<li><p>Finally, to merge it into \(\mathcal{D}_{train}\) : Train-set (458913 rows), we aggregate similarly to other features: mean, std, max, min, first, last, max-min, last-mean, last/std. </p></li>
<li><p>An interesting point here is that the last value of our "predicted value" with \(\mathcal{D}_{fulltrain}\) scores 0.785 on the Amex metric, while the score of the max value is around 0.62 -- indicating the importance of the last value.</p></li></ul></li>
</ul>
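<p>A minimal pandas sketch of the per-customer aggregates from the "Numeric" bullet above (column names, windows and the exact set of statistics are assumptions for illustration; the real pipeline produced far more features):</p>
<pre><code>import pandas as pd

def numeric_aggs(df: pd.DataFrame, num_cols, uid="customer_ID"):
    # df is assumed to be sorted by statement date within each customer
    g = df.groupby(uid)[num_cols]
    agg = g.agg(["mean", "std", "min", "max", "first", "last"])
    agg.columns = ["_".join(c) for c in agg.columns]
    for c in num_cols:                               # derived differences / ratios
        agg[f"{c}_last_minus_mean"] = agg[f"{c}_last"] - agg[f"{c}_mean"]
        agg[f"{c}_last_div_mean"] = agg[f"{c}_last"] / (agg[f"{c}_mean"] + 1e-6)
    # the same aggregates restricted to the last 3 statements
    last3 = df.groupby(uid).tail(3).groupby(uid)[num_cols].agg(["mean", "std", "last"])
    last3.columns = ["_".join(c) + "_last3" for c in last3.columns]
    return agg.join(last3)
</code></pre>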
<hr>
<h2>Other Feature Methods:</h2>
<ul>
<li><p><strong>Target/Count encoding</strong>: On \(\mathcal{D}_{train}\)</p></li>
<li><p><strong>Target encoding</strong> with \(\mathcal{D}_{fulltrain}\)</p></li>
<li><p><strong>Other functions to aggregate</strong> \(\mathcal{D}_{fulltrain}\)</p>
<ul>
<li>Exponential weight average</li>
<li>Skewness, Kurtosis</li>
<li>The ratio beyond 1std </li>
<li>Median Absolute Deviation</li>
<li>Mean, max, min, std on 1st derivative(t=1,2,…13), timestamp of max and min also used </li>
<li>Shannon entropy: estimate of the spectral density of sequence</li>
<li>Stability, Lumpiness: variance of the means and variance of the variances on tiles of windows</li>
<li>Crossing points: number of times a sequence crosses the median line</li>
<li>Some measurements from <a href="http://isadoranun.github.io/tsfeat/FeaturesDocumentation.html" target="_blank">here</a> </li></ul></li>
<li><p><strong>Clustering methods</strong>: Kmeans, DBSCAN on important feature(e.g. <code>P_2_last</code> or <code>B_1_last</code>(including other measurements)). Using raw clusters largely degraded our CV and we're not sure about the reason, so count/target encoding was used here.</p></li>
<li><p><strong>pred0 feature</strong>: using full data to predict specific important features via 2 datasets: \(\mathcal{D}_{train}\) and the fullsize one. For example, if we want to predict <code>P_2</code> in the former case or predict <code>P_2_last</code> in the latter case, we will excluded all "P-related" feature and train a lgbm model on it.</p></li>
<li><p><strong>Probit, Logit models with regularization</strong> on different parts of features (e.g. S-related, B-related sets), like how we make pred0 features, we also exclude related features when we construct the predictions</p></li>
<li><p><strong>Time-series based methods</strong> to predict important features in the next statement(e.g. <code>P_2</code>, <code>B_1</code>….)</p></li>
<li><p><strong>Curve fitting, smoothing, denoising</strong>:</p>
<ul>
<li>Linear fit: return coefficient and the value of the next 1,3,6,12 months as features</li>
<li>Poly fit(2,3,4): take the coefficient, and the prediction in next 1 month</li>
<li>LOWESS fit: only with the <code>customer_ID</code> that have >10 statements.</li>
<li>Wavelet based method of denoising: Least Asymmetric, Haar, Daubechies(16) then construct some agg features, this method only applies on several important features</li></ul></li>
<li><p><strong>Leaf-embedding</strong>: An old trick. We mixed the approaches from <a href="https://scontent-tpe1-1.xx.fbcdn.net/v/t39.8562-6/240842589_204052295113548_74168590424110542_n.pdf?_nc_cat=109&ccb=1-7&_nc_sid=ad8a9d&_nc_ohc=nd2mJEAuPSkAX-EVY6m&_nc_ht=scontent-tpe1-1.xx&oh=00_AT9NGACeKxD36CIc-jVHSZ_hyBN5NsVETQihOZqmm7nZ1A&oe=630C598A" target="_blank">this paper</a> and <a href="https://www.kaggle.com/code/mmueller/categorical-embedding-with-xgb" target="_blank">this kaggle code</a>. We use lgbm with 1 round + a huge amount of leaves + super high regularization to embed the "whole" data, which gives only 1 dimension. And of course using only categorical data or expanding the embedding matrix to the dimension of <code>nrounds</code> is also possible (a minimal sketch follows after this list). </p></li>
<li><p>Most of the above methods are not really able to improve the best CV in a single model, so we turned to utilizing them in different models to generate more diversity. Most of our lgbm models with the above selected methods score in the range CV .796~.801 / Public LB .797~.80 / Private LB .804~.806, and some of them generate pretty good diversity. But we are not going to do a detailed ablation study to check the improvement in the Private LB from each method :P</p></li>
</ul>
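<p>A hedged sketch of the leaf-embedding trick mentioned above (parameter values are assumptions; the point is one boosting round, many leaves and heavy regularization, then using the leaf index as a one-dimensional embedding):</p>
<pre><code>import numpy as np
import lightgbm as lgb

def leaf_embedding(X_train, y_train, X_all):
    params = {
        "objective": "binary",
        "num_leaves": 4096,       # "huge amount of leaves" (value assumed)
        "lambda_l1": 100.0,       # "super high regularization" (values assumed)
        "lambda_l2": 100.0,
        "verbose": -1,
    }
    booster = lgb.train(params, lgb.Dataset(X_train, y_train), num_boost_round=1)
    leaves = booster.predict(X_all, pred_leaf=True)   # leaf index of every row
    return np.asarray(leaves).reshape(len(X_all), -1)[:, 0]
</code></pre>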
<hr>
<h2>Feature Selection Strategies</h2>
<ul>
<li><strong>LightGBM w/ permutation importance</strong>: During the training process, we observed that the metric D is quite noisy and unstable. Probably as a result, permutation importance selection with Amex-Score didn't work here. So we selected features by monitoring G (Gini score) alone and it worked much better! (A minimal sketch follows below.) For example, Ryota's FE generated over 20k features, but the final number of features utilized was about 1200.</li>
</ul>
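<p>A rough sketch of what "monitoring G alone" can look like in a permutation-importance loop (assuming <code>model.predict</code> returns scores, e.g. a LightGBM booster; this is an illustration, not the exact selection code):</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_gini_drop(model, X_valid, y_valid, feature, n_repeats=3, seed=0):
    rng = np.random.default_rng(seed)
    base = 2 * roc_auc_score(y_valid, model.predict(X_valid)) - 1     # Gini = 2*AUC - 1
    drops = []
    for _ in range(n_repeats):
        Xp = X_valid.copy()
        Xp[feature] = rng.permutation(Xp[feature].values)             # shuffle one feature
        drops.append(base - (2 * roc_auc_score(y_valid, model.predict(Xp)) - 1))
    return float(np.mean(drops))   # positive average drop -> the feature carries signal
</code></pre>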
<hr>
<h2>Model Details</h2>
<ul>
<li><p><strong>GBDT</strong></p>
<ul>
<li>LightGBM (gbdt, dart) - classification, regression: CV 0.796~0.801, Public 0.799, Private 0.805~.806</li>
<li>XGBoost - classification: CV 0.797~0.800, Public 0.798, Private 0.805</li>
<li>CatBoost - classification, regression CV 0.796~0.799, Public 0.797, Private 0.804</li>
<li>LGB linear_tree, ExtraTrees, RF, RGF and rmse objective to generate some diversity</li>
<li>Some LGB dart models trained on the fully flattened view of the data (don't score well individually but ensemble well)</li>
<li>Some LGB dart models trained on features derived from augmented data view </li></ul></li>
<li><p><strong>TabNet</strong></p>
<ul>
<li>TabNet: CV 0.793 Public 0.794 Private 0.801</li>
<li>Residual Learning (LightGBM): CV 0.795, Public 0.796 Private 0.802 (specifically, we predict <code>ground truth - tabnet’s prediction</code> with LightGBM and then finalize predictions as <code>tabnet's prediction + resid prediction</code>)</li></ul></li>
<li><p><strong>MLP</strong></p>
<ul>
<li>MLP: CV 0.796 Public 0.794 Private 0.802</li></ul></li>
<li><p><strong>Transformer</strong></p>
<ul>
<li>CV: .793 -> .794 with residual learning</li>
<li>All nulls imputed with LGB trained on all other features, categories encoded as mean <code>P_2</code> value from entire train+test. Data Augmented ~2x by statement shift strategy</li></ul></li>
</ul>
<p>Example Dart hyper-parameters from Angus, no special sauce here:) :</p>
<pre><code>lgb_param <- list(boosting_type = 'dart',
objective = "binary",
metric = amex,
learning_rate = 0.02,
num_leaves = 48,
feature_fraction = 0.1,
#bagging_freq = 1,
#bagging_fraction = 1,
min_child_weight = 1,
lambda_l1 = 1,
lambda_l2 = 64,
skip_drop = 0.8
)
</code></pre>
<hr>
<h2>Some Other Interesting Things we Tried</h2>
<ol>
<li><p><strong>Private set specialization</strong>: We tried using the KNN result mentioned above to estimate which samples are closer to Private LB data. In particular, we calculated the nearest 500 points for each sample, then the ratio of each set in those 500 points. After that, we took the top 30% of samples which are closer to private LB and ran a logistic regression to decide the weight of our final ensemble. But unfortunately, the CV score dropped a lot. The CV of the top 30% data is ~0.88, but overall(450k data) Amex-score is ~0.7812. We also tried the top 30% "public ratio" one. The CV is ~0.7259 and the overall CV is 0.7896711. It's possible that using adversarial validation predictions trained on top importance features would be a more reliable option.</p></li>
<li><p><strong>Transfer learning on <code>P_2</code></strong>: this idea unfortunately came up very close to the end before it could pay off and probably was worth pursuing further. This was an attempt at exploiting the test data - pretrain a transformer on train+test to predict <code>P_2</code> (because it's strongly related to the target), then fine-tune on the actual target with train. This is like a worse version of the knowledge distillation techniques that were used to exploit the test data more for neural net training. </p></li>
<li><p><strong>Pseudo labeling</strong>: We only got a 0.00005 improvement on CV (the LB didn't seem to change), and we tuned the size and what proportion of the test set could be used for it, but didn't have real luck. However, log loss and AUC may be high enough to do this more optimally and get a considerable improvement.</p></li>
<li><p><strong>Ensembling methods</strong>: we tried lgbm with 50-seed bagging, LR, and weight-optimization methods like <code>md1*par1 + md2*par2....</code> with Nelder-Mead or L-BFGS-B solvers (a minimal sketch follows after this list). LR performed the best (CV .8031), and lgbm seemed to be severely overfitting. </p></li>
</ol>
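<p>A hedged sketch of the weight search with Nelder-Mead mentioned in the last item (the <code>amex_metric</code> function is assumed to be defined elsewhere; the positivity/normalization handling is an illustration choice):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def optimize_weights(oof_matrix, y_true, amex_metric):
    n_models = oof_matrix.shape[1]

    def neg_score(w):
        w = np.abs(w)
        w = w / w.sum()                       # keep weights positive and summing to 1
        return -amex_metric(y_true, oof_matrix @ w)

    res = minimize(neg_score, x0=np.full(n_models, 1.0 / n_models), method="Nelder-Mead")
    w = np.abs(res.x)
    return w / w.sum()
</code></pre>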
<hr>
<h2>Reflections</h2>
<p>We fell hard from 3rd Public -> 21st Private, with 6 submissions that would have scored gold. In hindsight, CV here did not seem to be a great measurement for the private LB, even though CV improvement aligned very well with public LB. We probably had some bad luck, but 2 things we may have benefited from doing differently are:</p>
<ul>
<li>Exploiting the test data more with knowledge distillation, pretraining, etc.</li>
<li>Hedged our submission strategy more - the two submissions we chose (best Public LB and best CV) were not that different from each other, and choosing a different backup could have been a better way to game the noisiness here</li>
</ul>
<p>Thanks for the kind words Chris!</p>
<p><a href="https://www.kaggle.com/andrew60909" target="_blank">@andrew60909</a> brought this method to the team and referenced the <a href="https://www.kaggle.com/c/mercari-price-suggestion-challenge/discussion/50256" target="_blank">winning Mercari solution</a>. That's exactly it, we took OOF predictions, trained a regression LGBM model on some of our feature sets to predict <code>resid = y_train - oof_pred_train</code>, and then generated the test predictions as <code>pred_test + pred_resid_test</code> (and new OOF as <code>oof_pred_val + pred_resid_val</code>). This improved our TabNet and Transformer models by .001-.002 while still yielding a diverse final model. We did find it to be a little finicky - it overfits quickly so needs to be well regularized, and I think only really good features help. </p>
<p>Would love to hear if Angus has more thoughts, but to me it's almost like a mini post-processor, or a hybrid gradient boosting model where you're boosting off of a strong initial prediction instead of the usual best-constant-prediction starting point. </p>
<p>Haven't tried this, but I wouldn't expect it to help an ensemble. It should work if there is a systematic way to use features and a different model/view to explain errors made by a specific model/view, but I would think everything systematic is already in an ensemble that includes the GBT methods as base models. Also the error margins that the method can work with just become slimmer and slimmer as the model it's applied to improves. I think part of why we saw it work for Transformer and TabNet but not MLP is that the data view for the former two is further removed from the flat aggregated data + GBT view.</p>
<p>Thanks!</p>
<p>That method was specifically what <a href="https://www.kaggle.com/ryotak12" target="_blank">@ryotak12</a> used for selection (especially for some of the feature sets that would otherwise just be enormous). For my models, I actually didn't use selection, more like feed-forward additions (and actually included different/older versions of models that had a subset of the total feature set I used, these made the cut in the ensemble). Feature selection is hard, especially on this problem, and I thought it'd be more time efficient to prioritize diversity for my work. </p>
<p>If I'm remembering correctly, top 2 solutions both didn't use stacking so that may be one differentiating point. Perhaps our stacking overfits to CV / Public LB relative to the highest scoring Private solutions.</p> |
American Express - Default Prediction | 25th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Intro</h2>
<p>As a senior AI researcher in a start-up company called <strong>pecan.ai</strong> I mostly deal with similar types of transactional data. I used my work experience to get a smooth start. Also a huge thanks to my bosses for providing me with computational power :) It absolutely paid off, given the amount of useful directions I learned from such a warm community :)</p>
<h2>Feature Engineering:</h2>
<p>We used a couple of versions of feature engineering (it was kind of chaotic) and not all techniques described here were applied to all models. It was done for a couple of reasons:</p>
<ul>
<li>Make datasets slightly more diverse</li>
<li>Reengineering features after some time to be sure that there are no bugs.<br>
as a basis, we used <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> 's dataset</li>
</ul>
<h3>1. Pre-flattening - engineering done on sequences:</h3>
<p>We started with “after-pay” features and then advanced into feature interactions: we iterated over all possible combinations of the numerical features, calculated the difference between each pair of features and filled NaNs with 0; if the linear correlation was significantly higher than for either one of the standalone features, then this difference became a new standalone feature and was added to the model, e.g. <code>P_2–B_3</code>.<br>
We also used the amex metric instead of linear correlation.<br>
In one of the datasets, categorical features were one-hot encoded into multiple binary sequences.</p>
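<p>A hedged sketch of that pairwise-difference search (the correlation margin and NaN handling are assumptions for illustration):</p>
<pre><code>import pandas as pd
from itertools import combinations

def search_pair_diffs(df: pd.DataFrame, num_cols, target="target", margin=0.02):
    kept = []
    y = df[target]
    base_corr = {c: abs(df[c].fillna(0).corr(y)) for c in num_cols}
    for a, b in combinations(num_cols, 2):
        diff = (df[a] - df[b]).fillna(0)
        if abs(diff.corr(y)) > max(base_corr[a], base_corr[b]) + margin:
            kept.append((a, b))                 # e.g. ("P_2", "B_3") becomes a new feature
    return kept
</code></pre>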
<h3>2. Flattening - reducing to a single value:</h3>
<p>Basics - mean, min, max, std, first, last (indeed).<br>
Also - middles: <code>mean(1:12)</code>, <code>mean(2:13)</code>, <code>mean(2:12)</code>. The reason: post-flattening of <code>last-mean(1:13)</code> makes less sense because <code>mean</code> already has <code>last</code> inside, so <code>last-mean(1:12)</code> sounds like a more correct solution. In practice, I do not know how to measure how useful it is.</p>
<h3>3. Post-flattening - operating on created features:</h3>
<ul>
<li>Last - first, last - middle, middle - first, relative_pos: (last - min) / (max - min)</li>
<li>Row features: count of NANs, sum of normalised values, difference between normalised values of P,D,S,B,R e.g. sum(normalisedP) – sum(normalisedD)</li>
</ul>
<h3>Specialized features</h3>
<ul>
<li>feature value momentum</li>
<li>linear predictions of each feature 180 days into the future using the last 3 months only</li>
<li>days since last NAN value</li>
<li>(last value – prev value ) / (last_date – prev_date) </li>
<li>and some other variations of the above features.<br>
Feature engineering was done in Python and C# .Net.</li>
</ul>
<h3>SequentialEncoder</h3>
<p>We also created sequential features described in <a href="https://www.kaggle.com/code/pavelvod/27-place-sequentialencoder?scriptVersionId=104154431" target="_blank">this</a> notebook (they were very significant). </p>
<h2>Feature Selection</h2>
<p>We tested some simple methods, but they were reducing our score. We decided that losing CV in the 4th decimal place was not worth it; we were still able to run a model with our 3.5k-feature dataset (our biggest one), so the risk would not have paid off.</p>
<h2>Modeling:</h2>
<h3>GBDT:</h3>
<p>LightGBM Dart, CatBoost, XGBoost </p>
<h3>Tabnet</h3>
<p>We did not manage to get useful results (all ensembles almost nullified its contribution).<br>
<strong>But!</strong> Then we tried a trick I tested a couple of years ago. We took our best model (lightgbm dart) and calculated <strong>shap values</strong> in an out-of-fold manner. Back then I called it self-supervised pre-training, but technically it's a feature transformation. As a result we got a dataset which is much easier for NN models to digest, because it was extracted with GBT. Then I trained tabnet using this dataset and achieved <strong>0.797</strong> on the LB. That solution is not as diverse as a straightforward tabnet model, but tabnet learned the predictions in a different manner than GBT, so it was still very useful for the ensemble. <br>
You can find a more detailed explanation <a href="https://www.kaggle.com/code/pavelvod/gbm-supervised-pretraining" target="_blank">here</a> (from some previous competition)</p>
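<p>A hedged sketch of the out-of-fold SHAP transformation (LightGBM's <code>pred_contrib=True</code> returns per-feature SHAP values plus a bias column; parameters and fold setup are assumptions):</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold

def oof_shap_features(X, y, params, n_splits=5, seed=42):
    shap_oof = np.zeros((X.shape[0], X.shape[1] + 1))      # +1 for the bias column
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        booster = lgb.train(params, lgb.Dataset(X.iloc[tr_idx], y.iloc[tr_idx]),
                            num_boost_round=1000)
        shap_oof[va_idx] = booster.predict(X.iloc[va_idx], pred_contrib=True)
    return shap_oof        # feed this matrix to the NN instead of the raw features
</code></pre>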
<h3>Node</h3>
<p>But my greatest excitement was trying the <strong>NODE</strong> model for this competition. It achieved good results - I do not think my results were optimal, and I believe that if I had fine-tuned it more it would have achieved better results. But it was probably the heaviest tabular model I ever tried - my 12 GB GPU (thanks <strong>pecan.ai</strong>) was screaming with only a batch size of 64.</p>
<h3>Public models:</h3>
<p>We used TF transformer by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> with 3 seeds.</p>
<h3>Sequential models</h3>
<p>We used the <code>tsai</code> library, which has plenty of sequential models, such as LSTM, 1D-CNN and many more advanced versions and implementations of them. None of them proved useful for us.</p>
<h3>Training parameters</h3>
<p>We used logloss for all our models, both as the loss and as the stopping metric. We tried FocalLoss and RankingLoss, but they did not work for us.<br>
We used 5-fold CV.<br>
Almost all models were trained and averaged with 4 seeds (3 seeds were stratified by target and the 4th was stratified by target and P_2_last - idea by <a href="https://www.kaggle.com/bogorodvo" target="_blank">@bogorodvo</a>).</p>
<h3>Ensemble</h3>
<p>As an EnsemblerClassifier, we developed an iterative process, known as a forward selection process, which was able to find the optimum weights to maximize the AMEX score.<br>
We also tried other methods which optimised LogLoss, but they were not good enough.</p>
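<p>A minimal sketch of such a forward-selection ensembler (Caruana-style greedy selection with replacement; <code>amex_metric</code> is assumed to be defined elsewhere):</p>
<pre><code>import numpy as np

def forward_selection_weights(oof_preds, y_true, amex_metric, n_iters=50):
    n_models = oof_preds.shape[1]
    counts = np.zeros(n_models)
    blend = np.zeros(len(y_true))
    for it in range(1, n_iters + 1):
        scores = [amex_metric(y_true, (blend + oof_preds[:, m]) / it) for m in range(n_models)]
        best = int(np.argmax(scores))          # model that helps the current blend the most
        blend += oof_preds[:, best]
        counts[best] += 1
    return counts / counts.sum()               # selection counts become the blending weights
</code></pre>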
<h2>Worth trying</h2>
<p>A couple of key ideas that worked well for other people were also in our plans, but we somehow gave up on those ideas. I will not mention them.<br>
The only thing I regret not trying (and I still haven't found someone who tested it) is Tabnet self-supervised pretraining on the whole train and test data and then finetuning on the train data. I started to regret it after seeing the knowledge distillation solution of <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </p>
<p><a href="https://arxiv.org/abs/1909.06312v2" target="_blank">https://arxiv.org/abs/1909.06312v2</a></p>
<p>I used implementation in <code>pytorch_tabular</code> library</p>
<p>Thanks!<br>
AFAIR it took about 8 hours for 5-fold training and predicting OOF + test.<br>
Increasing the number of trees or the number of layers ran into GPU memory overflow (12 GB) with a batch size of 512 or even less,<br>
while other models such as tabnet used about 2-3 GB of GPU memory with a much bigger batch_size (e.g. 2048) and radical changes in hyperparameters.</p>
<p>Here is my config (I used <code>pytorch_tabular</code> package)</p>
<pre><code># imports (assumed; exact module paths may differ between pytorch_tabular versions)
from pytorch_tabular import TabularModel
from pytorch_tabular.config import DataConfig, OptimizerConfig, TrainerConfig
from pytorch_tabular.models import NodeConfig

data_config = DataConfig(
target=['target'],
continuous_cols=feature_names,
categorical_cols=[],
)
trainer_config = TrainerConfig(
auto_lr_find=False,
batch_size=512,
gpus=1,
)
optimizer_config = OptimizerConfig()
model_config = NodeConfig(
task="classification",
num_layers=3,
num_trees=512,
learning_rate = 1e-3
)
model = TabularModel(
data_config=data_config,
model_config=model_config,
optimizer_config=optimizer_config,
trainer_config=trainer_config,
)
</code></pre> |
American Express - Default Prediction | 260th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to AMEX for this competition and thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> for his wonderful data compression notebook.<br>
My model is an ensemble of DART and neural networks.</p>
<p>Neural Network models: Tabular data(same used for Tree learners), Gru, Transformer and Stack Model(Gru+Tabular Model).</p>
<p>Following are the scenarios where neural nets struggled compared to the tree based learners.<br>
1. Outlier numbers in some features (lightgbm processes values as quantiles, hence it is robust to them).<br>
2. Understanding whether a value is missing; even if we fix it as -1, it will still be difficult to process.<br>
3. Dominant easy examples (gradients are not propagated to hard/varied examples as most examples are easy to classify).</p>
<p><strong>1. Preprocessing:</strong><br>
Clip the values of the features to 95th or 99th percentile of the feature. <a href="https://www.kaggle.com/code/narendra/amex-tabular-nn-data-clip-map" target="_blank">Tabular Data Clip values</a>, <a href="https://www.kaggle.com/code/narendra/amex-sequence-data-clip-map" target="_blank">Sequential Data Clip values</a> </p>
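<p>A minimal sketch of that clipping step (fit the percentiles on train and apply the same bounds to test; the 99th percentile here is one of the two options mentioned above):</p>
<pre><code>import pandas as pd

def fit_clip_map(train: pd.DataFrame, cols, q=0.99):
    return {c: (train[c].quantile(1 - q), train[c].quantile(q)) for c in cols}

def apply_clip_map(df: pd.DataFrame, clip_map):
    for c, (lo, hi) in clip_map.items():
        df[c] = df[c].clip(lo, hi)
    return df
</code></pre>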
<p><strong>2. Add Embeddings to the Missing values</strong><br>
For both the FFN (tabular) and the sequential data (GRU, transformer), each feature is given additional information on whether its value is missing or not. Adding this information helps the model converge faster and generalize better.</p>
<p>Tabular Model: <a href="https://www.kaggle.com/code/narendra/amex-nn-ranking-train-1024#model" target="_blank">here</a><br>
Gru Model: <a href="https://www.kaggle.com/code/narendra/amex-gru-with-tab-embedds-train-v3#Sequence-Model" target="_blank">here</a><br>
Stack Model: <a href="https://www.kaggle.com/code/narendra/amex-model-stack" target="_blank">here</a><br>
Transformer Model: <a href="https://www.kaggle.com/code/narendra/amex-transformer-train-v3" target="_blank">here</a></p>
<p>thanks <a href="https://www.kaggle.com/jacksonyou" target="_blank">@jacksonyou</a> </p> |
American Express - Default Prediction | 27th place, +720 place shake up with NN model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you guys for the comments! Hope we'll do our best in upcoming competitions!</p>
<p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <br>
Thank you for your comment! Your notebook helped us a lot! I attached the efficientnet at the output layer rather than the input layer. So the data preparation is exactly the same as your transformer notebook. Since the transformer models can handle continuous data better than efficientnet, first I fed the data to the transformer. Then I flatten the output from the transformer, reshaped the data into the efficienet input shape, and fed it to the efficientnet. </p>
<p><a href="https://www.kaggle.com/nyleve" target="_blank">@nyleve</a> <br>
I'll try to train with the remaining 40% of the data! It is really kind of you!! Thank you so much!!</p>
<p>Thank you Amex for hosting the amazing competition. Thank you to the whole Kaggle community and Kagglers who have attributed to this competition and provided the amazing opportunity to compete and learn!!</p>
<h2>Abstract</h2>
<p>I fortunately got 30th place with a 720-place shake-up. I think it is an interesting case since no one among the upper rankers got as large a shake-up as I did. I'd like to share the experience for the record, and please comment if you have any ideas why this huge shake-up happened. Also, I'm looking for a team to join for Kaggle competitions. If anyone is considering it, feel free to contact me.</p>
<h2>720 places Shake-up</h2>
<p>I couldn't believe it when I woke up and saw on the private leaderboard that my submission was in 30th place. The night before, when I checked, my public leaderboard position was 750th place😨. For your information, I am a university student in Japan and there's a 20-hour time difference😃. I thought that an ensemble of NN + LGBM was the most likely solution, therefore I didn't care much about the public leaderboard, though the result still struck me with major surprise. It proves that we don't know what's going to happen on the private leaderboard, and that most of the participants were overfitting to the public leaderboard.</p>
<h2>Solution overview</h2>
<p>My solution was similar to his. The main idea was the 50%/50% ensemble of LGBM and NN Transformer.<br>
<a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/347641" target="_blank">Mr. Chris Deotte's solution</a><br>
In my case, I'm a broke university student, and I couldn't afford the computational resources. I used the free quota of the Kaggle GPU and the resources were very limited. I attached Efficientnet to a transformer, and I could only use 60% of the competition data for the training session. That's one thing I regret. For LGBM models, I simply used the public LGBM notebook for the LGBM ensemble. I think the reason why I got this place was the NN model part. Although I had poor resources and couldn't do my best, I think that the transformer+efficientnet structure was the reason for my rank. That shows the infinite possibilities of NNs!!</p>
<h2>What I learned</h2>
<p>I myself cannot believe that I got a top 0.6% place in this competition with such poor resources. Even though I could only use 60% of the training data for the NN, and public notebooks for the LGBM model, the private score was good. I think it means that this solution was one of the best answers for this competition. I'd love to see what would have happened if I had used the full 100% of the training data for the NN models and properly trained the LGBM models. Could I have won a gold medal, maybe?</p>
<h2>Closure</h2>
<p>Thanks again to everyone who contributed to this competition. And again, I am desperately looking for a team. I'd love to learn from the team and I believe I can be helpful to the whole team. Please contact me if you can let me join your team😊 </p> |
American Express - Default Prediction | 2nd place solution - team JuneHomes (writeup) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> it's a huge honor to hear such words from you.</p>
<p>Right from the start we decided to make an "internal" team for this competition, but I was so many times close to breaking that rule and asking you to join us - I was sure that you had a fantastic and very unique NN approach (and you did) - congrats on the solo gold with transformers. </p>
<p><em>Note: Full code to retrain a single model will be shared here in 2 weeks.</em></p>
<p>I would like to say thank you to competition hosts and Kaggle - it was a great pleasure to participate in tabular data competition after many months and years without one. </p>
<p>Thank you to all participants (and of course the winners - <a href="https://www.kaggle.com/daishu" target="_blank">@daishu</a>'s huge jump during the last 3 days was a fantastic solo result) - our success is your success - you forced us to try harder - without all of you it would be impossible to learn so many new things and achieve such a result.</p>
<p>And special "thank you" goes to my fantastic teammates:</p>
<ul>
<li>Danila</li>
<li>Alexey</li>
<li>Igor</li>
</ul>
<p>No words would be enough to say how much each of you contributed to the end result.<br>
"Do Data Scientists have hobby? Yes -> DS competition". </p>
<hr>
<h3>Competition announcement</h3>
<p>We were very hyped by the new tabular data competition release (sorry for the external link: <a href="https://www.linkedin.com/posts/konstantin-yakovlev-010b46125_american-express-default-prediction-kaggle-activity-6935250015619592194-hqnM" target="_blank">link</a>) and immediately decided to participate. Slack notification -> rules and perspective advertisement (100% chance to lose summer holidays and all free time) -> and here we are - four members. Only one member of the team had previous experience in DS competition participation.</p>
<p>Few rules were established from the beginning:</p>
<ul>
<li>Only free time for competition</li>
<li>No "second" accounts on Kaggle (even wife/friends to exclude any cheating suspicion)</li>
<li>No competition discussion outside of the team</li>
<li>We are here to learn and try our best</li>
</ul>
<hr>
<h3>Infrastructure and pipelines:</h3>
<p>Each of us had our own machines / resources (GCP/AWS/local). We used the Kaggle platform only a few times. So the first thing we wanted to solve was a unified machine to save all artefacts / experiments; we decided to go with AWS. I would say that it is possible to achieve the same result that we have with just Kaggle resources, but it would be a bit more stressful for team management. We didn't want to spend a lot of money on AWS, but sometimes (during very hot hours) RAM spikes were 500GB+ to permit simultaneous work.</p>
<p>We tried to use neptune.ai for ML tracking, but from July it was not very effective as we entered the brute-force zone. </p>
<p><em>Advice: Resource management is critical - find the bottleneck and remove it to make your team most effective. At the same time don't burn money recklessly - limit your budget. If any optimization is possible - do it as soon as possible to save time and resources.</em></p>
<hr>
<h3>Project Structure</h3>
<p>Each run was internally versioned (ex. v1.1.1 - (major version).(fe version).(model version))<br>
Overall project structure:</p>
<ul>
<li>Initial preprocess -> artifact cleaned and joined df</li>
<li>FE -> Many aligned (by uid) dfs with separated features </li>
<li>Features selection -> dictionary we selection metadata</li>
<li>Holdout Model (fe check and tuning) -> Local validation oof preds / holdout preds/ model / model metadata</li>
<li>Full model run -> Model / Model metadata</li>
<li>Prediction -> each fold oof predictions / cv split metadata / test predictions</li>
</ul>
<p>All these permitted us to go back and forward and check what worked well and what did not and restore experiments in each particular step.</p>
<hr>
<h3>Initial preprocess</h3>
<p>We wanted to achieve several things with this step:</p>
<ul>
<li>Join Train and Test -> with many people involved I was afraid that some missed transformation on the private test part would go unnoticed. So we sacrificed memory and speed optimization for overall stability and security.</li>
<li>Remove detected noise -> (we had options here but ended with unified single one)</li>
<li>Transform Customer ID to unified uid </li>
<li>Create internal subset feature -> Train / Public / Private</li>
<li>Create unified kfold and holdout split -> To align all experiments</li>
<li>Separate columns by type and store them separately to minify memory use and load time</li>
</ul>
<h4>Remove detected noise</h4>
<p>We didn't use public notebooks for cleaning. Raddar's dataset is fantastic and it is 99% similar to our own transformations.<br>
We used "isle" identification without any pre-built coefficients.<br>
dummy code is something like this:</p>
<pre><code> for col in process_columns:
df = temp_df[[col]].sort_values(by=[col])
df = df[df[col].notna()].drop_duplicates(subset=[col]).reset_index(drop=True)
df['temp'] = np.floor(df[col] * 100000)
df['group'] = ((df['temp'] - df['temp'].shift()).abs() >= 100).cumsum()
i = 0
while True:
min_val = df[df['group']==i]['temp'].min()
if min_val>0:
break
i += 1
df['temp2'] = np.where(df['temp']>=0,
np.floor(df['temp']/min_val).astype(np.int32),
np.round(df['temp']/min_val).astype(np.int32))
mapping = dict(zip(df[col],df['temp2']))
temp_df[col] = temp_df[col].map(mapping)
print(col, df['group'].nunique(), df[col].nunique())
print(df.groupby(['group'])['temp','temp2'].agg(['min','max','count','nunique']).head(40))
</code></pre>
<h4>Create internal subset feature</h4>
<p>We used the last statement month to create a 0/1/2 feature and store it in the "index" df</p>
<h4>Create unified kfold and holdout split</h4>
<p>Fixed random seed (of course 42) to make splits, and then took 20% of customers into a holdout group (to test stacking / blending / etc). A minimal sketch is given below.</p>
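<p>A hedged sketch of what such a pre-computed split can look like (function and variable names are assumptions; the point is that the holdout flag and fold ids are created once from customer IDs and reused by everyone):</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def make_splits(customer_ids, seed=42, holdout_frac=0.2, n_folds=5):
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(set(customer_ids)))
    holdout = set(rng.choice(ids, size=int(len(ids) * holdout_frac), replace=False))
    rest = np.array([i for i in ids if i not in holdout])
    fold = pd.Series(-1, index=ids)                       # -1 marks the holdout group
    for f, (_, va) in enumerate(KFold(n_folds, shuffle=True, random_state=seed).split(rest)):
        fold.loc[rest[va]] = f
    return fold                                           # customer_ID -> fold id or holdout flag
</code></pre>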
<h4>Separate columns by type</h4>
<p>After cleaning we had several column "groups".</p>
<pre><code>all_files = [
'p_columns', -> just p columns as we thought that they are very different (and P_2 is internal amex "scoring" model)
'objects_radar_columns', -> order encoding (we were checking where out cleaning differs from public approaches and here was the unique place)
'objects_columns', -> onehot encoding
'categorical_cleaned__D__columns', -> no noise categoricals
'categorical_binary__S__columns', -> cleaned binary
'categorical_binary__R__columns', -> cleaned binary
'categorical_binary__D__columns', -> cleaned binary
'categorical_binary__B__columns', -> cleaned binary
'categorical__D__columns', -> removed noise categoricals
'categorical__B__columns', -> removed noise categoricals
'cleaned__B__columns', -> removed noise continuous
'cleaned__D__columns', -> removed noise continuous
'cleaned__R__columns', -> removed noise continuous
'cleaned__S__columns', -> removed noise continuous
'rest__B__columns', -> have no idea what to do with it -> floor
'rest__D__columns', -> have no idea what to do with it -> floor
'rest__R__columns', -> have no idea what to do with it -> floor
'rest__S__columns', -> have no idea what to do with it -> floor
]
</code></pre>
<p>Thanks again to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> we always used your preprocess as a baseline.</p>
<p>We were able to load just a portion of the data -> do FE -> concat to "index", as all dfs were aligned by index. Also, such a split permitted us to do FE by feature type, to accelerate the process and see more statistically valuable metric changes. </p>
<hr>
<h3>FE</h3>
<p>We started with careful FE, column by column or in small subsets, and it worked well up to 1xx features; after that, any metric improvement or degradation was not statistically significant and many features "overlapped" in importance and significance. </p>
<p><em>Note: I believe that it is possible to build silver zone robust model with only 3xx features</em></p>
<p>So we started from scratch with brute force))) Of course there was no need to apply "nunique" (for example to binary features) and our previous step helped us to limit fe.</p>
<pre><code>all_aggregations = {
'agg_func': ['last','mean','std','median','min','max','nunique'],
'diff_func': ['first','mean','std','median','min','max'],
'ratio_func': ['first','mean','std','median','min','max'],
'lags': [1,2,3,6,11],
'special': ['ewm','count_month_enc','monotonic_increase','diff_mean','major_class',
'normalization','top_outlier','bottom_outlier','normalization_mean','top_outlier_mean','top_outlier_mean']
}
</code></pre>
<ul>
<li>pca (horizontal and vertical) + horizontal combinations + horizontal aggregations.</li>
</ul>
<p>agg_func -> normal aggregations by uid<br>
diff_func -> diff last - xxx -> std diff worked better than any other<br>
ratio_func -> ratio_func last/xxx<br>
lags -> diff last - Nx<br>
special -> some special transformations -> count_month_enc worked well for categorical / ewm for continuous </p>
<p>We ended up with about 7k features (stored in files by group and by agg type for faster loading). <br>
The next thing was to figure out what works and what does not -> this topic was the most challenging for us. </p>
<h4>Normalizations</h4>
<p>It's better to call it standardization, (x - m) / s -> but as we also had a normalization test, the name stuck as "normalization")))</p>
<pre><code>df.groupby(['dt_month','subset'])[col].agg(['mean','std'])
</code></pre>
<p>dt_month -> month of the statement<br>
subset -> train / public / private<br>
and the mean and std were computed only from clients that had a full statement history.</p>
<p>We had to have a temporal shift to make it work. So we did a "trick": we removed the last statement for each client, applied exactly the same transformation to each client, and merged the appropriate labels. So we had 2 rows in the training set for almost every client, BUT validated results only on the last statement during CV runs and holdout checks. It is more or less the same as adding noised data, but we had temporal drift, and the model was able to work better on unknown future data with "possible" data drift (a minimal sketch of the standardization is given below).</p>
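<p>A hedged sketch of that standardization (column names follow the notation above; the exact implementation differed):</p>
<pre><code>import pandas as pd

def month_standardize(df: pd.DataFrame, col: str, full_history_uids):
    # mean/std per (statement month, train/public/private subset),
    # computed only on clients with a full statement history
    stats = (df[df["uid"].isin(full_history_uids)]
             .groupby(["dt_month", "subset"])[col]
             .agg(["mean", "std"]))
    merged = df.join(stats, on=["dt_month", "subset"])
    return (merged[col] - merged["mean"]) / (merged["std"] + 1e-6)
</code></pre>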
<hr>
<h3>Features selection</h3>
<p>Ooohh that was really fun. </p>
<p>We used gbdt boosting type during experiments as it was very aligned with dart mode but was significantly faster.<br>
Also, we used the ROC AUC score during our experiments, as we believed that due to the amex metric's instability we couldn't use it for decision making (of course we tracked log loss and amex).</p>
<p>In the previous step we brute-forced many features, and now it was time to clean them out.<br>
All feature selection was done with 5 CV folds training + independent check on 20% holdout data.</p>
<ol>
<li><p>Zero importance -> Right from the start we were able to throw away 1.5k features that had exactly 0 importance (lgbm importance). That means that with 250 bins and 2**10 data in leaf those features do not participate in any split.</p></li>
<li><p>Stepped hierarchical permutation importance -> we defined 300 initial features and looped over all other feature subsets (600+) - it was very time consuming but very stable.<br>
Note: we shuffled order of the subset to force model try different combinations.<br>
Add features subset -> train model -> permutate -> drop negative features (negative mean over 5 seeds) -> add new subset -> …<br>
During this part that took almost 3 days we limited features to 3k -> 0.800 lb</p></li>
<li><p>Stepped permutation importance.<br>
Take all features -> train model -> permutate -> drop 20% of worst performed features (only negative) -> repeat. Final subset was 25xx features (and different from previous step) -> 0.800 lb</p></li>
<li><p>Forward feature selection.<br>
We defined 300 initial features and simply added subset by subset and compared ROC AUC; if the metric change was > 0.0003 we kept the subset. -> 0.800 lb</p></li>
<li><p>Time series CV.<br>
For very doubtful features such as PCA and normalized values we used two different validation strategies:</p></li>
</ol>
<ul>
<li>Train on the first 6 months of values (the last statement of the first 6 months went to the train set) and validate on the last 6 (also just the last statement of the last 6 months). We trained the model without the temporal feature and then with it; if the result was better on CV and on holdout, we added it to the final feature subset.</li>
<li>We used P_2, B_1, B_2 as proxy targets and an MSE loss with combined Train and Test, to see if we did the right transformation and the result did not degrade.</li>
</ul>
<p>We tried many other options, but the results were not stable.</p>
<p>The final subset came from "Forward feature selection", plus overlapping features from the other techniques, minus overlapping negative combinations. -> lb 0.801 single model.</p>
<p>We tried to blend many models with different subsets, as we believed it should give a huge LB boost (based on holdout blending tests), but it didn't work well on the LB. </p>
<hr>
<h3>Model</h3>
<p>In my own experience, DART had never worked better, and here we have proof that in DS "it all depends." We did experiments with DART in the beginning and it did not show any metric improvement with our params and the baseline model feature subset. Later we found <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>'s notebook, gave it one more try, and it worked marvelously.</p>
<p>From the beginning, we tried to build a more complex model with 2**7+ leaves and a 0.7+ feature fraction, but failed. It still puzzles me why a simple model with a very low number of features works here. </p>
<p>I saw such behaviour mostly on synthetic data and stacking - so we tried to find out if the data is synthetic (at least partly) and to deanonymize the internal scoring values - but didn't manage it.</p>
<p>Our best single lgbm model was trained on 29xx features. 5-fold CV - no stratification by any option. Training data - the last 2 statements for each client (transformed independently). Params:</p>
<pre><code>lgb_params = {
'boosting_type': 'dart',
'objective': 'cross_entropy',
'metric': ['AUC'],
'subsample': 0.8,
'subsample_freq': 1,
'learning_rate': 0.01,
'num_leaves': 2 ** 6,
'min_data_in_leaf': 2 ** 11,
'feature_fraction': 0.2,
'feature_fraction_bynode':0.3,
'first_metric_only': True,
'n_estimators': 17001, # -> 5000 for gbdt
'boost_from_average': False,
'early_stopping_rounds': 300,
'verbose': -1,
'num_threads': -1,
'seed': SEED,
}
</code></pre>
<p>Blend -> Power (2) rank blend of Dart lgbm (0.801 public) / GBDT lgbm (0.799 public) / Catboost models (0.799 public)</p>
<p>A single lgbm with the last 3 statements showed even better CV, but we didn't have enough time to retrain it (a full DART run for 5 folds took 12+ hours there).</p>
<p>It was obvious that clients with a small number of statements would not benefit from all 2k features. So we created a special model that was trained only on 300 features with custom params (also dart). Predictions for clients with <=2 statements came exclusively from this model and were not blended with other models. </p>
<p>How did we combine the result from 2 independent models to not destroy the final ranking?</p>
<table>
<thead>
<tr>
<th>Client id</th>
<th>Number of statements</th>
<th>Basic ranking</th>
<th><=2 prediction</th>
<th>Final ranking</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>13</td>
<td>5</td>
<td>…</td>
<td>5</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>4</td>
<td>0.1</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>13</td>
<td>8</td>
<td>…</td>
<td>8</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>0.5</td>
<td>4</td>
</tr>
<tr>
<td>1</td>
<td>13</td>
<td>7</td>
<td>…</td>
<td>7</td>
</tr>
<tr>
<td>1</td>
<td>13</td>
<td>2</td>
<td>…</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>We kept the ranking for clients with >2 statements and, for the rest, re-sorted within the initial ranking slots (hope it's clear enough; a minimal sketch follows below))). Was it perfect - no, but it was very stable with a really tiny improvement (because of the number of such clients in the public and private test parts).</p>
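<p>A hedged sketch of that rank merge (names are illustrative): the base model defines the rank slots, and the short-history clients are re-ordered only inside the slots they already occupy, so the ranking of everyone else is untouched.</p>
<pre><code>import numpy as np
import pandas as pd

def merge_short_history(base_scores: pd.Series, short_scores: pd.Series):
    """base_scores: main-model prediction for every client (indexed by client id).
       short_scores: dedicated-model prediction, indexed only by clients with few statements."""
    final_rank = base_scores.rank(method="first")
    slots = np.sort(final_rank.loc[short_scores.index].values)   # slots those clients occupy now
    reorder = short_scores.sort_values().index                   # re-sort them by the dedicated model
    final_rank.loc[reorder] = slots                              # lowest short score -> lowest slot
    return final_rank / len(final_rank)                          # score-like output, ranking preserved
</code></pre>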
<p>What also worked well:</p>
<ul>
<li>Train on all data without fold splitting and stop just 1000 rounds further than CV showed. </li>
</ul>
<p>What didn't work:</p>
<ul>
<li>Stacking by any mean</li>
<li>Many models with different seed and fe order</li>
<li>Massive blend of different models with different types and features (blending worked well up to 0.799, and then any low-performing model (0.795-) made the public score worse - we used the public LB as an additional holdout set and kept approaches that worked well on local CV and LB - anything that worked only partially was not used in the end).</li>
</ul>
<p>Again, nothing really fancy here. The main thing that helped us align Train / CV with LB was very high 'min_data_in_leaf'.<br>
No optuna used -> just manual old school tuning based on data feeling.<br>
We did many experiments with weights and loss functions but none of them worked.</p>
<p>Due to AMEX metric specification it was obvious that focal loss should work but it didn't. We tried several times to switch loss function during the competition period and the result was the same. </p>
<p>Error analysis showed that the model makes errors without any "pattern" -> stacking didn't work for the holdout set (25% of data) and we had doubts that it would work on the private/public test parts. We kept only LR/Lasso(0.02) as blending options to choose submissions.</p>
<p>Cross validation -> standard 5-fold CV split by client ID. The unique thing that we did here is "presplitting", to align all CV between team members and be able to compare results directly.</p>
<hr>
<p>What left without mentions:</p>
<ul>
<li>EDA on data</li>
<li>Denoising experiments</li>
<li>Data deanonymization -> didn't manage to do it</li>
<li>Feature pairs and triples combinations -> didn't work well</li>
<li>NaN filling -> didn't work</li>
<li>Clusterization -> didn't work</li>
<li>Hundreds of experiments with the feature selection process and internal discussions about it.</li>
<li>Adding noised data (noise / swap noise), which led to interesting but doubtful results</li>
<li>Model tuning </li>
<li>Removing absolute values and keeping only diffs or ratios -> should be more stable for future data, but we saw some LB degradation and didn't proceed</li>
<li>pseudo labeling</li>
</ul>
<p>What we always wanted to do but didn't find time for:</p>
<ul>
<li>NN - we have no NN in our final blend</li>
<li>Predicting P_2 or any other column (1/2/3/4 months ahead) with the combined data and using it as meta information for the main lgbm model</li>
<li>Joint training on the 11 / 12 / 13-statement subsets (the df was too large and training was slow)</li>
</ul>
<hr>
<h3>Internal initial plan</h3>
<pre><code>########################### Data preprocessing and Data evaluation
#################################################################################
## Added noise removal -> GOOD2DO
# There is no doubt that some Noise was injected in data
# https://www.kaggle.com/competitions/amex-default-prediction/discussion/328514
# https://www.kaggle.com/competitions/amex-default-prediction/discussion/327649
# https://www.kaggle.com/competitions/amex-default-prediction/discussion/327651
# We need to find a way to remove it
# the best option to not follow public approach
# At least with columns where columns have overlaped population
## Data minification for FE -> GOOD2DO
# Datatype downcasting
# Pickle/Parquet/Feather
# Be careful with floats16 as it may lead to bad agg results
# Also float16 may lead to some signal degradation due to precision and values changes
## Evaluate values distributions and NaNs -> GOOD2DO
# Full 13 months history
# Train against Test Public and Test Private
# https://www.kaggle.com/competitions/amex-default-prediction/discussion/327926
#
# Need to try:
# Kolmogorov–Smirnov test -> GOOD2DO
#
# Adversarial validation -> GOOD2DO
# https://www.kaggle.com/code/carlmcbrideellis/what-is-adversarial-validation/notebook (just as simple example)
#
# Entropy / Distances / etc....
#
# Visual checks)))
#
# We need to find if ANY feature has very different distribution in PRIVATE test set
# If that feature works for Public part it doesn't mean that it will work for Private
########################### Targets
#################################################################################
# We need to find a way to get more targets -> GOOD2DO
# as we currently training on a single point by client we could greatly improve results
# by extending our training set with new targets
#
# Find default periods in current client history and make appropriate labeling -> GOOD2DO
# Make 2 level model -> predict p_2 values as normal time-series model and feed it to 2nd level GBT
########################### Separate Models
#################################################################################
# Probably it's a good idea to make separate models for each subdatasets
# Full history (13 months)
# Less than 13 months
########################### External Data
#################################################################################
# We can try to add "Consumer index" or any other independent temporal feature
# Will not work if we will not be able to expand targets and add temporal feature
########################### FE
#################################################################################
# We didn't make anything special here -> July
# AGGS (Stats by client)
# Rollings
# History length feature (not sure if it will help with Private Test)
# Should we correct statements dates and add NaNs?
# ReRanking categorical features by P_2 or Target
# Clusterization (4+ groups feature by feature)
# Count and Mean encodings for categorical features
# Features combinations (sum/prod/power) -> bruteforce
# PCA or any other dimension reduction by features groups
# We need to find if there is "connection" between clients in Train -> Public Test -> Private Test
# we have 458913 + 924621 -> 1383534 If I were AMEX I would export 1M clients (or other round number)
# so may be 384 534 Clients are overlaps
# Clip by 5 - 95 percentile
########################### Features Selection
#################################################################################
# Permutation importance (use all fold only!!!) -> recursive elimination (because of quantity of features -> 3-4 rounds with 0 and 50% negative drop)
# SHAP
# Highly correlated features (.98+?)
# Forward selection (may take ages and due aggs may be not effective - probably by feature block)
# Backward elimination (may take ages and due aggs may be not effective - probably by feature block)
########################### CV
#################################################################################
# Mean Target differs by "history length" -> could be wise to do GroupedStratifiedFolds by history length
# For sure Splits should be done by client
# Target stratification to balance folds
########################### Loss function / Metric
#################################################################################
# Clean and fast np/torch metric
# Now it's in helper (need to cleanup that)
# https://www.kaggle.com/code/rohanrao/amex-competition-metric-implementations
#
# I don't believe that we will have better results with different loss function
# But it worth to try at least focal loss
# https://maxhalford.github.io/blog/lightgbm-focal-loss/
#
# Weights -> we should try change weights there
# weights by class
# weights by some history length
# weights by internal fe group
#
# We need custom metric for catboost
# example https://catboost.ai/en/docs/concepts/python-usages-examples#logloss1
########################### Models
#################################################################################
## First level choice
# LGB/XGB/CTB -> our main models here for sure
# After stabilizing the baseline model and base feature we need to make 1st round tuning
## Catboost specials
# Categorical features
# Embedding features
## NN (GPU/TPU) -> RNN / LSTM / Transformer
# TPU -> tensorflow (as it works better there)
## NN -> AE / VAE / DAE -> as a denoising model hidden layer as input for GBT models
# No need complex approach - just fast check the idea and in case of success move to big model
########################### Blending
#################################################################################
# Weighted Average
# Power Average
# Weighted Rank Average
# Linear/SVM
# Postprocessing?
</code></pre>
<p>The solution write-up is still in progress. So far only the initial plan and some information about the model have been posted. Sorry for the delay, but I will need 3-4h more to finish it. Hope you'll find it interesting.</p>
<p>It's becoming a long writeup))) will need a bit more time</p>
<p>A few examples of how the noise cleaning works:<br>
For most features, a nunique of value groups within 998 -> 1001 is a sign that noise cleaning is possible.</p>
<p>Simple feature cleaning - in this case group is our new value:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2405813%2Fea9cd72dbd9666335a1194331479fbc8%2F2022-08-28%20%2012.05.28.png?generation=1661684893433642&alt=media" alt=""></p>
<p>A more complex feature with negative "categories" - we keep the negative values (temp2 is the new value):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2405813%2F652007f94ba3b47bc300dfe32da85ce1%2F2022-08-28%20%2012.06.17.png?generation=1661684841167050&alt=media" alt=""></p>
<p>Very "noised" feature (we find initial coefficient and then apply it to groups with low population) - temp2 is a new value:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2405813%2F2d811e27de9544d6eaeca47cf268b650%2F2022-08-28%20%2012.11.55.png?generation=1661685137975362&alt=media" alt=""></p>
<p>Here is an example of how we augmented the training data:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2405813%2F7db49977cfebf34a15d87e5b4c95ec3f%2F2022-08-28%20%2012.41.40.png?generation=1661686959865932&alt=media" alt=""></p>
<p>We removed the last statement per client, as if it had never existed, and then applied exactly the same transformations as we did for the full history.</p>
<ul>
<li>Last values are different but within distributions </li>
<li>Agg values are different but within distributions</li>
<li>Model has temporal component </li>
</ul>
<p>We wanted to completely avoid absolute values, and with data augmentation the CV and LB were almost on the same level as with absolute values - of course it would not work for clients with 1-2 statements, but we had a separate model for them. I believe that this would give much more robust results, but on kaggle even a slight metric increase is important, so we kept the "lasts".</p>
<p>Also, we were planning to go to -2 depth and used only 1, 2, 3, 6, 11 diff lags (the most common time-series lags) to keep the feature engineering aligned. </p>
<p>Combining original data with -1 statement or with -2 statement gave significant CV boost on ROC AUC score (our main metric) and tiny boost on AMEX metric. Combination of 3 sets gave just a tiny boost on ROC AUC but was significantly more memory consuming and slower to train model.</p>
<p>Also, CV splits were done per original client uid to exclude leaks, and validation was performed only on the original data so that metric tracking would not be degraded by the shortened-history predictions.</p>
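<p>A minimal sketch of this augmentation (the aggregate() function and file name are placeholders for the real feature pipeline): drop each client's last statement, rebuild exactly the same features, and stack the result with the original rows under the same labels.</p>
<pre><code>import pandas as pd

def aggregate(df: pd.DataFrame) -> pd.DataFrame:
    """Placeholder for the real feature pipeline (aggs, diffs, 'last' values, ...)."""
    num_cols = [c for c in df.select_dtypes('number').columns]
    agg = df.groupby('customer_ID')[num_cols].agg(['mean', 'std', 'last'])
    agg.columns = ['__'.join(c) for c in agg.columns]
    return agg.reset_index()

train = pd.read_parquet('train.parquet').sort_values(['customer_ID', 'S_2'])

full_feats = aggregate(train)
# drop the last statement of every client, then run the identical pipeline
trimmed = train[train.groupby('customer_ID').cumcount(ascending=False) > 0]
aug_feats = aggregate(trimmed)

# labels are joined on customer_ID, so both copies of a client share the same target
train_feats = pd.concat([full_feats, aug_feats], ignore_index=True)
</code></pre>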
<p>Here is a dataset with 2 submissions:</p>
<p>single_lgbm_model_v12.0.1.csv -> best single lgbm (mean over folds) -> 0.80891 private / 0.80101 public<br>
2nd_place_submission_ranked.csv -> ranked 2nd place -> 0.80938 private / 0.80134 public</p>
<p>Interesting that a single lgbm could give 6th place on the LB.</p>
<p>feel free to blend it and check results on lb<br>
<a href="https://www.kaggle.com/datasets/kyakovlev/amex-submissions" target="_blank">https://www.kaggle.com/datasets/kyakovlev/amex-submissions</a></p>
<p>btw diff_mean is: </p>
<pre><code>df[f'{col}__diff_mean'] = df[col] - df.groupby(['uid'])[col].shift()
df[f'{col}__diff_mean'] = df.groupby(['uid'])[f'{col}__diff_mean'].transform('mean')
df = df.drop_duplicates(subset=['uid'], keep='last').reset_index(drop=True)
</code></pre>
<p>The std features worked better than the others - diff with std, for example.</p>
<p>The normal diff with the client mean was covered by <br>
'diff_func': ['first','mean','std','median','min','max']</p>
<p>And my bad - as for which of the specials:<br>
the monotonic functions worked really well.</p>
<p><a href="https://www.kaggle.com/shahilap96" target="_blank">@shahilap96</a> our team was formed at the very early stage of the competition and exclusively from my company DS department - many other DS colleagues wanted to participate but were not able to made such 3month commitment and decided not to join. Kaggle experience is nice to have but it’s not an obligatory condition - the most important is a “fire in the eyes” and passion to learn and try something new.</p>
<p>Yes, my teammates are novices on kaggle but not novices in analytics and DS. </p>
<p>One thing that I can say - all my teammates work with the same passion on regular work tasks and try really hard to achieve their goals. I feel very proud of them.</p>
<p>A binary flag for whether the current value is greater than or equal to the previous one, and then we took the mean.</p>
<p>We wanted to pass to the model information on whether an increase or decrease in the last statements is normal for that client or not. If due payments are constantly increasing, for us it was a sign that risk is growing.</p>
<p>And such information is not based on absolute values - we constantly wanted to avoid absolute values to be able to generalize well to future data and data with drifts.</p>
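<p>A minimal sketch of such a feature (column names are assumptions, and the data is assumed sorted by client and date): per statement, flag whether the value is >= the previous one, then average the flags per client, excluding the first statement which has no previous value.</p>
<pre><code>import pandas as pd

def increase_rate(df: pd.DataFrame, col: str) -> pd.Series:
    """Share of statements where `col` did not decrease vs the previous statement.

    Assumes df is sorted by customer_ID and statement date.
    """
    d = df.groupby('customer_ID')[col].diff()
    flag = (d >= 0).astype('float32').where(d.notna())   # first statement stays NaN
    return flag.groupby(df['customer_ID']).mean().rename(f'{col}__increase_rate')

# hypothetical usage: feats = increase_rate(train, 'B_1')
</code></pre>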
<p>Thank you for your kind words and very nice questions.</p>
<p>1 | Brute force is a dead end in every sense - it will not work in production, it will not provide any interpretability or explainability, and it is not a reproducible result. Carefully created, column-by-column distilled features are always better and preferable. Even in this competition, manual feature engineering would have worked better if we could have had access to the raw data and column descriptions (meanings).</p>
<hr>
<p>2 | Initial params always depend on the data, but most of the time I start with 0.7/0.7 row/feature sampling, 2^7 leaves and 2^8 min data in leaf (all other params as default). <br>
The first baseline run normally gives you a lot of information on how to tune it further:</p>
<ul>
<li>if train score is much higher than CV and holdout -> try regularizations / lower subsampling / less complex model</li>
<li>if a naive score is better than lgbm or the difference is minimal -> bad features and/or need a more complex model (2**7 is complex enough for the majority of tasks)</li>
<li>if train score is low but very aligned with CV/holdout -> underfitting -> more complex model with less data in leaf<br>
Then try to "tweak" the loss function to one more appropriate for your task.<br>
Then try to fit goss/dart/extra_trees -> extra_trees works very nicely sometimes -> DART had never worked better for me before)))</li>
</ul>
<hr>
<p>3 | Params tuning normally takes several iterations during model creation - for competitions it means a slight tuning every 2 weeks (after significant feature set or data structure changes) and a final deep tuning round 2 weeks before the competition ends.</p>
<hr>
<p>4 | Not much to add here - you are right with your assumption:<br>
mean + cleaned__B__columns produces:<br>
agg_func__mean__cleaned__B__columns|__B_4__mean<br>
agg_func__mean__cleaned__B__columns|__B_16__mean<br>
agg_func__mean__cleaned__B__columns|__B_20__mean<br>
agg_func__mean__cleaned__B__columns|__B_22__mean<br>
agg_func__mean__cleaned__B__columns|__B_41__mean<br>
…</p>
<hr>
<p>5 | It works really nicely at the start, but with more features it becomes less informative and slower. It didn't show any overfitting, but we didn't trust it fully as we were training and validating on the same time period (and didn't have any aligned validation set in the “future”).</p>
<hr>
<p>6 | We saw that the ROC AUC score on train data went up to 0.9999 while CV/Holdout had a cap of 0.96xxx -> many teams decided to use regularization (l1 - punishes features with low importance, but we would like to keep even small signals; l2 - adds regularization to features with high importance -> that is a good move here, but we didn't see any statistically significant CV score boost). We decided to make a less complex model with 2^6 leaves and increase min data in leaf to force our model to generalize better, and found that increasing this param to 2^9 - 2^11 works very well on CV/holdout and also worked well on the LB. </p>
<p>Yes, we will share it for sure. Our plan is to clean everything up during the weekend (3-4 September), take a few days for review, and then share everything. The last weeks of the competition were very intense and we are taking a small “break”. Sorry for the delay, but this is the fastest possible timeline for us.</p>
American Express - Default Prediction | 3rd solution--simple is the best | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>feature engineering is all you need</h2>
<p>First of all, big thanks to Amex and Kaggle for organizing a very interesting competition. I was too busy with my work, so I hadn't worked on this competition for about a month and a half. I came back a few days before the competition ended. I am very happy to rank 3rd in this competition. Thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>'s denoise idea and <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a>'s public notebook.</p>
<h2>my solution</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4921130%2Fe26b7df663bf54181e721d156902c4d7%2FAmex_top3.png?generation=1662130516395369&alt=media" alt=""></p>
<p>I will share my code after the winners' call. Thanks for reading!</p>
American Express - Default Prediction | 45th place with XGBoost in first Kaggle competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Some luck and some innovation lead to a surprising top 1% in my first competition!</p>
<p>49th overall, 5th out of solo rookies. 3rd out of amateur solo rookies. (7th and 10th place overall were soloing their first kaggle competition, but profile shows they are professional data scientists.) To say the least, I'm very happy with my result! Who knows, but I might've had one of the top 10 or top 20 single models with my 0.80798 XGB score on private.</p>
<p>I'm a professional software engineer at Intel, but only an ML hobbyist (I started with stock-market-related projects some years ago), so a professional problem solver but an amateur at ML. I spent far too much time on this competition, a lot of it struggling with basics like "Oh, that's not a pandas df, it's a cudf df. Now I can look at the right documentation. …Oh, the kaggle version of rapids cudf (20.x) doesn't have this function call that I'm staring at in the cudf v21.x documentation right this second." Even more painful and recurring small issues than it sounds. I didn't touch NNs. I touched LGBM along with XGB, but even spending time on two models was a bit more than I could easily do.</p>
<p>I'll cover my submission report first, and hopefully later add my general takeaways from my first experience with kaggle competitions. I like semi-stream of consciousness long-winded posts, so buckle up! :)</p>
<h1>Submission Report</h1>
<h2>Score and Result</h2>
<p>49th place. 0.80798 private LB. 0.79889 public.</p>
<p>Went from 406th (ensemble of two of mine with LGBM dart public) -> 49th (my standalone model submission).</p>
<h2>Solution Overview</h2>
<p>My solution was pure XGBoost model for each step. At the high level, ignoring the timeline of my journey of discovery, I did these things:</p>
<ul>
<li>Note: I used auc score not logloss nor amex metric for any optimization or tuning along the way</li>
<li>Created the XGB Pyramid via pathfinding and pure theorycrafting.<ul>
<li><a href="https://www.kaggle.com/code/roberthatch/pyramid-api-for-easy-deployment" target="_blank">https://www.kaggle.com/code/roberthatch/pyramid-api-for-easy-deployment</a></li></ul></li>
<li>Basic popular feature aggregation and a few of my own. Most notably moving averages, though I only used them on all statements. Hull moving average, and exponential averages. 16 total aggregations per base numerical feature.<ul>
<li><a href="https://www.kaggle.com/code/roberthatch/amex-feature-engg-gpu-or-cpu-process-in-chunks" target="_blank">https://www.kaggle.com/code/roberthatch/amex-feature-engg-gpu-or-cpu-process-in-chunks</a> </li>
<li><a href="https://www.kaggle.com/code/roberthatch/exponential-averages-amex-feature-engineering" target="_blank">https://www.kaggle.com/code/roberthatch/exponential-averages-amex-feature-engineering</a> </li>
<li>I dropped B_29 and never looked back!</li></ul></li>
<li>Meta-feature: Predict for every row in train and every row in test, the chance of missing the next month's payment, meaning days overdue increase by a large value, and ending at or over 28.<ul>
<li><a href="https://www.kaggle.com/code/roberthatch/amex-fe-02-days-overdue-label" target="_blank">https://www.kaggle.com/code/roberthatch/amex-fe-02-days-overdue-label</a> </li>
<li>Inspired by: <a href="https://www.kaggle.com/code/raddar/deanonymized-days-overdue-feat-amex" target="_blank">https://www.kaggle.com/code/raddar/deanonymized-days-overdue-feat-amex</a> </li>
<li>Not just using that row's data to predict, I wanted to use that row AND all past data via backwards aggregation. To fit in memory, forward feature selection (on normal target predictions) using various shortcuts, ending with 280 features.</li>
<li>I also predict the chance that next month's days overdue will be non-zero as an independent meta-feature. </li>
<li>Low on GPU and time with ~48 hours to go, I just did single five-fold oof predictions. Better to average 3-5 models. I could have predicted the last statement with all models, since those rows couldn't be used for training, but for simplicity I only predicted the last statement "oof" as well.</li></ul></li>
<li>The main model was step two: taking the aggregated meta-features (2*16) and the 3000 other features, converting to float16 for memory, and running the XGBoost Pyramid.<ul>
<li>Train 4 models on the entire train dataset with no CV, using a set number of rounds based on inspecting when early stopping happened on the CV models.</li></ul></li>
</ul>
<h2>Notes</h2>
<p>I had the hardest time doing (and failing at) permutation importance. My biggest bottleneck was my own time, and I didn't want to write it from scratch, but I couldn't use sklearn with my xgb model, and the things I tried kept failing for various reasons (including attempting to do an sklearn version of my xgb model, and including trying to leverage a non-sklearn version of permutation importance library). And I think I gave up on the non-sklearn one just because it was so slow.</p>
<p>In any case, that's why I ended up doing homebrew forward feature selection, which I spent way WAY too much time on (but it was kinda fun). At first I did selection only from last. Then I added max, then e7. (From other experiments I had done forward feature selection of entire aggregation styles, and if forced to use only 3 aggregations, last, max, e7 was best for me.) I got to 70 features individually before finally stopping that slow approach. That allowed me to split the base columns into three groups: the "good", the "decent" and the "didn't seem good". So I did grouped forward feature selection based on the 16 aggregations, done on 1 of the three base feature subgroups, so 48 groups in total of 40-80 features each. I got to 3xx features, but ran out of memory, and to keep moving forward backed up to 280 features total. Not all base features ended up represented.</p>
<p>I had a ton of other random ideas, but all along was convinced that predicting using the test sets and days overdue was my best single shot at a good idea and great score. I didn't have time to try other good ideas. I may still keep going on this competition for more fun, even if it's over. :)</p>
<p>I used a really small learning rate of 0.005. Combined with the XGB pyramid, that meant I was doing 10 rounds of 200-tree boosted forests to start, a lot in the middle, and ending with a 0.0025 learning rate for the last 9600 rounds. So each single model run was a bit of an ensemble by itself.</p>
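<p>Roughly, the pyramid trains in stages, continuing from the previous booster while shrinking the learning rate. A toy sketch of that mechanism using XGBoost's ability to continue training an existing booster (toy data and an illustrative schedule only; the linked pyramid notebook is the authoritative version):</p>
<pre><code>import numpy as np
import xgboost as xgb

# toy data just so the sketch runs end to end
X = np.random.rand(2000, 20)
y = (X[:, 0] + 0.3 * np.random.rand(2000) > 0.65).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {'objective': 'binary:logistic', 'max_depth': 4, 'verbosity': 0}
booster = None
# each stage continues from the previous booster with a smaller learning rate
for lr, rounds in [(0.02, 200), (0.01, 400), (0.005, 800)]:
    params['learning_rate'] = lr
    booster = xgb.train(params, dtrain, num_boost_round=rounds, xgb_model=booster)
</code></pre>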
<p>Training 5 times on the entire dataset vs 5-fold CV didn't seem much different, maybe marginally better, when I tried it with a prior model, but I didn't have much time at the end so I just went with it. Maybe 10-fold CV would be better: good diversity and still training on almost all the data.</p>
<p>I trained two models at the end, one of which dropped R_1, S_11, D_59, S_9. I was hoping that the days overdue mega-feature would reduce the need for them at step two, and thus eliminate the concern of a private LB shakeup on those features. However, it got a much worse score on the public LB, so I wasn't willing to try it, and submitted my best model in the end.</p>
<p>The next thing I really wanted to try was inspired by reading about a prior credit kaggle competition, I wanted to use KNN to create features.</p>
<p>Especially after reading other people's great ideas, I really wonder if multiple ways of using diverse approaches to extract features from the 13 statements would be ideal.</p>
<p>In other words, for each customer and base feature col with 13 statements, do things like:</p>
<ul>
<li>Predict a few different things:<ul>
<li>'days overdue' (or predict target, but then you have to be careful with nested folds)</li>
<li>P_2</li>
<li>predict next in sequence</li></ul></li>
<li>Using a few approaches:<ul>
<li>XGB or LGBM</li>
<li>LSTM, RNN or something</li>
<li>KNN</li>
<li>linear regression</li>
<li>predict next in sequence with simple least squares line</li></ul></li>
<li>Using data diversity:<ul>
<li>besides using all 13 statements, what about using 3, 4, 5, and 6 statements to predict <em>next</em> on: days overdue, P_2, and next in sequence? This gets a lot more than 1 training sample per customer for training each of those models, and should do well with the "why not both?" approach that throws all features to the model to figure out.</li></ul></li>
</ul>
<p>Other thoughts:</p>
<ul>
<li><p>Maybe a good approach would've been to spend a lot more time trying to create any and all aggregate features…. on P_2 only, and see how well I could predict using ONLY P_2? Then create meta features aka mega features like days overdue prediction, and use what worked well on P_2 on that and maybe on all feature columns.</p></li>
<li><p>I probably made a couple interesting mistakes:</p>
<ul>
<li>I should've allowed B_29 in the days overdue model, I think. I forgot to try it.</li>
<li>I allowed everything to be float16. I think that might've really hurt on my super feature for days overdue prediction, I should've allowed the top few features (at least) to be float32.</li></ul></li>
</ul>
<p>Thanks!</p>
<p>"Amateur" I'm only using in the technical sense, in contrast with, for example, the second place team of 5 people with day jobs as data scientists. Even there, as I highlighted, I am a professional engineer, a professional problem solver. Just not in the realm of data science. </p>
<p>Rookie I don't think I can claim anymore, though ;) Though I should probably tune my first NN of my life before I completely give up all claim to rookie status, haha. </p>
<p>I spent countless hours wrestling with python ML basics this competition; it did have a big impact, even though I did well and got a lucky final result. (+0.009 on private LB with my best submission, while most scores went up right about +0.007)</p>
American Express - Default Prediction | 5th Place Solution - Team 💳VISA💳(Summary&zakopuro's part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you.<br>
Four "meta-features" were used. However, in the case of GBDT, the use of "meta-features" does not improve the score of the single model much. The model using this feature is more effective for ensembles.<br>
In Transformer's case, increasing the number by more than four had no effect.</p>
<p>Patrick will write more details.<br>
My understanding is that it pretrains on the GBDT features and fine-tunes using the target.</p>
<p>Thank you very much.<br>
I have learned a lot from you in this competition as well!</p>
<p>I would like to thank the organizers for organizing this competition, the participants for sharing their many insights, and my teammates( <a href="https://www.kaggle.com/baosenguo" target="_blank">@baosenguo</a> <a href="https://www.kaggle.com/wimwim" target="_blank">@wimwim</a> <a href="https://www.kaggle.com/scumufeng" target="_blank">@scumufeng</a> ). Now <a href="https://www.kaggle.com/scumufeng" target="_blank">@scumufeng</a> and I are promoted to kaggle master and <a href="https://www.kaggle.com/wimwim" target="_blank">@wimwim</a> is reaching for GM.<br>
This competition will be unforgettable for me:)</p>
<h1>Summary</h1>
<p>Our solution is the result of ensembling several GBDT models, a Transformer, a 2d-CNN, and a GRU.<br>
We determined the ensemble weights based on the Public LB, as weights chosen based on CV overfit.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1985486%2F868d637a78b88773048c6f2c9b615402%2F2022-08-27%20063723.png?generation=1661549864910264&alt=media" alt=""></p>
<h1>Features</h1>
<p>We used the dataset shared with us by raddar. The features are based on those shared by ragnar. (Thanks to both of you.)</p>
<h3>meta feature</h3>
<p>I did not know this is called a meta feature.<br>
This feature was useful not only in the GBDT, but also in the Transformer.<br>
If added to <a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-790l" target="_blank">chris's Transformer</a>, the LB increases from 0.790 to 0.793.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1985486%2F9bc12bb44eee04b66decedd22ae185f1%2F2022-08-27%20055050.png?generation=1661547064786565&alt=media" alt=""></p>
<h3>Pivot</h3>
<p>Combine all features horizontally.</p>
<pre><code>P_2_0 , P_2_1 , P_2_2 , P_2_3 , ... , P_2_12 , B_30_0 , ...
XXX , XXX , XXX , XXX , ... , XXX , YYY , ...
</code></pre>
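<p>A minimal pandas sketch of this pivot, assuming the long-format statement data with customer_ID and the statement date S_2 (not necessarily the team's exact code):</p>
<pre><code>import pandas as pd

def pivot_statements(df: pd.DataFrame) -> pd.DataFrame:
    """One wide row per customer: P_2_0, P_2_1, ..., B_30_0, ... (statements ordered by date)."""
    df = df.sort_values(['customer_ID', 'S_2'])
    df['statement_idx'] = df.groupby('customer_ID').cumcount()
    wide = df.drop(columns=['S_2']).pivot(index='customer_ID', columns='statement_idx')
    wide.columns = [f'{feat}_{idx}' for feat, idx in wide.columns]
    return wide.reset_index()
</code></pre>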
<h1>Model</h1>
<h3>GBDT</h3>
<ul>
<li>LightGBM<ul>
<li>Almost no change from <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">this Notebook</a></li>
<li>StratifiedKFold: 10</li>
<li>Catboost<ul>
<li>Use GPU(I was surprised at how fast it was.)</li>
<li>parameter : default</li></ul></li>
</ul>
<h3>Transformer</h3>
<h4>zakopuro</h4>
<ul>
<li>Based on <a target="_blank">chris's Notebook</a></li>
<li>Some additional features.(Mainly meta features)</li>
</ul>
<h4>Patrick Yam</h4>
<p>This is his solution.<br>
<a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/348118" target="_blank">https://www.kaggle.com/competitions/amex-default-prediction/discussion/348118</a></p>
<h4>mufeng</h4>
<ul>
<li>Add meta feature to Patrick's transformer.<ul>
<li>LB increased from 0.795 to 0.798.</li></ul></li>
</ul>
<h1>Ensemble</h1>
<ul>
<li>Use 21 models.</li>
<li>Ensemble weight<ul>
<li>Determined based on Public LB</li>
<li>In our case, the LB score will be lower if based on CV.(CV is 0.8016 or higher.)</li>
<li>We trusted Public LB more than CV because it is close to Private and has a sufficient amount of data.</li></ul></li>
<li>Weights are not complicated. (For example, 0.1,0.2,… etc.)</li>
</ul>
<h1>Select Submit</h1>
<ul>
<li>Best LB<ul>
<li>Public : 0.80199(2nd)</li>
<li>Private : 0.80881 (6th)</li></ul></li>
<li>Best LB*0.5 + Best CV *0.5<ul>
<li>Public : 0.80154</li>
<li>Private : 0.80862(Gold zone)</li></ul></li>
<li>Correlation check<ul>
<li>Check the Public and Private correlation values for all predictions used in the ensemble to see that there are no significant differences.</li></ul></li>
</ul>
<p>All submissions above 0.801 on the Public LB were in the Gold zone on the Private LB. (I prayed on the last day not to shake down😣)</p>
<p>Let's enjoy kaggle! Thank you!!!!</p>
<p>This is his solution.<br>
<a href="https://www.kaggle.com/competitions/amex-default-prediction/discussion/348118" target="_blank">https://www.kaggle.com/competitions/amex-default-prediction/discussion/348118</a></p>
<p>Thank you!</p>
<ol>
<li><p>The "pivot" part is only for the Transformers, correct?<br>
-> This is a feature for the GBDT, not for the Transformer.</p></li>
<li><p>From the picture I interpret that each team member developed his own features; how different are they from Martin's features?<br>
-> There are no major differences; diff features are added, rounding with float, etc.</p></li>
<li><p>I'm curious about the 2d-CNN model, do you have more info?<br>
-> It is similar to this content.(<a href="https://www.kaggle.com/competitions/lish-moa/discussion/202256#1106810" target="_blank">https://www.kaggle.com/competitions/lish-moa/discussion/202256#1106810</a>)</p></li>
</ol>
<p>It used both.</p> |
American Express - Default Prediction | 72nd place solution(ensemble of LightGBM and Sequential NN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>hi <a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">@chaudharypriyanshu</a> congratulations for your great result too, thank you! <br>
I didn't try them… I tried only linear interpolation or fillna(0) as masking values.</p>
<p>Dear Kagglers.</p>
<p>Thank you to the competition organizers for hosting this interesting competition.<br>
Thank you to everyone involved in this competition. We learned a lot from public notebooks and discussions.</p>
<h1>Our Final Result</h1>
<ul>
<li>Our submission<ul>
<li>Local CV:0.80011</li>
<li>Public: 0.80040</li>
<li>Private: 0.80780</li></ul></li>
<li>Result<ul>
<li>Public: 61st → Private 72nd</li></ul></li>
</ul>
<h1>Summary of Our Solution</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1452109%2F8a13e3e0ee80dcd0e41dbb726d7461b2%2Famex-solution.drawio.png?generation=1661445323488649&alt=media" alt=""></p>
<h1>Single Models</h1>
<h2>LightGBM(DART)</h2>
<p>Based on the public notebook <a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">here</a>, we trained lightgbm(dart) models with several feature patterns and achieved a public score of <strong>0.799</strong>.</p>
<ul>
<li>feature engineering patterns<ul>
<li>basic aggregation per customer (mean, std, max, min, first, last, count, nunique)</li>
<li>combinations of aggregated features (diff as last - mean, fraction as last / mean, etc.)</li>
<li>last difference features(aggregation with diff(1).iloc[-1], diff(2).iloc[-1], …)</li></ul></li>
</ul>
<p>we made about 1,000~3,000 features for each model.</p>
<h2>GRU or Transformer Encoder Model</h2>
<p>Based on the public notebook <a href="https://www.kaggle.com/code/cdeotte/tensorflow-gru-starter-0-790" target="_blank">here</a>, we trained NN models and achieved a public score of <strong>0.792</strong>.</p>
<ul>
<li>Improvements from the public notebook<ul>
<li>one-hot encoding of each categorical feature</li>
<li>adding NA indicator columns for features that have NAs</li>
<li>filling NA by linear interpolation</li>
<li>multiple layers of GRU or TransformerEncoder(4~8 layers)</li></ul></li>
</ul>
<h1>Ensemble</h1>
<p>NN models by themselves had only a low score on the Public Leaderboard, but we noticed that the public score improved from 0.799 to 0.800 by ensembling the LightGBM and NN models.<br>
We then experimented with ensemble patterns of multiple models and found that Stacking by LogisticRegression or MLP yielded particularly high scores.</p>
<h2>finally</h2>
<p>Advice is always welcome!<br>
Thank you for your attention.</p> |
American Express - Default Prediction | 82-nd place (Silver) solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, mate - all the best to you!</p>
<p>Hey Dear Kagglers :)</p>
<p>First of all, congratulations to all winners and participants!<br>
And many thanks to all contributors for their amazing kernels and topics!</p>
<p>A brief overview of my solution below:</p>
<h3>Feature engineering:</h3>
<ul>
<li>Simple agg features</li>
<li>Feature interactions - lagg/diff</li>
<li>Trend features</li>
<li>Correlations between normalized feature and time index</li>
<li>Autocorrelations</li>
<li>Moving averages – and then aggregations like first/last/range</li>
<li>Target encoding for categorical</li>
<li>OHE</li>
<li>Cluster based features</li>
<li>PCA reduced features</li>
<li>Latent vectors exported from deep autoencoders (heavy regularization + data augmentation)</li>
<li>Other</li>
</ul>
<h3>Data sets</h3>
<p>Total of 12 training sets:</p>
<ul>
<li>Combinations of features above</li>
<li>Re-balanced sets (downsampling majority)</li>
<li>Reduced sets</li>
<li>Pseudo labeling</li>
</ul>
<h3>Modeling/Architecture – 4 levels:</h3>
<p><strong>1) Base models trained:</strong></p>
<ul>
<li>LGB/XGB/Catboost</li>
<li>GLM</li>
<li>MLP</li>
<li>DCN - Deep and cross neural networks (residual) with entity embeddings for the categoricals</li>
</ul>
<p><strong>2) Some 2-nd level models trained on the best data set but with base models predictions added – it resulted in very high performance models</strong></p>
<p><strong>3) 3-rd level models trained on predictions from 1-st and 2-nd level models + PCA + Cluster features:</strong></p>
<ul>
<li>XGB</li>
<li>NN with entity embeddings for the cluster features</li>
</ul>
<p><strong>4) Final blend of 3-rd level model predictions</strong></p>
<p>CV amex score maximization was pursued, which worked well – we managed to choose (almost) the best model; the model performs better on private than on public LB.</p>
<p>This was all done on a single Core I5/16GB/geforce 1060 6gb - so with limited resources…</p>
<p>Let me know if you have any questions in the comments.</p>
<p>Cheers mate :)</p>
<p>Thank you!</p> |
American Express - Default Prediction | 9th Place Solution ( XGBoost+LGBM+NN ) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to AMEX,Kaggle and all contributors of discussion during the entire competition ( <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> , <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> , <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> ,…). Congratulations to all winners and new Experts, Masters and Grandmaster!</p>
<h2>Score & Result</h2>
<p>My best submission<br>
CV: 0.799106 Public: 0.80062 Private:0.80875<br>
My result<br>
CV: 0.799194 Public: 0.80057 Private:0.80868</p>
<h1>Feature Engineering</h1>
<p>I only used the integer dataset provided by <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>. </p>
<ul>
<li>Base features<br>
aggregated features like mean, max, min, std, sum, median, last, first</li>
<li>other rate and diff features with datediff<br>
last-first,last1-last2,last1-last3,last-mean,max/last,sum/last and so on</li>
<li>Date features<br>
is it a holiday </li>
</ul>
<h1>Model</h1>
<ul>
<li>LightGBM <br>
3 models of LGBM with different data representation & parameters give CV in the range [0.796-0.799] and LB in [0.797-0.799] (2 model with dart-LGBM , 1 model with goss-LGBM)</li>
<li>XGBoost<br>
6 models of XGB, with different data representation & parameters give CV in the range [0.794-0.796] and LB in [0.795-0.796]</li>
<li>NeuralNet<br>
4 models of NeuralNet with different parameters give CV in the range<br>
[0.788-0.790] and LB in [0.790-0.792]<br>
(I am not so proud of the NNs. Thanks again to @cdeotte for sharing his great public NN kernel.)</li>
</ul>
<h1>Ensemble(stacking)</h1>
<p>Stacking the 13 models with 10-fold cross-validation, hyperparameter tuning and appropriate early stopping can give a Private score in the range [0.80853-0.80875]. </p>
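<p>For illustration, a minimal sketch of OOF stacking with a linear meta-model on rounded OOF scores, as mentioned in the reply below (file and column names are hypothetical):</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import StratifiedKFold

oof = pd.read_csv('oof_predictions.csv')     # one column per base model + 'target'
test = pd.read_csv('test_predictions.csv')   # the same base-model columns for the test set
base_cols = [c for c in oof.columns if c.startswith('model_')]

X, y = oof[base_cols].round(5).values, oof['target'].values
X_test = test[base_cols].round(5).values

stack_pred = np.zeros(len(test))
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for tr_idx, _ in skf.split(X, y):
    meta = LinearRegression().fit(X[tr_idx], y[tr_idx])
    stack_pred += meta.predict(X_test) / skf.get_n_splits()
</code></pre>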
<h1>some ideas</h1>
<p>When predicting whether a customer who has already been spending on credit will default, the trend of how the features change over time (like consumption frequency and changes in consumption amount) is very important, especially in the last few months.</p>
<p>I am very grateful for this competition. As a newbie on kaggle, I learned a lot from it. Thank you all😎</p>
<p>Thanks Shibata !</p>
<ol>
<li>Using Holiday features can give about +0.0002 boost in CV.</li>
<li>The model is LinearRegression for stacking with rounded oof score ( oof=oof.round(5) )</li>
</ol>
<p>Thanks Santiago !</p>
<p>Thanks delai!<br>
I didn't use all features in one big model, because it's difficult to find useful new features from feature importance, and maybe some new features are useful but the result is not significantly improved. Actually, every single model is created from different base features plus other new features. For example, XGBoost-model-1 is created from base features like 'mean', 'max', 'sum' and diff/rate/date features between the first month and last month, while XGBoost-model-2 is created from base features like 'median', 'max', 'min' and diff/rate/date features between the last two months. So every model has a different data representation, and some new useful features can give a relatively obvious improvement.</p>
American Express - Default Prediction | AmbrosM solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I couldn't find any <a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a> write up, maybe he didn't share anything? He is a legend in tabular data problems so, if that is the case, I would like encourage him to share something about his approach in the competition (especially about his 0.798 CatBoost with very few features 👀)</p> |
American Express - Default Prediction | My Journey(public 223 -> private 1234)😱 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's a competition I worked hard for 3 months, but it's a pity that I didn't make it to the medal rankings.<br>
But I do share my solution because I think it will be helpful later.</p>
<p>The code and description are on github, so please refer to it.<br>
<a href="https://github.com/ds-wook/amex-default-prediction" target="_blank">my-solution</a><br>
P.S.) The meta-feature part is code generated after the contest.</p>
American Express - Default Prediction | Private Leaderboard 3442 - A Beginner's Simple Solution and Key Takeaways | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone!</p>
<p>I don't think this will be useful to anyone with experience on Kaggle, but I thought I would share my rather basic solution. As this was my first actual project, I mostly ended up trying out all the things I had learned from tutorials in a real project rather than building a significant model. It proved a good experience and I look forward to becoming better at DS using the next steps for education identified through this competition.</p>
<p>My best performing solution's notebook is available <a href="https://www.kaggle.com/code/shahilap96/private-leaderboard-3442-my-best-solution" target="_blank">here</a> for anyone that wishes to take a look.</p>
<p>Final model:</p>
<ol>
<li>Denoised <a href="https://www.kaggle.com/datasets/raddar/amex-data-integer-dtypes-parquet-format" target="_blank">dataset</a> by <strong><em>raddar</em></strong>.</li>
<li>Last two transactions per customer as suggested by this <a href="https://www.kaggle.com/code/junjitakeshima/amex-try-to-improve-lgbm-starter-eng" target="_blank">notebook</a>. I tried all variations and found that two worked best for me.</li>
<li>Dropped the Customer_ID and S_2 columns. I found that keeping all other columns (including those with a high number of nulls) gave better results.</li>
<li>scikit-learn's imputation and standard scaling.</li>
<li>Two models -> LGBM (n_estimators=300) and CatBoost (default parameters).</li>
<li>Averaged my LGBM and CatBoost prediction probabilities for final submission. This gave the best solution.</li>
</ol>
<p>Things I tried:</p>
<ol>
<li>XGBoost - didn't perform as well as LGBM and CatBoost, either alone or when averaged with their scores.</li>
<li>Simple NN - last minute trial, didn't perform well, probably due to lack of tuning.</li>
<li>Random Forest, Decision Tree and other similar ML models.</li>
<li>Removed columns with high number of nulls (50% null, 66% null, 75% null).</li>
<li>Cross validation for measuring performance.</li>
<li>Dask and Vaex for working with the large volume of data initially.</li>
<li>Hyperparameter tuning using GridSearchCV.</li>
</ol>
<p>Things I need to look at next:</p>
<ol>
<li>Denoising data</li>
<li>EDA</li>
<li>Feature engineering</li>
<li>Understanding higher ranking participants solutions</li>
<li>Handling large volume of data when training models.</li>
</ol>
<p>I definitely have a long way to go but this competition was a great starting point and the community around it made it an awesome experience.</p>
<p>Happy Kaggling!</p>
<p>Thanks Felipe. I didn't do any complex ensembles like the high performing participants however, I did try stacking, max, min and average of my CatBoost and LGBM models. My final solution was average of prediction probabilities produced by the top CatBoost and LGBM models I made.</p>
<p>All the best to you as well. Cheers!</p>
<p>"This is not a scam" is very suspicious however, I did end up doing the survey after taking a look at your discussion post regarding the survey.</p>
<p>All the best with your study. Cheers!</p> |
American Express - Default Prediction | Silver (211 th) Long story short | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you all and congrats to the winners!</p>
<p>My solution is very simple and straight-forward:</p>
<ul>
<li>Created ~6000 features: aggregations, diff(1 to 5), moving average and statistics, lags divisions, trends, many interactions among the top 100 features, KMeans, PCA etc.</li>
<li>Selected features by permutation importances on the validation set (ended up with ~5 different features sets with ~650 to ~1400 features)</li>
<li>Used XGB, LGBM, Catboost models with different preprocessing strategies, feature sets, hyperparameters</li>
<li>Ensembled everything with different seeds (simple average of ~40 models)</li>
<li>XGB was always the best model for me</li>
<li>time spent: 10% struggling with data volume, 10% algorithms and ensemble, 80% feature engineering and selection</li>
</ul>
<p>Single models metrics (5 fold):</p>
<ul>
<li>amex ~ 0.797xx</li>
<li>roc_auc ~ 0.877xx</li>
<li>accuracy ~ 0.905xx</li>
<li>f1 ~ 0.816xx</li>
<li>precision ~ 0.813xx</li>
<li>recall ~ 0.819xx</li>
</ul>
<p>What did not work:</p>
<ul>
<li>I could not get dart boosting to outperform gbtree on my setup</li>
<li>Stacking features</li>
<li>Knowledge distillation</li>
<li>OOF and confusion matrix analysis (false negatives and false positives): could not find any insight that I could be sure would lead to better performance (I blame the anonymized features for that 😆)</li>
<li>Denoising autoencoder (not sure if I implemented it correctly)</li>
<li>TabNet</li>
<li>GRU</li>
<li>MLP (keras)</li>
</ul> |
American Express - Default Prediction | Silver Medal (105th Place) - Simple Solution (2x xgboost + NN). | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi there,</p>
<p>Congratulations to everyone who did some genuine work of their own in this competition. Given the private LB score distribution, and that the difference between the top 1 and the top 1000 solutions is less than 0.35%, I believe many people here would do well in a real-life credit risk modelling setting, even if you are rank-wise far away from a medal. </p>
<p>My solution was quite simple compared to what I’ve seen so far; I decided to share it to show people we don't need to reinvent the wheel to get a good score. What I basically did was:</p>
<p><strong>1) FEATURE ENGINEERING AND INITIAL MODEL</strong></p>
<p>I spent some time on feature engineering and model experimentation. Here I noticed the feature engineering was working, but I wasn’t able to beat the public DART model score, and the DART model training time on my computer was way too long. So I decided to stay away from that, do something completely uncorrelated, and at the end ensemble my models with the best public DART model, then select one submission with my own work only and another with my work + public work. That decision paid off.</p>
<p>I created a total of 30K+ features considering:<br>
- The initial 188 features Avg / Max / Min / Std / Slope of the last 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 statements.<br>
- Transpose of the 188 features per customer and statement.<br>
- Categorical features Weight of Evidence encoding.<br>
- Ratio and Difference of previous engineered features.</p>
<p>The Model I used initially was a leaf-wise growth xgboost:</p>
<p>XGBClassifier(tree_method='gpu_hist', grow_policy='lossguide', eval_metric='auc', min_child_weight=50, subsample=0.65, colsample_bytree=0.6, colsample_bylevel=0.6, colsample_bynode=0.55, learning_rate=0.01096, max_depth=8, max_bin=320, max_leaves=0)</p>
<p><strong>2) FEATURE SELECTION</strong></p>
<p>As I had way too many features to work with, I created an iterative process to get rid of the less important ones. It consisted of randomly allocating features to different “buckets”, then using the initial xgboost model to get the SHAP feature importances and drop the bad ones. I repeated this until I had 4 final buckets with ~550 features in each. Then I did some Adversarial Validation to drop unstable features.</p>
<p><a href="https://www.kaggle.com/code/pabuoro/amex-ultra-fast-adversarial-validation-shap" target="_blank">https://www.kaggle.com/code/pabuoro/amex-ultra-fast-adversarial-validation-shap</a></p>
<p><strong>3) MODELLING</strong></p>
<p>I used optuna to get 2 different sets of optimal hyperparameters for the leaf-wise growth xgboost model and an optimal keras MLP layout, always based on the AUC score to measure the overall discrimination and not only the top 4% as in the amex metric. For the MLP I scaled the data using rank gauss. In total fitted:<br>
- 4x xgboost 1 (Full sample + 10 CV Folds) * 5 Seeds = 4x 55 models<br>
- 4x xgboost 2 (Full sample + 10 CV Folds) * 5 Seeds = 4x 55 models<br>
- 4x Keras MLP (Full sample + 10 CV Folds) * 10 Seeds = 4x 110 models</p>
<p><strong>4) ENSEMBLE</strong></p>
<p>Ensembling was done using oof predictions, finding the best weights with scipy by minimizing the negative amex metric.<br>
For the ensemble with the public DART model, instead, I used the weights which optimize the public LB score.</p>
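<p>A minimal sketch of that weight search (amex_metric, oof_preds and y_true are assumed to exist already, e.g. from the public metric implementations and the saved OOF predictions):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def neg_amex(weights, oof_preds, y_true):
    # keep weights positive and summing to 1
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-9)
    blend = oof_preds @ w                      # oof_preds: (n_samples, n_models)
    return -amex_metric(y_true, blend)         # amex_metric assumed defined elsewhere

n_models = oof_preds.shape[1]
res = minimize(neg_amex, x0=np.full(n_models, 1.0 / n_models),
               args=(oof_preds, y_true), method='Nelder-Mead')
best_weights = np.abs(res.x) / np.abs(res.x).sum()
</code></pre>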
<p>That was it in summary. Hope to see you guys in future competitions.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2987349%2F6a7c39a66bab9c3786282ce5d250b6cb%2Fkaggle%20amex.JPG?generation=1661421546923377&alt=media" alt=""></p>
<p>Best Regards,<br>
Paulo</p> |
American Express - Default Prediction | [16th place solution] Features Diversity and Ensemble | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a ton, we made our final subs yesterday. our best sub was 0.80839 which we didn't choose. We also tried one late submission it turned out that if we would have given more weight to TABNET and LGBM we would be on the 13th spot with 0.80846. However, I am still happy with what we achieved.<br>
Regards</p>
<p>We would like to thank the organizers and the Kaggle community for providing such a great competition. <br>
I would like to thank <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> and <a href="https://www.kaggle.com/eventhorizon28" target="_blank">@eventhorizon28</a> for their support and contribution, our team's collective hard work helped us achieve this position. <br>
Special thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>, <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/jiweiliu" target="_blank">@jiweiliu</a>, <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> for the awesome work they published without their analytics, This competition would have a different direction from what it is now.</p>
<p>Here brief Explanation of our solution.</p>
<p><em>FEATURE ENGINEERING</em><br>
<em>DIVERSITY IN MODELS</em></p>
<h4>Feature Engineering</h4>
<p>We used different features for training the various models (mean, std, and last features were common). We trained our models in three ways:</p>
<ol>
<li>using only HMA(hull moving average) features</li>
<li>using only diff(features)</li>
<li>using HMA + diff features ( worked only with cat boost)</li>
</ol>
<p>Using all the diff features was not the right call for some of the models like NN and XG Boost, as they introduced some leakage while training and CV and LB didn't correlate at all. For us, HMA features proved to be much better features than diff features.</p>
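<p>For reference, a sketch of a Hull moving average per customer (one common definition, not necessarily the exact variant used here): HMA(n) = WMA(2 * WMA(n/2) - WMA(n), sqrt(n)), where WMA is a linearly weighted moving average. Column and variable names in the usage comments are assumptions.</p>
<pre><code>import numpy as np
import pandas as pd

def wma(s: pd.Series, n: int) -> pd.Series:
    """Linearly weighted moving average over a window of n statements."""
    w = np.arange(1, n + 1)
    return s.rolling(n).apply(lambda x: np.dot(x, w) / w.sum(), raw=True)

def hma(s: pd.Series, n: int) -> pd.Series:
    """Hull moving average: smoother than a plain rolling mean but with less lag."""
    return wma(2 * wma(s, n // 2) - wma(s, n), max(int(np.sqrt(n)), 1))

# hypothetical usage, one column at a time on data sorted by customer_ID and date:
# hma6 = train.groupby('customer_ID')['P_2'].apply(lambda s: hma(s, 6))
# last_hma = hma6.groupby(level=0).last()   # last non-NaN HMA value per customer
</code></pre>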
<h4>Models</h4>
<p>We used various models that include, LGBM, XG Boost, Cat Boost, and Neural Networks with 2 different architectures and TABNET.</p>
<p>Here are our best single model scores</p>
<table>
<thead>
<tr>
<th>models</th>
<th>Cross Val</th>
<th>private LB</th>
<th>public LB</th>
<th>Description</th>
<th>Core Features</th>
</tr>
</thead>
<tbody>
<tr>
<td>LGBM</td>
<td>.7973</td>
<td>0.80687</td>
<td>0.79906</td>
<td>3 models with different seed+ 2 public models</td>
<td>Diff+Last</td>
</tr>
<tr>
<td>XG Boost</td>
<td>.7972</td>
<td>0.80639</td>
<td>0.79718</td>
<td>3 models with different seed+ 1 public model</td>
<td>HMA+Last</td>
</tr>
<tr>
<td>CAT Boost</td>
<td>.7952</td>
<td>0.80468</td>
<td>0.79614</td>
<td>3 models with different seed</td>
<td>HMA+diff+Last</td>
</tr>
<tr>
<td>NN-1</td>
<td>.7923</td>
<td>0.80190</td>
<td>0.79240</td>
<td>3 models with different seed</td>
<td>HMA+Last</td>
</tr>
<tr>
<td>NN-2</td>
<td>.7921</td>
<td>0.80186</td>
<td>0.79188</td>
<td>2 models with different seed</td>
<td>diff + Last</td>
</tr>
<tr>
<td>TABNET</td>
<td>.7933</td>
<td>-</td>
<td>-</td>
<td>single model trained last day to introduce diversity</td>
<td>HMA + Last</td>
</tr>
</tbody>
</table>
<h4>ENSEMBLE</h4>
<p>Since we had such a great CV-LB correlation, we used Optuna to choose the ensemble weights and performed a rank ensemble for our submissions. However, since some good public models didn't have OOF predictions, we had to assign their weights manually.</p>
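<p>For reference, the rank ensemble itself is only a few lines; a hedged sketch (the submission frames and weights below are placeholders; in practice the weights came from Optuna or were set manually):</p>
<pre><code>import pandas as pd

# placeholder submission frames, all aligned on customer_ID
preds = pd.DataFrame({
    "lgbm": sub_lgbm["prediction"].values,
    "xgb": sub_xgb["prediction"].values,
    "nn": sub_nn["prediction"].values,
})
weights = {"lgbm": 0.5, "xgb": 0.3, "nn": 0.2}

ranked = preds.rank(pct=True)                        # raw scores -> percentile ranks
blend = sum(ranked[m] * w for m, w in weights.items())
submission = pd.DataFrame({"customer_ID": sub_lgbm["customer_ID"], "prediction": blend})
</code></pre>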
<p>Things that we were not able to try:</p>
<ol>
<li>Using B_29 predicting the missing values and then using them as features</li>
<li>LGBM with HMA feature that would be our best public model</li>
<li>pseudo-labelling using just private LB data</li>
</ol>
<p>Notebooks that helped us:</p>
<ul>
<li><a href="https://www.kaggle.com/code/cdeotte/xgboost-starter-0-793" target="_blank">https://www.kaggle.com/code/cdeotte/xgboost-starter-0-793</a></li>
<li><a href="https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977" target="_blank">https://www.kaggle.com/code/ragnar123/amex-lgbm-dart-cv-0-7977</a></li>
<li><a href="https://www.kaggle.com/code/roberthatch/xgboost-pyramid-test-predictions" target="_blank">https://www.kaggle.com/code/roberthatch/xgboost-pyramid-test-predictions</a></li>
<li><a href="https://www.kaggle.com/code/ambrosm/amex-keras-quickstart-1-training" target="_blank">https://www.kaggle.com/code/ambrosm/amex-keras-quickstart-1-training</a></li>
<li><a href="https://www.kaggle.com/code/werus23/amex-keras-with-tpu" target="_blank">https://www.kaggle.com/code/werus23/amex-keras-with-tpu</a></li>
<li><a href="https://www.kaggle.com/code/raddar/understanding-na-values-in-amex-competition" target="_blank">https://www.kaggle.com/code/raddar/understanding-na-values-in-amex-competition</a></li>
<li><a href="https://www.kaggle.com/code/raddar/the-data-has-random-uniform-noise-added" target="_blank">https://www.kaggle.com/code/raddar/the-data-has-random-uniform-noise-added</a></li>
<li><a href="https://www.kaggle.com/code/jiweiliu/amex-catboost-rounding-trick" target="_blank">https://www.kaggle.com/code/jiweiliu/amex-catboost-rounding-trick</a>.</li>
<li><a href="https://www.kaggle.com/code/cdeotte/xgboost-starter-0-793" target="_blank">https://www.kaggle.com/code/cdeotte/xgboost-starter-0-793</a></li>
</ul> |
American Express - Default Prediction | [Place 17th Solution]: Pseodo-label + FE. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">@chaudharypriyanshu</a>, I cannot explain to be honest, it was an experiment and performed much better with my features compared to max_min or standard sclaer;</p>
<p>Talking about Rectified Adam, I didn't adjust too much:</p>
<pre><code>opt = tfa.optimizers.RectifiedAdam(
    lr=1e-5,
    total_steps=10000,
    warmup_proportion=0.1,
    min_lr=1e-8,
)
</code></pre>
<p>Something similar to that. I have to start my workstation to find the exact settings; maybe tomorrow I can tell you. Today I am just relaxing with my laptop.</p>
<p>Thank you !</p>
<p>Thank you Daisy. This competition needs heavy computation resources. I have to work in parallel and take care of my small child, so I don't think I can manage more than one competition at a time. Maybe later, when I have more time and better skills, I can be like some of the best players managing several projects in parallel. I will try.</p>
<p>thanks, hi, average.</p>
<p>I used the rank-averaged prediction, selected the >= 0.95 and <= 0.05 parts, and set their targets to 1 and 0 respectively in order to generate extra data, which I added to the training dataset of each fold.</p>
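<p>A minimal sketch of that step (<code>test_rank</code> and <code>test_features</code> are placeholders for the rank-averaged test prediction and the test feature table):</p>
<pre><code>import pandas as pd

pos = test_rank[test_rank >= 0.95].index
neg = test_rank[test_rank <= 0.05].index

pseudo = test_features.loc[pos.union(neg)].copy()
pseudo["target"] = 0
pseudo.loc[pos, "target"] = 1

# the ~110k confident rows are then appended to the training data of each fold
fold_train = pd.concat([fold_train, pseudo], axis=0)
</code></pre>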
<p>A quick explanation of my approach is given by the following flowchart:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1025394%2F1d24af17ebcd9493b56d8d7038b4fae3%2Fflowchart.PNG?generation=1661442385065366&alt=media" alt=""></p>
<p>Generally, what I did was mainly feature engineering, stacking, and creating 110k+ pseudo-labels for semi-supervised learning at the 2nd-level stacking. I didn't perform any model optimization at all and only used the defaults.</p>
<p>I had great fun, however. Gonna take a rest for a couple of days; see you guys soon in the next one.</p>
<p>I just filled them all with -100 :). np.log1p() * np.sign(df).</p>
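<p>(One reading of that shorthand, as a small sketch: fill the missing values with -100 first, then apply a sign-preserving log1p.)</p>
<pre><code>import numpy as np

df = df.fillna(-100)                              # missing values
df_scaled = np.sign(df) * np.log1p(np.abs(df))    # sign-preserving log1p transform
</code></pre>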
<p>Hi, Tonghui, thanks. I don't think I have a special or secret structure for it. Mostly it is, I think, because of FE; I used the np.log1p() function to standardize the data and Rectified Adam instead of Adam as the optimizer. No magic. For stacking, it brings almost no benefit compared to my 0.791 version, due to the lack of feature diversity.</p>
<p>:) :) ty ty</p>
<p>Thanks Tonghui !</p>
<p>Hi, Chris, thank you. I also noticed that :) I am feeling great now! </p>
<p>Hi, thanks. I meant: convert S_2 to numerical values and divide by (S_2.max() - S_2.min()) for each customer_ID.</p>
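<p>As a rough sketch (only illustrative; customers with a single statement would need special handling):</p>
<pre><code>import pandas as pd

df["S_2"] = pd.to_datetime(df["S_2"]).astype("int64")    # statement date as a number
grp = df.groupby("customer_ID")["S_2"]
df["S_2_scaled"] = df["S_2"] / (grp.transform("max") - grp.transform("min"))
</code></pre>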
American Express - Default Prediction | [Silver Medal] Our secret sauce (was using macro data) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: American Express - Default Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since the training data ranged from 2017-03-1 to 2018-03-31, we couldn't do much with the year. In order to add some temporal context, we included stock market indices such as the S&P 500 and the DOW JONES. </p>
<p>We thought that if the economic situation worsens, some individuals might find it hard to meet their payments.</p>
<p>In the beginning we did some testing with a simple XGB model, and the version that included the stock market indices yielded a CV that was 0.002 higher. I just ran the models again and CVs were 0.777 and 0.779 (as I said, the model trained on a really simple dataset😬). After that, we included the stock market indices in every dataset we created.</p>
<p>As <a href="https://www.kaggle.com/chenjunlin99" target="_blank">@chenjunlin99</a> said, we were allowed since it was publicly available information. In particular, we obtained the Dow Jones data from <a href="https://www.wsj.com/market-data/quotes/index/DJIA/historical-prices" target="_blank">here</a> and the S&P 500 data from <a href="https://www.wsj.com/market-data/quotes/index/SPX/historical-prices" target="_blank">here</a>.</p> |
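<p>For reference, joining that kind of daily index data onto the statement dates is straightforward; a hedged sketch (file and column names are placeholders for the downloads above, and non-trading days would still need forward-filling):</p>
<pre><code>import pandas as pd

sp500 = pd.read_csv("sp500_historical.csv", parse_dates=["Date"])   # Date, Close, ...
dow = pd.read_csv("djia_historical.csv", parse_dates=["Date"])

train["S_2"] = pd.to_datetime(train["S_2"])
train = train.merge(sp500[["Date", "Close"]].rename(columns={"Close": "sp500_close"}),
                    left_on="S_2", right_on="Date", how="left").drop(columns="Date")
train = train.merge(dow[["Date", "Close"]].rename(columns={"Close": "djia_close"}),
                    left_on="S_2", right_on="Date", how="left").drop(columns="Date")
</code></pre>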
Avito Context Ad Clicks | 2st PLACE - WINNER SOLUTION - Gzs_iceberg | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Context Ad Clicks
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have fixed it. </p>
<p>Sorry about that.</p>
<p>Hi, all</p>
<p>First, thanks to the Kaggle team, kagglers and Avito for such a great competition.</p>
<p>All codes and documents, please see <a href="https://github.com/Gzsiceberg/kaggle-avito">here</a></p>
<p>Your comments are very welcome.</p>
<p>Best regards</p>
<p>Gzs_iceberg</p> |
Avito Demand Prediction Challenge | "Dance with Ensemble" Sharing Thread | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1 epoch took about 3 mins for the final model. And each model is about 8 epochs. I have a machine with 64GB RAM, but you can do it with 32 or 48GB.</p>
<p>yes.</p>
<p>The RNN code is surprisingly easy here</p>
<pre><code>seq_title_description = Input(shape=[max_seq_description_length], name="seq_description")
# embedding layer initialised with pre-computed fastText vectors and kept frozen
emb_seq_title_description = Embedding(vocab_size, EMBEDDING_DIM1, weights=[embedding_matrix1], trainable=False)(
    seq_title_description)
emb_seq_title_description = SpatialDropout1D(0.25)(emb_seq_title_description)
# two stacked LSTMs; only the final state of the second one is passed on
rnn_layer1 = CuDNNLSTM(128, return_sequences=True)(emb_seq_title_description)
rnn_layer1 = CuDNNLSTM(128, return_sequences=False)(rnn_layer1)
# dense head on top of the text representation before concatenation with the other inputs
fc_rnn = Dense(256, init='he_normal')(rnn_layer1)
fc_rnn = PReLU()(fc_rnn)
fc_rnn = BatchNormalization()(fc_rnn)
fc_rnn = Dropout(0.25)(fc_rnn)
</code></pre>
<p>If you are using Keras you should use the CuDNN implementation, which would be much faster. It is surprising that you needed 26 epochs to converge. For me it converged after 6 or so, and I ran it for 8 because 8 seemed to give a slightly better average with different seeds. For each fold I ran 8 epochs (some folds actually only needed 5, but I just let it "overfit" a bit), with 5 folds, so that is 40 epochs per seed. And I ran 4 seeds and then averaged them. So in total it is 40 * 4 = 160 epochs. Before adding all the engineered features I remember each epoch took about 100 seconds, so in total it is only about 16,000 seconds, less than 5 hours.</p>
<p>Hope it helps.</p>
<ol>
<li>I had. But with VGG16 and ResNet50 included, I couldn't get any improvements with others added.</li>
<li>I didn't try anything other than fastText. Maybe I should have...</li>
</ol>
<p>For the VGG16 and ResNet50 features I just did the predictions with the pretrained model and saved them for late use. </p>
<p>When I was fine tuning them, I had to reduce the batch size and it was still very slow for 224 * 224... Didn't really know a way to accelerate that... So I didn't do much search there which means maybe there is a big potential there that I didn't have the computing resources to explore.</p>
<p><strong>Preface</strong></p>
<p>First of all, I would like to thank Avito and Kaggle for hosting such an interesting competition! Lots of ways to do feature engineering, data quality is not bad, and data size is arguably accessible to everyone. </p>
<p>I would also like to thank my three awesome teammates, Arsenal, Georgiy Danshchin and thousandvoices (ordered alphabetically) ! You all are truly amazing feature engineering masters! </p>
<p>Arsenal has been a long-time friend of mine and we decided to go with “Dance with Ensemble” again in honor of our last loss under the same team name (which was 3 years ago already!) I believe that time we dropped out of top 3 mainly because we didn’t use NN at all. And the funny thing is, this time (R)NN is one of the main reasons that we were always staying ahead of other teams.</p>
<p><strong>Summary</strong></p>
<p>So our approach is probably not different from what other top teams did at a high level. We have some lgb models, some NN models and some xgb models as the first layer, some lgb models, some xgb models and some NN models as the second layer, and one NN as the final layer. But honestly, the complicated structure (3 layers) probably gave us about a 0.0002 - 0.0004 improvement. Just several models with a simple linear stacking should be able to achieve not exactly the same, but a quite similar score.</p>
<p>A few days ago our best single models were both 215X (on public LB) for NN and lgb. And then, amazingly, Georgiy Danshchin discovered a few features based on active train+test and immediately it boosted the best single lgb to 213X! Simply including them in my NN didn't help much, but I couldn't find much time to tweak it (and honestly I was lazy at that point). I think in the end it had a 0.0007-ish improvement on our final score. I will leave this black magic to Georgiy Danshchin to disclose (Hint: RNN is involved there).</p>
<p>Stacking is extremely important here, which means that building diversified models are extremely important. I remember when four of us merged, just a linear blending of our models could get to 0.2133. </p>
<p><strong>NN</strong></p>
<p>I exclusively worked on NNs for this one and didn’t do much feature engineering otherwise. So I would like to share how you can achieve 0.215X with a single NN, and leave the rest (truly amazing stuff) to my awesome teammates.</p>
<p>All features matter here. Text, categorical, numerical, images (and probably in this order). And to my best memory, here is how I did it:</p>
<ul>
<li>I got 0.227X with numerical features and categorical embedding</li>
<li>And then I included title and description with 2 RNNs, with fastText pretrained embeddings; with some tuning, the score dropped to 0.221X.</li>
<li>Played with self training fastText embedding on train+test, and also train active, test active. It turned out that self training on train+test was the best. Score got to 0.220X.</li>
<li>Added the VGG16 top layer with average pooling. It made my score worse. Did some tuning, specifically, had a separate layer before merging text, image, categorical and numerical features together, and started to see the improvement. Got to about 0.219X.</li>
<li>Tried to tweak text models, with CNN or Attention etc. None worked. In the end, went with 2 layer LSTM followed by a dense layer. Probably 0.0003 improvement here.</li>
<li>Tried different CNN models for images. None of the "fine tuning" models worked (and GOD it was slow). But fixed ResNet50 middle layer helped by probably another 0.0005. Now the score became 0.218X.</li>
<li>Started doing all sorts of tuning (based on intuition mostly). And found that adding spatial dropout between text and LSTM helped quite a bit, probably 0.0007 - 0.001. And fine tuned dropout ratio overall helped too. In the end, about 0.001 - 0.0015 improvement here. So now the score was around 0.2165 - 0.217.</li>
<li>Started including all engineered features from teammates. Lots of engineered features from them (the ones based on text) didn't help but others did. So in the end, a NN with 0.215X!</li>
<li>If you kept saving models along the way (models with fewer features, models with more features that got worse result, etc.), you could train a fully connected NN on top of them and for me it was around 0.008 improvement in addition. In other words, you can easily get into top 10 with only NN!</li>
</ul>
<p>I also attached a simple sketch of what the model architecture looks like. </p>
<p>And this thread is <strong>To Be Continued</strong> by my amazing teammates!</p>
<p>Jump to: </p>
<p><strong>Arsenal's approach:</strong>
<a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349563">https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349563</a></p>
<p><strong>thousandvoices's approach:</strong>
<a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349386">https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349386</a></p>
<p><strong>Georgiy Danshchin:</strong>
<a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349710">https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#349710</a></p>
<p><img src="https://pbs.twimg.com/media/DgvX3pWUYAAlCAK.jpg:large" alt="NN Model"></p>
<p>Once again, thank you all for this amazing experience!</p>
<p>one GTX 1080ti</p>
<p>Oh yeah, that was due to a silly name that I forgot to change. I trained LSTMs on title and description separately. Separating them probably gives about a 0.0002 improvement.</p>
<p>That is interesting. I didn't investigate if it was batchnorm or other causes but when I started adding image features it made my model worse and only after using separate layers the model got better. You might want to try that out on your features to see if that would help. My intuition of separate dense layer for each input is that they act like a block/gate to force them to be transformed before concatenating. So you can also in theory tweak the unit size to assign different "weights" to them.</p>
<p>@Darragh,
Actually, title and description were read separately :( it was just dummy name really.... Reading them separately gave about 0.0002-0.0003 improvement.</p> |
Avito Demand Prediction Challenge | 11th Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/backaggle">@backaggle</a> thanks and glad that my kernel helps.</p>
<p>In theory, you can stack as many layers as you want, but personally I never tried to go beyond level 3. As you see, the 2nd team did a 6-layer stacking, which is quite an eye opener for me :) </p>
<p>From my point of view, if you want to do multilayers stacking, you would need to spend time and work out a good stacking workflow for you - so that you can generate oof predictions on different layers & different algorithms, but at the same time manage the large amount of files generated </p>
<p>What a competition this has been - it was such a rare opportunity to be able to tackle image, text, tabular and time-series problems in one single competition. I personally found that the ability to iterate and execute ideas as quickly as possible is really critical in a competition like this, and it is for the same reason that I really admire all the solo participants who did well here, none more so than <a href="/dmitrylarko">@dmitrylarko</a>, <a href="/kelexu">@kelexu</a> and <a href="/wenbozhao">@wenbozhao</a>. Despite managing to finish in the gold medal zone, it is obvious to me that the distance between us and the top teams is very big, so it is somewhat humbling to see there is still so much room to improve. We will definitely keep going to get better.</p>
<p>Our team names during this competition more or less reflect our journey in the last 7 weeks when we were active in the competition. First of all, 每天进步一点点 means making small progress every day, while 挣扎到最后一刻 means fighting til the bitter end. I don't think we came across any single magical feature that improved our score by more than 0.001, and we were working frantically until the last seconds of the competition to make sure we could stay in the gold medal zone. The fight for the last gold medal positions was probably the most intense that I have witnessed, with multiple teams making sudden big jumps, so we are really happy that we managed to stay with them. </p>
<p>I started the competition with <a href="/cxl923cc">@cxl923cc</a>, <a href="/zhiqiangzhong">@zhiqiangzhong</a> and <a href="/yl1202">@yl1202</a>, we are close friends and colleagues with each other and wanted to use this exercise to form a group that can share the burden of kaggle competitions (seems it worked out well). Towards the merger deadline, we formed with <a href="/mzr2017">@mzr2017</a> and <a href="/oyxuan">@oyxuan</a> - because their modelling approach was quite different from ours, and the two groups complemented each other well. </p>
<h2>Feature Engineering</h2>
<p>I will describe the features we have been working on within our original team and let our teammate add theirs in following threads. Most of our features are already covered by other top solutions, but anyway I will provide a list here. The following are shared by all models:</p>
<ul>
<li>text features with TFIDF vectorizers for title, description and params; we played around with many different combinations of TFIDF parameters, and it all added to our model diversity.</li>
<li>SVD of TFIDF vectorizer features </li>
<li>text statistical features such as length of text, number of numbers</li>
<li>text features on ngrams, text distance features </li>
<li>various groupby statistics between different categoricals </li>
<li>aggregated features like the one shared in the kernels, </li>
<li>LDA features </li>
<li>Image features like the ones extracted from pretrained models, as well as dullness, brightness</li>
<li>rolling statistics such as the number of ads in the same category in different time windows</li>
</ul>
<p>For non-NN models we specifically also created:</p>
<ul>
<li>rnn extracted features: RNN features extracted from an RNN architecture (Bidirectional-GRU, attention, global max, global avg) and fed into non-NN models as tabular features. </li>
<li>sentence2vec features</li>
</ul>
<p>Both groups of features played important roles in our models.</p>
<p>For the NN models, we made heavy use of pre-trained word embedding models: three variants of FastText models shared in the external data thread, and a self-trained embedding model trained on all data. </p>
<p>Aside from the above, we also added the following features, trained in a five-fold OOF manner:</p>
<ul>
<li>"zero deal probability rate" - five-fold oof probability prediction on if the deal probability is zero. </li>
<li>target encoding features - again five-fold oof target encoding features for each original categorical features apart from user-id </li>
</ul>
<p>All these OOF features are used directly at level 1 of the stacking, alongside the other level-1 meta features (OOF train and test).</p>
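<p>For readers who have not built these before, a minimal sketch of five-fold OOF target encoding (column names are placeholders, not our exact implementation):</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold

def oof_target_encode(train_df, test_df, cat_col, target_col, n_splits=5, seed=42):
    prior = train_df[target_col].mean()
    train_enc = np.zeros(len(train_df))
    for fit_idx, enc_idx in KFold(n_splits, shuffle=True, random_state=seed).split(train_df):
        means = train_df.iloc[fit_idx].groupby(cat_col)[target_col].mean()
        train_enc[enc_idx] = train_df.iloc[enc_idx][cat_col].map(means).fillna(prior).values
    means = train_df.groupby(cat_col)[target_col].mean()
    test_enc = test_df[cat_col].map(means).fillna(prior).values
    return train_enc, test_enc
</code></pre>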
<h2> Modelling Approach </h2>
<p>We have used a wide range of models/algorithms in this competition. LightGBM with different parameters, XGB, Wordbatch FTRL_FM, Ridge, LinearSVR, RNN with Keras, Fully-connected NN with keras.</p>
<p>In the last week of the competition, <a href="/mzr2017">@mzr2017</a> found out that using xentropy in LightGBM and building models by different parent categories helped to improve scores and give more diversity, and we started to retrain our better models with the corresponding settings. We achieved our best LB model at public LB 0.2186 with xentropy training on all data; it was a model with 400+ features and some dense RNN & sentence2vec features, and it trained for more than 30 hours. Meanwhile, our best RNN model was around 0.2195 on the public LB, courtesy of our teammates.</p>
<h2> Stacking Approach </h2>
<p>Stacking turned out to be very effective in this competition, and we were able to generate stacking score that is almost 0.004 better than our best l1 models. </p>
<p>With the range of models/algorithms mentioned above, we generated 154 level-1 models. We found that almost all models added some diversity to the stacking, and all efforts to trim the selection resulted in worse CV/LB, so in the end we went with including all level-1 models. </p>
<p>The stacking approach we used was very similar to the one that I described in my <a href="https://www.kaggle.com/yifanxie/porto-seguro-tutorial-end-to-end-ensemble">ensemble kernel</a> in the Porto competition, except that we went for weight averaging at level 3 instead of stacking. I found that weight averaging always performed better at this level with a combination of lgb and keras level-2 OOF. We didn't attempt to perform stacking beyond level 3, which we could have done - but that way we would have had to start stacking a bit earlier in the competition. </p>
<p>We found that training with alternative objective functions at level 1, and retraining level-1 models on different combinations of categories, helped to increase the diversity. Having discovered this, we focused solely on retraining our existing models on the final day of the competition and managed to generate more than 40 models to add to our mix. This contributed a lot to our stacking effort, with more than 0.0005 gained on the last day. </p>
<p><a href="/ogrellier">@ogrellier</a> thank you for your kind words, a large part of the success was built on our previous teaming experience, so some of the credits must also go to you. Hopefully in the not so distant future we find some common space & time to work on another competition again.</p> |
Avito Demand Prediction Challenge | 14th Place Solution: The Almost Golden Defenders | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. We'll share even more soon!</p>
<p>This competition was very overwhelming. I’ve never seen a Kaggle competition before where we’ve had to intelligently deal with images, text, geospatial information, unlabeled additional data, numeric data, categorical data, and a low-key time series problem to boot! Not to mention that we still don’t even know what the dependent variable is and a few of the independent variables remain a mystery as well.</p>
<p>Congratulations to everyone who got a gold medal -- especially the soloists. In my opinion, anyone who got a solo gold in this competition deserves two gold medals. I have no idea how you’d even manage something like that given everything this competition has to offer.</p>
<p>We’re really excited that we pushed way farther than we ever thought would be possible in the final weeks, but are really astonished by the caliber of the competition that we faced. I’ve never seen such a competitive final week. ...If only there were a few more gold medal slots, as there certainly were quite a few deserving teams.</p>
<p>Naturally, spending most of the competition in the gold medal zone only to lose out in the end by a smidge is a humbling experience. It leads us to second guess everything we did and wonder where the extra few points could have come from that could have made the difference.</p>
<p>Anyways, I’ll stop complaining and try to make up for our second guessing with an instructive write-up.</p>
<h2>The Stacker</h2>
<p>I’m too lazy to make a pretty diagram, but Sijun was not too lazy, <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/60059#350998">so you can read more detail here</a>. The original plan was to stop at Level 3 with just a single LightGBM, but the competitiveness of the competition forced us to keep going to level 4. Here's some brief discussion, though see Sijun's additional analysis.</p>
<p><strong>Level 4</strong></p>
<p>We created a four-level model, using a special Lasso model to aggregate together submodels. The two key features of this Lasso model were that the weights were constrained to be only positive and the model was trained separately for each <code>parent_category_name</code>. We found that <code>parent_category_name</code> introduced a good degree of variability in the errors and capturing this by building models separately was good for a very small (we’re talking ~0.00002) boost in score compared to training just one Lasso overall. Creating this insane Level 4 stacking system was key to boosting us from 0.2149 LB to 0.2148 LB on the public leaderboard.</p>
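<p>A rough sketch of what such a blender can look like (assuming <code>oof_l3</code>/<code>test_l3</code> hold the level-3 predictions and the parent category is available for every row; the alpha is only illustrative):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Lasso

test_pred = np.zeros(len(test_l3))
for cat in train_meta["parent_category_name"].unique():
    tr = (train_meta["parent_category_name"] == cat).values
    te = (test_meta["parent_category_name"] == cat).values
    blender = Lasso(alpha=1e-5, positive=True)   # positive=True keeps all model weights >= 0
    blender.fit(oof_l3[tr], y[tr])
    test_pred[te] = np.clip(blender.predict(test_l3[te]), 0, 1)
</code></pre>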
<p><strong>Level 3</strong></p>
<p>On the Level 3, we had 3x LightGBMs trained using (1) the L2 models and a few key features (parent_category_name, price, 20-dimensional SVD of text embeddings and a few image features), (2) all the L2 models and all the features used in L1, and (3) the same as the first model but trained with a Poisson objective. We also trained an MLP, a Lasso, and the aforementioned grouped Lasso by <code>parent_category_name</code> on all the L2 models (but no other features). Each of these models were also lazily bagged with some previous versions of the model that did not have some of our final models, for a total of 13 L3 models (7 LightGBM, 4 MLP, 2 Lasso). The LightGBM stackers each got ~0.2149 on the public LB and the Lasso ones each got ~0.2153. This was a large jump from Level 2 and Level 1.</p>
<p><strong>Level 2</strong></p>
<p>There was a small bump between Level 1 and Level 3 where we added all of our Ridge models to our LightGBM combined with all of our features. We trained this with both normal (regression) and Poisson objectives. We also lazily bagged this model for a total of four copies (3 regression,1 Poisson) by adding in previous copies of the model that didn’t have some of our final features. Our best Level 2 model scored 0.2173 on the public LB.</p>
<p><strong>Level 1</strong></p>
<p>We trained a ton of features into LightGBMs and varied the kind of encoding for the categoricals (either one hot encoding, LightGBM’s built-in target encoding, or Bayesian target encoding), varied the objective (regression and Poisson), and did some lazy bagging (using prior copies of the model that didn’t have all the features) for a total of 19 LightGBMs. Our best LightGBM scored 0.2183 on the public LB.</p>
<p>On the NN front, we trained three CNNs with FastText and two RNNs with Attention and pooling. Our best NN we 0.2181 on the public LB. <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59917">You can see how our best NN looked here.</a> We also diversified our Level 3 blend by adding two NNs trained with multiclass and one on binary objectives, which provided a good boost.</p>
<p>We also trained a large number of Ridge models that were varied on training just on TFIDF for the title, just on TFIDF for the description, just on character-level TFIDF for the description, and on all text (concat of all TFIDFs). We then trained these models individually for each parent_category_name, cat_bin (described below), and parent_category_name - region interaction. We also trained several Ridges on all the data and included interaction terms in one. All told, there were 17 Ridge models. All of these Ridges (but no other L1 models) formed the basis of Level 2.</p>
<p>Lastly, we trained three FM models, one using Wordbatch and two using TFFM. These didn’t end up outperforming our best Ridge models.</p>
<h2>Features</h2>
<p>Our feature engineering work was very exhaustive and tended toward a kitchen sink. In the beginning, we would diligently add each feature into the model one-by-one and keep it if it improved the first fold. Toward the end of the competition, we ran out of time for that, so we would just dump tons of features in at once. The one-by-one approach is nice because frequently there were features that looked good but ended up making the model worse.</p>
<p>We added a lot of features, so we’ll be posting separately sometime next week about everything we did.</p>
<h2>Code Sharing</h2>
<p>We’re still cleaning the code and getting everything together, and hope to fully share all our code sometime next week.</p>
<h2>Until Next Time (Very Soon)</h2>
<p>See you guys more next week. I look forward to continuing to digest all the lessons learned and plotting my steps toward a real gold medal. We have some unfinished business. :)</p>
<p><img src="https://i.imgur.com/Qj8kqVY.jpg" alt="missed it by that much"></p>
<p>Thanks. Stacking towers are my favorite part of Kaggle. I look forward to stacking toward a gold after a few months off.</p> |
Avito Demand Prediction Challenge | 20th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly thank you to Avito for hosting such an awesome competition and thank you to both my teammates for all their hard work. I'll try to keep this brief.</p>
<p><strong>Price model from active data</strong></p>
<p>I actually wrote about this <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/57410">here</a> and am surprised that many others didn't do this - it was perhaps our single biggest success. Whilst a price model on active data was quite poor in error metric terms it made a large contribution to our score. We used NNs on all train and test active data and Rob trained a per category xgb price model after extracting lots of text features (numerical fields for things like cars/houses etc...) which helped a lot.</p>
<p><strong>NNs and LGBM/XGB</strong></p>
<p>Like others we had good success with NNs and trained a variety on different text embeddings (fasttext cc and wiki, self-trained w2v with and without stemming). We didn't add many other features to the NNs but did use a variety of architectures with and without attention. Modelling raw log1p(price) as a categorical and then embedding it worked well too, as did lower batch sizes and averaging predictions over several epochs. It was my first time using DL in a competition so I was very happy to be reaching ~0.2195 with a NN without many new features.</p>
<p><strong>CV/stacking</strong></p>
<p>Everything was done on a 5 fold basis and using a lgbm stacker. We got a lot of success from this but perhaps could have been more careful in trying to construct diverse base models and using a more rigorous feed forward selection. Antoine in particular did a lot of work ensuring we were robust in what we finally included.</p>
<p><strong>Other thoughts</strong></p>
<ul>
<li>We did try to train models to predict period from the active data to no avail.</li>
<li>We didn't use images beyond basic features</li>
<li>We tried translating all the text, probably too late, but google's api gave up on us. The hope was the translation, whilst average, would help with the cases in the Russian language - as well as letting us finally understand what was being sold!</li>
<li>We trained models on a few different targets (e.g a NN to predict if an item was a 0 or not, treating target as categorical) which then went into the stack. The pdf entropy from these classifications was a good feature also.</li>
<li>We did train a user_id model (only using users that were in both test and train) but ended up excluding any models which used user_id, as we were still finding the CV-LB relationship too unstable. I'd be interested to hear if anyone included it successfully.</li>
<li>I second some of the other comments about workflow: we saved all our transformations down and then simply read in the .pkl files and concatenated them. Some of the transformations were expensive and this allowed us to get going quickly.</li>
<li><a href="https://radimrehurek.com/gensim/">Gensim</a> is a cool package, though slightly steeper learning curve than sklearn.</li>
<li>A note on lgbm: we actually found better results by sometimes excluding strong categorical features, particularly from the stack. My intuition for this is that given lgb fits in a leaf-wise (best first) manner it perhaps over-uses categorical features which always offer easy splits but ultimately might not be the best choice/don't allow slower exploration of weaker features or more subtle relationships. Keen to hear what others think of this.</li>
</ul>
<p>I'll let Rob and Antoine add anything if they see fit. Thanks again for a fun competition!</p>
<p>Mark</p> |
Avito Demand Prediction Challenge | 22nd place solution [Team NoVices] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Avito for hosting this competition. This was a great dataset - so much to do.</p>
<p>We had 3 main models - LGB, XGB and NN. LGB Public score 0.2186, XGB Public score 0.2185, NN Public score 0.2186. We also did stacking and Yi Tang can talk more about it.
This was quite a team effort for us and I really enjoyed working with the team on this project.
Abhimanyu and YiTang were feature engineers and I was responsible for modeling LGB and XGB. Yiang Zheng handled NN. I had a few features of my own before we formed the team. Our score dramatically improved after I merged everyone’s features into one model.
I also did something similar to what Joe Eddy’s post mentions: I was merging or concatenating different feature groups (as pickled objects) - a sort of relational matching. I could quickly build the model and see what was working or not. </p>
<p>For some reason we couldn’t use LGB/XGB features in NN and vice versa so Yiang Zheng ended up creating his own feature set. </p>
<p>Image features - We had a lot of image features - all mentioned in public kernels or discussion forums. I used dask to generate many image features. It worked really well. Sample code on GitHub: <a href="https://github.com/rashmibanthia/Avito">https://github.com/rashmibanthia/Avito</a> </p>
<p>We had TFIDF features for title and description with a Russian stemmer - code on GitHub </p>
<p>Ridge features - Up until the last day we were using the features as-is, extracted from a public kernel. Then YiTang had the features generated with the same folds as our other OOF models, so we used YiTang’s ridge features to avoid any leakage. </p>
<p>Aggregate features - again as is from public kernel.
I’ll let my team elaborate more on the features:</p>
<p><strong>Yiang Zheng</strong> - I'll briefly talk about my nn model (0.2186 public, 0.2225 private), since @Little Boat and @Liu Jilong have already shared their nn architectures and their amazing work. It turns out all the NNs' overall architectures are quite similar and only differ in details. In short, there are four branches in the whole model dealing with categorical, numerical, image and text inputs separately, concatenated into one vector and followed by several dense layers. The thing I found useful is resizing all images to a small scale; I used 32 by 32 and trained a CNN on it as a part of the model. It gave me a significant boost. Little Boat used a middle layer of ResNet and Jilong used 64 by 64 here; I think a potential boost lies in using both. <br>
<a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880">https://www.kaggle.com/c/avito-demand-prediction/discussion/59880</a>
<a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59917">https://www.kaggle.com/c/avito-demand-prediction/discussion/59917</a>
For simplicity, I only trained one LSTM followed by [max, ave] pooling on title_description as the text info; as Little Boat mentioned, that might lose roughly a 0.0002 boost, though it increased the speed a bit. There is not too much to mention about the categorical embeddings, only tuning the embedding sizes based on understanding. Last, for numerical data, I did find category-based features quite useful, like the mean price over all categories. I didn't think of adding more interactions between categories, which I really should have done. A more clever way might be creating more categories, as mentioned by Webber here: <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59886">https://www.kaggle.com/c/avito-demand-prediction/discussion/59886</a> Thanks for my teammates' hard work and everyone's sharing; I indeed enjoyed this wonderful Kaggle journey.</p>
Avito Demand Prediction Challenge | 25th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am refactoring into a GitHub repository. Since I did a picture of the architecture, I'd like to share here</p>
<p><img src="http://github.com/khumbuai/kaggle_avito_demand/raw/master/avito_NN_overview.jpeg" alt="enter image description here"></p>
<p>I used keynote. The file is in the git repo: <a href="https://github.com/khumbuai/kaggle_avito_demand">https://github.com/khumbuai/kaggle_avito_demand</a></p>
<p>If you sort the data for each user_id by item_seq_number you get a sequence for the price. E.g.</p>
<p>user_id =5, item_seq_number = 1, price = 10 </p>
<p>user_id =5, item_seq_number = 5, price = 20 </p>
<p>user_id =5, item_seq_number = 6, price = 100</p>
<p>would result in the sequence [10,20,100]. We then derived delta features between price of previous item and actual item (we actually used previous 2 and following 2, but for illustration I only show delta +- 1):</p>
<p>user_id =5, item_seq_number = 1, price = 10, delta_-1 = N/A, delta_1 = 10</p>
<p>user_id =5, item_seq_number = 5, price = 20, delta_-1 = -10, delta_1 = 80 </p>
<p>user_id =5, item_seq_number = 6, price = 100, delta_-1 = -80, delta_1 = N/A</p>
<p>If we had this idea earlier I would have tried to actually use RNN on the whole sequence within the NN architecture :)</p>
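<p>In pandas, those delta features look roughly like this (a sketch; as mentioned we used offsets of +-1 and +-2):</p>
<pre><code>import pandas as pd

df = df.sort_values(["user_id", "item_seq_number"])
grp = df.groupby("user_id")["price"]
for k in (1, 2):
    df[f"price_delta_minus_{k}"] = grp.shift(k) - df["price"]    # previous item minus current
    df[f"price_delta_plus_{k}"] = grp.shift(-k) - df["price"]    # following item minus current
</code></pre>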
<p>Thank you for valuing my contribution. That means a lot and encourages to share my ideas in further competitions. </p>
<p>10-fold CV takes around 6h on a 1070 Ti. The average number of training epochs before early stopping is around 6.</p>
<p>Sorry that I did not put it in code. prense() is just a shortcut I used.</p>
<pre><code>def prense(x, units):
x = Dense(units)(x)
x = PReLU()(x)
return x
</code></pre>
<p>First of all, thanks to kaggle as well as avito for this challenging competition. </p>
<p>Let me quickly give an overview of my part of our solution.</p>
<p><strong>Summary</strong></p>
<p>Until the last day we used a single-layer stacking of different NN and LGB models. For the final submission we also trained our stack with xgb and blended the results with our LGB stacker 1:1.
The best NN scored 0.2194, the best LGB 0.2196. Some notes on what helped and did not help.</p>
<ul>
<li>I spent a lot of time trying different factorization methods like LDA, libfm etc. but none improved my score</li>
<li>What helped a lot were features derived in public kernels, like city population from Wikipedia, etc. </li>
<li>I made a test of pseudo-labelling and found out it helped, but we had no submissions or time left to incorporate it</li>
<li>Features from pretrained image models (DenseNet, ILGNet, Ava) </li>
<li>w2v embeddings trained from train active as described in one of my kernels</li>
<li>a higher number of folds significantly improved the score, even more so when bagging the same models with different seeds</li>
</ul>
<p><strong>Feature engineering</strong></p>
<p>We tried/ used most of the features from public kernels, however two interesting features we found were:</p>
<ul>
<li>number of duplicates per user. While Benjamin drops duplicates in his kernel we considered the number of duplicates important and this helped</li>
<li>my teammate had a great idea seeing item_seq_number as a sequence. (sometimes the obvious is hard to see) and came up with the idea to use the price difference of consecutive items as feature, which improved LB a lot </li>
</ul>
<p><strong>NN structure</strong></p>
<p>I used the following NN structure, where I did TFIDF on character level on param_3 and w2v embeddings on text. I fed all numerical features we had (including a ridge model trained with price as target)</p>
<pre><code>def build_model():
sparse_params = Input(shape=[X_train['sparse_params'].shape[1]], dtype='float32', sparse=True, name='sparse_params')
categorical_inputs = []
for cat in cat_cols:
categorical_inputs.append(Input(shape=[1], name=cat))
categorical_embeddings = []
for i, cat in enumerate(cat_cols):
categorical_embeddings.append(
Embedding(embed_sizes[i], 10, embeddings_regularizer=l2(0.00001))(categorical_inputs[i]))
categorical_logits = Concatenate()([Flatten()(cat_emb) for cat_emb in categorical_embeddings])
categorical_logits = prense(categorical_logits, 256)
categorical_logits = prense(categorical_logits, 128)
numerical_inputs = Input(shape=[len(num_cols)], name='numerical')
numerical_logits = numerical_inputs
numerical_logits = BatchNormalization()(numerical_logits)
numerical_logits = prense(numerical_logits, 256)
numerical_logits = prense(numerical_logits, 128)
params_logits = prense(sparse_params, 64)
params_logits = prense(params_logits, 32)
desc_inp = Input(shape=[max_len_desc], name='desc')
title_inp = Input(shape=[max_len_title], name='title')
embedding = Embedding(nb_words, embed_size, weights=[embedding_matrix], trainable=False) # nb_words
emb_desc = embedding(desc_inp)
emb_title = embedding(title_inp)
emb_text = Concatenate(axis=1)([emb_desc,emb_title])
text_logits = SpatialDropout1D(0.2)(emb_text)
text_logits = Bidirectional(CuDNNLSTM(128, return_sequences=True))(text_logits)
text_logits = Conv1D(64, kernel_size=3, padding="valid", kernel_initializer="glorot_uniform")(text_logits)
avg_pool = GlobalAveragePooling1D()(text_logits)
max_pool = GlobalMaxPool1D()(text_logits)
text_logits = Concatenate()([avg_pool, max_pool])
x = Dropout(0.2)(text_logits)
x = Concatenate()([categorical_logits, text_logits])
x = BatchNormalization()(x)
x = Concatenate()([x, params_logits, numerical_logits])
x = Dense(512, kernel_initializer=he_uniform(seed=0))(x)
x = PReLU()(x)
x = Dense(256, kernel_initializer=he_uniform(seed=0))(x)
x = PReLU()(x)
x = Dense(128, kernel_initializer=he_uniform(seed=0))(x)
x = PReLU()(x)
x = LayerNorm1D()(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[desc_inp] + [title_inp] + [sparse_params] + categorical_inputs + [numerical_inputs],
outputs=out)
model.compile(optimizer=Adam(lr=0.0005, clipnorm=0.5), loss='mean_squared_error',
metrics=[root_mean_squared_error])
return model
</code></pre> |
Avito Demand Prediction Challenge | 3 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you </p>
<p>Haha, I guess I can live with a 3rd place :)</p>
<p>1) Experimentally this often works. I guess this is because a model has a chance to look at this data from a slightly different perspective when you already have powerful models in. In other words, it tries to explore information it may have missed before. It is mostly trial and error.</p>
<p>2) No. You can see it as a form of bagging if you train on different folds; in my experience it does not change the results. People can use different folds.</p>
<p>Congratulations to the winners, especially <strong>Dance with Ensemble</strong>, your score was amazing from start to finish. Also, congrats to the solo gold medallists, and especially to the solo kaggler who claimed it was <strong>too hard to get a solo gold</strong> and eventually snuck in at #13; we have been secretly rooting for you.</p>
<p>Here we will outline a brief explanation of our solution.</p>
<h2>Ensemble/approach</h2>
<p>Our solution is the average of 2 ensembles. The first ensemble was trained using a standard 5-fold validation schema.</p>
<p>The second ensemble used a time-series schema. The test data begins a number of days past the final day included in the training data. We tried to replicate this in our internal validation. To do so, we trained on the first six days of training and used days ten through thirteen as validation. This meant days six through nine were excluded, mimicking the gap between train and test. Then, to generate predictions for the test data, we trained on the entire training set. </p>
<p>In order to generate likelihoods of categorical features for this approach, we always applied a gap of 4 days. For example, to estimate likelihoods for day four of the training data, we used the average of the target for day zero. To estimate likelihoods for day five, we used the likelihoods of (day0+day1)/2. We decided on a gap of four days for stability, as it gave similar CV and LB performance. </p>
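<p>One possible reading of that scheme, as a rough sketch (day indices and column names are illustrative, not our exact code):</p>
<pre><code>import pandas as pd

def gap_likelihood(df, cat, target="deal_probability", day_col="day", gap=4):
    out = pd.Series(index=df.index, dtype="float64")
    for d in sorted(df[day_col].unique()):
        past = df[df[day_col] <= d - gap]       # only days at least `gap` days older
        if len(past) == 0:
            continue
        means = past.groupby(cat)[target].mean()
        mask = df[day_col] == d
        out.loc[mask] = df.loc[mask, cat].map(means)
    return out
</code></pre>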
<p>Our best single models came from this approach. Our best single lgb clocked in at 0.2163, and our best single nn scored 0.2180.</p>
<h2>LightGBM features</h2>
<ul>
<li>Tf-idf on words (2 grams on description, 1gram for the params and
title)</li>
<li>Tf-idf on chars (5 grams)</li>
<li>Word2vec-based features on words (this worked better for us than fasttext)</li>
<li>Pretrained fasttext features on words</li>
<li>Image quality features (from <a href="https://www.kaggle.com/shivamb/ideas-for-image-features-and-image-quality">here</a>)</li>
<li>Vgg16 feature (from <a>here</a>
and <a>here</a> )</li>
<li>Vgg19 feature (similar to vgg16)</li>
<li>Resnet prediction (of object top 3) features</li>
<li>Inception prediction (of object top 3) features</li>
<li>Xception prediction (of object top 3) features</li>
<li>Some binnings of numerical features (like price)</li>
<li>Some group-by user type of features (like average number of words,
average number of days of Displaying ads)</li>
<li>Some text based counts (like upper counts, punctuation, counts of
emojis)</li>
<li>Some location features based on latitude and longitude</li>
<li>Likelihoods and counts on almost all 3-way interactions of categorical features (for the k-fold approach we excluded user_id interactions as they overfitted; for the time-series approach, user_id likelihoods gave a good boost)</li>
</ul>
<p>We primarily used <strong>xentropy</strong> for our objective, as probabilities are constrained between zero and one. </p>
<h2>Neural Networks</h2>
<p>Our best nn used two stacked, bidirectional GRUs on the text with concatenated embeddings of fasttext and those we trained on our own with word2vec, along with some of the numerical features used in LightGBM. All categorical features (minus user id) had embedding layers of size 100. Our objective was binary cross-entropy with sigmoid output. </p>
<p>A few small additions/key points:</p>
<ul>
<li>There was a dense input for the vgg16 and vgg19 features</li>
<li>Text was stemmed using nltk.</li>
<li>For text we combined description, title and params into one field, separated by a dividing character.</li>
</ul>
<p>In addition to the above network, we ran a NN for each feature channel (image, text, character) with minimal categorical embeddings and likelihood features. When stacking with just these models, they scored ~0.216 at the second level, which was comparatively weak next to LightGBM. </p>
<p>For these models, we primarily used each feature channel independently, then concatenated them with basic likelihood features and categorical embeddings before feeding them to dense layers for the output. While these models were individually weaker, they stacked well, and maximized the 2nd-level information from each feature channel.</p>
<h2>Stacking</h2>
<p>Both approaches include mostly lightGBM and neural nets. In addition, simple ridge models were used to improve performance for some lightGBM models, in a similar manner to a couple of shared kernels.</p>
<p>To stack our 5-fold approach, we used the time schema as explained above (0-5, 10-14) to validate and optimize hyperparameters. We also did a bit of re-stacking, adding some counts and likelihoods spanning user-based interactions of categorical features and price. Our second-level stack combined lightGBM, neural nets using 2 hidden layers, linear output, and an mse objective, and sklearn’s ExtraTreesRegressor. This approach scored 0.2145.</p>
<p>For the time-series approach, we found the best parameters using days [10,11] for training and the remaining days for validation. Restacking did not help here and we used lgb and nn with equal weights. This scored 0.2140.</p>
<p>Finally, a blend (35% of time stack + 65% of 5-fold stack) gave our best public (0.2136) and private score.</p>
<p>I am sure you will :)</p>
<blockquote>
<p>I remember from the toxic comment competition, that you advocated
trying pseudo-labeling in any competition.</p>
</blockquote>
<p>I tried that here and it did not work. </p>
<p>I tried with both lgb and nn.</p>
<p>I could see performance of pseudo-labeling was worse with lgb than nns. </p>
<p>Very similar to <a href="https://www.kaggle.com/frankherfert/region-and-city-details-with-lat-lon-and-clusters">these</a> </p>
<p>We created some additional clusters based on kmeans. </p>
<p>There are some competitions (especially when the data is not very big) where you can introduce leakage, no matter what cv you are doing. Sometimes it is good to ensure everyone is cving the same way to avoid any mistakes. Other than that, I have not seen huge differences, but I trust kohei if he says he has. </p>
<p>Yes, it is exactly that.</p> |
Avito Demand Prediction Challenge | 35th place: squeezing out the last -0.0011 RMSE | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@Abdul: Slow, but educational. Always cool when something unexpected works, like for me CatBoost vs XGBoost as a stacker.</p>
<p>I will not bore everyone with a complete overview of my 35th place solo solution. I can't wait to read over the actual top places solutions, especially <a href="https://www.kaggle.com/xiaozhouwang">Little Boat</a>'s who I have learned so much from!! Instead, I will go over how I squeezed the last -0.0011 out of my RMSE score, -0.0001 to -0.0003 at a time. It took a lot of experimenting to find what worked.</p>
<p>First a quick overview of my basic approach. I did a lot of feature engineering ending up with 170 numerical features derived from text properties, image properties, mining of test_active/train_active, price encodings, categorical stats, polynomial features, etc. </p>
<p>I had a team of GBT models (XGBoost and LightGBM) and neural net models (Keras). My word embedding was created entirely from Avito text. My GBT's included Tfidf and Countv on text, as well as <a href="https://github.com/nadbordrozd/blog_stuff/blob/master/classification_w2v/benchmarking_python3.ipynb">TfidfEmbeddingVectorizer</a> features. My NN's were BiLSTM/Conv1D and Conv1D/Conv1D similar to those in my DonorsChoose.org playground competition <a href="https://www.kaggle.com/shadowwarrior/1st-place-solution">1st place solution</a>. </p>
<p>At first I linearly stacked these models using out-of-fold predictions. I was at something like RMSE 0.2178 on the PLB, and searching for how to further reduce my RMSE. What worked:</p>
<p>1.) -0.0002: I did a DataFrame corr() on all of my numerical features, and pruned out features with correlations above 0.92 (see the sketch after this list).<br>
2.) -0.0003: For diversity I added into my stacker LR, GBT and NN models that worked exclusively on just numerical features, on just categorical features, on just text features and on just images. Individually these models scored poorly, but they helped the stack.<br>
3.) -0.0002: Non-linear stacking with XGBoost<br>
4.) -0.0002: Add in categorical features and price into the stacker, which turned out to be helpful in segmenting the predictions.<br>
5.) -0.0001: Using CatBoost as non-linear stacker! CatBoost didn't work well for me as a base model, but it's target encoding of categoricals helped with stacking.<br>
6.) -0.0001: Stacker of stackers! I stacked slight variations of my best XGBoost and CatBoost stackers.</p>
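<p>For anyone curious, here is a minimal pandas sketch of the correlation pruning from point 1 above. It is only an illustration of the idea, not my exact code; the 0.92 threshold comes from the description and the DataFrame name is a placeholder:</p>
<pre><code>import numpy as np
import pandas as pd

def prune_correlated(df_num, threshold=0.92):
    # Absolute pairwise correlations of the numerical features
    corr = df_num.corr().abs()
    # Keep only the upper triangle so each pair is checked once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    # Drop any feature that is highly correlated with an earlier one
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return df_num.drop(columns=to_drop), to_drop
</code></pre>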
<p>Thank you to Kaggle & Avito for a fun competition, and fellow Kagglers for pushing me to squeeze out those last -0.0001 RMSE's!</p>
<p>Thanks YaGana!</p>
<p>Thanks Ahmed. I thought ml-team was destined for gold. You came very close.</p>
<p>Thanks DUO. I enjoyed reading your many contributions in this competition.</p>
<p>I used the category hashing you introduced in <a href="https://www.kaggle.com/c/donorschoose-application-screening">DonorsChoose.org</a> in my NN! It works better for me than entity embedding.</p>
<p>Thanks! Stacking was really powerful in this competition.</p>
<p>Much appreciated and congrats on 14th place, Peter!</p>
<p>Thank you, Serigne, and congrats to you for continuing your string of top finishes!</p> |
Avito Demand Prediction Challenge | 37th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to all the winners, especially solo gold winners! </p>
<p>I would also like to thank my team members @cutlass90 and @yaroshevskiy for your invaluable contribution to our solution!</p>
<p>Now I will briefly describe our approach. We didn't use sophisticated feature engineering nor complicated neural network based models. Our main model is a fairly simple lgbm with a handful of hand-engineered features and some metafeatures coming from the other models.</p>
<p>Hand-engineered features:</p>
<ul>
<li>Price aggregations: mean, median, var, min/max over the groups of categorical features (approximately 100 features in total). After joining these aggregations to the main dataframe we divided all these features by the price of the item to get a sort of relative price compared to the average price on the market (responsible for approx. 0.0004 improvement).</li>
<li>Number of unique users in each city, region + category_name</li>
<li>Text features: length, word count, number of uppercase/special symbols/punctuation vs length, number of stopwords vs word count</li>
<li>n/a features: separate boolean columns for price/image missing (gave 0.0001 improvement)</li>
<li>Average days active per category/city/param_1 calculated from periods data</li>
</ul>
<p>Metafeatures:</p>
<ul>
<li>RNNs (BiGRU + pool/attention) over title/description with trainable embeddings</li>
<li>Same networks with pretrained fasttext embeddings (Wikipedia). It seems that training our own fasttext model on active data could have been a good idea, though we didn't do it.</li>
<li>Sparse feedforward NN on top of Tfidf features + categorical/numerical features (responsible for approx. 0.0008 improvement). The model is very simple: sparse matrix multiplication on tfidf matrix -> concatenate with categorical embeddings + numerical features -> 2-3 dense layers with PReLU and Dropout -> sigmoid (we observed cross-entropy works slightly better than RMSE as an objective); a sketch of this architecture appears after this list</li>
<li>Finetuned resnet34 with ImageNet weights. We used predicts from two checkpoints (2nd and 3rd epoch). The former was a bit underfitted and the latter was a bit overfitted. Together they formed two good metafeatures responsible for approx. 0.001 improvement.</li>
<li>Simple feedforward neural network on top of the hidden state of a denoising autoencoder. The autoencoder was trained on train/test active data in the similar fashion described in <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">this</a> fantastic thread. We didn't have enough time to optimize the parameters of the autoencoder, but finally we got around 0.0002 improvement from it.</li>
<li>CNN for images + RNN for texts model. It wasn't properly optimized but gave a nice improvement of 0.0002.</li>
</ul>
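<p>For illustration, here is a rough Keras sketch of that sparse feedforward architecture. It is a minimal sketch under assumptions: layer sizes, vocabulary sizes and input names are placeholders, not our actual configuration.</p>
<pre><code>from tensorflow.keras.layers import Input, Dense, Embedding, Flatten, Concatenate, Dropout, PReLU
from tensorflow.keras.models import Model

n_tfidf, n_numeric, n_cat, cat_emb_dim = 50000, 30, 3000, 16

# Tf-idf block: one sparse matrix multiplication down to a small dense vector
tfidf_in = Input(shape=(n_tfidf,), sparse=True, name='tfidf')
tfidf_proj = Dense(192)(tfidf_in)

# Categorical embedding + numerical features
cat_in = Input(shape=(1,), name='category')
cat_vec = Flatten()(Embedding(n_cat, cat_emb_dim)(cat_in))
num_in = Input(shape=(n_numeric,), name='numeric')

x = Concatenate()([tfidf_proj, cat_vec, num_in])
for units in (256, 128):
    x = Dense(units)(x)
    x = PReLU()(x)
    x = Dropout(0.2)(x)

# Sigmoid output trained with cross-entropy, which we found slightly better than RMSE
out = Dense(1, activation='sigmoid')(x)
model = Model([tfidf_in, cat_in, num_in], out)
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>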
<p>We also trained a bunch of other models that didn't work well, such as an FM-like neural network and a Transformer-like model (from the Attention Is All You Need paper).</p>
<p>We trained a simple neural network on active data to estimate price and number of active days (gave approx. 0.0003 improvement).</p>
<p>The final model is a simple blend of several trained boosted trees on slightly different feature sets.</p>
<p>Once again, thank all the competitors for your work and fruitful discussions! We hope to learn a lot exploring the winning solutions. See you next time!</p> |
Avito Demand Prediction Challenge | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! Your approach is very nice, thanks for sharing it and the link - I'll have to check this out, it looks great.</p>
<p>Thanks Prasun! I was surprised by this too - I've usually seen shallow trees (3-8 max depth) work best for boosting. <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880">Thousandvoices</a> went all the way to 1024 leaves! Tree depth is definitely very problem dependent, and one way to think about it intuitively is that it controls the level of feature interactivity that's allowed. With a massive number of low signal sparse features (from something like tf-idf, in my case 100k sparse features), it's sensible to me that trees need to go deep to properly capture the multitude of very specific interactions like word usages. Also, I think there's a lot of granularity in this prediction problem in general - specific target values are associated with multi-level category combinations as well as the very particular items that fall within those combinations and the very specific traits of those items relative to their peers (e.g. why granular price aggregate features work well). Another thing worth mentioning is that LGB uses a leaf-wise tree growing algorithm, which allows it to capitalize on going extremely deep if there's better information gain to be had that way.</p>
<p>I think the reasons I've outlined can be used to an extent as an intuitive decision criterion, but at the end of the day your best answer will always be from proper validation. When you start building a gradient boosting model, try pushing the max depth up until you stop seeing validation improvement, and narrow it down to a reasonable choice in something like a binary-style search. E.g. try depth 3, 8, 15, discover 8 works better than 3 and 15, check around 8 until you find the right balance. </p>
<p>Thanks pocket!</p>
<p>1) My approach only used matrix factorization of text data, not categorical (applied the tf-idf to the concatenation of title, description, params1-3). I tried some user-level categorical factorization too but it didn't seem to add anything (but see James' post, he saw success with flipping it to category-level factorization across users). I chose tf-idf -> SVD because it's the classic approach used for latent semantic analysis on text / the easiest one for me to implement and tune, but it's very possible that LDA would have worked well for this too. LDA would have been a nice thing to try with more time. Also, it's possible that just sparse counts/tf-idf without factorization is more optimal (should have tried this). <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880">Thousandvoices</a> used raw countvectorizer on user-concatenated titles, for example.</p>
<p>2) I used a combination of intuition, model feedback, and time/space tradeoff decisions. With tf-idf I found that going up to 3-grams and 50k features gave a validation boost over simpler settings, but I think going higher than that had diminishing returns. For SVD components, values in the hundreds are a common heuristic for semantic models, but for the clustering process k-means run time scales with number of features and number of clusters, so too many of either makes things very slow. I tried fewer clusters at first and found it to not work well, and from my work with aggregating on title nouns I had additional evidence that very specific text groupings worked well. So I chose 30k as a tradeoff between granularity and excessive run time for the k-means algorithm, and found it to work better. For what it's worth, the text clustering didn't add much over the noun and noun+adjective aggregates, and I think it's possible that even much more than 30k clusters would have been preferable. </p>
<p>Thanks Serigne!</p>
<p>Thanks Samrat!</p>
<p>Thanks so much Peter! And congrats, looking forward to reading your team's approach if you share :)</p>
<p><strong>Edit/update</strong>: please also see <a href="https://www.kaggle.com/jtrotman"></a><a href="/jtrotman">@jtrotman</a>'s writeup in the responses below! :) </p>
<p>Thank you @ Kaggle and Avito - this competition was just awesome. There were so many interesting facets of the data to explore, and this was a rare competition where I wish I had joined earlier and spent even more time with it instead of getting sick of the data by the end. And a huge, huge thank you to my team <a href="https://www.kaggle.com/neongen"></a><a href="/neongen">@neongen</a>, <a href="https://www.kaggle.com/gphilippis"></a><a href="/gphilippis">@gphilippis</a>, and <a href="https://www.kaggle.com/jtrotman"></a><a href="/jtrotman">@jtrotman</a>. This was a great and inspiring team that I feel very lucky to have been a part of.</p>
<p>Our solution is a lgbm stacker trained on a bunch of good base models and a chunk of our strongest features. Nonlinear stacking and including features for stacking definitely helped. My teammates can share more about that framework and all of the different models that went into the stack (lgbs, nns with lstm, sparse nns, weaker models like ridge), but my part of this thread will focus most on my biggest contribution to the result - a single lgbm model that scores .2175 public /.2213 private and the features that go into this model.</p>
<p><strong>Hyperparameters</strong></p>
<p>Hyperparameter tuning was not my main focus, but I found that very deep trees with a low learning rate worked quite well. Here are the final parameters:</p>
<pre><code>lgb_params = {
'boosting_type': 'gbdt',
'objective': 'regression',
'metric': 'rmse',
'learning_rate': 0.01,
'num_leaves': 400,
'colsample_bytree': .45
}
</code></pre>
<p><strong>Features</strong></p>
<p>Now to get to the good part and my focus in this competition - features. I ended up with about 800 tabular features (original + engineered), along with 100k tf-idf features for description and title + param_1. Here are the tf-idf settings:</p>
<pre><code>TfidfVectorizer(stop_words=stopwords.words('russian'),
lowercase=True, ngram_range=(1, 3),
max_features=50000,
sublinear_tf=True)
</code></pre>
<p>I'd summarize my tabular features by saying that they try to extract as much information as possible using price and the text fields. At first I only used very simple image features like color channels and size, and the image features I added at the end only gave about a .0003 boost.</p>
<p>My core feature ideas:</p>
<p><strong>Price statistics</strong>: </p>
<p>It seemed clear early on that price information should have a very important impact on the target. So I wanted to extract price distribution information / statistics on pretty much every basis of aggregation I could come up with (e.g. by category, by category and city, by user_id, by image_top_1, etc.). For a bunch of different aggregates, I took these stats across all records (with duplicate item_ids dropped):</p>
<ul>
<li>20th percentile</li>
<li>median</li>
<li>max</li>
<li>standard deviation</li>
<li>skew</li>
</ul>
<p>Then computed some relative price numbers for each post record with respect to the corresponding aggregate -- this measures how expensive or cheap the item is relative to the grouping
- row / 20th percentile, row / median, row / max </p>
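<p>As a rough illustration (not my exact code; column and statistic names here are just for the example), the aggregate-then-relative pattern looks like this in pandas:</p>
<pre><code>import pandas as pd

def add_relative_price(df, prices, group_col):
    # Aggregate price statistics per group, computed on de-duplicated records
    stats = (prices.groupby(group_col)['price']
             .agg(p20=lambda s: s.quantile(0.2), median='median',
                  max='max', std='std', skew='skew')
             .add_suffix('_' + group_col + ':price')
             .reset_index())
    df = df.merge(stats, on=group_col, how='left')
    # Relative position of each item's price within its grouping
    for stat in ('p20', 'median', 'max'):
        agg_col = stat + '_' + group_col + ':price'
        df['price_pct_' + agg_col] = df['price'] / df[agg_col]
    return df
</code></pre>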
<p>These computations are at the core of my FE, including for the ideas just below. </p>
<p><strong>Granular textual aggregates on price</strong>: </p>
<p>I believe my most important discovery in this competition was that applying my price statistics features to very specific text groupings worked extremely well, and these features are at the core of my model. Taking price / median price for each post's title alone gave me a big boost. I hypothesize that this may be a way to partially reverse engineer keyword search rankings (e.g. search sorted by price) and that search rankings heavily impact views and therefore deal_probability. Imagine that you're an Avito user browsing for items - you search for a few specific keywords (e.g. "red bicycle"), sort by price, and start looking at many of the cheapest options. I've extended the idea to add more value in a few different ways that try to reduce the number of distinct groupings while still being granular enough to capture this level of signal -- I run price statistics on all of the following:</p>
<ul>
<li>title_noun aggregates: I extract all nouns from each title, normalized them, removed duplicates and sorted them alphabetically, and then used the resulting key as a basis for aggregation</li>
<li>title_noun_adjs aggregates: same as above, but adding adjectives as well </li>
<li>title_cluster: using title tf-idf features, I run SVD with 500 components and form 30,000 k-means clusters on these components. k-means is very slow, so I used mini-batch k-means and limited it to 500 components / 30k clusters even though I think more of both may have performed better (see the sketch after this list).</li>
<li>text_cluster: same as above, but based on a concatenation of title, description, and the param fields. </li>
</ul>
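<p>A minimal sklearn sketch of the title clustering mentioned above, assuming the parameters from the description (my actual pipeline differs in details such as text cleaning and batching):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import MiniBatchKMeans

# titles: cleaned/lemmatized title strings for train + test combined
tfidf = TfidfVectorizer(max_features=50000, ngram_range=(1, 3))
X = tfidf.fit_transform(titles)

svd = TruncatedSVD(n_components=500, random_state=42)
X_svd = svd.fit_transform(X)

km = MiniBatchKMeans(n_clusters=30000, batch_size=10000, random_state=42)
title_cluster = km.fit_predict(X_svd)  # used as a grouping key for the price statistics
</code></pre>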
<p><strong>User semantics</strong>:</p>
<p>When you build ML models, it's easy to get stuck in a row-based world and lose sight of the many interesting relationships between the different rows. Normal aggregate feature engineering like column statistics help with this, but I got to a point beyond that where I realized there was still more to be done with the text fields to help capture user characteristics.</p>
<p>Coming off of talkingdata where it saw some success, I had the idea to use matrix factorization for this. So for users in train/test, I concatenated their text fields across all rows (title, desc, params), ran tf-idf -> 300 component SVD, adding these components as features. A performance tip here is to use hashingvectorizer like below rather than straight tf-idf, since it is significantly more memory efficient (at the cost of a bit of accuracy). Set n_features higher than the number of features you really want, since there will be collisions.</p>
<pre><code>from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from nltk.corpus import stopwords

print('Applying hash vectorizer then tf-idf to text')
print('Hash Vectorizer')
hv = HashingVectorizer(stop_words=stopwords.words('russian'),
                       lowercase=True, ngram_range=(1, 3),
                       n_features=200000)
hv_feats = hv.fit_transform(user_text_df['all_text'])
print('TF-idf transformer')
tfidf_user_text = TfidfTransformer()
tfidf_user_text_feats = tfidf_user_text.fit_transform(hv_feats)
</code></pre>
<p>Another nice way to capture user-based information from text is with aggregations on meta-textual features like percentage of caps in title, etc. I took a bunch of meta-textual features and computed the same aggregate statistics that I did for price, across each user_id. For example, one of my top 30 features was the rather unfortunately named (by my conventions) </p>
<pre><code>"pct_caps_description_pct_user_id_median:pct_caps_description"
</code></pre>
<p>This means taking the specific row's percentage of caps in description and dividing it by that user's median percentage of caps in the description. Maybe I should have called it "user shoutiness relative to their norm" instead. </p>
<p><strong>Image Features</strong></p>
<p>For a big part of the competition I thought the images were mostly a red herring. My worldview was very focused on exploiting the possibilities of textual information and thought its relationship with search rankings had a much more dominant effect than the images would. It was very interesting to hear that people were seeing significant gains from good image features, and I think this is the area where I would try to improve my model more if given more time. I used a few very simple features (color channels, size dimensions), and some nice features prepared by my teammates (they could explain in more detail) that gave about a .0003 boost to my model as the final feature addition --</p>
<ul>
<li>Image blurness</li>
<li>Color histograms - SVD components and some statistical features</li>
<li>NIMA - activations, score, stds, and score + std</li>
</ul>
<p>Aside from features mentioned, I used a bunch of other miscellaneous features similar to those seen in kernels - simple meta-textual features, lat/long + lat/long clusters, avg days a user's post is active and similar stats from the periods data. I did clean and lemmatize title and description before applying tf-idf and some of the other text processing mentioned above, but I believe it made a pretty minor difference. </p>
<p>In addition to lgb I used these features in a few other models, e.g. a neural net that scores .2197 public. Architecture is very similar to what you can see in kernels - biGRU applied to title and description, category embeddings, numerics concatenated and passed through 2 dense layers. It was difficult to close the gap between lgb and NN with this feature set, I think largely because there were many null values as a result of aggregate stats on unique occurrences, and these aggregate stats were among the key features to the lgb. I tried some different imputation strategies, but nothing worked all that well (although I will say that imputing price in a smart way, whether by a groupby-average on some category combinations or by a dedicated price prediction model, improved the NN score by about .001). </p>
<p><strong>Some Comments on Workflow</strong></p>
<p>I want to end on a few comments about workflow that I hope may be useful. What works well in one competition might not be ideal for another, but maybe it's helpful :) I've already written about some of these approaches here: <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/56986#330023">https://www.kaggle.com/c/avito-demand-prediction/discussion/56986#330023</a></p>
<p>If you do a lot of feature engineering (especially with categorical aggregates), you are likely creating what amounts to a miniature relational database. You don't have to go as heavyweight as creating an actual database, but I found it helpful to create a bunch of feather files for storing aggregate features then merge all the ones I wanted back into the main data to build models. For example I had many feature files like "global_city_features.ftr" that I would merge back into the training dataframe using city as a key. Storing features in a relational manner saves you processing time (run FE once, not every time you model), saves you disk space, and gives you a lot of flexibility for choosing which features to include in a model run (just don't merge them in if you don't want that set).</p>
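<p>To make the workflow concrete, here is a tiny sketch of that pattern (file and column names are just examples):</p>
<pre><code>import pandas as pd

# Build aggregate features once and store them keyed by the grouping column
city_feats = (train.groupby('city')['price'].agg(['median', 'std'])
              .add_prefix('city_price_').reset_index())
city_feats.to_feather('features/global_city_features.ftr')

# Later, in any model run, merge in only the feature sets you want
df = train.merge(pd.read_feather('features/global_city_features.ftr'),
                 on='city', how='left')
</code></pre>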
<p>I used greedy forward feature selection - engineer a new set of related features, add them to my current best model, see if validation result improves. If it doesn't improve, leave them out of the model. This is a fast and fairly reliable (if not optimal) way of selecting for good features to include in a model.</p>
<p>I noticed early on that there was very little variance in RMSE between validation folds (<.0002), so I felt pretty safe testing for feature contributions to my model by evaluating on only one fold at a time. This helped speed up iteration on feature sets.</p>
<p>I've already written a lot and my teammates will have additions to make as well so I'll stop there. If you're not sick of my blabbering and are interested, I'll likely write a blog post about my ideas / approach and share a link here once it's up. I hope this was a useful read, and happy kaggling!</p>
<p>The idea was to get features that capture price characteristics relative to a grouping. So for example, I want to know how expensive an item is vs. other items that share the same image_top_1 category. To get a feature like that, I first compute aggregate statistics for the category - its 20th percentile, median etc. Then for each individual item I merge in the aggregate statistic as a new column, and divide the item's price by the value of that column (that's what I mean by row / 20th percentile). Then it would look like df['price_pct_image_top_1_20thp:price'] = df['price'] / df['image_top_1_20thp:price'].</p>
<p>Hope that's clear :) </p>
<p>Thanks for the kind words pocket! I'm very glad if my posts are useful.</p> |
Avito Demand Prediction Challenge | 51st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, Kagglers!</p>
<p>Because today I talked about my solution and knowledge of this competition in my company, I would like to share it in discussion.</p>
<p>My presentation slide is here.</p>
<p><a href="https://www.slideshare.net/nasukasumino/51st-solution-of-avito-demand-prediction-competition-on-kaggle-104538026">https://www.slideshare.net/nasukasumino/51st-solution-of-avito-demand-prediction-competition-on-kaggle-104538026</a></p>
<p>Thank Avito and Kaggle for hosting this wonderful competition !!</p> |
Avito Demand Prediction Challenge | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code here to extract densenet featuremap : <a href="https://github.com/darraghdog/avito-demand/blob/master/imgfeatures/densenet_extractor.py">https://github.com/darraghdog/avito-demand/blob/master/imgfeatures/densenet_extractor.py</a>
Just load pretrained model; chop off the end, like in the code line 47; and then run all the images through and save the output. Each image will give a matrix of shape for example 32x32, but use numpy to flatten it and you get (1*1024) - do the same for the 130K images and you get a matrix of 130K*1024 dimensions; then concat that to your RNN or whatever. We filled blank images with 0. </p>
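<p>For reference, a minimal Keras sketch of this kind of extraction (the actual extractor is in the linked repo and may differ; global average pooling is one way to get the 1024-dimensional vector per image):</p>
<pre><code>import numpy as np
from tensorflow.keras.applications.densenet import DenseNet121, preprocess_input
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = DenseNet121(weights='imagenet', include_top=False)   # chop off the classifier head
extractor = Model(base.input, GlobalAveragePooling2D()(base.output))

def extract(batch_images):                  # batch_images: (n, 224, 224, 3) array
    x = preprocess_input(batch_images.astype(np.float32))
    return extractor.predict(x)             # (n, 1024) feature matrix to concat downstream
</code></pre>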
<p>I got an 3.4xlarge instance, I believe - 16 core and 128GB RAM; about 28c per hour on spot pricing to run jobs when they were ready for 5CV. but got the jobs ready locally on the laptop and then kicked off the AWS instance. This was for the Lightgbm. Sometimes you can get vouchers for AWS, which helps on cost. </p>
<p>Yeah, we actually ran it with lightgbm also, same features but ran it on parent parent category only. I was thinking to try on all categories - ridge probably would have been fast enough to do it. </p>
<p>Yeah I think we only ran one set of features on parent categories - given more time, more could have been helpful- I thnk this was set up Monday night :) . One of the parent categories took about 60% of the data; so we split that up to its subcategories and ran a second model - I think it was clothes, where we split to kids clothes and adults clothes. </p>
<p>Congrats to top3 winners, and all top teams – it's funny to see the solutions – I was pretty sure there was something massive we missed on images, but after a first read of other solutions it looks like there was no big thing missed there. <br>
When we got into the competition we would have been pretty happy with a top 50, so we are psyched to end up with 5th. Our team all work together at Optum Health (hence the name :) ) so it was great to benchmark some of our techniques here. <br>
Our solution involved a LGBM stack of 4 different types of models – lgbm, RNN, MLP and ridge. The best of each scored on public LB approx <code>0.216, 0.2185, 0.2215 and 0.222</code> – but it was really in the stack where diversity between these helped. This is our repo <a href="https://github.com/darraghdog/avito-demand/">linky</a>. We tracked progression for different models/changes in there on the front page. <br>
For teams starting in Kaggle or Data Science I cannot overstate the importance of getting a good local validation that tracks to the Leaderboard and tracking improvement on val and LB. Initially we used a small validation set of a few of the final days of train; when we had a few good models we set up a stack and we used 5-fold with a time split. </p>
<p><strong>MLP</strong> <br>
In general we leaned a lot on the Mercari solution’s it was a very similar problem. The winning MLP there scored 0.2215 here with very little changes ( <a href="https://github.com/darraghdog/avito-demand/blob/master/nnet/mlp_1705.py">MLP code</a> ). Tried a few other things to improve, but it did not help in the stack. All credits to Konstantin and Pawel who developed this and for sharing a simple 75 line version <a href="https://www.kaggle.com/lopuhin/mercari-golf-0-3875-cv-in-75-loc-1900-s">linky</a> </p>
<p><strong>RNN</strong> <br>
Tim set up the RNN on a macbook GPU which was pretty impressive . We also introduced pymorphy2 which gave good improvement here and on lgb for tokenization. A lot of work was done on regularization for tuning and we concatenated on the penultimate layer of a densenet feature map of the images. Example <a href="https://github.com/darraghdog/avito-demand/blob/master/nnet/rnntmp/nnetdh5CV_2705A.py">RNN code</a> <br>
We tried adding numerical features and pretrained embeddings; while it helped at L1, it did not add much in the stack for the first few tries. One of the main challenges here was hardware – on 5CV it took about 36 hours to run on an AWS P2, as we bagged 2 times and used 256 wide embedding layer – so we gave up here and concentrated a bit more on LGB. </p>
<p><strong>LGB</strong> <br>
Feature engineering in LGB was the happy tree that did not stop giving. On the repo front page you can see the progression. <br>
One of the strongest was relative price. We did a kind of Bayesian mean of item price vs the price of the group – <code>((item_price/mean_price_grp)*ct_grp + (prior))/(ct_grp+prior)</code>. This allowed the ratio to be weighted by how many items were in the group - which is pretty important: if an item is the cheapest of 2 similar items it's a lot less significant than being the cheapest of 100 similar items. Just doing this over lots of different groups – title, params categories, clusters etc. – added close to 0.002 (see the sketch after this paragraph). <br>
This is an <a href="https://github.com/darraghdog/avito-demand/blob/master/features/code/pratioFestivitiesR1206.R">example</a> of price ratio. <br>
Image features in the public kernels helped a little – dullness, channel intensity etc. All credits to the author. <br>
Bayesian means and counts over different groups helped maybe 0.001 also. We used a few combinations of tfidf for diversity. Entropy helped also. Dropping categoricals in their raw encoded form and letting the model learn their representation through the FE mentioned helped too. Also, moving up to 1000 leaves helped some – 2000 or 5000 leaves probably would have helped more, but took too long to run. </p>
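<p>A minimal pandas sketch of that smoothed price ratio, just to illustrate the formula (group columns and the prior are placeholders; see the repo for the real feature code):</p>
<pre><code>import pandas as pd

def bayes_price_ratio(df, group_cols, prior=20):
    grp = (df.groupby(group_cols)['price']
           .agg(mean_price_grp='mean', ct_grp='count')
           .reset_index())
    out = df.merge(grp, on=group_cols, how='left')
    ratio = out['price'] / out['mean_price_grp']
    # Small groups get pulled towards 1 (a neutral ratio); big groups keep their raw ratio
    out['pratio_' + '_'.join(group_cols)] = (ratio * out['ct_grp'] + prior) / (out['ct_grp'] + prior)
    return out
</code></pre>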
<p><strong>Ridge</strong> <br>
Late in the game we set up a few ridge models; they were very fast. Although a lot weaker, they gave about 0.0005 on the stack. We had a separate model running on each parent category; models on image feature maps (vgg19 and densenet) – and on different types of count vectorizer and tfidf of text features. </p>
<p><strong>Stack</strong> <br>
The final stack just combined about 30 different models – also added all two way combinations of sums and differences of each model. We got an MLP stack scoring approximately the same but correlation to the lgb was very high so averaging did not help. </p>
<p><strong>Final note</strong> <br>
Ok, I was wondering this morning how teams have such long write-ups, so looking back I can see why. There was so much opportunity to try things – thanks to Avito for hosting this and I hope the Avito team get some benefit from the solutions and we’ll see you back on Kaggle with another competition soon! </p>
<p>it's a bit of a guess.... think of it like this: any groups with counts below the prior will be sort of dampened, and the further below the prior the group count is, the more it will be dampened. To really see it, it's good to set up an excel sheet with a couple of groups with different prices and run the formula over it to calculate the Bayes mean for each item depending on the group size. You'll see how moving the prior up and down affects the high-count and low-count group values. </p>
<p>Thank you very much Nick for publishing the script; we used it as basis for LGBM. </p>
<p>hey, sorry, I cleaned up the repo yesterday, and moved the link - but it's corrected now... depending on the groups I pick different priors... for very granular groups, like where they all have the same <code>user_id</code> & <code>title</code>, I would pick a smaller prior, maybe 5 or 10; and for larger groups, like <code>category_name</code>, a larger one of 100 may be used. </p>
Avito Demand Prediction Challenge | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>Thank you for your kind words!</p>
<p>Thanks!</p>
<p>Hi, dear kagglers.
First of all, thanks to Kaggle and Avito for holding such a wonderful competition. And congratulations to the winners and all Kagglers.</p>
<p>Our final solution was a large stack that involved 64 models in total. It’s quite similar to the approach KazAnova posted on the <a href="http://blog.kaggle.com/2017/06/15/stacking-made-easy-an-introduction-to-stacknet-by-competitions-grandmaster-marios-michailidis-kazanova/">kaggle blog</a> before. After we completed building the models, we grouped all the level-0 models into 4 groups in order to gain some diversity:</p>
<ul>
<li>All models</li>
<li>Branden and Takuoko-Angus models</li>
<li>Takuoko-Angus and Yusaku models</li>
<li>Yusaku and Branden models</li>
</ul>
<p>We used the following models for our stacking (level-1 & level-2): {XGB gblinear, LGBM, Ridge, Extra Trees, simple NN}. As a result we got 20 level-1 models & 5 level-2 models. Then we just used a simple ridge for level-3 stacking (but excluded the meta-features which had negative weights in the ridge, then reran, until no negative weights appeared). It was one of our last submissions; it scored 0.2148 on public LB (private 0.2186 / CV 0.208749).</p>
<p>In another submission, I just applied the optim() function in R with the BFGS solver on the level-1 meta-features to find the best weights to blend, with the following formula: x1*model1 + x2*model2 + ...... + x20*model20 + x21. It worked almost as well as our level-3 stacking. It scored 0.2148 on public LB (private 0.2186 / CV 0.2087724). Actually the two subs are quite similar.</p>
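<p>The same weight search can be sketched in Python with scipy; this is just an equivalent illustration of the R optim() call above, not the code we actually ran:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

# preds: (n_samples, n_models) out-of-fold level-1 predictions, y: true targets
def blend_rmse(w, preds, y):
    blend = preds @ w[:-1] + w[-1]        # x1*model1 + ... + xN*modelN + intercept
    return np.sqrt(np.mean((blend - y) ** 2))

w0 = np.append(np.full(preds.shape[1], 1.0 / preds.shape[1]), 0.0)
res = minimize(blend_rmse, w0, args=(preds, y), method='BFGS')
best_weights = res.x
</code></pre>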
<h1>Takuoko and Angus’s solution</h1>
<p>Takuoko and I merged about 10 days before the merger deadline.</p>
<h3>Feature Enginnering</h3>
<ul>
<li><p>Applying (min, max, mean, var) to numeric features that was already grouped by some categorical features (e.g. groupby(by=“region”)[“price”].mean())</p></li>
<li><p>nunique feature 2-way interactions</p></li>
<li><p>count encoding on categorical features</p></li>
<li><p><a href="https://www.kaggle.com/bminixhofer/aggregated-features-lightgbm">aggregated feature of categorical features</a></p></li>
<li><p><a href="https://www.kaggle.com/shivamb/ideas-for-image-features-and-image-quality">image features : mean, whiteness, dullness of RGB</a></p></li>
<li><p>Applying PCA / TSVD to OHE categorical features</p></li>
<li><p><a href="https://www.kaggle.com/sudalairajkumar/simple-feature-engg-notebook-spooky-author">text stats features</a></p></li>
<li><p>different word n-gram and char n-gram </p></li>
<li><p>applying TF-IDF or not</p></li>
<li><p>Some features from Angus’s FE (see the attachment)</p></li>
</ul>
<h3>Features not work</h3>
<ul>
<li><p>nunique feature 3-way interactions</p></li>
<li><p>Applying tsvd on tf-idf text feature or VGG16 features</p></li>
<li><p>One-hot encoding on categorical features</p></li>
<li><p>Applying GaussianRandomProjection / FastICA / LDA / SparseRandomProjection on OHE </p></li>
<li><p><a href="https://github.com/seatgeek/fuzzywuzzy">fuzzywuzzy features</a></p></li>
</ul>
<h3>Modeling</h3>
<p>-- tree based model : LGBM, XGB --
Basically we used Bayesian optimization for our tuning. And there is something special in our settings: in LGBM, Takuoko set feature_fraction to 0.1, and we used a low colsample_bylevel in XGB. We achieved great success by performing bagging with different seeds; we can get about 0.2184 on public LB with a single (5-seed average) LGBM by this approach.</p>
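<p>The seed averaging itself is simple; a hedged sketch follows (parameters here are placeholders, not our tuned values):</p>
<pre><code>import numpy as np
import lightgbm as lgb

def seed_averaged_predict(X_train, y_train, X_test, base_params, seeds=(0, 1, 2, 3, 4)):
    preds = []
    for seed in seeds:
        params = dict(base_params, seed=seed, bagging_seed=seed, feature_fraction_seed=seed)
        booster = lgb.train(params, lgb.Dataset(X_train, y_train), num_boost_round=3000)
        preds.append(booster.predict(X_test))
    # Averaging over seeds smooths out the randomness from feature/bagging sampling
    return np.mean(preds, axis=0)
</code></pre>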
<p>We also used objective=poisson. It needs more time to converge and its performance is not so good, but it gave us some extra boost when stacking.</p>
<p>-- NN --
We constructed a lot of different NNs like GRU, Conv1D, Conv2D based on this <a href="https://www.kaggle.com/shanth84/rnn-detailed-explanation-0-2246">public kernel</a>.
We also made an MLP; its score was pretty bad but it was good for stacking. Until the end, we still couldn’t get our NN models to beat 0.2210 on public, even with BN and fine-tuned Dropout, and that was pretty frustrating for both of us.</p>
<p>-- Ridge, drop0 model --
Ridge and the <a href="https://www.kaggle.com/c/allstate-claims-severity/discussion/26416">drop0 model</a> also were not good solo, but both of them improved our stacking score.</p>
<p>The drop0 model was inspired by the link above: we just dropped rows with target=0 and trained an LGBM.</p>
<h2>Branden and Yusaku Solution</h2>
<p>Branden and Yusaku were working together before merging with us. Branden and Yusaku used the same 5-folds based on a random split to train their level 0 models.</p>
<h2>Yusaku’s solution</h2>
<p>I trained 10 level-0 LGBM models that utilized image features.
My models were largely based on the kernel <a href="https://www.kaggle.com/him4318/avito-lightgbm-with-ridge-feature-v-2-0/code">https://www.kaggle.com/him4318/avito-lightgbm-with-ridge-feature-v-2-0/code</a> (v14) with some tweaks and addition of CNN image features.
To add CNN features, the images were first resized to 224x224 and activations from the layer right before classification layers were extracted and average pooled. The models were initialized with pre-trained weights from ImageNet.
The following image features were used in the final models, though other CNN models were experimented with and did not work as well or were not usable due to memory constraints:</p>
<ul>
<li><p>VGG16 (512x7x7) || Average pool (512x1x1) - Improved 0.0008 LB</p></li>
<li><p>Densenet121 (1024x7x7) || Average pool (1024x1x1) - Improved 0.0012 LB</p></li>
</ul>
<p>Both average and max pool were tried, but average pool gave better CV/LB, so I ended up just using average pooled features in the final models.
It would have been interesting to try lower level image features extracted from the CNN models (to better capture qualities such as image focus, fuzziness, graininess), but did not get around to it.</p>
<p>To encourage diversity and reduce correlations for the trained models, some were trained by removing features that were deemed important by LGBM, such as “image_top_1”, “city”, etc. </p>
<p>For text features, TFIDF of bigrams were used, unmodified from the original base kernel. I also experimented with adding text features based on gensim’s doc2vec but did not help. I did not try FastText, but I probably should have based on favorable results reported by other teams.</p>
<p>My models’ predictions ended up having relatively low correlations with Branden’s models despite both of us using LGBM; combining my models with Branden’s gave a pretty big boost.</p>
<h2>Branden’s solution</h2>
<p>He will release it in the comment field in a few days!!!</p>
Avito Demand Prediction Challenge | A silver solution (31st place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's time to share about solutions after this great challenge. Here is an insight of my models:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/349888/9726/solution.png" alt="enter image description here"></p>
<p>A few different models followed by simple stacking with a CatBoost. My driver was diversity.
Each model was tuned with stratified 5-fold CV and used to compute OOF predictions for final stacking. Stratification is based on day of week + bin of deal probability.
A few words about important items of each model:</p>
<ul>
<li>LGB: Based on pymorphy2 text normalization + TF-IDF + categories, image features (NIMA scores, INet scores and basic statistics).
VGG16 features not used. SVD not used. Public LB 0.2200</li>
<li>A set of NN with RNN (BiGRU, BiLSTM), embeddings and dense layers.
The RNNs are based on fixed W2V and/or FastText (trained from all available titles + descriptions).
The one relying on Capsules provided the best results (LB 0.2198). Another with Attention Context scored LB 0.2207.
The last one with Conv2D scored 0.2212. One key point for the NNs was to take care of the category embedding dimensions. </li>
<li>Stacked XGB with mix of target encoding, TF-IDF and WordBatch (seen in kernels). It scored LB 0.2201.</li>
</ul>
<p>Finally, main features + each model prediction were used as inputs of a CatBoost model that scored public LB 0.2171 (private 0.2210) and reached 32nd place.
From what I already read in published solutions I should have spent more time on target encoding with different levels as it was key.
I also tried a second layer of stacking (not in the picture) but results were comparable, so I published the simplest one.</p>
<p>I had a lot of fun with this competition; it involved my first steps with RNNs and NLP, and I learned a lot from kernels and discussions here. Thanks to all competitors!</p>
<p>The idea comes from a kernel available in the Toxic Comment challenge. </p>
<p>Here is the link: <a href="https://www.kaggle.com/yekenot/textcnn-2d-convolution">https://www.kaggle.com/yekenot/textcnn-2d-convolution</a></p>
<p>I had another layer of models stacking but results were quite similar (it was my second submission) so I did not include it in this chart.</p>
<p>I did not use the raw size of the categories, and I capped it for high-cardinality ones. I evaluated 64, 50, 32, 24, 12 and 10, and convergence and RMSE were quite a bit better with the small ones. I used the following formula found in many NN articles about category embeddings: embed_size = min(MAXSIZE, (category_size + 1) // 2)</p>
<pre><code>from keras.layers import Input, Embedding  # imports added, assuming standalone Keras

MAXSIZE = 12
for category, category_size in categorical_features.items():
    embed_size = min(MAXSIZE, (category_size + 1) // 2)
    inp = Input(shape=(1, ), name=category)
    emb = Embedding(category_size, embed_size, input_length=1)(inp)
    ....
</code></pre>
<p>Also, I dropped user_id from the category embeddings because its cardinality was really too high.</p>
<p>Here is one link about category size selection: <a href="https://towardsdatascience.com/deep-learning-structured-data-8d6a278f3088">https://towardsdatascience.com/deep-learning-structured-data-8d6a278f3088</a></p>
<p>See 'Choosing the embedding size" section.</p>
<p>No package. Handmade powerpoint just with important items.</p> |
Avito Demand Prediction Challenge | Bronze solution with github link | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My team and I ranked 131st (top 7%) in this challenge. I know there was much more we could have done; anyway, I would like to share our solution: <a href="https://github.com/Diyago/Machine-Learning-scripts/tree/master/DEEP%20LEARNING/Kaggle:%20Avito%20Demand%20Prediction%20Challenge%20(bronze%20solution)">https://github.com/Diyago/Machine-Learning-scripts/tree/master/DEEP%20LEARNING/Kaggle:%20Avito%20Demand%20Prediction%20Challenge%20(bronze%20solution)</a></p>
<p>Our best single model achieved around 0.2210 on the public LB, using:</p>
<ul>
<li>SVD text features</li>
<li>NIMA image quality features</li>
<li>Tfidf over description and title</li>
<li>Counts on title</li>
<li>Aggregated features</li>
<li>Ridge features</li>
</ul>
<p>Some of the important lessons learned from the competition:</p>
<ul>
<li>Join earlier and spend more time</li>
<li>Don't use a single ipynb file for the whole competition</li>
<li>Log the code and output of each submission</li>
<li>Teammates really help a lot to move further</li>
<li>Stacking and ensembling rule Kaggle for sure</li>
</ul>
<p>Sorry for the late answer. You have to extract it this way: <a href="https://github.com/Diyago/Machine-Learning-scripts/tree/master/DEEP%20LEARNING/Kaggle:%20Avito%20Demand%20Prediction%20Challenge%20(bronze%20solution)/image%20feat.%20extraction/neural-image-assessment">https://github.com/Diyago/Machine-Learning-scripts/tree/master/DEEP%20LEARNING/Kaggle:%20Avito%20Demand%20Prediction%20Challenge%20(bronze%20solution)/image%20feat.%20extraction/neural-image-assessment</a> - NIMA features</p>
<p>Unfortunately I didn't use classical features. </p> |
Avito Demand Prediction Challenge | My First Kaggle Competition: A silver solution (83rd place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Finally I have some time to wrap up my first time experience on Kaggle. I have to say it was an amazing experience. And thanks to many wonderful kernels, I learnt a lot from this competition. </p>
<p>I think this competition is very good and also very challenging for a Kaggle beginner. Besides some common numerical and categorical features, this competition also has lots of text and image features, which make it very challenging. So here is my solution ;)</p>
<p><strong>Features</strong></p>
<p>Geographic features: Such as latitude, longitude, populations, etc. Learnt a clustering trick from this <a href="https://www.kaggle.com/frankherfert/region-and-city-details-with-lat-lon-and-clusters/notebook">kernel</a></p>
<p>Image features: basic image features such as brightness, key colours, etc. Mainly from this <a href="https://www.kaggle.com/shivamb/ideas-for-image-features-and-image-quality">kernel</a>. Applied multiprocessing, but it still took a very long time to extract. I also used YOLO to perform object detection. A python wrapper can be found <a href="https://github.com/AlexeyAB/darknet">here</a></p>
<p>Aggregated features: from this <a href="https://www.kaggle.com/bminixhofer/aggregated-features-lightgbm">public kernel</a></p>
<p>Text features: TF-IDF and SVD. Embedding vectors using fastText</p>
<p>Mean Encoding: There were lots of high cardinality features. So I used mean encoding to encode these high cardinality categorical features.</p>
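<p>For readers new to mean encoding, here is a minimal smoothed version of the idea (column names and the smoothing constant are illustrative; in practice the train encodings should be computed out-of-fold to avoid leakage):</p>
<pre><code>import pandas as pd

def mean_encode(train, test, col, target='deal_probability', alpha=10):
    prior = train[target].mean()
    agg = train.groupby(col)[target].agg(['mean', 'count'])
    # Shrink each category mean towards the global prior; rare categories rely on the prior
    enc = (agg['mean'] * agg['count'] + prior * alpha) / (agg['count'] + alpha)
    train[col + '_mean_enc'] = train[col].map(enc)            # replace with out-of-fold values in practice
    test[col + '_mean_enc'] = test[col].map(enc).fillna(prior)
    return train, test
</code></pre>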
<p>Feature interactions: some groupby statistics between categorical features and numerical features. I generated 42 new features from this step. </p>
<p><strong>Models</strong></p>
<p>I trained one XGBoost, two LightGBMs, one RNN and a ridge regression. I used 5-fold CV followed by stacking with a LightGBM. My RNN is simple: a one-layer LSTM was concatenated with other features and then passed through 2 Dense layers. I was planning to train two NN models, but because of my poor machine (8GB), I could only train a baseline model on my local computer. I was first training my models on a spot instance on AWS, but I got terminated twice in the middle of training. So sad.... Then I switched to a regular instance and paid the regular price, so I didn't have enough time to train my second NN model. </p>
<p>Finally, thanks again to people who have shared their kernels and methods so generously on Kaggle. There are still a lot of rooms I should improve and lots of things to learn. I indeed enjoy my first Kaggle journey. At last, Happy Kaggling!</p> |
Avito Demand Prediction Challenge | Our 30th Solution: In which our heroes tried Quantum Gravity, Adaptive Noise and other cool stuff… | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a great competition, with the most variety of data I have seen in kaggle, maybe a time series competition with the same kind of data present here would be great next 😄 Now to our solution</p>
<p><strong>Features</strong></p>
<p><em>Image Features</em> : Extracted simple features(all from public kernels), Imagenet pretrained VGG16 to SVD, NIMA(Neural Image assessment) avg and std from Mobilenet and NASnet, SVD on <a href="https://www.kaggle.com/the1owl/natural-growth-patterns-fractals-of-nature">https://www.kaggle.com/the1owl/natural-growth-patterns-fractals-of-nature</a>.</p>
<p><em>Text features</em>: wordbatch ngrams, text stats from public kernels, english text stats, tfidf of translated english and russian, several different embeddings (Fasttext self-trained on description, word2vec self-trained on description, 3 embeddings from <a href="https://github.com/deepmipt/DeepPavlov/blob/master/pretrained-vectors.md">https://github.com/deepmipt/DeepPavlov/blob/master/pretrained-vectors.md</a> with text preprocessed according to how the embeddings were trained, english GLOVE wiki vectors, russian fasttext wiki vectors), SVD and LDA on description and title, Naive Bayes Tfidf, Sentiment </p>
<p><em>Tabular features</em>: All features on public kernels, normalized log price by groupby param_2 train+test+test & train_active with the idea that relative price difference between items in the same category matter, binning of 'norm_price' by different total number of discrete values, count of 'norm_price' bins by param_2 with test & train_active, test & train_active average item count per day of 10+ categorical features by using test & train period using from > to time, Target Encoding of all categoricals with smoothing and noise, almost 60 different aggregate counts of items groupby a set of categoricals and unique counts of categoricals grouped by different sets of categoricals including test and train active, impute by prediction with NN image top 1</p>
<p><strong>Models</strong></p>
<p>Our final submision was based on stacking and weighted averaging, it provides the boost we needed from our seemingly poor performing models, even the best ones.</p>
<p>19 Level one models that consist of pure RNN , LGBM, Ridge, Lasso, Elastic Net, Random Forest Regressor, XGB , FM-FTRL, RNN + categoricals, RNN + categoricals + continuous, RNN + categoricals + continuous + images.</p>
<p>10 level-two models built on top of isotonic-regression k-fold-transformed level-one predictions, consisting of Linear Regression, LGBM, XGB, RNN + categoricals + continuous.</p>
<p>3rd Level model is just a Linear Regression with isotonic regression</p>
<p>Final submission a weighted average with best RNN + categoricals + continuous + images model and 3rd Model.</p>
<p>Our best models are LGBMs - two of them with very uncorrelated predictions, both at 0.2203 public LB, trained on entirely different subsets of the features listed above; NNs with all types of features reached 0.2207 public LB.</p>
<p><strong>Notes</strong></p>
<p>We had two distinct RNNs. One was implemented by Chin, which is just the best architecture adopted from toxic comment classification, and the other was implemented by @Pavel and @Andres. The latter can be found here: <a href="https://github.com/antorsae/avito-demand-prediction">https://github.com/antorsae/avito-demand-prediction</a>; it didn't use as many features as described above, and it has a few different features.</p>
<p><strong>Things that didn't work</strong></p>
<p>Discretizing predictions aka "Quantum Gravity": As discovered by one competitor, predictions are very discrete, so we built a list of predictions grouped by category and added a post-processing layer to convert them to discrete values based on proximity and strength of discrete probability. We dubbed this approach <em>quantum gravity</em> but although the name was cool it didn't work.</p>
<p>Noise: We added this the last day of the competition so our findings were inconclusive. We implemented "swap noise" and "smart noise", the first just swaps a fraction of columns by values of the same column in different samples, whereas the second picks samples to swap columns from whose prediction is similar to the current sample.
Adaptive noise: We saw that controlling the rate of noise was very delicate and if set too low (e.g. <code>-fnr 0.1</code>) the network would eventually overfit, and setting it too high (e.g. <code>-fnr 0.3</code>) would make the network converge very slowly or not converge at all; so we added a callback to adjust noise rate based in a target rmse. We didn't have time to test it properly.</p>
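<p>For clarity, swap noise itself is only a few lines of numpy; a minimal sketch (the rate and the exact implementation we used may differ):</p>
<pre><code>import numpy as np

def swap_noise(X, rate=0.15, seed=0):
    # For each cell, with probability `rate`, replace the value with the same
    # column's value taken from a randomly chosen other row
    rng = np.random.default_rng(seed)
    n, m = X.shape
    mask = rng.random((n, m)) < rate
    donor_rows = rng.integers(0, n, size=(n, m))
    cols = np.broadcast_to(np.arange(m), (n, m))
    X_noisy = X.copy()
    X_noisy[mask] = X[donor_rows[mask], cols[mask]]
    return X_noisy
</code></pre>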
<p>Image pixels: We implemented computing image features and optimizing them in multiple networks (all controlled by command line), and with support for freezing layers; while our initial tests showed promise, we did not have time/GPUs to run it at the end.</p> |
Avito Demand Prediction Challenge | Place 40, tried to get something from train_active and test_active with DAE | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey guys, would like to share what I learned, will try to highlight only the things that may be interesting,</p>
<p>First of all, thanks for the great competition and for learning - was awesome.</p>
<p>From the beginning I thought that semi-supervised/unsupervised learning from the active data might play a big role in this competition, and I concentrated on this. I spent way too much time trying to reimplement Michael's Denoising Autoencoders from <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">here</a> in keras; there are many small details (like the proper learning rate and proper standardization) that slowed me down. The idea was to take only the categorical and numerical data (ignoring text and images), which is available for both train-test and train_active-test_active, and train a denoising autoencoder on the whole dataset. Afterwards, I replaced these features in the supervised learning model with the output from the autoencoder. I assumed that this would allow me to skip feature generation and generalize better - which was a partially wrong assumption =( I see now from your solutions that feature generation could really have helped me. </p>
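<p>For anyone who wants to try the same idea, here is a minimal Keras sketch of such a DAE (layer sizes and preprocessing are assumptions; the original recipe also involves rank-gauss scaling and swap noise, which are omitted here):</p>
<pre><code>from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

n_features = 200                      # processed categorical + numerical columns
inp = Input(shape=(n_features,))
h1 = Dense(1500, activation='relu')(inp)
h2 = Dense(1500, activation='relu')(h1)
h3 = Dense(1500, activation='relu')(h2)
out = Dense(n_features, activation='linear')(h3)

dae = Model(inp, out)
dae.compile(optimizer='adam', loss='mse')
# Train on noisy -> clean reconstruction over train + test + active data:
#   dae.fit(noisy_X_all, clean_X_all, ...)
# then use the hidden activations as features in the supervised model
encoder = Model(inp, [h1, h2, h3])
</code></pre>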
<p>Overall, my solution is a weighted average of 4 NN models (I chose the weights manually based on the public leaderboard), mainly those from <a href="https://www.kaggle.com/shadowwarrior">Harlan</a> (<a href="https://www.kaggle.com/shadowwarrior/1st-place-solution/notebook">here</a>), with some twists (for instance going deep didn't work for me, wide LSTMs were better) and ResNet last-layer image features mentioned by others. All predictions are 10-fold averages. I didn't use any gradient boosting models - I was too lazy, and I wanted to test how much I could get from deep learning alone. </p>
<p>It was a big pleasure to learn from you and I see now how much room for improvement for me is there. Happy kaggling!</p> |
Avito Demand Prediction Challenge | Sharing my solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Avito and Kaggle for organizing such a wonderful competition! It was an interesting all-in-one problem with text features, structured data and images. There was a really broad front of ideas and techniques to use during this competition. The data size was not so big, so it was accessible to everyone, and it was also leak free. </p>
<p>But the target values had an interesting nature (lots of zeros and discrete values that differed between categories). I couldn't find a use for this fact, and I don't know whether someone could use it to boost their score.</p>
<p>The last week of the competition was really challenging. It was harder to climb the leaderboard than in previous days. Thanks to all competitors for these last sleepless nights!</p>
<p>Here is the summary of my approach:</p>
<p><strong>Validation</strong> <br>
I used 5 CV folds as my validation strategy. <br>
It differed from the public LB by ~0.0034-0.004, and that difference was stable.</p>
<p><strong>Feature Engineering</strong> <br>
Features were computed on the concatenation of train and test sets.</p>
<p><strong>Images</strong> <br>
I used three neural network models (<strong>ResNet50, InceptionV3, Xception</strong>) for predicting the top-3 labels and scores for each image.
Other image features:</p>
<ul>
<li>image size</li>
<li>height, width</li>
<li>image blurness/lightness</li>
<li>color channels</li>
<li>rgb average color</li>
<li>other image features (Average Pixel Width, Dominant Color etc) (<a href="https://www.kaggle.com/shivamb/ideas-for-image-features-and-image-quality">from here</a>)</li>
</ul>
<p><strong>Text features</strong></p>
<ul>
<li>TF-IDF of bigrams for description</li>
<li>TF-IDF of 1gram for the params and title</li>
<li>word2vec features on words</li>
<li>pretrained fasttext features on words</li>
<li>pymorphy2 text normalization</li>
<li>svd title/description</li>
</ul>
<p><strong>Other hand-made features</strong></p>
<ul>
<li>count of symbols in title/desc</li>
<li>count of words in title/desc</li>
<li>count of digits in title/desc</li>
<li>number of uppercase/special symbols/punctuation</li>
<li>number of stopwords</li>
<li>missing price/image/descr flag</li>
<li>log1n price</li>
<li>price aggregations: min/max, median, var, mean of the groups of cat features</li>
<li>other group statistics (city counts, groups counts/means/vars)</li>
</ul>
<p><strong>Models</strong> <br>
I used <strong>10 LightGBM</strong> (lb 0.2202), <strong>10 XGBoost</strong> (lb 0.2228) and <strong>10 CatBoost</strong> (lb 0.2240) models. <br>
And pretrained <strong>Neural Networks: ResNet50, InceptionV3, Xception</strong> for label prediction of the images.</p>
<p><strong>Stacking</strong> <br>
It was 2 layers stack. <br>
The first layer included LightGBM, XGBoost and CatBoost models with metafeatures. <br>
The second layer was a LightGBM trained on the features generated by the previous layer. <br>
Stacking helped to improve the score to lb <strong>0.2192</strong> (public) and <strong>0.2230</strong> (private)</p> |
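<p>For illustration, this kind of two-layer out-of-fold stacking can be sketched as below. It is a simplified sketch, not the exact code: the base models, fold count and the numpy matrices <code>X</code>, <code>X_test</code>, <code>y</code> are placeholders.</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from lightgbm import LGBMRegressor

base_models = [LGBMRegressor(n_estimators=1000, learning_rate=0.05), Ridge(alpha=1.0)]
kf = KFold(n_splits=5, shuffle=True, random_state=42)

oof = np.zeros((X.shape[0], len(base_models)))        # layer-1 meta features for train
test_meta = np.zeros((X_test.shape[0], len(base_models)))

for m, model in enumerate(base_models):
    fold_preds = []
    for trn_idx, val_idx in kf.split(X):
        model.fit(X[trn_idx], y[trn_idx])
        oof[val_idx, m] = model.predict(X[val_idx])   # out-of-fold predictions
        fold_preds.append(model.predict(X_test))
    test_meta[:, m] = np.mean(fold_preds, axis=0)     # average the fold models on test

# layer 2: a LightGBM trained on the layer-1 predictions
stacker = LGBMRegressor(n_estimators=500, learning_rate=0.05)
stacker.fit(oof, y)
final = np.clip(stacker.predict(test_meta), 0, 1)     # deal_probability lives in [0, 1]
</code></pre>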
Avito Demand Prediction Challenge | The last gold solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Your team also did a great job. I was just a little bit more lucky at the end. You will be the lucky guy next time I believe.</p>
<p>Haha, thanks. Just had something fun in the game</p>
<p>Thanks! It's effort and luck making it easy.</p>
<p>First of all, thanks to Avito and Kaggle for providing this great competition. No leakage, very representative testing set, countless solutions. I believe every one enjoyed the game.</p>
<p>Besides, thanks to all community contributors, like Dieter and Peter Hurford, who published helpful kernels and discussions.</p>
<p>Features:
Most of the ideas came from public kernels and discussions: TF-IDF, CountVectorizer (char level), image meta features, self-trained W2V, and ad active-time from Benjamin Minixhofer's aggregated features.</p>
<p>What I did additionally is clustering ads titles based on sub-categories. For example:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sub_category = df[(df['category_name'] == 'something') & (df['param_1'] == 'something') & (df['param_2'] == 'something') & (df['param_3'] == 'something')]
tfidf = TfidfVectorizer()
tfidf_vec = tfidf.fit_transform(sub_category['title'])
kmean_cluster = KMeans(n_clusters=20)  # illustrative cluster count
sub_category['title_cluster'] = kmean_cluster.fit_predict(tfidf_vec)
</code></pre>
<p>I did this calculation for all sub-categories. After doing this, in the iPhone sub-category all "iphone 5s 32gb" ads ended up in one group and all "iphone 7 128 GB" ads in another. Then it was a good time to compare prices and other features, so I grouped by this 'title_cluster' and calculated aggregated features like price rank (which gave a big improvement), mean price, count and so on.</p>
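<p>The aggregation step then looks roughly like this pandas sketch (column names <code>title_cluster</code> and <code>price</code> as above, with <code>df</code> the combined train/test frame):</p>
<pre><code>import pandas as pd

grp = df.groupby('title_cluster')['price']
df['price_rank_in_cluster'] = grp.rank(pct=True)              # the price-rank feature
df['price_mean_in_cluster'] = grp.transform('mean')
df['cluster_size'] = grp.transform('count')
df['price_to_cluster_mean'] = df['price'] / df['price_mean_in_cluster']
</code></pre>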
<p>I also used regex to manually extract numbers out of the title for properties (area, room numbers, floor) and automobile (vehicle years) ads. Then calculated features such as price per room, price per sqm.</p>
<p>Other than that, since traditional ML models such as LightGBM are not good at handling unstructured data like text and images, I used NNs (a biLSTM for texts and a simple 4-layer CNN for images) to generate vector representations. The image vectors came directly from a NN that was trained to predict deal_probability. But when I tried the same strategy for texts, the generated text vector made my LightGBM overfit dramatically. So I tried another method: use only the title and description W2V in a biLSTM NN model with MSE loss and predict everything else: price, item_seq_number, city, region, user_id, parent_category_name, category_name, param_1, param_2, param_3. All categorical features are one-hot encoded after removing low-frequency entities. Therefore X = title W2V + description W2V, and y = a table with a few hundred columns.</p>
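<p>A rough sketch of that text-to-metadata network (illustrative rather than the exact code; <code>max_len</code>, <code>vocab_size</code>, <code>w2v_matrix</code>, <code>X_tokens</code>, <code>Y_meta</code> and <code>n_targets</code> are assumed to be prepared elsewhere):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab_size, 300,
                     embeddings_initializer=tf.keras.initializers.Constant(w2v_matrix),
                     trainable=False)(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.GlobalMaxPooling1D()(x)
text_vec = layers.Dense(64, activation='relu', name='text_vec')(x)

# n_targets = price, item_seq_number plus the one-hot encoded categorical columns
out = layers.Dense(n_targets)(text_vec)
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')
model.fit(X_tokens, Y_meta, batch_size=512, epochs=5)

# the 64-d 'text_vec' activations are then fed to LightGBM as text features
extractor = Model(inp, model.get_layer('text_vec').output)
text_features = extractor.predict(X_tokens)
</code></pre>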
<p>Models:
Three layers of stacking. First layer: 11 LightGBMs, 6 NNs. Second layer: 3 LightGBMs, 2 Ridges. The third layer is just a Ridge and a LightGBM combined with a linear average. I am a newbie with NNs (my OOF NN scores were very unstable) and hope to learn more NN strategies from other teams.</p>
<p>Thanks</p> |
Avito Demand Prediction Challenge | The way to the 18th place, just for rookies. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to all of you guys who put in great effort and learned a lot of new things from this competition. I had great fun and met new friends here - so lucky.</p>
<p>Now comes the solution of our team, which should be simple without any magic. The steps are:</p>
<h2>Best single model part (LGB):</h2>
<ol>
<li>Forked the public kernel and did some research, such as parameter tuning, which gave us a slightly better result than all the public kernels. </li>
<li>Doing more feature engineering such as keywords count, text hash, categorical feature hash. </li>
<li>We found the price and item_seq_number features interesting. We applied exponential and linear segmentation to them and added the segmented data as new categorical features (see the sketch at the end of this post).</li>
<li>We added a "char"-level analyzer to the TF-IDF process.</li>
<li>Target encoding.</li>
<li>Applying external data: we found the "income" and "population" features helpful. The location information did not help us.</li>
<li>Adding basic image features based on public kernels. </li>
<li>Final parameter tuning.
This model helped us score 0.2184 after a 5-fold average, but it needed 3 hours per fold on a 20-core server.</li>
</ol>
<h2>NN part:</h2>
<p>We used mostly similar features for the NN, which gave us a 0.2205 score on the LB. By simply stacking the LGB and NN models we got 0.2179, and after some blending we got 0.2176 on the LB.</p>
<h2>Team up part:</h2>
<p>It was so lucky for us that @Yuki.O joined us. He is a genius who did massive feature engineering, which created big diversity between our models. A simple stack gave us 0.2169 on the public LB.</p>
<h2>Stacking part:</h2>
<p>In the last week, we started to stack all the models. We created approx. 40 models with CatBoost, FM, MLP and LSTM. I think because it was limited by our best single model, the stack could only reach 0.2156 on the public LB. We fought till the last minutes though. :P</p>
<p>Ok, this is all about what we did in this competition, great fun! Many thanks to all of you guys <a href="/huiqin">@huiqin</a>, @Steeve Huang, <a href="/steinhafen">@steinhafen</a>, and <a href="/extremin">@extremin</a>. Special thanks to my new friend @Yuki.O, who put lots of effort into this competition and I learned a lot from him, not only the knowledge but also the professional attitude.</p>
<p>Thanks man, happy to see you again here :P</p>
<p>Thanks bud, congratulations to you and Shannon.</p>
<p>Thanks man. </p>
<p>Long story short, the idea is captured by this line:</p>
<pre><code>df["price_resampled"] = np.round(np.log1p(df["price"]*res_coef)).astype(np.int16)
</code></pre>
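<p>In full, the segmentation looked roughly like the sketch below; <code>res_coef</code>, the bin counts and the NaN handling here are illustrative rather than the exact choices we tuned.</p>
<pre><code>import numpy as np
import pandas as pd

prices = df['price'].fillna(0)     # assuming missing prices are imputed beforehand

# "exponential" segmentation: equal-width bins on the log scale (what the line above does)
df['price_exp_seg'] = np.round(np.log1p(prices * res_coef)).astype(np.int16)

# linear segmentation: equal-width bins on the raw price scale
df['price_lin_seg'] = pd.cut(prices, bins=100, labels=False).astype(np.int16)

# item_seq_number can be segmented the same way
df['seq_exp_seg'] = np.round(np.log1p(df['item_seq_number'].fillna(0))).astype(np.int16)
</code></pre>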
Avito Demand Prediction Challenge | second place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Demand Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah. One thing that we kept trying with absolutely no success was a time-series approach for this dataset. Finally I decided to forget about time altogether. </p>
<p>The best results have been achieved with two losses at the same time ['mse','binary_crossentropy']</p>
<p>I gave up after this: CV: [0.22300, 0.22390, 0.22325, 0.22388, 0.22394, 0.22237, 0.22409, 0.22482, 0.22386, 0.22370]
Private: 0.2312
Public: 0.2276
But... It brought us some diversity )</p>
<p>Oops. Something is wrong here. Sorry for this. </p>
<p>For this type of stat features we grouped the periods_* dataset by item_id and calculated the total number of days for each item. After that we calculated the average number of days <strong>by categorical features and their interactions</strong> using only the active_* dataset. We tried to predict values for each item in train and test, but with no luck. </p>
<p>Just a question of time. I'm too lazy to recalculate the first level each time ) As we got new models/features and the stack started to produce > 2000 trees, I switched to a new layer with a smaller number of features. </p>
<p>The autoencoder is not as strong as we wanted it to be ) It's not a critical part of our solution, but it worked well enough to see the difference in CV. We'll share the code later.</p>
<p>Yeah, the full set is train + test + active datasets without duplicates.</p>
<p>Sure, we will share our GitHub repository a little bit later.</p>
<p>First of all thanks Avito and Kaggle for the very interesting and challenging (especially for our hardware) competition. It was very pleasurable to compete here.</p>
<p><strong>Features</strong></p>
<p>Image Features: we extracted vectors (like in public kernels) from the pretrained VGG16, ImageNet, ResNet50 and MobileNet models.</p>
<p>Text Features: we trained Fasttext model on the full dataset and used it to generate vectors for title, description, title-city interaction, title-category interaction, stemmed title, stemmed descriptions. We used the same models for embeddings in our NN model. </p>
<p>Statistical Features: this was the most important type of numerical feature we had. We calculated average prices for each categorical feature and for second- and third-order interactions. We calculated the average number of days each advertisement had been active. Additionally, we calculated the same set of statistical features for each day during the train and test periods.</p>
<p>Unsupervised Learning: we extracted vectors from autoencoder on categorical features. We trained a user2vec model to represent a user_id as a composition of other features.</p>
<p><strong>Models</strong></p>
<p>Neural Networks: our best single model (0.2163 on public) is a neural network with different branches: FM-style embeddings for categorical features, numerical features, concatenated fasttext vectors, concatenated image vectors, a BiLSTM for words and a BiLSTM for characters with concatenated max/avg poolings plus attention (like we had in the Toxic competition), target-encoded features for categorical features and their second- and third-order interactions, and user2vec features. Some details: cyclic LR, Nadam optimizer, plenty of BNs, big dropouts. Almost every branch has a dense layer before the branches are concatenated.</p>
<p>LightGBM: surprisingly, fasttext vectors were helpful for the single LightGBM model as well (0.2188 on public). SVD over the TF-IDF transformation was a nice feature for our models. Numerical features and target-encoded categories did the rest of the work. </p>
<p>Bunch of weak models: FM_FTRL, Ridge, CatBoost.</p>
<p>Stack: we developed a six-layer (OMG) stack. Layers 1, 2, 3, 4 and 6 were made with LightGBM and all previous metafeatures. We used unique meta information for each layer, based on the weaknesses of the previous layer.</p>
<p><strong>Validation</strong></p>
<p>10 folds CV was the best choice for us. It was accurate and fast enough.</p>
<p>Thanks! Your team performance is amazing! </p>
<p>I meant... we didn't have enough RAM to use all the text and image vectors in one model, so I added a new text vector and a new image vector at each new layer of the stack as additional features, constantly trying to choose the most appropriate one. </p>
<p>User2vec is plain doc2vec on top of a categorical representation of each user. For example, one record for a user looks like:</p>
<pre><code>[TaggedDocument(words=['region_Белгородская область', 'city_Белгород', 'parent_category_name_Для бизнеса', 'category_name_Оборудование для бизнеса', 'param_1_Промышленное', 'user_type_Company', 'activation_week_day_0', 'image_is_null_True', 'price_is_null_False'], tags=['e91bc0476d26']),
 TaggedDocument(words=['region_Кемеровская область', 'city_Кемерово', 'parent_category_name_Хобби и отдых', 'category_name_Спорт и отдых', 'param_1_Ролики и скейтбординг', 'user_type_Private', 'activation_week_day_2', 'image_top_1_2653.0', 'image_is_null_False', 'price_is_null_False'], tags=['ae3863cb0a54']),
 TaggedDocument(words=['region_Новосибирская область', 'city_Новосибирск', 'parent_category_name_Личные вещи', 'category_name_Одежда, обувь, аксессуары', 'param_1_Женская одежда', 'param_2_Верхняя одежда', 'param_3_44–46 (M)', 'user_type_Private', 'activation_week_day_3', 'activation_week_day_5', 'image_top_1_645.0', 'image_is_null_False', 'price_is_null_False'], tags=['9dd7ff9cf683'])]
</code></pre>
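<p>A minimal gensim sketch of how such a user2vec model can be trained on records like the ones above (the hyperparameters here are illustrative):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec

# docs: a list of TaggedDocument objects, one per record, as in the examples above
model = Doc2Vec(vector_size=32, window=5, min_count=1, dm=0, epochs=20, workers=4)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

# the learned vectors become features keyed by the tag (user_id)
user_vec = model.dv['e91bc0476d26']    # gensim >= 4.0 (model.docvecs in older versions)
</code></pre>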
Avito Duplicate Ads Detection | 2nd Place Solution: TheQuants | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Duplicate Ads Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@Ed53 @ololo @Gerard</p>
<p>I put the code and a short explanation up in a GitHub repository:</p>
<p><a href="https://github.com/mxbi/ftim">https://github.com/mxbi/ftim</a></p>
<p>- Mikel</p>
<p>[quote=eagle4;127511]</p>
<p>@ anokas / The Quants:</p>
<p>Can you please explain quickly how you use BRICK keypoint matching ? </p>
<p>What is something like: for every image in itemID_1, find number of keypoint matchers in every image in itemID_2 and use the calculated number as a feature ?</p>
<p>[/quote]</p>
<p>First, the keypoints/descriptors were extracted from every image. And then for every pair of images (x, y):</p>
<ol>
<li>For every keypoint in x, find closest keypoint in y (brute-force)</li>
<li>Measure hamming distances between every selected keypoint pair</li>
<li>Return mean & median hamming distance of keypoint pairs to get values for those pairs of images</li>
</ol>
<p>Then for each row:</p>
<ol>
<li>Perform the above operation on every pair of images to get an array of medians and means</li>
<li>Return minimum mean/median</li>
<li>Return proportion of pairs where mean/median where distance was under N bits (N=30,80,100)</li>
<li>Return (<code>min(len(images_array_x), len(images_array_y))/2</code>)th element in array of means/medians (essentially a median adjusted to the smaller set of images)</li>
</ol>
<p>Hopefully that makes a little bit of sense - there are means and medians everywhere!</p>
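<p>For concreteness, here is a hedged OpenCV sketch of the per-image-pair computation (illustrative rather than our original code; parameters are arbitrary):</p>
<pre><code>import cv2
import numpy as np

brisk = cv2.BRISK_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING)          # brute-force matcher for binary descriptors

def pair_stats(path_x, path_y):
    img_x = cv2.imread(path_x, cv2.IMREAD_GRAYSCALE)
    img_y = cv2.imread(path_y, cv2.IMREAD_GRAYSCALE)
    _, des_x = brisk.detectAndCompute(img_x, None)
    _, des_y = brisk.detectAndCompute(img_y, None)
    if des_x is None or des_y is None:
        return None, None
    matches = bf.match(des_x, des_y)          # closest keypoint in y for each keypoint in x
    if len(matches) == 0:
        return None, None
    dists = np.array([m.distance for m in matches])
    return dists.mean(), np.median(dists)

# row-level features are then min / adjusted-median / share-below-N statistics
# over the means and medians of all image pairs of the two ads
</code></pre>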
<p>[quote=Ed53;127604]</p>
<p>@anokas <br>
Congratulations on great result!</p>
<p>Could you please provide more details about discarding overfitting features? This is one technique which is generally useful and it would be beneficial to many of us.</p>
<p>[/quote]</p>
<p>Thanks for reminding me. I forgot to post it earlier as it's stuck on another machine - I'll look for it now!</p>
<p>Hi everyone!</p>
<p>Here's a description of our team's efforts for this competition. I think this competition was very much driven by feature engineering, with meta-modelling more as a finishing touch.</p>
<p><strong>Our Strategy</strong></p>
<ul>
<li>Understanding the Data, developing features until team merger deadline, metamodelling in the last week.</li>
<li>Merging early based on standing of the leaderboard at the time</li>
<li>Each individually building features to try and branch out as much as possible, with constant discussion about where to look next</li>
<li>Onboarding an experienced w̶i̶z̶a̶r̶d̶ Kaggler for the final stage. </li>
</ul>
<hr>
<p><strong>Data Cleaning:</strong></p>
<p>In order to clean the text, we applied stemming using the NLTK Snowball Stemmer, and removed stopwords/punctuation as well as transforming to lowercase.</p>
<p><strong>Validation Strategy:</strong></p>
<p>Initially, we were using a random validation set before switching to a set of non-overlapping items, where none of the items in the valset appeared in the train set. This performed somewhat better, however we had failed to notice that the training set was ordered based on time! We later noticed this and switched to using last 33% as a valset. This set correlated relatively well with the leaderboard until the last week, when we were doing meta-modelling and it fell apart - at a point where it would be too much work to switch to a better set. This hurt us a lot towards the end of the competition.</p>
<h2>Features:</h2>
<p>In order to pre-emptively find over-fitting features, we built a script that looks at the changes in the properties (histograms and split purity) of a feature over time, which allowed us to quickly (200ms/feature) identify overfitting features without having to run overnight xgboost jobs. If there is interest, I would be willing to put it on github. </p>
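<p>The script itself looks at histograms and split purity over time, but a simple check in the same spirit - not the actual code; <code>candidate_cols</code> (numeric feature columns) is an assumption - could compare a feature's distribution between the early and late parts of the time-ordered training set:</p>
<pre><code>import numpy as np
from scipy.stats import ks_2samp

def drift_score(feature):
    # compare the early third vs the late third of the (time-ordered) train set;
    # a large KS statistic flags a feature whose distribution shifts over time
    n = len(feature)
    early = feature.iloc[: n // 3].dropna()
    late = feature.iloc[-(n // 3):].dropna()
    stat, _ = ks_2samp(early, late)
    return stat

suspicious = sorted(((drift_score(train[c]), c) for c in candidate_cols), reverse=True)
</code></pre>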
<p>After removing overfitting features, our final feature space had 587 features in it. Here’s a summary:</p>
<p><strong>General:</strong></p>
<ul>
<li>CategoryID, parentCategoryID raw</li>
<li>CategoryID, parentCategoryID one-hot (except overfitting ones)</li>
<li>Price difference / mean</li>
<li>Generation3probability (output from model trained to detect <br>
generationmethod=3)</li>
</ul>
<p><strong>Location:</strong></p>
<ul>
<li>LocationID & RegionID raw</li>
<li>Total latitude/longtitude</li>
<li>SameMetro, samelocation, same region etc.</li>
<li>Distance from city centres (kalingrad, moscow, petersburg, krasnodar, makhachkala, murmansk, perm, omsk, khabarovsk, kluichi, norilsk)</li>
</ul>
<p>Gaussian noise was added to the location features to prevent overfitting to specific locations, whilst allowing xgboost to create its own regions.</p>
<p><strong>All Text:</strong></p>
<ul>
<li>Length / difference in length</li>
<li>nGrams Features (n = 1,2,3) for title and description (Both Words and Characters)
<ul><li>Count of Ngrams (#, Sum, Diff, Max, Min)</li>
<li>Count of Unique Ngrams</li>
<li>Ratio of Intersect Ngrams</li>
<li>Ratio of Unique Intersect Ngrams</li></ul></li>
<li>Distance Features:
<ul><li>Jaccard , Cosine, Levenshtein and Hamming Distance between the titles and descriptions</li></ul></li>
<li>Special Character Counting & Ratio Features:
<ul><li>Counting & Ratio features of Capital Letters in title and description</li>
<li>Counting & Ratio features of Special Letters (digits, punctuations, etc.) in title and description</li></ul></li>
<li>Similarity between sets of words/characters</li>
<li>Fuzzywuzzy/jellyfish distances</li>
<li>Number of overlapping sets of n words (n=1,2,3)</li>
<li>Matching moving windows of strings</li>
<li>Cross-matching columns (eg. title1 with description2)</li>
</ul>
<p><strong>Bag of words:</strong></p>
<p>For each of the text columns, we created a bag of words for both the intersection of words and the difference in words and encoded these in a sparse format resulting in ~80,000 columns each. We then used this to build Naive Bayes, SGD and similar models to be used as features.</p>
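<p>A hedged sketch of how such intersection/difference bags and a Naive Bayes meta-feature might be built (an illustration rather than our exact code; the column names <code>title_1</code>/<code>title_2</code>, <code>n_train</code> and <code>y</code> are assumptions):</p>
<pre><code>import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_predict

def word_sets(a, b):
    sa, sb = set(str(a).split()), set(str(b).split())
    return ' '.join(sa & sb), ' '.join(sa ^ sb)     # intersection and difference of words

inter, diff = zip(*(word_sets(a, b) for a, b in zip(df['title_1'], df['title_2'])))

X_bow = sp.hstack([CountVectorizer(min_df=5).fit_transform(inter),
                   CountVectorizer(min_df=5).fit_transform(diff)]).tocsr()

# out-of-fold probabilities become a meta feature for the main model
nb_meta = cross_val_predict(MultinomialNB(), X_bow[:n_train], y, cv=5,
                            method='predict_proba')[:, 1]
</code></pre>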
<p><strong>Price Features:</strong></p>
<ul>
<li>Price Ratio</li>
<li>Is both/one price NaN</li>
<li>Total Price</li>
</ul>
<p><strong>JSON Features:</strong></p>
<ul>
<li>Attribute Counting Features:
<ul><li>Sum, diff, max, min</li></ul></li>
<li>Count of Common Attributes Names</li>
<li>Count of Common Attributes Values</li>
<li>Weights of Evidence model on keys/values</li>
<li>XGBoost model on sparse encoded attributes</li>
</ul>
<p><strong>Image Features:</strong></p>
<ul>
<li># of Images in each Set</li>
<li>Difference Hashing of images</li>
<li>Hamming distance between each pair of images</li>
<li>Pairwise comparison of file size of each image</li>
<li>Image dimension Features:</li>
<li>Pairwise comparison of dimension of each image</li>
<li>BRISK keypoint/descriptor matching</li>
<li>Image histogram comparisons</li>
<li>Dominant colour analysis</li>
<li>Uniqueness of images (how many other items have the same images)</li>
<li>Difference in number of images</li>
</ul>
<p>I found a possible image metadata leak (the creation dates of the images were embedded in the zip files) but didn’t try to use these for features - gotta play fair after all!</p>
<p><strong>Clusters:</strong></p>
<p>We found clusters of rows by grouping rows which contain the same items (e.g. if row1 has items 123, 456 and row2 has items 456, 789, they are in the same cluster). We discovered that the size of these clusters was a very good feature (larger clusters were more likely to be non-duplicates), as well as the fact that all rows in a cluster always had the same generationMethod. Adding cluster-size features gave us a 0.003 to 0.004 improvement. It would be interesting to know if anyone else found these features :)</p>
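<p>Building these clusters amounts to finding connected components over the item pairs; here is a small illustrative sketch (not our production code; <code>pairs</code> stands for the ItemPairs dataframe):</p>
<pre><code>import networkx as nx

# link items that appear together in any row; connected components are the clusters
G = nx.Graph()
G.add_edges_from(zip(pairs['itemID_1'], pairs['itemID_2']))

cluster_size = {}
for comp in nx.connected_components(G):
    for item in comp:
        cluster_size[item] = len(comp)

pairs['cluster_size'] = pairs['itemID_1'].map(cluster_size)
</code></pre>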
<h3>Feature Graveyard:</h3>
<p>Overfitting was probably the biggest problem throughout the competition, and lots of features which did brilliantly in validation didn't do so well on the leaderboard. This is likely because the very powerful features learn to recognise specific products or sellers that do not appear in the test set. Hence, our feature graveyard:</p>
<p><strong>TF-IDF:</strong> This was something we tried very early into the competition, adapting our code from the Home Depot competition. Unfortunately, it overfitted very strongly, netting us 0.98 val-auc and only 0.89 on LB. We tried adding noise, reducing complexity etc. but in the end we gave up.</p>
<p><strong>Word2vec:</strong> We tried both training a model on our cleaned data and using the pretrained model posted in the forums. We tried using word-mover distance from our model as features, but they were rather weak (0.70AUC) so in the end we decided to drop these for simplicity. Using the pre-trained model did not help, as the authors used MyStem for stemming (which is not open-source) so we could not replicate their data cleaning. After doing some transformations on the pre-trained model to try and make it work with our stemming (we got it down to about 20% missing words), it scored the same as our custom word2vec model.</p>
<p><strong>Advanced cluster features:</strong> We tried to expand the gain from our cluster features in several ways. I found that taking the mean prediction for the cluster, as well as cluster_size * (1 - cluster_mean), provided excellent features in validation (50% of gain in xgb importance), however these overfit for reasons I still have to investigate. We also tried taking features such as the stdev of locations of items in a cluster, but these also overfitted. I suspect that Avito used slightly different methods for generating the test set.</p>
<p><strong>Grammar features:</strong> We tried building features to ‘fingerprint’ different types of sellers, such as usage of capital letters, special characters, newlines, punctuation etc. However while these helped a lot in CV, they overfitted on the leaderboard.</p>
<p><strong>Brand violations:</strong> We built some features based around words that could never appear together in duplicate listings. (For example, if one item wrote ‘iPhone 4s’ but the other one wrote ‘iPhone 5s’, they could not be duplicates). While they worked well at finding non-duplicates, there were just too few cases where these violations occurred to make a difference to the score.</p>
<h2>Meta-model:</h2>
<p>Using a meta-model improved our score by roughly 0.0015 versus our best XGBoost. In the end, a single model would have been enough to net us second place, but you can never be too careful!</p>
<p>Below is an overview of what our final meta looked like. Each generation is different set of features, with later generations having more features. Note that all scores are private LB scores:</p>
<p><img src="https://files.slack.com/files-pri/T18TSG1C3-F1QTWCZ5G/avito-meta.png?pub_secret=14957b8a3e" alt="Meta-model schematic" title></p>
<p>We also tried using Extra Trees, Random Forest, Adaboost & approximate kNN as base models, however these overfit to our validation-set and so they couldn’t be used in the final meta-model.</p>
<hr>
<p>If anyone has any questions about our solution, we would be happy to go into more depth!</p>
<p><strong>Finally, a great big thank you to my teammates Peter, Sonny and Marios for making TheQuants a brilliant success!</strong> We couldn’t have done it without every one of you.</p>
<p>And thanks to everyone else for making this such an enjoyable competition</p>
<p>- Mikel, <br>
TheQuants</p>
<p>[quote=myouness;127072]</p>
<p>Hello,</p>
<p>Would it be possible to get the neural network architecture you used and the hyper parameter choice ? </p>
<p>Thank you ! </p>
<p>[/quote]</p>
<p>Hi myouness,</p>
<p>Here is the 3 layer architecture we used (with slight variation):</p>
<pre><code>models = Sequential()
models.add(Dense(800, input_dim=input_dim, init='uniform'))
models.add(Activation('relu'))
models.add(Dropout(0.6))
models.add(BatchNormalization())
models.add(Dense(800, init='uniform'))
models.add(Activation('relu'))
models.add(Dropout(0.6))
models.add(BatchNormalization())
models.add(Dense(400, init='uniform'))
models.add(Activation('relu'))
models.add(Dropout(0.6))
models.add(Dense(output_dim, init='uniform'))
models.add(Activation('softmax'))
opt = optimizers.Adagrad(lr=0.01)
models.compile(loss='binary_crossentropy', optimizer=opt)
</code></pre>
<p>and here is the one-layer architecture:</p>
<pre><code>models = Sequential()
models.add(Dense(2000, input_dim=input_dim, init='uniform', W_regularizer=l2(0.00001)))
models.add(PReLU())
models.add(BatchNormalization())
models.add(Dropout(0.6))
models.add(Dense(output_dim, init='uniform'))
models.add(Activation('softmax'))
opt = optimizers.Adagrad(lr=0.01)
models.compile(loss='binary_crossentropy', optimizer=opt)
</code></pre>
<p>All our NNs were 10x bagged with different seeds, and were run for 150 epochs.</p>
<p>[quote=jzbdbeb;127149]</p>
<p>Excellent feature engineering and model ensemble! Thank you for your sharing anokas.</p>
<p>Could you give us an introduction of the importance of these features? Which features make your submission improve a lot? Maybe the importance output by XGB. Thanks a lot!</p>
<p>[/quote]</p>
<p>I will have to find an importance file later on, but I know that some of the most important features were:</p>
<ul>
<li>The cluster features</li>
<li>BRISK</li>
<li>dHash</li>
<li>Bag of words</li>
</ul>
<p>[quote=Snow Dog;127006]</p>
<p>Wait... parentCategID is not a 1:1 map with categoryID?</p>
<p>[/quote]</p>
<p>It is not a 1:1 map in the sense that they are not exactly the same. While parentCat can be constructed using categoryID, in order to model a parentCategoryID, XGBoost would need to make multiple splits (one for each underlying categoryID). Hence we provide it with the one-hot parent categories so that it can model these with only one split - it can be viewed almost as a set of pre-determined trees.</p> |
Avito Duplicate Ads Detection | Solution sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Avito Duplicate Ads Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I updated the solution description to include the link to a github repository with the code. </p>
<p>[quote=Run2;126780]</p>
<p>Great post Ololo. Thanks for sharing. Our features were similar - obviously a subset. Couple of queries</p>
<p>1) how similar misspellings are in the ads. What did you use to get this information ?</p>
<p>2) So I gather you used geolocation to get City and Zip code and Region ? Did you use a license for google maps api ?
If you already have location id and metro id as features - did this additional information help ? </p>
<p>Fantastic feature engineering work.</p>
<p>[/quote]</p>
<p>The answer to both of the questions is here: <a href="https://www.kaggle.com/c/avito-duplicate-ads-detection/forums/t/20759/external-data-thread/124803#post124803">https://www.kaggle.com/c/avito-duplicate-ads-detection/forums/t/20759/external-data-thread/124803#post124803</a></p>
<p>I set up a Nominatim instance with a copy of the OpenStreetMap of Russia on my laptop (the server was busy doing other stuff). Then I ran all (lat, lon) pairs through it; it took several days. But later I noticed that there are not so many distinct pairs, so it could be done in less than a day. </p>
<p>So here is my solution:</p>
<h3>Simple features</h3>
<ul>
<li>category id</li>
<li>number of images </li>
<li>price difference</li>
</ul>
<h3>Simple text features</h3>
<ul>
<li>number of Russian and English characters in the title and description and number of digits and non alphanumeric characters; their ratio to all the text</li>
<li>len of title and description</li>
<li>number of unique chars </li>
<li>cosine and jaccard of 2-3-4 ngrams on the char level</li>
<li>fuzzy string distances calculated with FuzzyWuzzy (<a href="https://github.com/seatgeek/fuzzywuzzy">https://github.com/seatgeek/fuzzywuzzy</a>)</li>
</ul>
<h3>Simple image features</h3>
<ul>
<li>stats (min, mean, max, std, skew, kurtosis) of each channel (R, G, B) as well as the average of all 3 channels </li>
<li>file size</li>
<li>number of geometry matches </li>
<li>number of exact matches (calculated by md5)</li>
</ul>
<h3>Simple Geo features</h3>
<ul>
<li>metro id, location id</li>
<li>distance between two locations</li>
</ul>
<p>With these features I was able to beat the avito benchmark </p>
<p>Other features that I included afterwards: </p>
<h3>Attributes</h3>
<ul>
<li>number of key matches, number of value matches, number of key-value pair matches</li>
<li>number of fields that both ads didn't fill</li>
<li>similarity of pairs in the tf-idf space, also svd of this space</li>
</ul>
<h3>Text features</h3>
<ul>
<li>jaccard and cosine only on digits and English tokens </li>
<li>if any of the ads have english chars in a russian word (some of the characters look the same, but have different codes)</li>
<li>tf, tf-idf and bm25 of title, description and all text</li>
<li>svd of the above </li>
<li>tf only on words that both ads have in common (in title, desc, all text), tf on words that only one of the ad has, svd of them</li>
<li>distances and similarities in word2vec and glove spaces </li>
<li>word2vec and glove similarity only on nouns</li>
<li>some variation of word's mover distance for both w2v and glove </li>
<li>how similar misspellings are in the ads </li>
</ul>
<p>I also tried to extract contact information (phones, emails, etc) but it didn't help much</p>
<h3>Image features</h3>
<ul>
<li>image hashes from the imagehash library and from the forums </li>
<li>phash from imagemagick computed on all individual channels </li>
<li>different similarities and distances of image histograms </li>
<li>centroids and image moment invariants computed with imagemagick</li>
<li>structural similarity of images</li>
<li>SIFT and keypoint matching </li>
</ul>
<h3>Geo features</h3>
<ul>
<li>city, region and zip code extracted from geolocation (same city, region, etc)</li>
<li>dbscan and kmeans clusters of geolocations</li>
</ul>
<h3>Meta features</h3>
<ul>
<li>I ran PCA and SVM on all the feature groups and used the outputs as meta features</li>
</ul>
<h3>Ensembling</h3>
<p>I computed too many features and it was not possible to fit them all into RAM (I have a 32gb/8-core machine), so I started ensembling pretty early. I randomly picked 100-150 features and ran XGBs or ETs on them. </p>
<p>I mostly trained ETs because I could do ~10 of them per day, while training XGB took 2-3 days. </p>
<p>My best model scored ~0.939 on the public LB (xgb with 2.5k trees). The best submission before merging with Abhishek was 0.94437 public / 0.94394 private which would place me at the 10th position. With Abhishek we reached the 5th. </p>
<h3>Code</h3>
<p>The code is available at <a href="https://github.com/alexeygrigorev/avito-duplicates-kaggle">https://github.com/alexeygrigorev/avito-duplicates-kaggle</a> </p>
<p>Thanks everybody, it was very fun! Looking forward to seeing you in the next competitions. </p>
<p>So now when the competition is over I'm very curious what others were doing. So let's discuss the solutions! </p>
<p>[quote=Run2;126799]
Can you tell a little more on histogram distances ? And MSE of what ? SSIM of images ? <br>
[/quote]</p>
<p>Image histogram (<a href="https://en.wikipedia.org/wiki/Image_histogram">https://en.wikipedia.org/wiki/Image_histogram</a>) is a way of representing an image as a vector, where dimensionality of this vector is the number of bins of the histogram. Once histograms are calculated, it's possible to use usual distances and similarities to see how similar two histograms are (e.g. euclidean worked fine for me)</p>
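<p>A small OpenCV sketch of what such histogram features can look like (parameters here are illustrative, not exactly what I used):</p>
<pre><code>import cv2
import numpy as np

def hist_features(path_a, path_b, bins=32):
    hists = []
    for path in (path_a, path_b):
        img = cv2.imread(path)
        # one histogram per colour channel, concatenated and normalised
        chans = [cv2.calcHist([img], [c], None, [bins], [0, 256]) for c in range(3)]
        v = np.concatenate(chans).ravel().astype('float32')
        hists.append(v / (v.sum() + 1e-9))
    euclid = float(np.linalg.norm(hists[0] - hists[1]))
    correl = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
    return euclid, correl
</code></pre>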
<p>SSIM is structural similarity (<a href="https://en.wikipedia.org/wiki/Structural_similarity">https://en.wikipedia.org/wiki/Structural_similarity</a>) - another way of measuring the similarity between two images </p>
<p>As of MSE, maybe @frist meant this: <a href="https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio">https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio</a>?</p>
<p>Good job ZFTurbo! Thanks for sharing the code! I plan to do the same a bit later. </p>
<p>[quote=frist;126788]
1. How did you get number of geometry matches?
2. Does anybody successfuly use the generationMethod and how?
[/quote]</p>
<p>By geometry I meant image size (width x height). I used imagemagick to extract that. Then I represented each ad as a "bag of geometries" and computed something similar to jaccard on them. </p>
<p>We tried predicting generationMethod, but in our case it overfit, so we didn't include it. </p>
<p>[quote=myouness;126782]
What is ET ? <br>
Did you try anything to reduce the performance score difference between local CV and Public LB ?</p>
<p>How did you combine your different models ?
[/quote]
ET = ExtraTrees from sklearn</p>
<p>Nope, we used 3-fold, and it was more or less good for the 2nd decimal, but somewhat random on the 3rd. So we mostly relied on LB and hoped that with this amount of data it shouldn't shake a lot (and it didn't) </p>
<p>To combine the models we used stacking</p>
<p>Alexander Vikulin, you're stressing too much about it. It's just a competition :-)</p> |
BCI Challenge @ NER 2015 | My code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BCI Challenge @ NER 2015
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello</p>
<p>I've added my code to github for my winning NER solution (4th in leader-board). When I have time I'll also include the cross validation code I used. I've also attached the model documentation to this post. Any comments are very welcome.</p>
<p>github.com/duncan-barrack/kaggle_BCI_challenge</p>
<p>Cheers,</p>
<p>Duncan</p>
<p>Hi Frank,</p>
<p>Yes, no problem although please bear in mind I'm far from an EEG expert. I'd be more than happy to answer any questions in this forum or, if you'd prefer, via the email address at the top of my report (attached in my original post).</p>
<p>Cheers,</p>
<p>Duncan</p> |
BCI Challenge @ NER 2015 | The Overfitting Avengers Solution & Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BCI Challenge @ NER 2015
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,</p>
<p>You can find our code and documentation on <a href="https://github.com/alexandrebarachant/bci-challenge-ner-2015">github</a>.</p>
<p>In short, we used special-form covariance matrices as features and tools from Riemannian geometry to manipulate them. A channel selection algorithm was involved, and the classification was achieved by an ElasticNet classifier. </p>
<p>The above pipeline was applied on a number of random subsets of subjects, and predictions are averaged across bagged models.</p>
<p>The complete explanation of our model is provided <a href="https://github.com/alexandrebarachant/bci-challenge-ner-2015/blob/master/README.md">here</a>.</p>
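<p>For readers who just want the flavour of the pipeline, here is a rough sketch using the pyRiemann package mentioned later in this thread. It is not the submitted code (which adds the channel selection and the bagging over random subject subsets); <code>X_train</code>/<code>X_test</code> are assumed to be epoched EEG arrays of shape (n_trials, n_channels, n_samples) with binary labels <code>y_train</code>.</p>
<pre><code>from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace

clf = make_pipeline(
    XdawnCovariances(nfilter=5, estimator='oas'),   # special-form ERP covariance matrices
    TangentSpace(metric='riemann'),                 # map SPD matrices to a Euclidean space
    LogisticRegression(penalty='elasticnet', solver='saga',
                       l1_ratio=0.5, C=1.0, max_iter=2000),
)
clf.fit(X_train, y_train)
p_error = clf.predict_proba(X_test)[:, 1]
</code></pre>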
<p>We made two different submissions. The first one does not make any use of the leakage information and satisfies an "online processing" constraint, which means that any trial performed by a subject can be classified without the need for future complementary data or information. The second model uses the leak, thus it is not online-compatible. The two models are built upon the same classification pipeline, but with parameters tuned independently to achieve the highest performance.</p>
<ul>
<li><span style="line-height: 1.4">The online model (no leak) got 0.85083 on Public LB and 0.84585 on Private LB</span></li>
<li><span style="line-height: 1.4">The leak model got </span><span style="line-height: 1.4">0.85145 on Public LB and 0.87224 on Private LB.</span></li>
</ul>
<p><span style="line-height: 1.4">Any comments are welcome!</span></p>
<p><span style="line-height: 1.4">Cheers,</span></p>
<p><span style="line-height: 1.4">The overfitting Avengers :)</span></p>
<p>Hi Mike,</p>
<p>these are very relevant questions.</p>
<p>1) There are multiple reasons we took this decision, but first let's go back to dimensionality reduction for Riemannian geometry (RG). Dealing with high-dimensional data is the main problem when using RG. The main reasons are:</p>
<ul>
<ul>
<li>Your matrices must be positive definite, and this is not always the case when the number of channels is high. You can still regularize, but it can degrade performance by affecting the structure of the matrix.</li>
<li>Tangent space mapping is a local approximation of the manifold. It supposes that all your matrices are scattered in a relatively small part of the manifold. When you increase the dimension, you decrease the density of your data (or increase the sparsity), and that results in a bad tangent space approximation.</li>
<li>Everything becomes more and more computationally expensive. RG involves a lot of eigenvalue/eigenvector decompositions.</li>
</ul>
</ul>
<p>My experience with EEG/MEG data is that if you keep everything below 64 channels, you will have no trouble.</p>
<p>For the DecMeg Challenge, the dimensionality was simply too high to use any of my methods (including the channel selection) out of the box. So I had no choice and decided to use spatial filtering.</p>
<p>The problem with spatial filtering is that it is not very well suited to a cross-subject design. The basic hypothesis of source separation is that the training data and the test data are mixed by the same operator and are stationary. When you train on one subject and apply to another, you risk discarding relevant information in the process. Likewise, when you aggregate data from different subjects for training, your source separation becomes less effective because your algorithm tries to estimate a single unmixing matrix. </p>
<p>For DecMeg, I found a workaround. My classification was in two stages: the first was a cross-subject classifier using stacked generalization (so spatial filters were not trained on aggregated data), the second was an iterative retraining using the data of the test subject only (the case where spatial filtering is really efficient). As you see, the key is to know the advantages and the flaws of your methods in order to use them in the best possible conditions.</p>
<p>Now for this challenge, the dimensionality was manageable, so spatial filtering was not necessary. Still, it is a good thing to reduce dimensionality, so we applied the channel selection procedure to discard irrelevant channels. The number of channels we kept is not very tight (35); it makes everything run more smoothly without much risk of discarding relevant information. EEG has a low spatial resolution, so close channels are highly correlated and often carry the same information - discarding a few of them doesn't hurt.</p>
<p>So the next question is why we didn't apply the same retraining process for this challenge. Well, first because we wanted to keep things as 'online' as possible. The second reason is simply that the convergence was trickier. The method I used in DecMeg was based on the assumption that the classes were balanced (which was the case by design), in order to set the optimal threshold for the re-labeling between iterations. Here the class balance was unknown (if you don't use the leak).</p>
<p>2) Okay, now for the second question, I will let my teammate answer.</p>
<p>3) I think the best book you can find is "Methods of Information Geometry" by S. Amari. The basic idea is that each probability distribution is a point on a manifold, with the parameters of the distribution as coordinates. For example, a normal distribution of mean u and variance s is a point with coordinates (u, s) on the manifold. The trick is that information geometry makes the assumption that the natural metric for this manifold is the Fisher information. After a bunch of equation manipulation, it allows you to define a true distance between two distributions, and therefore unlocks a lot of issues (for example, the mean of a distribution is not the distribution with the mean parameters :) )</p>
<p>Alex</p>
<p>[quote=TF;66295]</p>
<p>Great job! Are you finding Riemannian Geometry to be an indispensable tool for signal processing and BCI-related work?</p>
<p>[/quote]</p>
<p>I would say definitely yes, but that would be a bit unfair to all the other people who have done amazing work in the BCI community.</p>
<p>At first, I developed these methods to get rid of the source separation step (i.e. spatial filtering) during online BCI experiments. While algorithms such as Common Spatial Patterns (CSP) or XDAWN are really powerful, they lack robustness and introduce extra parameters to tune. And if they are not used correctly, they can lead to a loss of spatial information.</p>
<p>By using covariance matrices as features, you make sure you get all the spatial information into your classifier. Then, tools from Riemannian geometry provide a convenient and robust way to manipulate them. More specifically, the tangent space mapping (which can be seen as a kernel operation) is a very powerful tool. It allows you to transform these matrices into Euclidean vectors without breaking their particular structure (symmetric and positive definite). Then, you can do whatever you want with these vectors.</p>
<p>Now, I have to say that you can probably get similar results with other methods, but thanks to a bunch of properties of the Riemannian metric, it is very well suited for cross-subject classification (which was the case for DecMeg and this challenge).</p>
<p>Covariance matrices are involved in a wide range of signal processing problems (well, let's say every time you have a multivariate problem), so Riemannian geometry has a lot of potential. It is also used in <a href="http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6514112&url=http:%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6514112">Radar signal processing</a> and <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.336.4231&rep=rep1&type=pdf">Image classification</a>.</p>
<p>Finally, I wanted to say that the Riemannian metric I use is a special case of something bigger called information geometry, a theory that studies the geometric properties of probability distributions. In my case, the distribution of interest is the multivariate normal distribution with zero mean (parametrized only by its covariance matrix), but such a metric can be derived for any other distribution (though the math behind it can be hardcore).</p>
<p>You're welcome :)</p>
<p>For me, it was a good opportunity to switch to Python, and I have to say that I will never go back to MATLAB.</p>
<p>Now that I have an equivalent of my <a href="https://github.com/alexandrebarachant/covariancetoolbox">covariance toolbox</a> in Python, I plan to release a package as soon as possible.</p>
<p>We took our time before publishing the code, because we wanted to make a deeper analysis of our solution. The final solution is a bit computationally intensive, but you can speed up the process by turning off the bagging and the electrode selection.</p>
<p>It turns out that by doing this you can achieve a score of ~0.843 (for the model without the leak) and it takes around 2 minutes to process.</p>
<p>Hi Mike,</p>
<p>Thank you for carrying this interesting discussion.</p>
<p>I could not agree more with you :)</p>
<p>Information geometry is gaining momentum in a lot of different domains. We can see a lot of it in computer vision, for example. I would explain that by the following facts:</p>
<ul>
<li>There is a strong theoretical background that makes the methods elegant.</li>
<li>It brings a new way to tackle problems, and that's refreshing.</li>
<li>The two points above make things easier to publish. Even if you don't show cutting-edge performance, the novelty of the approach is well appreciated by the reviewers.</li>
</ul>
<p>About BCI, I totally share your point of view. There is a lack of <a href="http://en.wikipedia.org/wiki/Reproducibility#Reproducible_research">reproducible research</a>. The problem comes not only from the implementation of the methods, but also from the environment they are applied in. Results are so bad and datasets are so small that even changing the order of the frequency filter you apply as a preprocessing step can significantly change the outcome of your classification.</p>
<p>In addition, parameters are usually over-optimized for a specific dataset, and when you try a new method on a new set of data, it requires a high amount of domain-specific knowledge to make it work properly. This partially explains the popularity of the 'old' methods like CSP. They do not give optimal results, but they are stable across a wide range of applications.</p>
<p>The other problem is that EEG data are considered medical data and are really hard to release publicly. You need consent to release from the subjects, and it is rarely obtained before the experiment. There is a lot of data just sitting on lab hard drives without any chance of being released even if you want to (I have tons of data like this).</p>
<p>What we really need is a platform where you can submit your code and your method will be blindly evaluated on a wide range of datasets (you can even add realistic stress tests, like adding noise to the data or to the labels, etc).</p>
<p>Finally, thanks for your interest in <a href="https://github.com/alexandrebarachant/pyRiemann">pyRiemann</a>. Your commits will be greatly appreciated :)</p>
<p>Best,</p>
<p>Alexandre</p>
<p>[quote=Sunu A;72858]</p>
<p>hi sir, is there anyway we can have your features(not the meta features),it would be great help.we need to examine how that works in our classifier,and the accuracy it could manage with your features..</p>
<p>thanking you in advance,</p>
<p>Sunu A</p>
<p>[/quote]</p>
<p>Hi,</p>
<p>The feature extraction we used includes numerous supervised steps, like Xdawn spatial filtering, special-form covariance estimation and channel selection. Giving you the raw features would lead to biased results.</p>
<p>However, we published the code with an easy mechanism to test your ideas. All of this is explained <a href="https://github.com/alexandrebarachant/bci-challenge-ner-2015#parameter-file">here</a>. All you have to do is build a python class that inherits from the sklearn base API (basically, two methods, fit() and predict() ) and then declare it in the parameter file.</p>
<p>This way, you will evaluate the performance of your classifier in the same scheme as ours, and you will get comparable results.</p>
<p>Alex</p>
<p>Hi Mike,</p>
<p>Thanks for looking into the decMeg code and pointing the issue of the metric. </p>
<ul>
<li><span style="line-height: 1.4">The Riemannian metric can be considered as a gold standard. It has numerous interesting properties. The most useful is the invariance by congruent transformation. Therefore, In a source separation point of view, the Riemannian distance in the sensor space is equivalent of the Riemannian distance in the source space. So under some circumstances, you can skip the source separation step and directly use the Riemannian distance without any loss of information.<br></span></li>
<li><span style="line-height: 1.4">Obviously, the Euclidean metric is a bad choice and lacks sensitivity, because it does not take into account the particular structure of the covariance matrices (SPD). It could give good results when data are clean and classes are really separable (for example in EMG signals), but I would not recommend using it.</span></li>
<li><span style="line-height: 1.4">The log-euclid metric is a good approximation of the Riemannian metric. Actually, the two metrics are equivalent if all the matrices commute in multiplication. This is the case for diagonal matrices. This is why you can improve results by whitening your signal before using the log-euclid metric. However, this metric does not have the property of invariance by congruent transformation, which makes the results sensitive to preprocessing steps like spatial filtering. On the other hand, this metric is less computationally expensive, so I use it when I want to tune parameters.</span></li>
<li><span style="line-height: 1.4">The kullback-leibler divergence is not a metric because it does not have the symmetry property. You can use the symmetrised version, but it is an artificial workaround. There is also no definition of the mean according to the kullback-leibler divergence. However, used with the Riemannian mean, it can give surprisingly good results. After all, the kullback leibler is also based on the Fisher information, and is actually a measure of the manifold curvature (the second derivative of the Riemannian metric)</span></li>
<li><span style="line-height: 1.4">Another interesting metric is the log-det metric (based on alpha divergence), and described <a href="http://www.sciencedirect.com/science/article/pii/S002437951100783X#">here</a>.</span></li>
</ul>
<p><span style="line-height: 1.4">If I had to rank the metrics by their efficiency <strong>on EEG data</strong>, I would do it like this:</span></p>
<p><span style="line-height: 1.4">Riemann > Log-det > Log-euclid > kullback > Euclidean</span></p>
<p><span style="line-height: 1.4">It may be different for other problems, as you said. The log-euclidean seems very popular in image processing and computer vision. </span></p>
<p>As a side note, I just released a python package, <a href="https://github.com/alexandrebarachant/pyRiemann">pyRiemann</a>. It is an extended version of the code developed for this challenge, and it supports the different metrics I mention in this post (except the Kullback, but I accept pull requests :) )</p>
BNP Paribas Cardif Claims Management | #1 Dexter's Lab winning solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BNP Paribas Cardif Claims Management
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We figured that all the other teams just did a brute-force approximation of our findings, probably using many models with v22;v40 and v22;v50 interactions :)</p>
<p>[quote=happydemiologist;115573]</p>
<p>Congratulations, raddar. You both really got into the data. Could you please further simplify the example for laypersons like myself. I see that variable X is sorted (?), but the median is now calculated for each point in the variable?</p>
<p>[/quote]</p>
<p>maybe this simple script will help:</p>
<pre><code>v50 = sort(train$v50)
v50_diff = v50[2:length(v50)] - v50[1:(length(v50)-1)]
table(round(v50_diff, 5))
</code></pre>
<p>Now if you look at the table, you will see that the gap values are all multiples of 0.00147, which is a strong indication of v50 being an integer variable (scaled by a constant denominator).</p>
<p>My teammate Song works in insurance, and I tackled the data from a banking perspective (I actually thought we were building a model for a bank for a long time, lol). So we convinced ourselves that these were dates, because from our experience it seemed logical that a person whose claim was approved at least once would indicate a "good" customer, and a person who is constantly rejected (target=0) would indicate persistent fraudulent behaviour.</p>
<p>By the way, all the data has another interesting property I have not shared :) When you look at, e.g., the {v22,v40-v50,v56} level and look at all the other variables, you will notice that they are all constant in time :) So in the end we concluded that all variables are some kind of filled-in application forms; if you calculate hashes on the numericals, there are only 30k unique hashes, which kind of supports our hypothesis. That is why people saw some interesting behaviour, where removing many variables would help their score :)</p>
<p>By the way, my feature exploration ended up being done in Excel, so I cannot provide any good R code. But something like plotting a variable's ecdf for the target and non-target levels in the same plot is a good thing to have :)</p>
<p>Thank you all,</p>
<p>I see everyone is interested in the denominators. Actually, the way I came up with them was posted in the forums weeks ago, and someone even did a visualization of it :)</p>
<p>It is actually quite simple. Let's say you have variable X; you sort X and calculate rank values for each X value, then take X[rank] - X[rank-1] and plot the distribution of that. You will see a very interesting distribution of several spikes.</p>
<p>To put it simply, let's say you have 0.00049, 0.00049, ..., 0.00049, 0.00098, 0.00098, ..., 0.00147, 0.00147. See the pattern :) Then it just comes down to a simple optim function searching for the median values between the "jumping points" of the (X[rank] - X[rank-1]) distribution.</p>
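<p>In Python terms, the whole trick fits in a few lines (a rough sketch of the idea rather than the code we actually used):</p>
<pre><code>import numpy as np

def estimate_step(x):
    # gaps between consecutive sorted values of an "integer divided by a constant"
    # variable are multiples of that constant; the smallest positive gap is a first
    # guess, refined by minimising the rounding error
    v = np.sort(x[~np.isnan(x)])
    gaps = np.diff(v)
    gaps = gaps[gaps > 1e-6]
    step = gaps.min()
    candidates = step * np.linspace(0.95, 1.05, 201)
    errors = [np.mean(np.abs(v / c - np.round(v / c))) for c in candidates]
    return candidates[int(np.argmin(errors))]

# estimate_step(train['v50'].values) should land near the 0.00147 step mentioned above
</code></pre>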
<p>v40, v10, v50, v18, v28 were the variables where this separation is present. There are some more variables which are a mixture of this and a Gaussian density (meaning there are many integers, but also fractional numbers in that variable) - however I did not dig into it.</p>
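<p>To make the integer-denominator check concrete, here is a minimal Python sketch of the same idea (the file and column names are placeholders, and the brute-force denominator search is illustrative rather than the exact optim call used):</p>
<pre><code>import numpy as np
import pandas as pd

# Load the training data (file/column names are placeholders).
train = pd.read_csv("train.csv")

# Sort the variable and look at the gaps between consecutive values.
v = np.sort(train["v50"].dropna().values)
gaps = np.diff(v)
gaps = gaps[gaps > 0]  # ignore exact duplicates

# If the variable is really integer * constant, the positive gaps cluster
# around multiples of that constant (0.00147, 0.00294, ... in the example above).
print(pd.Series(np.round(gaps, 5)).value_counts().head(10))

# Crude search for the denominator: pick the candidate under which the values
# are closest to whole numbers.
candidates = np.linspace(0.5 * gaps.min(), 1.5 * gaps.min(), 5001)
errors = [np.abs(v / c - np.round(v / c)).mean() for c in candidates]
denominator = candidates[int(np.argmin(errors))]
print("estimated denominator:", denominator)
print(np.round(v / denominator)[:10])  # back to (approximately) integer values
</code></pre>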
<p>~350 features.</p>
<p>So glad we won this one! After having such good data findings it would have been a shame not to win :)</p>
<p>It is clear that this competition was not about ensembling but feature engineering, and we are happy that we found out the major key to winning this competition early on.
Therefore, we had lots of time to exploit our findings to the max and maintain the top position ever since.
The trick was very simple, and I even shared a hint about it early in the competition (without understanding its importance at that time):
<a href="https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/forums/t/19240/analysis-of-duplicate-variables-correlated-variables-large-post/110095#post110095">https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/forums/t/19240/analysis-of-duplicate-variables-correlated-variables-large-post/110095#post110095</a></p>
<p>We figured, that data unanonymizing was important and in the end we were able to figure out some of variables' meaning:</p>
<p>v40 - date of observation</p>
<p>v40-v50 - contract start date</p>
<p>v50 - days since contract startdate (in other words, claimdate)</p>
<p>v10 - contract term in months</p>
<p>and few others for remaining days of the contract:</p>
<p>v12=v10*(365.25)/12-v50 </p>
<p>v34=v10*(365.25)/12-v40</p>
<p>Now add this knowledge to the assumption that v22 is a customer and v56/v113 is a product type,
and you will see very obvious patterns - one contract may have several claims during its lifespan (R: group_by(v22,v40-v50) %>% arrange(v50)):</p>
<p>[v22,v40-v50] [v50] [target]</p>
<p>[ZLS 12840] 197 1</p>
<p>[ZLS 12840] 962 1</p>
<p>[ZLS 12840] 1437 1</p>
<p>[ZLS 12840] 1498 NA</p>
<p>[ZLS 12840] 1501 1</p>
<p>[ZLS 12840] 1726 1</p>
<p>[ZLS 12840] 1788 NA</p>
<p>[ZLS 12840] 1882 NA</p>
<p>[ZLS 12840] 2418 1</p>
<p>[ZLS 12840] 3352 NA</p>
<p>[ZLS 12840] 3370 NA</p>
<p>[ZLS 12840] 3909 NA</p>
<p>So we could drop the categorical {v22} level i.i.d. assumption, and move to panel data structured categorical levels such as:
{v22,v40-v50} => sort(v50), {v22,v40-v50,v56} => sort(v50), {v22} => sort(v40), etc; </p>
<p>Just looking at the data this way, it was obvious that the target value is very persistent for each {v22,v40-v50} level -
i.e. if a time series starts with target=1, it usually ends with target=1; if it starts with target=0, it often ends with target=0 too; claims whose target shifts 0->1 are quite rare (1->0 only a few cases).
So in the end it all came down to correctly imputing the target sequence for each level - which we did using lag(target) and lead(target) variables.
To our surprise, these variables were not overfitting the LB, and in the end we made so many lag/lead variables that it was possible to drop the v22 column,
and we didn't even use the v22 mean-target techniques discussed in the forums, which probably many top teams did anyway.</p>
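<p>For illustration, a minimal pandas sketch of these lag/lead target features (the grouping keys follow the description above; out-of-fold handling is omitted, and only the train file is loaded here for brevity):</p>
<pre><code>import pandas as pd

# In practice train and test would be concatenated so that test rows also get
# these features; here only train is loaded to keep the sketch short.
df = pd.read_csv("train.csv")

# Panel key: same customer (v22) and same contract start date (v40 - v50),
# ordered by days since contract start (v50).
df["contract_start"] = df["v40"] - df["v50"]
df = df.sort_values(["v22", "contract_start", "v50"])

grp = df.groupby(["v22", "contract_start"])["target"]
for k in (1, 2):
    df[f"lag{k}_target"] = grp.shift(k)    # previous claim outcomes in the contract
    df[f"lead{k}_target"] = grp.shift(-k)  # future claim outcomes in the contract
</code></pre>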
<p>Our best single xgboost model achieved 0.42347 public LB (0.42193 private), and the model itself takes only about half an hour to train on a 12-thread cpu.
To seal the deal, in the last 2 weeks we experimented with other techniques and built a few stacked models with tens of models of different methodologies, such as nnets,
linear SVMs, elastic nets, xgboosts with count:poisson, rank:pairwise, etc.
I personally enjoyed working with regularized greedy forests, which were almost on par with xgboost.</p>
<p>The role of ensembling was not as important as in other competitions, though we did try several diverse models:</p>
<p>- models with tsne features built from the continuous variables</p>
<p>- models built per level of a certain variable (for example, var5)</p>
<p>- knn models on likelihood-encoded variables</p>
<p>To sum up:
Like most of you, we were stuck at 0.45x for a long time, and it took 3-4 weeks of dedicated time looking at and exploring the data in Excel to end up with the panel time-series structure, which was the key to success.
Having such knowledge about the data could have gotten you to top10 without too much effort.
In the end we created a useless model for Bnp, as our lead(target) variables use information from the future:)</p>
<p>And for guys who wants to succeed - </p>
<p>a) when starting a new competition, create a simple xgboost model and use its feature importance to get a nice start on discovering important features - then stop making models and work with the data (a minimal sketch follows these tips).</p>
<p>b) do not underestimate the power of knowing what data you are working with</p>
<p>c) dedicate some time for data exploration and try to understand how people visualize the data in the forums</p>
<p>d) look for data patterns, especially if it has many categorical variables</p>
<p>e) spend some time reading forums of past competitions, especially winning materials</p>
<p>f) keep eyes on overfitting</p>
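<p>As a minimal sketch of the quick start mentioned in (a), assuming a pandas dataframe with an ID column and a binary target (the preprocessing is deliberately naive and the hyperparameters are placeholders):</p>
<pre><code>import pandas as pd
import xgboost as xgb

train = pd.read_csv("train.csv")
y = train["target"]
X = train.drop(columns=["ID", "target"])

# Naive categorical encoding just to get a first model out the door.
for col in X.select_dtypes(include="object").columns:
    X[col] = X[col].astype("category").cat.codes

model = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1,
                          subsample=0.9, colsample_bytree=0.9)
model.fit(X, y)

# Look at which features the model actually uses before doing anything fancier.
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(20))
</code></pre>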
<p>P.S. special thanks to this blog post <a href="http://blog.kaggle.com/2016/03/17/airbnb-new-user-bookings-winners-interview-2nd-place-keiichi-kuroyanagi-keiku/">http://blog.kaggle.com/2016/03/17/airbnb-new-user-bookings-winners-interview-2nd-place-keiichi-kuroyanagi-keiku/</a>
"I found that the out-of-fold CV predictions of categorized lag features were very important. As far as I saw in the forum, many of the participants may have not created these features."
This shook our beliefs and assumptions about the data being i.i.d. within {v22} levels, and it took us only 1 day to utilize this and claim the top-1 rank for the rest of the competition :)</p>
<p>P.S.S huge thanks to Laurae for the input to data exploration. I believe it made this competition more interesting for most of us.</p>
<p>P.S.S. the team name came up when I decided to join forces with Davut, and the cartoon was playing in the background - for both of us it was one of our favorite childhood cartoons :)</p>
<p>var114 has a strong correlation with var40 (which is the observation date), so var114 has to have something to do with dates.</p>
<p>One of my hypotheses is that var114 is a datetime and var40 is a date. The time component would cause the correlation to not be perfectly linear. </p>
BNP Paribas Cardif Claims Management | #3 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BNP Paribas Cardif Claims Management
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I feel keen to mention that our team's name is inspired by:</p>
<p><a href="http://www.explodingkittens.com/">http://www.explodingkittens.com/</a></p>
<p>(A card game for those who like both kittens and explosions!)</p>
<p><img src="http://www.explodingkittens.com/img/defuse.png" alt="Exploding Kittens" title></p>
<p>[quote=Shubin;115688]</p>
<p>congrats! Could you elaborate more on nn structure? besides, you only apply nn on the second level instead of the first? if not, what would be the strcuture look like for the first level nn?</p>
<p>[/quote]</p>
<p>Thank you :). I don't feel I am a great nn-tuner-structure-master and there are many different structures that could work, I guess. </p>
<p>I have tried nn in both the 1st and 2nd levels, although in the 2nd level they were much more useful. </p>
<p>For the second level I used </p>
<pre><code># imports needed to run this snippet (Keras 1.x-era API, as in the original)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, PReLU, BatchNormalization
from keras.regularizers import l2
from keras import optimizers

# input_dim, output_dim and lrs are defined elsewhere
# (number of input features, number of classes, learning rate)
models = Sequential()
models.add(Dense(100, input_dim=input_dim, init='uniform', W_regularizer=l2(0.00001)))
models.add(PReLU())
models.add(BatchNormalization())
models.add(Dropout(0.7))
models.add(Dense(output_dim, init='uniform'))
models.add(Activation('softmax'))
opt = optimizers.Adagrad(lr=lrs)
models.compile(loss='binary_crossentropy', optimizer=opt)
</code></pre>
<p>epocs = 50</p>
<p>and for 1st level</p>
<pre><code>models = Sequential()
models.add(Dense(150, input_dim=input_dim, init='uniform', W_regularizer=l2(0.00001), activation='relu'))
#models.add(PReLU())
models.add(BatchNormalization())
models.add(Dropout(0.8))
models.add(Dense(output_dim, init='uniform'))
models.add(Activation('softmax'))
opt = optimizers.Adagrad(lr=lrs)
models.compile(loss='binary_crossentropy', optimizer=opt)
</code></pre>
<p>also epoc = 50.</p>
<p>Both in Keras.</p>
<p>Hope it helps. I think other people did much better job- as said I am not good with nns. </p>
<p>[quote=MadScientist;115676]</p>
<p>First of all congrats. Well done!</p>
<p>[/quote]</p>
<p>Thank u!</p>
<p>[quote=MadScientist;115676]</p>
<p>Then of course your reward is to be peppered with questions! =)</p>
<p>[/quote]</p>
<p>Thank u x2!</p>
<p>[quote=MadScientist;115676]</p>
<p>I'm not sure I follow what you are saying--could you expand upon this?</p>
<p>[/quote]</p>
<p>For example max(v50), mean(v50), min(v50)
grouped by v22.
We did that for many categorical variables, and also grouped by 2 or even 3 categoricals.
P.S. when I say "we", I mean Stans - haha.</p>
<p>[quote=MadScientist;115676]</p>
<p>.Am I understsanding that for 1 "model" you leave all category variables as encoded integers, then for a second model you one hot encode. </p>
<p>[/quote]</p>
<p>Yes, that's it. Also, trying a different algorithm on the same data input is also considered a different model (at least in my slang!) </p>
<p>I think our solution may sound too plain compared to what the top 2 teams did :) .</p>
<p>We spent quite some time reading forum threads and applied many of things discussed with some success like :</p>
<p>1) rounding features to remove noise
2) finding the correct denominator of features to get them back to the initial integer state
3) converting categorical variables to likelihoods</p>
<p>They all helped a little bit, but the most important thing for us (and thanks to clobber for finding it!) was interacting the categorical variables together, especially v22 with all the other categorical variables in a 2-way and even 3-way manner. </p>
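<p>A minimal pandas sketch of such 2-way / 3-way categorical interactions (column names follow the post; likelihood encoding or counts can then be applied on top of the new columns):</p>
<pre><code>import pandas as pd
from itertools import combinations

train = pd.read_csv("train.csv")
cat_cols = [c for c in train.columns if train[c].dtype == "object"]
others = [c for c in cat_cols if c != "v22"]

# 2-way interactions of v22 with every other categorical variable.
for col in others:
    train[f"v22_x_{col}"] = train["v22"].astype(str) + "_" + train[col].astype(str)

# A handful of 3-way interactions (the number of combinations grows quickly).
for a, b in list(combinations(others, 2))[:10]:
    train[f"v22_x_{a}_x_{b}"] = (train["v22"].astype(str) + "_" +
                                 train[a].astype(str) + "_" + train[b].astype(str))
</code></pre>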
<p>Another thing that helped was rounding the numerical features strongly (2 decimals) and creating interactions (as categorical "numericalrounded_categorical") with some of the categorical ones. </p>
<p>Stans was kind of our feature-factory machine! - apart from likelihood and other target-based feature transformations on categorical data, he did a lot of extra work creating group-by type aggregate features (like averages) of v50 and other variables (up to 3-way). Although we never understood exactly why this works, Stans was able to capture some very useful patterns here.</p>
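<p>A minimal sketch of those group-by aggregate features, assuming the same dataframe (the keys and the aggregated column are illustrative):</p>
<pre><code>import pandas as pd

train = pd.read_csv("train.csv")

def add_group_stats(df, keys, col):
    # Mean / min / max of a numeric column within each categorical group,
    # merged back onto every row of that group.
    name = f"{col}_by_{'_'.join(keys)}"
    stats = (df.groupby(keys)[col]
               .agg(["mean", "min", "max"])
               .rename(columns=lambda s: f"{name}_{s}")
               .reset_index())
    return df.merge(stats, on=keys, how="left")

train = add_group_stats(train, ["v22"], "v50")                 # 1-way key
train = add_group_stats(train, ["v22", "v56"], "v50")          # 2-way key
train = add_group_stats(train, ["v22", "v56", "v113"], "v50")  # 3-way key
</code></pre>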
<p>Our best single xgboost would have finished 4th (0.4242 in private) and consisted of more than 7.5K features (mostly interactions).</p>
<p>We also produced many different base-level models without much feature engineering, just different input formats (like loading all categorical variables as counts, or as one-hot encodings, etc.).</p>
<p>Our ensemble consisted of 223 models. Faron did a lot of work removing noise and discarding many of these in order to get to our best score with a lvl2 ensemble of geomean weights between an ET, 2 NNs and 2 XGB models. </p>
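<p>For reference, a minimal numpy sketch of a weighted geometric-mean blend (the model names, weights and probabilities below are placeholders, not our actual values):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Placeholder probabilities standing in for the five level-2 models (ET, 2 NNs, 2 XGBs).
preds = {name: rng.uniform(0.01, 0.99, n) for name in ["et", "nn1", "nn2", "xgb1", "xgb2"]}
weights = {"et": 0.15, "nn1": 0.20, "nn2": 0.20, "xgb1": 0.25, "xgb2": 0.20}

# Weighted geometric mean: exp( sum_i w_i * log(p_i) / sum_i w_i ).
log_blend = sum(w * np.log(np.clip(preds[k], 1e-6, 1 - 1e-6)) for k, w in weights.items())
blend = np.exp(log_blend / sum(weights.values()))
print(blend[:5])
</code></pre>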
<p>We burnt many cores in case you might have questions about it!</p> |
BNP Paribas Cardif Claims Management | escalated chi: 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BNP Paribas Cardif Claims Management
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was reading raddar's post about their advanced data exploration and inference about the anonymous feature sets - I must admit you guys did a fantastic job there. In contrast, we did not invest much time in EDA, as we found that likelihood encoding of interactions improved the result dramatically in the early stage of the competition. Since then, I mostly focused on creating different representations of the interactions at different levels, and Daniel took care of the stacking and ensembling part. Jeremy (idle_speculation) also had a fantastic set of features which helped us in the later stage of the competition. The image below illustrates our workflow in a nutshell:</p>
<p><img src="https://i.imgsafe.org/91d5795.png" alt="enter image description here" title></p>
<p>We also tried a lot of other things but didn't have much time to finish training on them - RGF is especially worth mentioning. As already noted in the forum, performance-wise it was almost on par with XGB but very slow.</p>
<p>I extend my greetings to the other winners and thank Kaggle and BNP Paribas for hosting such a nice competition. I would also like to thank my teammates Daniel and Jeremy for their valuable contributions.</p>
<p>Last but not least, Laurae - keep rocking :)</p>
Benetech - Making Graphs Accessible | #13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have used it several times in the past and it showed good results. The architecture does not use batch norm, which has some limitations. However, if you have time, you can try various model types in the timm package and see what works with your data.</p>
<p>Thanks to Kaggle and the host for organizing the competition. I want to do a quick write-up of my simple solution. Everything is basic; all steps can be done using Kaggle and Google Colab (Pro) computing resources.</p>
<p>Final Submission notebook: <a href="https://www.kaggle.com/code/namgalielei/benetech-eval-and-infer-v12/notebook" target="_blank">https://www.kaggle.com/code/namgalielei/benetech-eval-and-infer-v12/notebook</a></p>
<h2>1. Overview</h2>
<p>Chart type classification + Plot BBox prediction -> OCR -> Chart data instance detection / segmentation (line) -> Data association </p>
<h2>2. Modules</h2>
<h3>a. Chart type classification + Plot BBox prediction:</h3>
<ul>
<li>Segmentation Models PyTorch Unet with an eca_nfnet_l1 backbone and an auxiliary classification head (a minimal sketch follows this list). I do binary segmentation for plot bbox prediction, and softmax classification for chart type classification. </li>
<li>Ensemble 3 folds.<br>
Training code: <a href="https://colab.research.google.com/drive/1WdlqUr1ONcntWfaIksDp6Z6BD4MXDJg4?usp=sharing" target="_blank">https://colab.research.google.com/drive/1WdlqUr1ONcntWfaIksDp6Z6BD4MXDJg4?usp=sharing</a></li>
</ul>
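<p>A minimal sketch of such a Unet with an auxiliary classification head, assuming the segmentation_models_pytorch package with a timm encoder (the exact backbone string, number of chart classes and head settings here are assumptions):</p>
<pre><code>import torch
import segmentation_models_pytorch as smp

# Unet for the binary plot-area mask, plus an auxiliary classification head
# on the encoder features for the chart types (5 classes assumed here).
model = smp.Unet(
    encoder_name="tu-eca_nfnet_l1",   # timm encoder via the "tu-" prefix
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,                        # binary plot-bbox mask
    aux_params={"classes": 5, "dropout": 0.2},
)

x = torch.randn(2, 3, 512, 512)
mask_logits, chart_type_logits = model(x)  # mask logits and (2, 5) chart-type logits
</code></pre>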
<h3>b. OCR:</h3>
<ul>
<li>Based on Paddle ppOCRv3</li>
<li>Text detection: Fine-tune the lightweight mobilenetv3-dbnet on the competition's data. The target is at the word-group level. The ground truth for this is provided in the json file.</li>
<li>Text recognition: Use the pretrained model. </li>
<li>Modified some logic: First, the default post-processing of Paddle TextDet (mask to quad) may result in some polygons with only 3 points (a triangle instead of a quadrilateral). I fixed this by finding the minimum bounding rotated rectangle. Second, when perspective-transforming a cropped text, it might be rotated, so the text recognition is not able to read it well. I predict twice (no rotation and clockwise 90deg rotation) and take the one with the higher confidence score. </li>
</ul>
<h3>c. Chart data detection:</h3>
<ul>
<li>VBar, HBar, Scatter and Dot detector: Mask RCNN Resnet50 model. Training code: <a href="https://colab.research.google.com/drive/14X97mTwAU9kxS__xaehDU7SQbe_2ivP0?usp=sharing" target="_blank">https://colab.research.google.com/drive/14X97mTwAU9kxS__xaehDU7SQbe_2ivP0?usp=sharing</a></li>
<li>Line detector: a single Unet++ model with backbone efficientnet b5. Training code: <a href="https://drive.google.com/file/d/1OFoogWWWP3vHAHe9g0UsY0cCfiHKOeMp/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1OFoogWWWP3vHAHe9g0UsY0cCfiHKOeMp/view?usp=sharing</a><br>
I annotated some images and train these models.</li>
</ul>
<h3>d. Data association:</h3>
<ul>
<li>Some rule-based logic to associate the chart data (bar, scatter, dot) with its corresponding ticks (a small sketch follows this list). </li>
<li>Project the pixel coordinates of ticks and chart data onto the axes-relative scale.</li>
</ul>
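<p>A small sketch of the nearest-tick association step (all pixel coordinates and tick texts below are placeholders):</p>
<pre><code>import numpy as np

# Pixel x-centers of detected bars and of x-tick labels (with their OCR'd text).
bar_centers  = np.array([112.0, 187.0, 265.0, 340.0])
tick_centers = np.array([110.0, 185.0, 260.0, 338.0])
tick_texts   = ["2019", "2020", "2021", "2022"]

# Associate each detected bar with its nearest tick label.
nearest = np.abs(bar_centers[:, None] - tick_centers[None, :]).argmin(axis=1)
x_series = [tick_texts[i] for i in nearest]
print(x_series)  # ['2019', '2020', '2021', '2022']
</code></pre>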
<p>Some things I did not have time to try yet: external datasets, a wider range of instance segmentation / object detection models, generative models</p>
<p>I use semi-automatic mask generation. First, I noticed that by using OpenCV image processing I can group the pixels belonging to the line with some heuristic thresholds (binarization), then use cv2.findContours to retrieve the polygons of that line. Second, there are some lines which cannot be extracted automatically, so I manually annotated them. </p>
Benetech - Making Graphs Accessible | 14th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, congratulations to the winning teams! This task was very interesting and challenging. I hope to learn a lot from the top solutions.</p>
<p>My solution consists of chart_type classification, template matching for scatter and pix2struct model for the rest chart_types.</p>
<h1>Chart_type Classification</h1>
<p>I finetuned resnet18d with about 70,000 images from the ICDAR dataset, resulting in an accuracy of 0.995 on the extracted data.</p>
<h1>Template matching for scatter</h1>
<p>Scatter charts often contain a large number of data points, so long token lengths are needed. I addressed this problem with object detection, OCR and template matching. This approach scores 0.51 on extracted data. The algorithm is as follows:</p>
<ol>
<li>object detection of data points and axis texts by YOLOX (im_size=480)</li>
<li>generate a pattern of datapoints using the highest confidence among the detected points</li>
<li>identify the shape of the pattern based on the distance between the RGB values of the pattern and the background color</li>
<li>scan the entire image and calculate the similarity score (L2 distance) with the pattern</li>
<li>select approximately 1000 pixels with the highest similarity score as candidate points</li>
<li>fill the rectangular area containing the data points with background color and use it as a base for reconstruction</li>
<li>take out the candidate points in order from the one with the highest similarity score, and place a pattern if the following two conditions are satisfied<ol>
<li>the L2 error with the original image is smaller when a pattern is placed than when a pattern is not placed</li>
<li>the overlap with the existing pattern does not exceed 30%.</li></ol></li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3962786%2F7bc7f2767887a28aa23dfff79aa62965%2Fscatter.png?generation=1687226523383966&alt=media"></p>
<p>Since simple linear regression is vulnerable to outliers, I tried to use some form of robust regression. Considering that methods such as RANSAC and Huber require a scale parameter and that such parameters tend to overfit the dataset due to the small data size, I implemented a simple robust regression without any hyperparameters. Every pair of points is connected by a straight line and its slope and intercept are calculated. The desired regression line is obtained using the median over all computed slopes and intercepts. </p>
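<p>A minimal numpy sketch of this estimator (conceptually close to a Theil-Sen-style fit; the data below is a toy example with one outlier):</p>
<pre><code>import numpy as np
from itertools import combinations

def robust_line_fit(x, y):
    """Fit y = a*x + b using the median slope/intercept over all point pairs."""
    slopes, intercepts = [], []
    for i, j in combinations(range(len(x)), 2):
        if x[j] == x[i]:
            continue  # skip vertical pairs
        a = (y[j] - y[i]) / (x[j] - x[i])
        slopes.append(a)
        intercepts.append(y[i] - a * x[i])
    return np.median(slopes), np.median(intercepts)

# Tick pixel positions vs tick values with one outlier (a mis-read label).
x = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
y = np.array([0.0, 10.0, 20.0, 300.0, 40.0])  # 300.0 is the outlier
a, b = robust_line_fit(x, y)
print(a, b)  # recovers 0.1 and -10 despite the outlier
</code></pre>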
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3962786%2Fa42b3944e43c17ada9c800e7242411c4%2Freg.png?generation=1687226587258677&alt=media"></p>
<h2>Pix2Struct model for other types</h2>
<ul>
<li>labels are in the form xyxy</li>
<li>decimal part of numerical is rounded to 4 significant digits</li>
<li>augmentation for translation, rotation, noise, hue, etc.</li>
<li>pretrained model: MatCha-base</li>
<li>hyperparameters<ul>
<li>max_length: 512</li>
<li>max_patches: 2048</li>
<li>lr: 1e-5</li>
<li>scheduler: get_cosine_schedule_with_warmup</li>
<li>num_warmup_steps: 1000</li>
<li>weight_decay: 1e-5</li></ul></li>
<li>Training data<ul>
<li>extracted (1118)</li>
<li>generated (999)</li>
<li>ICDAR (labeled 1286+ pseudo&hand labeled 2988)</li></ul></li>
</ul> |
Benetech - Making Graphs Accessible | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to pay tribute to all the participants who worked on this competition. <br>
I would also like to thank the hosts for organizing this interesting task competition. <br>
This task was very interesting and I enjoyed working on it because I could think of many different approaches. <br>
I am honored to have won first place in this very interesting competition.</p>
<h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F16283ff691149c87b3b01a49a432f107%2Foverview.jpg?generation=1687442367636860&alt=media" alt="Overview"></p>
<p>My solution consists of a two-step pipeline that first classifies chart types using a classification model and then performs data series inference. <br>
In the inference phase of the data series, Bar, Line, and Dot were end-to-end predictions by Deplot trained for each chart type, while Scatter was predicted by an object detection-based approach. </p>
<p>The final scores are as follows. </p>
<table>
<thead>
<tr>
<th></th>
<th>Overall</th>
<th>Scatter</th>
<th>Dot</th>
<th>Line</th>
<th>Vertical Bar</th>
<th>Horizontal Bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>0.86</td>
<td>0.10</td>
<td>0.00</td>
<td>0.32</td>
<td>0.39</td>
<td>0.05</td>
</tr>
<tr>
<td>private</td>
<td>0.72</td>
<td>0.30</td>
<td>0.01</td>
<td>0.13</td>
<td>0.26</td>
<td>0.01</td>
</tr>
</tbody>
</table>
<h1>Dataset</h1>
<p>I used the following three data sets. </p>
<ol>
<li><p>Competition data set(comp_extracted_dataset/comp_generated_dataset) </p>
<ul>
<li>Using both extracted dataset (comp_extracted_dataset) and generated dataset (comp_generated_dataset). </li>
<li>For the generated data, data with noise in the annotations were removed by a simple check (about 100 images). </li></ul></li>
<li><p>ICDAR data set(ICDAR_dataset/ICDAR_manualannot_dataset) </p>
<ul>
<li>I used 1406 pieces of data for which annotations were provided (ICDAR_dataset) and 1903 pieces of data for which no annotations were provided (ICDAR_manualannot_dataset). </li>
<li>For data for which annotations were provided, I visually rechecked the annotation contents and manually corrected all data that did not follow the competition's annotation rules (e.g., % notation) or contained noise. </li>
<li>For the data for which no annotations were provided, I first visually checked the appearance of all the data and selected the data that could be used in this competition. Next, I inferred and assigned pseudo-labels using the Deplot model, visually checked all the results again, and manually corrected all incorrect annotations. </li></ul></li>
<li><p>Generated synthetic data set(synthetic_dataset) </p>
<ul>
<li>After reviewing the images in the competition dataset, I determined that the comp_generated_dataset alone did not have sufficient variation to achieve robustness, so I generated about 65k synthetic images myself. </li>
<li>I mainly generated synthetic data with features that comp_generated_dataset does not have. <ul>
<li>histogram </li>
<li>Label contains line breaks </li>
<li>Bar chart with error bars </li>
<li>Line chart containing x labels not included in the data series</li>
<li>etc… </li></ul></li>
<li>To add diversity, I also included 10k images from the dataset published by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>. </li></ul>
<p>(Example of synthetic data I generated) <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2Fd4df82a41fcaebeda2ffc93fb8ad9159%2Fgenerated_data.jpg?generation=1687442722440643&alt=media" alt="generated_data"></p></li>
</ol>
<h1>Solution pipeline</h1>
<h2>1. Chart classification</h2>
<ul>
<li>There is not much to note, as I just performed a simple classification task. </li>
<li>I used two models, convnext_large_384 and swin_large_patch4_window12_384, and integrated their results in a weighted ensemble. </li>
</ul>
<p><strong>Training configs</strong> </p>
<ul>
<li>Dataset (training data and validation data were randomly divided into folds) <ul>
<li>Training(78k images):comp_extracted_dataset, comp_generated_dataset, ICDAR_dataset, synthetic_dataset(horizontal_bar/dot) </li>
<li>Validation(4k images):comp_extracted_dataset, ICDAR_dataset, synthetic_dataset(horizontal_bar/dot)</li></ul></li>
<li>Hyperparameters<ul>
<li>Epochs:15 </li>
<li>Batch size:16 </li>
<li>Adafactor (lr=3e-5) </li></ul></li>
</ul>
<h2>2. Data series prediction</h2>
<p>Based on the classification results of Step 1, different approaches are used for scatter and the rest. <br>
Except for scatter, the predictions are made in Deplot, and scatter is an object detection-based prediction. </p>
<h3>2-a. Bars & Lines & Dots</h3>
<ul>
<li><p>I experimented by switching the model to Deplot based on the Donut-based notebook that <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> had published. However, I could not train well just by changing the model, so I made some modifications. </p>
<ul>
<li>Ground Truth Format <br>
Removed chart type from GroundTruth and adapted the format to that of the Deplot original. <br>
<code><0x0A> x_value1 | y_value1 <0x0A> x_value2 | y_value2 <0x0A> x_value3 | y_value3 </s></code></li>
<li>x-axis and y-axis swap of horizontal_bar <br>
Since the concept of x-axis and y-axis was reversed in the annotation rules of this competition and the original format of Deplot, I trained according to the original concept of Deplot and swapped the values during inference. </li></ul></li>
<li><p>Training was conducted in multiple stages. In the first stage, training was conducted using data from all chart types (All Chart-type Train), and using the results of that training as initial weights, one or two additional training sessions (Specific Chart-type Train) were conducted using only specific chart-type images to generate a model specialized for a chart type.</p>
<ul>
<li>The scores for vertical_bar and line were slightly improved by this method. (Two Specific Chart-type Train runs were performed for vertical_bar and one for line.) </li>
<li>Horizontal_bar had a worse cv after the second stage of training, probably due to the small amount of extracted data, so I decided to use the All Chart-type Train model to make predictions. </li>
<li>Dot chart only has generated data and I decided that I could not validate it successfully, so I decided not to train after the second stage and use the All Chart-type Train model to make predictions. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F0a86b394e87edd440600ac531288897c%2FDeplot_training.jpg?generation=1687442833429916&alt=media" alt="Deplot_training"></li></ul></li>
</ul>
<p><strong>Training configs</strong> </p>
<ol>
<li><p>All Chart-type Train </p>
<ul>
<li>Dataset <ul>
<li>Train(120k images):comp_extracted_dataset, comp_generated_dataset, ICDAR_dataset, <br>
synthetic_dataset </li>
<li>Validation(2.5k images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset</li></ul></li>
<li>Hyperparameters<ul>
<li>Epochs:8</li>
<li>Batch size:2 </li>
<li>Adafactor (lr=1e-5)</li>
<li>cosine scheduler with warmup (warmup_step=4000)</li>
<li>Augmentation: GaussianBlur, GaussNoise, some color augmentations </li></ul></li></ul></li>
<li><p>Specific Chart-type Train (vertical_bar) </p>
<ul>
<li><p>Dataset <br>
[1st time] </p>
<ul>
<li>Train(6k images) :comp_extracted_dataset, ICDAR_dataset, synthetic_dataset </li>
<li>Validation(1.3k images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li></ul>
<p>[2nd time] </p>
<ul>
<li>Train(1500 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li>
<li>Validation(500 images) :comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset</li></ul></li>
<li><p>Hyperparameters</p>
<ul>
<li>cosine scheduler with warmup (warmup_step=0)</li>
<li>Otherwise, same as All Chart-type Train </li></ul></li></ul></li>
<li><p>Specific Chart-type Train (line) </p>
<ul>
<li><p>Dataset <br>
[1st time] </p>
<ul>
<li>Train(1150 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li>
<li>Validation(400 images):comp_extracted_dataset, ICDAR_dataset, ICDAR_manualannot_dataset </li></ul></li>
<li><p>Hyperparameters</p>
<ul>
<li>cosine scheduler with warmup (warmup_step=0)</li>
<li>Otherwise, same as All Chart-type Train </li></ul></li></ul></li>
</ol>
<p><strong>Score</strong> </p>
<ul>
<li><p>All Chart-type Train</p>
<table>
<thead>
<tr>
<th></th>
<th>Overall</th>
<th>Scatter</th>
<th>Dot</th>
<th>Line</th>
<th>Vertical Bar</th>
<th>Horizontal Bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>0.78</td>
<td>0.06</td>
<td>0.00</td>
<td>0.29</td>
<td>0.38</td>
<td>0.05</td>
</tr>
<tr>
<td>private</td>
<td>0.53</td>
<td>0.13</td>
<td>0.01</td>
<td>0.12</td>
<td>0.26</td>
<td>0.01</td>
</tr>
</tbody>
</table></li>
<li><p>Specific Chart-type Train</p>
<table>
<thead>
<tr>
<th></th>
<th>Overall</th>
<th>Scatter</th>
<th>Dot</th>
<th>Line</th>
<th>Vertical Bar</th>
<th>Horizontal Bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>0.81</td>
<td>0.06</td>
<td>0.00</td>
<td>0.32</td>
<td>0.39</td>
<td>0.05</td>
</tr>
<tr>
<td>private</td>
<td>0.55</td>
<td>0.13</td>
<td>0.01</td>
<td>0.13</td>
<td>0.26</td>
<td>0.01</td>
</tr>
</tbody>
</table></li>
</ul>
<h3>2-b. Scatter</h3>
<ul>
<li>Only scatter was not accurate enough with Deplot no matter how I tried, so I adopted an object detection-based approach. </li>
<li>Since scatter is guaranteed to have label values of numeric type, it is easy to calculate values from ratios if the "label position," "label text," and "scatter point position" are known (see the sketch after this list). I used CACHED to detect "label position," Deplot to read "label text," and YOLOX to detect "scatter points". </li>
<li>The mapping between label bboxes and label texts is a bit crude, but the smallest label text is mapped to the left-most (top in the case of y-labels) label bbox, and the largest label text is mapped to the right-most (bottom in the case of y-labels) label bbox. It is possible that the total number of detected label bboxes does not match the total number of read label texts, but we did not take any special measures because such a pattern was very rare when we checked the verification data. </li>
<li>The above object detection approach resulted in a scatter score of <strong>Public:0.10/Private:0.30</strong>, much better than the Deplot-based score (Public:0.06/Private:0.13).</li>
</ul>
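<p>A minimal sketch of the ratio-based pixel-to-value conversion for the x-axis (all numbers are placeholders; the y-axis is handled the same way):</p>
<pre><code># Smallest / largest OCR'd x-label values, tied to the left-most / right-most
# detected label-bbox centers (in pixels), as described above.
x_val_min, x_val_max = 0.0, 50.0
x_px_min, x_px_max = 100.0, 600.0

def px_to_value(px):
    # Linear ratio between the two extreme labels.
    ratio = (px - x_px_min) / (x_px_max - x_px_min)
    return x_val_min + ratio * (x_val_max - x_val_min)

# Detected scatter-point centers (pixels) -> x data values.
points_px = [150.0, 350.0, 550.0]
print([px_to_value(p) for p in points_px])  # [5.0, 25.0, 45.0]
</code></pre>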
<p><strong>More Details</strong> <br>
Scatter point detection </p>
<ul>
<li>comp_extracted_dataset, ICDAR_dataset, synthetic_dataset were used for training data (about 12.5k images)</li>
<li>Somewhat semi-automated using provided annotation data and image processing, but inevitably noisy, so corrected with manual annotation. </li>
<li>To increase detection accuracy, only the plot area was cropped for training and inference.</li>
<li>The model was YOLOX-l, input image size was 1280 for both training and inference, and 50 epochs were trained.</li>
</ul>
<p>Label text reading</p>
<ul>
<li>comp_extracted_dataset, ICDAR_dataset, comp_generated_dataset were used for training data (about 62k images, all chart types were used, not just scatter) </li>
<li>Use the same training configs as in (2-a) All_Chart-type_Train for the training parameters, and change the format of Ground Truth as follows <br>
<code><0x0A> x_label1 | x_label2 | x_label3 <0x0A> y_label1 | y_label2 </s></code> </li>
<li>At first, I thought of using EasyOCR or similar to achieve this, but there were many issues to be addressed, such as support for character rotation, and it seemed that tuning would be very difficult to increase versatility, so I adopted a method of batch reading with Deplot. </li>
</ul>
<hr>
<p>Finally, I would like to thank once again everyone involved in this competition. <br>
Thank you very much for taking the time to read this!</p>
<p>[edit] </p>
<ul>
<li>I have published 1st place <a href="https://www.kaggle.com/code/kashiwaba/benetech-1st-place-inference" target="_blank">inference notebook</a>(23/6/28) </li>
<li>I have published 1st place <a href="https://github.com/KASSII/benetech_1st_place_train" target="_blank">training code</a>(23/7/8) </li>
</ul>
<p>Thank you.<br>
I may make it public once I get it sorted out.</p>
<p>Thank you.<br>
There are some differences from Bartley's source code as I created the original.<br>
I may release it once I have it organized to a level that others can read, along with other scripts.</p>
<p>Thank you.</p>
<ol>
<li><p>I honestly don't know because I haven't pursued it that far, but Augmentation was introduced when I was experimenting with Donut. The score remained the same or increased slightly, but the number of epochs of convergence increased, allowing for longer learning, so I kept using it.<br>
Also, the model we used this time was highly expressive and easily overfitted, so we intuitively judged Augmentation to be effective, which is another reason why we used it.</p></li>
<li><p>All parameters are the same except the learning rate and scheduler. However, the actual epoch from which the selected model came should have been much earlier, since the validation score was computed every epoch and the weights from the best epoch were used.</p></li>
</ol>
<p>3 . <br>
As shown in the figure below, I simply tied the smallest and largest indexed readings output by Deplot to the smallest and largest bbox coordinates detected by the detection model. All other results are completely ignored.<br>
In other words, as long as the minimum and maximum values are correct, the other results are not affected at all even if they are wrong, but conversely, if the minimum and maximum values are wrong, the accuracy will decrease even if the other results are correct.<br>
In fact, it would be better to take this into account in the implementation, but since there was not enough time and the accuracy was still sufficient, a compromise was made.<br>
I consider this part to be an area for improvement.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F50ca73f954168bddf1b6d4512aaedc53%2Fbenetech_answer.jpg?generation=1687605259155019&alt=media" alt=""></p> |
Benetech - Making Graphs Accessible | 20th place solution (YOLOv7+EasyOCR) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the prize/medal winners of the competition🎉🎉. It was a great opportunity to learn and grow.I would like to share my approach here.</p>
<p>My approach to the problem involves two main steps: object detection and Optical Character Recognition(OCR).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2F7cf3386a43e8f1fa047c53892e9ab40f%2F247040237-f349dd62-6dbb-4792-8654-65d870c1b6f1.png?generation=1687256357459108&alt=media" alt=""></p>
<p><strong>Object Detection(Yolov7-X):</strong><br>
I trained a yolov7 model to detect the x-axis labels, y-axis labels, the chart bounding box, and the data points on the chart. The coordinates of the data-points were not provided in the dataset. I was able to accurately compute the position of the datapoints on the chart images by linear interpolation of the x-axis and y-axis tick coordinates with respect to the x-axis/y-axis labels (values). <a href="https://www.kaggle.com/samratthapa/data-prep-for-yolov7-data-point-x-y-label-chart" target="_blank">Here</a> is a notebook to create a dataset using this approach. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2Fc68d1abc59f45a4aef8b8c6628d2fb23%2FScreenshot%202023-06-20%20at%2019.25.55.png?generation=1687256785660301&alt=media" alt=""><br>
During inference, I inverted this process by calulating the data-series from the linear interpolation of the values of x-axis and y-axis labels with respect to the co-ordinates of the data points.</p>
<p>This approach also works relatively well for scatter plots, compared to other approaches like Donut.</p>
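<p>A minimal numpy sketch of both directions of this tick-based linear interpolation (tick positions and values are placeholders):</p>
<pre><code>import numpy as np

# x-axis tick pixel positions and their numeric values (sorted by pixel).
tick_px  = np.array([100.0, 200.0, 300.0, 400.0])
tick_val = np.array([0.0, 5.0, 10.0, 15.0])

# Dataset creation: data value -> pixel position (to place data-point boxes).
data_vals = np.array([2.5, 7.5, 12.5])
data_px = np.interp(data_vals, tick_val, tick_px)  # [150. 250. 350.]

# Inference: detected data-point pixel -> data value.
recovered = np.interp(data_px, tick_px, tick_val)  # [ 2.5  7.5 12.5]
print(data_px, recovered)
</code></pre>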
<p>Notes:</p>
<ul>
<li>This object detection model was also used as the chart-type classification model.</li>
<li>There are overlapping bounding boxes for some x-axis labels, like in the image below. However, my OCR model was able to extract the correct text despite the input image including text from neighbouring bboxes.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2F5dff7698a985cbbdbd1b333700b5537f%2F247041714-1ab49a89-1db7-48a1-9b3e-429217142b45.png?generation=1687256377327384&alt=media" alt=""></li>
</ul>
<p><strong>OCR model:</strong><br>
Using the EasyOCR libary, I trained a ResNet(feature extractor)+BidirectionalLSTM model with Connectionist Temporal Classification(CTC) loss. The additional dataset(ICPR 2022 CHART) improved the accuracy of the OCR model by about 5% from 84% to 89%. I used <a href="https://www.kaggle.com/code/nadhirhasan/let-s-finetune-easyocr-boom" target="_blank">this notebook </a>as a starting point to train the OCR model (Thank you <a href="https://www.kaggle.com/nadhirhasan" target="_blank">@nadhirhasan</a> ).</p>
<p><strong>Post processing:</strong><br>
After receiving the bounding boxes from the model, I performed some post-processing based on simple heuristics like: removing the data points that lie outside the chart bbox, restricting x-labels (y-labels for horizontal-bar) to lie under the chart bbox, and restricting y-labels (x-labels for horizontal-bar) to the left side of the chart bbox. Also, the x/y axis tick coordinates are calculated using the x/y-axis bbox and the chart bbox. I use the nearest point that lies on the chart bbox from the center of the x/y label bbox as the respective x/y tick coordinate. I chose this approach because the precision and recall of the x/y labels were higher than those of the x/y axis ticks in an older version of the model.</p>
<p>I participated in this competition only for the last 4 weeks, so due to lack of time I wasn't able to try out other approaches like Donut. I think there is a lot of room for improvement for this model. For example, about 25% of the predictions made by the model automatically get scored 0 because of a mismatching number of predictions, and for chart types other than scatter this mismatch is usually only 1 or 2 points.</p>
<p>Submission Notebook:<br>
<a href="https://www.kaggle.com/code/samratthapa/20th-place-submission-yolov7-easyocr/notebook?scriptVersionId=134338733" target="_blank">https://www.kaggle.com/code/samratthapa/20th-place-submission-yolov7-easyocr/notebook?scriptVersionId=134338733</a></p>
<p>For model checkpoints, and training scripts: <br>
<a href="https://github.com/SamratThapa120/benetech-making-graphs-accessible" target="_blank">https://github.com/SamratThapa120/benetech-making-graphs-accessible</a></p>
<p>Thank you, congratulations to you too.<br>
The ResNet feature extractor was able to handle rotated images too. In addition, I trained with rotation data augmentations along with other affine transforms, blurring, and ColorJitter. I tried to use VGG instead of ResNet, but it was not working as well. </p>
<p>Before training with overlapping bounding boxes, I was trying to use a segmentation model to extract the x/y labels, but there were a lot of false negatives. On the other hand, my bbox detection model was working well, so I decided to stick with it and train an OCR model adapted to the bbox detection model. Surprisingly, the OCR worked better than I had anticipated, even with the overlapping bboxes. </p>
Benetech - Making Graphs Accessible | 28th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to thank Kaggle & Benetech for hosting this very interesting competition. It was great fun to participate, and we learned a tremendous amount from it. 😀</p>
<p>We used a combination of image classification, object detection, Donut, PaddleOCR, TrOCR, and lots of postprocessing! 😅</p>
<hr>
<h1>Models</h1>
<p>We trained models for the following tasks:</p>
<ol>
<li>Plot type classification - ResNet34</li>
<li>Rotated vs non-rotated X tick label classification - ResNet34</li>
<li>Horizontal bar detection - FasterRCNN with ResNet50-FPN backbone</li>
<li>Vertical bar detection (used in the histogram pipeline as well) - FasterRCNN with ResNet50-FPN backbone</li>
<li>Line point detection - FasterRCNN with ResNet50-FPN backbone</li>
<li>Scatter point & dot detection (a single model for both) - FasterRCNN with ResNet50-FPN backbone</li>
<li>Tick mark detection - FasterRCNN with ResNet50-FPN backbone</li>
<li>X tick label detection (used only for detecting non-rotated X tick labels) - FasterRCNN with ResNet50-FPN backbone</li>
<li>Y tick label detection - FasterRCNN with ResNet50-FPN backbone</li>
<li>Donut (used only for recognizing rotated X tick labels) - <a href="https://huggingface.co/naver-clova-ix/donut-base" target="_blank"><code>'naver-clova-ix/donut-base'</code></a></li>
</ol>
<p>In addition, we used the following models as-is (i.e., without any fine-tuning):</p>
<ol>
<li>PaddleOCR for multi-line text detection & conversion to single-line texts - <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/PP-OCRv3_introduction_en.md" target="_blank">PP-OCRv3</a></li>
<li>TrOCR for OCR of single-line texts - <a href="https://huggingface.co/microsoft/trocr-base-printed" target="_blank"><code>'microsoft/trocr-base-printed'</code></a></li>
</ol>
<p><strong>Note:</strong> We used PaddleOCR because TrOCR can only read single-line texts. For single-line texts, we found TrOCR to be very accurate.</p>
<hr>
<h1>Inference Examples</h1>
<p>The following images are examples of inference using each of the above models:</p>
<p><strong>Plot type classification:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Faf30206384b2f090c6fcbd0cb59bf959%2Fplot_type_inference.png?generation=1687424929176720&alt=media" alt=""></p>
<p><strong>Rotated vs non-rotated X tick label classification:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F037c35db3590c5c1b67f85ab57fb36e2%2Frotation_inference.png?generation=1687422645141712&alt=media" alt=""></p>
<p><strong>Horizontal bar detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fd4d7e6bee836a68203a8b0a5a08f17d3%2Fhorizontal_bar_inference.png?generation=1687422676773859&alt=media"></p>
<p><strong>Vertical bar detection (used in the histogram pipeline as well):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F7746b8f7e88ee68b0d45729673d7a1ff%2Fvertical_bar_inference.png?generation=1687441005236360&alt=media"></p>
<p><strong>Line point detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F0ad448751c154024bf8f9bef757e8cc0%2Fline_inference.png?generation=1687422824609612&alt=media"></p>
<p><strong>Scatter point & dot detection (a single model for both):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F652aa1e31c4486cc05a0de0fc2a4b275%2Fscatter_inference.png?generation=1687422924497010&alt=media"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff8a45dce67f94bde15a6288b28ebef85%2Fdot_inference.png?generation=1687422948970505&alt=media"></p>
<p><strong>Tick mark detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F33f16e5a073df164463c026a48d40360%2Ftick_mark_inference.png?generation=1687422992117363&alt=media"></p>
<p><strong>X tick label detection (used only for detecting non-rotated X tick labels):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fb793316f01d17ea577caa501d142591d%2Fx_tick_label_inference.png?generation=1687423056622492&alt=media"></p>
<p><strong>Y tick label detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2b458db805a977b2c36447bdb6c53a31%2Fy_tick_label_inference.png?generation=1687423081364918&alt=media"></p>
<p><strong>Donut (used only for recognizing rotated X tick labels):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ffbb5753a331e6cbb22fa19cc56a55a04%2Fdonut_inference.png?generation=1687429751410113&alt=media" alt=""></p>
<p><strong>Note:</strong> We trained our Donut model only on image patches below the X axis which contain rotated text.</p>
<p><strong>PaddleOCR (for multi-line text detection & conversion to single-line texts):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff3647d71073644c6033c7116e1e33da2%2Fpaddle.png?generation=1687440095811819&alt=media" alt=""></p>
<p><strong>TrOCR (for OCR of single-line texts):</strong></p>
<p>Image snippet:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F49734d8ecacdb174d1bfaf431d15be99%2Ftrocr_snippet.jpg?generation=1687429867440216&alt=media" alt=""></p>
<p>Output: <code>"A. Total Genotoxic Potency"</code></p>
<hr>
<h1>Deep Learning Frameworks</h1>
<p>We used:</p>
<ul>
<li>For image classification - <a href="https://docs.fast.ai/" target="_blank">fastai</a></li>
<li>For object detection - <a href="https://airctic.github.io/icevision/0.12.0/" target="_blank">IceVision</a> + fastai</li>
<li>For Donut - Hugging Face Transformers <a href="https://huggingface.co/docs/transformers/main_classes/trainer" target="_blank">Trainer</a> API</li>
</ul>
<hr>
<h1>Data</h1>
<p>In addition to the competition dataset, we used some additional images from the <em>ICPR 2022 CHART</em> dataset available at: <a href="https://chartinfo.github.io/toolsanddata.html" target="_blank">https://chartinfo.github.io/toolsanddata.html</a></p>
<p>Bounding box annotations for the chart elements were added using either (i) custom Python scripts or (ii) manual annotation tools, viz. Make Sense (<a href="https://www.makesense.ai/" target="_blank">https://www.makesense.ai/</a>) and Roboflow. The annotations were created in the PASCAL VOC format.</p>
<p>For line point detection, scatter point detection and tick mark detection, we had to convert the original point annotations to bounding box annotations. We noticed that the choice of bounding box size has a big impact on performance. Hence, we experimented with a few bounding box sizes, and found that the following sizes work well:</p>
<table>
<thead>
<tr>
<th>Line Point</th>
<th>Scatter Point</th>
<th>Tick Mark</th>
</tr>
</thead>
<tbody>
<tr>
<td>18px</td>
<td>16px</td>
<td>16px</td>
</tr>
</tbody>
</table>
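<p>A minimal sketch of that point-to-box conversion, using the sizes from the table above (the clipping to image bounds is an assumption):</p>
<pre><code>def point_to_box(x, y, box_size, img_w, img_h):
    """Turn a point annotation into a fixed-size box, clipped to the image."""
    half = box_size / 2.0
    xmin = max(0, x - half)
    ymin = max(0, y - half)
    xmax = min(img_w, x + half)
    ymax = min(img_h, y + half)
    return xmin, ymin, xmax, ymax

BOX_SIZES = {"line_point": 18, "scatter_point": 16, "tick_mark": 16}  # pixels

print(point_to_box(250.0, 120.0, BOX_SIZES["scatter_point"], img_w=640, img_h=480))
# (242.0, 112.0, 258.0, 128.0)
</code></pre>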
<p>For X tick label detection and Y tick label detection, we noticed that adding a 2px margin to the provided bounding boxes improves performance.</p>
<p>Finally, we found that in a tiny fraction of images, one or more bounding boxes overstep the bounds of the image. We deleted these images from our dataset.</p>
<hr>
<h1>Model Validation Schemes</h1>
<p><strong>Plot type classification:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fb0e8a053b3a9509a36b0a03b0935f248%2Fplot_type_val_scheme.png?generation=1689168901672895&alt=media" alt=""></p>
<p><strong>Rotated vs non-rotated X tick label classification:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F7a1cdc221a5e10f875a76b9163888e21%2Frotation_val_scheme.png?generation=1689169392117974&alt=media" alt=""></p>
<p><strong>Horizontal bar detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fa866016d76730c38864c90c317fb89b7%2Fhorizontal_bar_val_scheme.png?generation=1689168936143640&alt=media" alt=""></p>
<p><strong>Vertical bar detection (used in the histogram pipeline as well):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fd27e058d0bb05846f5ea42eca69e101f%2Fvertical_bar_val_scheme.png?generation=1689168951683704&alt=media" alt=""></p>
<p><strong>Line point detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ffc25de18f4386ad8711b7953ddffc904%2Fline_val_scheme.png?generation=1689168967592482&alt=media" alt=""></p>
<p><strong>Scatter point & dot detection (a single model for both):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2411ee14867dda5e3b7aba79d7d234cf%2Fscatter_dot_val_scheme.png?generation=1689170209501077&alt=media" alt=""></p>
<p><strong>Tick mark detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fc3cc591f20cadb2cb59c53fa40ba9b0b%2Ftick_mark_val_scheme.png?generation=1689170225648108&alt=media" alt=""></p>
<p><strong>X tick label detection (used only for detecting non-rotated X tick labels):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2c1ee44e72bbea46fb8e3e514ad042d2%2Fx_tick_label_val_scheme.png?generation=1689170239312207&alt=media" alt=""></p>
<p><strong>Y tick label detection:</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F04a9bed0423c7c186b6b4a3a24b1fc8a%2Fy_tick_label_val_scheme.png?generation=1689170253004364&alt=media" alt=""></p>
<p><strong>Donut (used only for recognizing rotated X tick labels):</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fce3009c4f44c3d1126e268840097779b%2Fdonut_val_scheme.png?generation=1689170268258727&alt=media" alt=""></p>
<hr>
<h1>Pipelines</h1>
<h2>Image Classification</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F65ff8a8d4344bd4c111c8a1d595b740a%2Fdiagram_image_classifier.png?generation=1687437455837261&alt=media" alt=""></p>
<h2>Horizontal Bar Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Ff1c8ac7ea8ce27d36a9c8f21ad5a3112%2Fdiagram_horizontal_bar.png?generation=1687695397682069&alt=media" alt=""></p>
<h2>Vertical Bar Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F4b4ea6617dd3907c0a044474cc475fef%2Fdiagram_vertical_bar.png?generation=1687695428772800&alt=media" alt=""></p>
<h2>Histogram Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F717f752077d0b61d3f277aba0ab66877%2Fdiagram_histogram.png?generation=1687696678498713&alt=media" alt=""></p>
<h2>Line Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F2e1debd34468def631db044e287cb0f5%2Fdiagram_line.png?generation=1687696700722044&alt=media" alt=""></p>
<h2>Scatter Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2Fc9b7be151091d9a3cc030f8341d988bf%2Fdiagram_scatter.png?generation=1687518134085611&alt=media" alt=""></p>
<h2>Dot Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1191272%2F8b79f1c390dff4b5c8a703767ee3a372%2Fdiagram_dot.png?generation=1687764390803909&alt=media" alt=""></p>
<hr>
<h1>Results</h1>
<table>
<thead>
<tr>
<th></th>
<th>Overall</th>
<th>Scatter</th>
<th>Dot</th>
<th>Line</th>
<th>Vertical Bar</th>
<th>Horizontal Bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>0.66</td>
<td>0.06</td>
<td>0.00</td>
<td>0.26</td>
<td>0.33</td>
<td>0.01</td>
</tr>
<tr>
<td>private</td>
<td>0.49</td>
<td>0.19</td>
<td>0.01</td>
<td>0.08</td>
<td>0.20</td>
<td>0.01</td>
</tr>
</tbody>
</table>
<hr>
<p>Thanks. I used a free web-based tool called Excalidraw…</p> |
Benetech - Making Graphs Accessible | 2nd Place Solution [Updated with Code Link] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> ! Congratulations to you too!</p>
<p>I only tried a separate sub for scatter plots. The best I could get for scatter was: 0.24 private LB (phase I training), 0.29 (phase I + scatter-only train). <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F21abaaae41c1d4422e4b64a654bac9cc%2FScreenshot%202023-06-20%20at%2011.12.58%20PM.png?generation=1687273996904154&alt=media" alt=""></p>
<p>The following gave good boosts - </p>
<ul>
<li>Inclusion of a large number of synthetic scatter plots with a high number of data points (36-64)</li>
<li>longer training (the performance of bar plots saturates very fast, whereas line & scatter continue to improve with longer training) </li>
</ul>
<p>Thanks <a href="https://www.kaggle.com/thanhhau097a" target="_blank">@thanhhau097a</a>! Congratulations to you on solo gold, I was trying to catch your score entire competition (also in LECR previously…)!</p>
<p>Thanks <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> ! Yes, you're right, it should be max_patches, will update soon. I used A100/ A6000. Gradient checkpointing dramatically reduced the memory requirements:</p>
<pre><code># the method names were stripped in the original post; likely the standard HF calls:
model.gradient_checkpointing_enable()
model.config.text_config.use_cache = False  # assumption: decoder cache disabled alongside checkpointing
</code></pre>
<p>as shared by Nicholas previously 😊</p>
<p>Increasing max_patches definitely gave boosts until 2048. For my phase II, I used up to 4096, but I didn't do an ablation study to check its impact.</p>
<p>thanks <a href="https://www.kaggle.com/snnclsr" target="_blank">@snnclsr</a> 😊</p>
<p>yes! I will try late subs for other plot types to see if we have same scores for them too</p>
<p>First of all, I would like to thank the Kaggle community for sharing great ideas and engaging discussions. Special shout outs to <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> and <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>. Congratulations to the winning teams - looking forward to your write ups.</p>
<p>Thank you Kaggle and hosts for organizing this interesting competition.</p>
<h2>Links</h2>
<ul>
<li>Inference notebook: <a href="https://www.kaggle.com/code/conjuring92/a05-mga-split-pipe" target="_blank">https://www.kaggle.com/code/conjuring92/a05-mga-split-pipe</a></li>
<li>Github Repo: <a href="https://github.com/rbiswasfc/benetech-mga" target="_blank">https://github.com/rbiswasfc/benetech-mga</a></li>
</ul>
<p>The following is a detailed summary of my solution:</p>
<h2>1 Overview</h2>
<p>My solution is entirely based on image-to-text models finetuned from the <code>google/matcha-base</code> backbone. The training pipeline, as depicted below, is a sequence of two phases. In the first phase, I leveraged a large number of synthetic plots to adapt the backbone for the current task. In the second phase, I used oversampled extracted / non-generated plots to specialize the pipeline for real world graphs. In this phase, I created separate models for scatter and non-scatter plots, primarily to mitigate the difficulty in predicting scatter data points.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F232cced3276ad6561fe6a56d2e7509ad%2Fmga_flow.png?generation=1687269107458839&alt=media" alt=""></p>
<h2>2 Model</h2>
<p>All models share the same architecture (image-to-text transformer) and input-output pattern. Model input is simply the plot image itself without any prompts. The output text has the following template:</p>
<p><code><|bos|> <|chart_type_start|> {chart_type} <|chart_type_end|> <|num_point_start|> {n_x} | {n_y} <|num_point_end|> <|x_span_start|> {x0} | {x1} | {x2} | … | {xn} <|x_span_end|> <|y_span_start|> {y0} | {y1} | {y2} | … | {ym} <|y_span_end|> <|eos|></code></p>
<p>Some minor details:</p>
<ul>
<li>Numeric values are cast into scientific notation using <code>val = "{:.2e}".format(float(val))</code>.</li>
<li>Added histogram as an additional chart type, which is later converted to vertical_bar during post-processing</li>
</ul>
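<p>To make the template above concrete, here is a minimal, hypothetical sketch of how one ground-truth string could be assembled from an annotated series. The function name and the numeric/categorical handling are my assumptions, not the author's actual code; only the token layout and the scientific-notation cast come from the write-up:</p>
<pre><code>def build_target(chart_type, xs, ys):
    """Assemble the output text for one chart, following the template above (sketch)."""
    def fmt(vals):
        # numeric values are cast to scientific notation, categorical values stay as-is
        return " | ".join(
            "{:.2e}".format(float(v)) if isinstance(v, (int, float)) else str(v)
            for v in vals
        )
    return (
        "<|bos|> <|chart_type_start|> {} <|chart_type_end|> "
        "<|num_point_start|> {} | {} <|num_point_end|> "
        "<|x_span_start|> {} <|x_span_end|> "
        "<|y_span_start|> {} <|y_span_end|> <|eos|>"
    ).format(chart_type, len(xs), len(ys), fmt(xs), fmt(ys))


print(build_target("vertical_bar", ["cats", "dogs"], [12, 7.5]))
</code></pre>
<p>Since all models in the pipeline share the same input-output pattern, the same builder applies to every chart type.</p>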
<h2>3 Data</h2>
<p>Apart from the competition data, I used the following sources.</p>
<h3>Synthetic Dataset</h3>
<p>I spent majority of my competition time creating the synthetic dataset. For the underlying data in synthetic plots, I used</p>
<ul>
<li>Wikitables data i.e. tables from wikipedia (25%)<ul>
<li><a href="http://websail-fe.cs.northwestern.edu/TabEL/" target="_blank">http://websail-fe.cs.northwestern.edu/TabEL/</a></li>
<li><a href="https://github.com/sunlab-osu/TURL" target="_blank">https://github.com/sunlab-osu/TURL</a></li></ul></li>
<li>Synthetic XY data (75%)<ul>
<li>Categorical series: I created a list of categories using wikipedia glossary pages in STEM domain (<a href="https://www.kaggle.com/code/conjuring92/w03-stem-glossary/notebook" target="_blank">https://www.kaggle.com/code/conjuring92/w03-stem-glossary/notebook</a>)</li>
<li>Numerical series: random function generators ensuring all combinations of narrow to wide min-max range, small (1e-6) - large values (1e6), inclusion of outliers etc</li></ul></li>
</ul>
<p>I generated the plots using matplotlib ensuring they capture all aspects of the graph conventions. For example, in the case of line plots, the generated graphs included shared origin plots, having tick labels that are not included in the data-series, additional point markers in between two tick labels, unequal spacing between x tick labels etc. I tried to maximize the diversity in the graphs by -</p>
<ul>
<li>customizing tick marks, tick labels, tick direction, formatting of numerical tick labels (e.g. scientific notation, different rounding, European style formatting, adding % as suffix, currency symbol as prefix etc), major / minor grids, titles, axis limits, spines (e.g. setting bottom spine at y=0), legends, markers etc</li>
<li>grayscale, background textures, aspect ratio (very wide to very narrow)</li>
<li>number of data points (4-24 points for non-scatter, 4-64 for scatter)</li>
<li>random special effects: text boxes with stats on y values, horizontal / vertical bands, insets, random texts, random equations, annotating min/max points, error bands, adding random shapes, varying line width, color, data point marker size, hatches, error bars, slopes etc</li>
<li>log-log, semi-log plots, reverse y axis (although these weren’t part of test set, my hypothesis (?) was they would help in model generalization)</li>
</ul>
<p>The synthetic dataset consisted of</p>
<ul>
<li>100k horizontal bars</li>
<li>100k vertical bars + histograms</li>
<li>100k dot plots</li>
<li>200k line plots</li>
<li>200k scatter plots</li>
</ul>
<h3>Synthetic Dataset - Bartley</h3>
<ul>
<li>Random selection of 25k data points from the synthetic dataset shared by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a>: <a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li>
</ul>
<h3>Pseudo Labelling</h3>
<p>I took screenshots of around 700 images from wikimedia commons (e.g. <a href="https://commons.wikimedia.org/w/index.php?search=line+plots&title=Special:MediaSearch&go=Go&type=image)" target="_blank">https://commons.wikimedia.org/w/index.php?search=line+plots&title=Special:MediaSearch&go=Go&type=image)</a>. I used pseudo labelling, followed by manual correction, to generate the annotations.</p>
<h3>ICDAR dataset</h3>
<p>I used around 1100 images from ICDAR, only those having 1 XY series (250 horizontal bar + 450 vertical bar + 250 lines + 150 scatter). I also did post-processing to ensure annotations match competition graph conventions (e.g. handling percentages, interpolation of line plot data to match tick labels etc).</p>
<h2>4 Datamix</h2>
<ul>
<li>Datamix 1: used for domain adaptation</li>
</ul>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Size</th>
<th>Multiplier</th>
<th>Effective Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>Competition Dataset - Synthetic</td>
<td>60k</td>
<td>3</td>
<td>180k</td>
</tr>
<tr>
<td>Competition Dataset - Extracted</td>
<td>1.1k</td>
<td>16</td>
<td>17k</td>
</tr>
<tr>
<td>Synthetic Dataset - Self</td>
<td>700k</td>
<td>1</td>
<td>700k</td>
</tr>
<tr>
<td>Synthetic Dataset - Bartley</td>
<td>25k</td>
<td>1</td>
<td>25k</td>
</tr>
</tbody>
</table>
<ul>
<li>Datamix 2: scatter specialization</li>
</ul>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Size</th>
<th>Multiplier</th>
<th>Effective Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>Competition Dataset - Synthetic</td>
<td>11k</td>
<td>1</td>
<td>11k</td>
</tr>
<tr>
<td>Competition Dataset - Extracted</td>
<td>0.2k</td>
<td>16</td>
<td>3.2k</td>
</tr>
<tr>
<td>Synthetic Dataset - Self</td>
<td>30k</td>
<td>1</td>
<td>30k</td>
</tr>
<tr>
<td>Pseudo Labelled Dataset</td>
<td>0.1k</td>
<td>16</td>
<td>1.6k</td>
</tr>
<tr>
<td>ICDAR Dataset</td>
<td>0.2k</td>
<td>16</td>
<td>3.2k</td>
</tr>
</tbody>
</table>
<ul>
<li>Datamix 3: non-scatter specialization</li>
</ul>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Size</th>
<th>Multiplier</th>
<th>Effective Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>Competition Dataset - Synthetic</td>
<td>48k</td>
<td>1</td>
<td>48k</td>
</tr>
<tr>
<td>Competition Dataset - Extracted</td>
<td>0.9k</td>
<td>8</td>
<td>7.2k</td>
</tr>
<tr>
<td>Synthetic Dataset - Self</td>
<td>20k</td>
<td>1</td>
<td>20k</td>
</tr>
<tr>
<td>Pseudo Labelled Dataset</td>
<td>0.5k</td>
<td>8</td>
<td>4k</td>
</tr>
<tr>
<td>ICDAR Dataset</td>
<td>1k</td>
<td>8</td>
<td>8k</td>
</tr>
</tbody>
</table>
<h2>5 Training</h2>
<p>The main hyper-parameters for training were <code>max_patches</code> and <code>max_length</code>. I used the following settings at various phases of training:</p>
<h5>Phase 1 Training</h5>
<ul>
<li>max_patches: 2048</li>
<li>max length: 1024</li>
<li>lr: 5e-5</li>
<li>batch size: 2</li>
<li>gradient accumulation: 16</li>
</ul>
<h5>Phase 2 training - non-scatter</h5>
<ul>
<li>max_patches: 4096</li>
<li>max length: 512</li>
<li>lr: 2e-5</li>
<li>batch size: 4</li>
<li>gradient accumulation: 2</li>
</ul>
<h5>Phase 2 training - scatter</h5>
<ul>
<li>max_patches: 3072</li>
<li>max length: 1024</li>
<li>lr: 2e-5</li>
<li>batch size: 8</li>
<li>gradient accumulation: 1</li>
<li>AWP</li>
</ul>
<p>As a minor detail, I used an Exponential Moving Average (EMA) of model weights, gradient clipping, and a cosine scheduler with linear warmup during training.</p>
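<p>For reference, here is a minimal sketch of an EMA of model weights for a PyTorch model; it is my illustration rather than the author's implementation, and the decay value is an assumption:</p>
<pre><code>import copy
import torch

class ModelEMA:
    """Exponential moving average of a model's parameters (buffers omitted for brevity)."""
    def __init__(self, model, decay=0.999):   # decay value is an assumption
        self.decay = decay
        self.module = copy.deepcopy(model).eval()
        for p in self.module.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # ema = decay * ema + (1 - decay) * current
        for ema_p, p in zip(self.module.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)
</code></pre>
<p><code>update</code> is typically called once per optimizer step, and the EMA copy is the one used for validation and the final checkpoint.</p>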
<h2>6 Augmentation</h2>
<p>Since I was repeating the extracted images many times, I decided to include the following augmentations:</p>
<pre><code># NOTE: the keyword names below were stripped in the original post;
# they are a plausible reconstruction of the albumentations arguments.
transforms = A.Compose(
    [
        A.OneOf(
            [
                A.RandomToneCurve(scale=0.3),
                A.RandomBrightnessContrast(
                    brightness_limit=(-0.1, 0.2),
                    contrast_limit=(-0.4, 0.5),
                    brightness_by_max=True,
                ),
                A.HueSaturationValue(
                    hue_shift_limit=(-20, 20),
                    sat_shift_limit=(-30, 30),
                    val_shift_limit=(-20, 20)
                )
            ],
            p=0.5,
        ),
        A.OneOf(
            [
                A.MotionBlur(blur_limit=3),
                A.MedianBlur(blur_limit=3),
                A.GaussianBlur(blur_limit=3),
                A.GaussNoise(var_limit=(3.0, 9.0)),
            ],
            p=0.5,
        ),
        A.Downscale(always_apply=False, p=0.1, scale_min=0.90, scale_max=0.99),
    ],
    p=0.5,
)
</code></pre>
<h2>7 References</h2>
<ul>
<li>Matcha Paper: <a href="https://arxiv.org/pdf/2212.09662v2.pdf" target="_blank">https://arxiv.org/pdf/2212.09662v2.pdf</a></li>
<li><a href="https://www.kaggle.com/code/nbroad/donut-train-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-train-benetech</a></li>
<li><a href="https://www.kaggle.com/code/nbroad/donut-infer-lb-0-44-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-infer-lb-0-44-benetech</a></li>
<li><a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li>
<li>AWP: <a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a></li>
</ul>
<p>Sorry for the long write up. Please let me know if you have any queries / suggestions. I plan to release all scripts, configs and datasets by next week. [Edit: code released at <a href="https://github.com/rbiswasfc/benetech-mga" target="_blank">https://github.com/rbiswasfc/benetech-mga</a>]</p>
<p>Thanks Nicholas! </p>
<ol>
<li><p>Yes, very long, as compared to any other models I trained for Kaggle comps. Phase I training was around 36 hours on an A100. I didn't do anything special to speed it up (only using bf16, and turning off gradient checkpointing for phase I - hence the batch size of 2). I plan to experiment with torch 2.0, fsdp, maybe LoRA later.</p></li>
<li><p>I wanted to avoid geometric transformations (I felt it may make training more difficult). Rest was just manual observation to make sure augmented images look reasonable. Since training runs were long and benefit of augmentation wasn't likely to show up early in training, I couldn't really tune any aspect of it. It was applied at 50% rate.</p></li>
</ol>
<p>On scatter plots, I think key was to include a large number of synthetic plots with many data points. In post-processing, I didn't force x and y series to have same number of points, which also helped. I wasn't able to explore any other strategies for scatter, mainly due to my lack of knowledge in object detection. I also couldn't allocate research time on it, as my focus is LLM these days. </p>
<p>Thanks <a href="https://www.kaggle.com/qdv206" target="_blank">@qdv206</a>! Congratulations to you too! </p>
<p>Indeed, our solution and thought process were very similar. I tried a few things to boost score for scatter but all within matcha model. Since the public LB score was decent, my main fear was model robustness against unseen examples.</p>
<p>Thanks <a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a>! One more to go for you 😊</p>
<p>Thanks a lot Darek😊 The performance of these image-to-text models is truly insane and can handle wide range of tasks. It was great experience teaming up in LECR, looking forward to future collaborations!</p>
<p>Thanks <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a>! Congratulations to you too, you became GM in such a short time & always performing well in diverse competitions 👍</p> |
Benetech - Making Graphs Accessible | 3rd place Solution - Matcha & Object Detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congrats to all the participants especially to those that managed to stick it to the end. This was overall a really interesting challenge, and we really enjoyed tackling it. Anyone in 0.86+ public scores could've taken the win so we're glad we dropped only 2 spots.</p>
<p>Thanks to <a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> for the great teaming experience, we had two different opinions on how to tackle the problem : he went for an end-to-end approach whereas I wanted to do object detection & OCR. This proved to be ideal for this competition as object detection compensated for end-to-end weaknesses.</p>
<h2>Overview</h2>
<p><a href="https://ibb.co/qFXF9vq"><img src="https://i.ibb.co/84T42Hv/cls.png" alt="cls"></a></p>
<p>Our solution is a two-step pipeline, where the first step is a simple classification task and the second step solves the task for the different chart types. For scatter and dot plots, we used a detection approach. For line and bar charts, Matcha was really strong. Here are our LB scores :</p>
<table>
<thead>
<tr>
<th></th>
<th>Overall</th>
<th>Scatter</th>
<th>Dots</th>
<th>Line</th>
<th>Horizontal Bar</th>
<th>Vertical Bar</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Public</strong></td>
<td>0.87</td>
<td>0.09</td>
<td>0</td>
<td>0.33</td>
<td>0.04</td>
<td>0.39</td>
</tr>
<tr>
<td><strong>Private</strong></td>
<td>0.71</td>
<td>0.28</td>
<td>0.01</td>
<td>0.13</td>
<td>0.01</td>
<td>0.27</td>
</tr>
</tbody>
</table>
<h2>Validation & Insights</h2>
<p>From the description page : </p>
<blockquote>
<p>The extracted figures in the training and public test sets are drawn from the same set of sources. The figures in the private test set are drawn from a distinct set of sources.</p>
</blockquote>
<p>Public LB is leaky, so we need to create models that generalize to new data sources. Therefore, we use all the extracted data as validation, and train only on generated data. Data provided by the hosts is not diverse enough to allow for generalization, so we generated our own. Matcha and classification models were fullfit on all the data after careful parameter optimization, but detection models were not trained on any extracted data.</p>
<p>For validating on dots, we used a small curated dataset of 20 images found on google images. We estimated via probing that there are between 100 and 125 dot plots in the test set, all of them most likely in the private test set. We did not probe for scatter though; that would've been helpful.</p>
<h2>Step 1 - Classification</h2>
<p>Nothing too fancy here, we trained models on (benetech + theo + crodoc) generated data.</p>
<p><strong>Main parameters:</strong></p>
<ul>
<li>2 epochs on 88k images. </li>
<li>lr <code>3e-4</code> or <code>5e-4</code> (2 lrs for blend, we also used 2 seeds).</li>
<li>Mixup and some color augmentations mostly.</li>
<li>256x384 image size.</li>
<li>NfNet-l2 with 0.2 dropout.</li>
</ul>
<h2>Step 2.a - Scatter</h2>
<p><a href="https://ibb.co/5v4cM9C"><img src="https://i.ibb.co/ct3TgvR/scatter-pipe.png" alt="scatter-pipe"></a></p>
<p>For scatter, we relied on <a href="https://github.com/Megvii-BaseDetection/YOLOX" target="_blank">YoloX</a> to detect all markers. CACHED is used to take care of the other useful elements. If all the points are detected correctly, it's not that complicated to infer the target: detect the ticks & labels, read the labels, and interpolate!</p>
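<p>A minimal sketch of that final interpolation step, assuming two tick positions (in pixels) and their parsed numeric values are already known; the function name and the linear-axis assumption are mine, not the team's code:</p>
<pre><code>def pixel_to_value(px, tick_px_a, tick_val_a, tick_px_b, tick_val_b):
    """Linearly map a pixel coordinate to a data value using two known ticks."""
    scale = (tick_val_b - tick_val_a) / (tick_px_b - tick_px_a)
    return tick_val_a + (px - tick_px_a) * scale

# e.g. the "0" tick sits at pixel 50 and the "10" tick at pixel 250:
print(pixel_to_value(150, 50, 0, 250, 10))   # -> 5.0
</code></pre>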
<p><strong>More details:</strong></p>
<ul>
<li>Ensemble YoloX-m and YoloX-l with NMS, helps reduce the number of FNs.</li>
<li>Models are trained on scatter + dot generated benetech data + a bunch of plots I generated and pseudo labeled, for 10 epochs.</li>
<li>CV 0.67, public LB 0.09~, private 0.29 - performance drop almost exclusively comes from overlapping/too hard to detect markers.</li>
<li>Bunch of post-processing to make the pipeline more robust to OCR mistakes and detection mistakes.</li>
<li>We were initially using Yolo-v7, but had to switch to YoloX because of the first rule change. It took us a week to match Yolo-v7 performance with YoloX.</li>
</ul>
<h2>Step 2.b - Dots</h2>
<p><a href="https://ibb.co/sgMLzxG"><img src="https://i.ibb.co/TwFzdjS/dot-pipe.png" alt="dot-pipe"></a></p>
<p>Dot pipeline is similar to scatter, but a bit simpler. We detect the points and cluster them, and map them to the detected x-labels. Labels with no assigned cluster are given the target 0, others the number of detected points. Counting points is not really robust to detection mistakes, so instead we used the height of the uppermost point and interpolated.</p>
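<p>A minimal sketch of the label-assignment idea, assuming the detected dot x-positions and the x tick label positions (in pixels) are already available; this is my simplification, and it uses raw counts rather than the more robust uppermost-point height mentioned above:</p>
<pre><code>def count_dots_per_label(point_xs, label_xs):
    """Assign each detected dot to the nearest x tick label and count per label."""
    counts = [0] * len(label_xs)
    for px in point_xs:
        nearest = min(range(len(label_xs)), key=lambda i: abs(label_xs[i] - px))
        counts[nearest] += 1
    return counts

# three labels at pixel x = 50, 150, 250; five detected dots
print(count_dots_per_label([48, 52, 149, 151, 153], [50, 150, 250]))   # -> [2, 3, 0]
</code></pre>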
<h2>Step 2.c - Bars & Lines</h2>
<p>Matcha was really powerful here. We used <code>matcha-base</code> and set <code>is_vqa=False</code> to avoid giving texts as input to the model.</p>
<p>We trained Matcha to predict chart type, xs and ys for an image. The ground truth looked the same as <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> used for his donut approach (except we dropped the prompt token). We tried other approaches, but this worked best :</p>
<pre><code># the variable names and string literals were stripped in the original post;
# this is a reconstruction consistent with the example output below.
x_str = X_START + ";".join(list(map(str, xs))) + X_END
y_str = Y_START + ";".join(list(map(str, ys))) + Y_END
gt = "<" + chart_type + ">" + x_str + y_str
</code></pre>
<p>e.g: <code><line><x_start>0;2;4;6<x_end><y_start>2.7;2.2;3.6;5.2;<y_end></code></p>
<p>The most valuable boost came from generating additional charts using matplotlib. We reused the values and texts from the train dataset to generate ticks & values, with different styles/patterns/fonts/colors for diversity. The code for generating additional images has around 1000 lines and basically covers most cases where the model was failing when validated on the "extracted" dataset (e.g. negative values, line edges, missing bars, multiline text, text rotations).</p>
<p><strong>Other things that helped :</strong></p>
<ul>
<li>Fix the number of decimal points for numerical values using their range (a short worked example follows this list):<br>
<code>number_of_decimals = max(0, round(np.log10(1 / (max(y_ticks) - min(y_ticks))) + 3))</code></li>
<li>We use an additional chart type : histograms, to learn that such charts have one less y value. </li>
<li>Add an additional cross entropy loss for chart types.</li>
<li>Ensemble several (4) models (+0.01 public) :<ul>
<li>Voting to compute the number of outputs and fix obvious mistakes.</li>
<li>Voting for categorical predictions.</li>
<li>Averaging for continuous predictions.</li></ul></li>
</ul>
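<p>A short worked example of that decimal-points heuristic (the tick values below are made up, purely to show the behaviour):</p>
<pre><code>import numpy as np

def number_of_decimals(y_ticks):
    # same heuristic as above: fewer decimals for wide ranges, more for narrow ones
    return max(0, round(np.log10(1 / (max(y_ticks) - min(y_ticks))) + 3))

print(int(number_of_decimals([0, 250, 500, 750, 1000])))   # wide range (0-1000)   -> 0
print(int(number_of_decimals([0.0, 0.5, 1.0])))            # unit range (0-1)      -> 3
print(int(number_of_decimals([0.00, 0.01, 0.02])))         # narrow range (0-0.02) -> 5
</code></pre>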
<p><strong>More details:</strong></p>
<ul>
<li>lr 3e-5, cosine with 0.25 cycle.</li>
<li>10 Epochs.</li>
<li>Save weights every 0.25 epochs and use a model soup of all checkpoints from epoch > 1.</li>
<li>Augs : Color transforms, image compression and random scaling.</li>
</ul>
<h2>Final Words</h2>
<p>We were greatly perturbed by the unexpected second rule change, especially considering the fact that I was on vacation. Matcha and classification models were retrained on ICDAR during the last 3 days, this gave a 0.01 public LB boost but nothing on private. </p>
<p>Thanks for reading !</p>
<p>Our inference code is here : <a href="https://www.kaggle.com/code/crodoc/benetech-mit-ensemble?scriptVersionId=134055662" target="_blank">https://www.kaggle.com/code/crodoc/benetech-mit-ensemble?scriptVersionId=134055662</a><br>
We will be sharing and open-sourcing the rest of our solution in the upcoming days. </p>
<p>We are prize eligible, but I still have not understood which prize we will get :)</p> |
Benetech - Making Graphs Accessible | 40th place solution + code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Code</h1>
<p>I hope you find the training and inference code neat and tidy; I did my best not to make a mess!</p>
<ul>
<li><a href="https://www.kaggle.com/code/alejopaullier/benetech-matcha-train-0-74" target="_blank">Matcha Training Code</a></li>
<li><a href="https://www.kaggle.com/code/alejopaullier/benetech-matcha-inference-0-74" target="_blank">Matcha Inference Code</a></li>
</ul>
<h1>TL;DR</h1>
<p>Here is a brief explanation of our team solution with <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a>.</p>
<p>Our solution consisted of a pipeline of object detectors (EfficientDet) and image-encoders/text-decoders (Matcha and Donut):</p>
<ol>
<li>A Donut model which acts like an OCR (Optical Character Recognition) model is used to extract the axis labels of the chart (red)</li>
<li>An object detector (EfficientDet) detects the data series (green). In the end, we only used this for scatter plots.</li>
<li>A second object detector (EfficientDet) detects the plot's chart area (area where the data points are)(purple).</li>
<li>Knowing the chart's limits from (3), the bounding boxes of the data series from (2) and the axis numerical values from (1) you can with a simple cross multiplication get the data series values. This approach is essentially as described in my <a href="https://www.kaggle.com/competitions/benetech-making-graphs-accessible/discussion/396773" target="_blank">discussion</a>.</li>
<li>For the other chart types (<code>vertical_bar</code>, <code>horizontal_bar</code>, <code>line</code> and <code>dot</code>) we used a Matcha model.</li>
</ol>
<p><img src="https://i.imgur.com/SpheoRP.png"></p>
<h1>Solution in detail</h1>
<h3>1. Donut: Axis labels detection</h3>
<p>We trained different image-encoder/text-decoder architectures to detect the axis labels. I trained a Donut model (which surprisingly got the best results) and two other Matcha models which got worse performance. Despite training with different <code>max_patches</code>, results were rather similar for all. Most complications were associated with shared origins in the axis labels, long sequences, floats with many decimal places, etc. In this step, the most important thing is to get the minimum and the maximum values right for each axis so the cross multiplication is done right.</p>
<h3>2. EfficientDet: Object Detector for Data Series</h3>
<p>This was one of the most time-consuming parts for me. At the beginning of the competition, until Nicholas shared his Donut model, I didn't know that image-encoder/text-decoder models could perform that well in this competition, so I focused on an object detector model that could detect data series well. I lacked bounding-box training data, so I started manually labelling thousands of images, which took me many days. My EfficientDet object detector did extremely well on generated images but not as well on extracted ones. I labeled as many images as I could. My OD has ~50% exact-match accuracy for scatter plots. It performs much better for other chart types, but since scatter plots can have <em>a lot</em> of points it's reasonable that they are a tougher task.</p>
<p>Here you can see some of my ODs predictions:</p>
<p><img src="https://i.imgur.com/PQ1L3az.png"></p>
<h3>3. EfficientDet: Object Detector for Chart bounding box</h3>
<p>As part of the pipeline I had to detect the chart's bounding box so I could map pixel coordinates to real numerical values from the axis labels. This step was rather easy: it's basically the same code from part (2) applied to the bounding boxes of <code>plot-bb</code> from the annotations. We had this training data available inside the JSON files and it's a simple task, so the OD achieves high accuracy.</p>
<h3>4. Matcha and Donut models</h3>
<p>At the beginning of the competition I started making several changes to Nicholas' Donut model and soon discovered that you could achieve higher performance by performing some basic postprocessing. However, even after training more epochs, performing data augmentation and other tricks, I realised I couldn't achieve much higher performance. I then tried implementing a Matcha model and with a lot of effort I made it work, thanks (again) to Nicholas, who raised the issue in the GitHub discussion. Once I got the Matcha model working I tried a ton of different stuff until I could squeeze as much performance as I could. So here is a little roadmap to achieving 0.74 in the public LB:</p>
<ul>
<li><strong>0.20:</strong> a model with only Object Detection and Donut for axis labels.</li>
<li><strong>0.47:</strong> Nicholas model but just modifying post processing, like using max length of xy series instead of the min.</li>
<li><strong>0.48:</strong> doing additional post processing like filling values with average of the data series mean values.</li>
<li><strong>0.49:</strong> using a mixture of Donut and the Object Detector.</li>
<li><strong>0.50</strong>: training Donut for 10 epochs instead of 5.</li>
<li><strong>0.56:</strong> BOOM! big increase by making the Matcha model work. Super vanilla, no fancy stuff.</li>
<li><strong>0.61:</strong> combining vanilla Matcha with my Object Detector for scatter plots.</li>
<li><strong>0.64:</strong> training Matcha for more epochs (10 epochs).</li>
<li><strong>0.69:</strong> training Matcha with Bartley's generated images from code. This improvement increased performance a lot for all chart types except scatter, with a relatively low number of additional images per chart type (+5k per type).</li>
<li><strong>0.71:</strong> increase <code>max_patches</code> of Matcha from 512 to 1024.</li>
<li><strong>0.74:</strong> trained Matcha on 100% of the extracted images instead of the 75% I usually used so I could get a 25% holdout set for validation.</li>
</ul>
<h1>Things that didn't work</h1>
<p>So many things that didn't work! I will write the ones I can remember:</p>
<ol>
<li>I tried training at an early stage <strong>one different image-encoder/text-decoder model per each chart type</strong> instead of training a general model. I believe this didn't work for two reasons: 1. I think I trained a Donut model for this; 2. I didn't have enough images (at that point I didn't use Bartley's generated images).</li>
<li>I tried <strong>different pre-trained weights</strong> for the Matcha model and ALL got worse results. These were: <code>statista</code>, <code>chartqa</code>, <code>plotqa</code>. Best performance came from <code>matcha-base</code>.</li>
<li>Tried training <strong>two Matcha models, one for each axis</strong>. I really thought this idea could work and to this day I don't know why it didn't! My reasoning was the following: it's harder to predict a longer sequence, so why don't I train a model to predict the x-axis series and another for the y-axis? In the end it gave worse results than a combined model.</li>
<li>Tried <strong>training with A LOT of generated images</strong>. I tried training with an additional 5k, 10k and 25k more images per chart type. So essentially I trained the model with the competition's 60k plus the additional generated images. So I trained some models with a total number of 180k images! What I soon realised was that if there is not much variance in the additional generated images, then no matter how many images you add, the model will have reached its capacity. I discussed this <a href="https://www.kaggle.com/competitions/benetech-making-graphs-accessible/discussion/415470#2294381" target="_blank">here</a>, which relates to the Law of Diminishing Returns.</li>
<li>Trained with <strong>Balance Sampler/Oversampling</strong>. I trained with a Balance Sampler, which guarantees that on every batch, at least one minority class image will be present. My minority class images are the extracted images. This has two effects: it oversamples the minority class and (in theory) makes the model converge faster. I didn't see any benefit from oversampling (no surprise here).</li>
<li><strong>Data augmentation</strong>. I tried augmenting data mostly by applying color-related augmentations like <code>RGBShift</code>, <code>RandomBrightness</code>, <code>ColorJitter</code>, etc. </li>
<li><strong>Increased the <code>max_patches</code> parameter</strong> to 1536 (halfway between 1024 and 2048). So, as I saw an improvement by increasing <code>max_patches</code> from 512 to 1024 I thought, why don't I increase it a bit more and see what happens? Well, it didn't seem to improve the score and of course consumed more VRAM and computation time. Increasing <code>max_patches</code> increases a lot the VRAM consumption so the batch size had to be lowered and training took longer.</li>
<li>Trained with <strong>different schedulers</strong>. I ended up training with <code>OneCycleLR</code> but before I tried some other, even constant LR, and never saw an increase in performance by modifying the LR scheduler.</li>
<li>For the Object Detector I tried using <strong>Weighted Boxes Fusion</strong> to increase the exact matches of the data series. One of the greatest challenges of this competition is to get the exact number of data series points right. My OD sometimes produced an excess of low-confidence bounding boxes. I thought that maybe if I could fuse them with higher-confidence bounding boxes I could get higher accuracies, but couldn't quite make it work. I ended up tuning the probability threshold for keeping/discarding the bounding boxes. In the end, bounding boxes with p>0.22 were kept. Would be glad to know how anyone did this!</li>
<li>Different Object Detector backbones. I used <code>tf_efficientnetv2_s</code> as my OD's backbone, but tried larger models without much success, like <code>tf_efficientnetv2_l</code>.</li>
<li><strong>Automatic Mixed Precision</strong>. Not that it didn't work, but training with mixed precision using brain floating point tensors (<code>bfloat16</code>) didn't provide any performance gain. However, since you use less VRAM compared to <code>float32</code> you can train with larger batch sizes, which is a recommended practice (I heard that Karpathy said it's always best to train with as large batches as you can; I may be wrong here).</li>
<li>Many other things that I don't remember right now.</li>
</ol>
<h1>Hardware</h1>
<p>It's nearly impossible to train models like Matcha and Donut with the P100s provided by Kaggle. We ended up paying for Google Colab Pro+, which provides A100 GPUs with 40 GB of VRAM, since neither of us had DL GPUs like an RTX 3090/4090. With the A100 we were able to train large models and run many experiments faster. I trained more than 30 different models!</p>
<h1>Conclusions</h1>
<p>Even though I would have liked to get inside the gold zone (always hoping for the best), I am grateful for the result we obtained and mainly for all the lessons learned throughout the competition. I firmly believe that if you want to learn something, try changing it! Code something from scratch! Best way to learn by far. I am really looking forward to reading other teams' solutions. I would also like to know if anyone tried changing the Vision Model and the Text Model from Pix2Struct. I don't know if it's even possible or how it's done, so if you know please leave a comment.</p>
<p>Thanks all!</p> |
Benetech - Making Graphs Accessible | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners and thanks to the organizers for hosting this interesting competition.<br>
I would also like to thank my teammate <a href="https://www.kaggle.com/fuumin621" target="_blank">@fuumin621</a> for working with me on this challenging task.</p>
<h1>SUMMARY</h1>
<p>Similar to other solutions, our pipeline also consists of object detection for scatter points and end-to-end plot prediction by deplot for other graphs.<br>
<strong>Scatter plots</strong>: plot-bb detection with YOLOX + scatter point detection with YOLOX + OCR of max/min axis labels with matcha<br>
<strong>Other graphs</strong>: deplot trained with original synthetic data</p>
<h1>Scatter point detection</h1>
<p>We used the following two steps object detection to detect scatter points and determined the coordinates of scatter points from the axis labels read by matcha.</p>
<p>Step 1. plot-bb detection by YOLOX (model size : x, image size : 640)<br>
Step 2. scatter points detection with "plot-bb cropped" image by YOLOX (model size : x, image size : 960)</p>
<p>Training data for step 2 is the competition data (generated, extracted) + 538 scatter plots from the ICDAR dataset.</p>
<p>Public LB : Private LB = 0.09 : 0.29</p>
<h1>Bar graph, line graph, dot graph</h1>
<p>We used a variety of graph data sources for training deplot,</p>
<ol>
<li>generated graph (competition data)</li>
<li>extracted graph (competition data)</li>
<li>ICDAR dataset (1185 single series + 2992 multiple series)</li>
<li><a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> 's 500k dataset (except scatter)</li>
<li>original synthetic dataset (vertical bar: 40000, horizontal bar: 40000, line: 40000)</li>
<li>original synthetic dataset (histogram : 10000)</li>
</ol>
<h2>original dataset (generated by matplotlib)</h2>
<p>Our key focus in this competition was data generation.<br>
We generated graphs with various patterns to make our model robust against unknown private dataset.<br>
The generated graphs were evaluated using the competition dataset (extracted) and the ICDAR dataset.<br>
Continuous graph modification and feedback from error analysis allowed predictions to be made on difficult data.<br>
Our data was especially helpful in improving the accuracy of line graphs (so it is very important whether the line is on the tick label or not!).</p>
<p>Some attempts for improving common mistakes:</p>
<ul>
<li>line plot with no value on some tick labels</li>
<li>JPG compression to lower quality (simulate low resolution and blurred characters)</li>
<li>add symbols ($ % , .) to distinguish numbers and symbols (like 8 and $)</li>
<li>extended predicted digits for small values (like < 0.0001)</li>
<li>separate labels into multiple lines</li>
<li>etc..</li>
</ul>
<p>Some examples of our generated graphs are shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F8e95529288fbfda2462e21336fb43b5b%2F00000008.jpeg?generation=1687351996558599&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F392cab76953662bed1d6fc6f17be4e59%2F00000054.jpeg?generation=1687352984294446&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F2dfff51c40cb57f1793b4c56fbea2560%2F00000042.jpeg?generation=1687353205159183&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F584b0585fa82beffd0d3cd771c347bd3%2F00000081.jpeg?generation=1687357868152994&alt=media" alt=""></p>
<h2>Training (deplot)</h2>
<h3>training data configuration</h3>
<table>
<thead>
<tr>
<th>source</th>
<th># of samples per epoch</th>
<th>total graphs</th>
</tr>
</thead>
<tbody>
<tr>
<td>generated</td>
<td>1000</td>
<td>50000</td>
</tr>
<tr>
<td>extracted</td>
<td>953</td>
<td>953</td>
</tr>
<tr>
<td>ICDAR</td>
<td>1000</td>
<td>4177</td>
</tr>
<tr>
<td>500k dataset</td>
<td>1000</td>
<td>400000</td>
</tr>
<tr>
<td>original synthetic data</td>
<td>2000</td>
<td>120000</td>
</tr>
<tr>
<td>original synthetic data (histogram)</td>
<td>100</td>
<td>10000</td>
</tr>
</tbody>
</table>
<p>For each epoch, we randomly sampled the graphs for each source at the above ratios.</p>
<h3>training conditions</h3>
<p>epochs: 50<br>
learning rate: 2e-4<br>
scheduler: cosine annealing<br>
batch (accumulation): 64</p>
<h1>Final results</h1>
<table>
<thead>
<tr>
<th>chart-type</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>all</td>
<td>0.86</td>
<td>0.69</td>
</tr>
<tr>
<td>vertical bar</td>
<td>0.39</td>
<td>0.26</td>
</tr>
<tr>
<td>horizontal bar</td>
<td>0.04</td>
<td>0.01</td>
</tr>
<tr>
<td>line</td>
<td>0.32</td>
<td>0.12</td>
</tr>
<tr>
<td>scatter</td>
<td>0.09</td>
<td>0.29</td>
</tr>
<tr>
<td>dot</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>Yes, we didn't detect the tick labels' pixel coordinates; instead we simply read the min/max values of the x/y tick labels with matcha.<br>
Then, these min/max values are mapped to pixel coordinates as (x_min, y_min) -> (0,0), (x_max, y_max) -> (w', h') where w', h' are the width and height of the graph after cropping by plot-bb.<br>
But as you know, the min/max tick labels do not always coincide with the edge of the image. So, to be honest, our approach is inadequate and there is still room for score improvement.</p> |
Benetech - Making Graphs Accessible | 51st Place Solution - Single Matcha Model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,</p>
<p>First of all, huge congratulations to the winners. I would like to share our 51st-place solution briefly as well. Our team is <a href="https://www.kaggle.com/batuhanm" target="_blank">@batuhanm</a> and <a href="https://www.kaggle.com/emirkocak" target="_blank">@emirkocak</a> </p>
<hr>
<p>We started working on this competition just two weeks prior to the deadline and dived into the discussions immediately. After having an initial understanding we found these two baseline notebooks:</p>
<ul>
<li><a href="https://www.kaggle.com/code/nbroad/donut-train-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-train-benetech</a> </li>
<li><a href="https://www.kaggle.com/code/heyytanay/training-pix2struct-pytorch-amp-w-b" target="_blank">https://www.kaggle.com/code/heyytanay/training-pix2struct-pytorch-amp-w-b</a> </li>
</ul>
<p>Huge thanks to <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> <a href="https://www.kaggle.com/heyytanay" target="_blank">@heyytanay</a> for their contributions. I merged these two code baselines and trained an end-to-end Matcha model for two epochs. The result of this pipeline was 0.43 public / 0.25 private. After some investigation, I realized that the model was underfitting, so we trained the same model for 8 more epochs (so 10 in total). This experiment reached 0.66 on the public LB and 0.37 on the private. This whole process took a week or so.</p>
<p>Then, after lots of debate and discussion about using the ICDAR dataset (it was banned at first), we didn't focus on that part. Instead, we included this dataset:</p>
<ul>
<li><a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li>
</ul>
<p>from <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> (again huge thanks here). With around ~300k images we trained 3 more epochs in total and reached 0.73 on the public LB.</p>
<p>Since Matcha is a generative model, we cleaned up the predicted numbers as a final post-processing step and got a slight boost (+0.001).</p>
<h4>Things didn't work</h4>
<ul>
<li>We couldn't get deplot to work as well as Matcha. Our goal was to combine these two models and improve on confusing examples, but one week of effort was not enough and it just didn't work.</li>
</ul>
<p>We also trained a separate scatter model, but due to the time limit we couldn't incorporate it into our inference pipeline. After reading a couple of solutions, it looks like this would have improved the results a lot.</p>
<p>Well, thanks for reading, and again congrats to the winners!</p>
<p>Thank you! <a href="https://www.kaggle.com/heyytanay" target="_blank">@heyytanay</a> </p> |
Benetech - Making Graphs Accessible | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to Kaggle and Benetech for this interesting competition where many different approaches are possible and ideas are endless. Given the strong distribution shift in the private test set, we are quite happy with our finish.</p>
<p><strong>Summary</strong></p>
<p>Our solution consists of 3 main components:</p>
<ul>
<li>Synthetic data generation.</li>
<li>3-stage training of matcha models.</li>
<li>Separate process to handle scatter plot.</li>
</ul>
<p><strong>1. Synthetic data generation</strong></p>
<p>Early on we realized this competition is unique in the sense that we are not limited to the available dataset. If the model struggles with certain plot patterns, we can generate a large number of plots with these patterns and as the model (matcha) is very strong, it will learn.</p>
<p>We started by building upon this amazing repo: <a href="https://github.com/rakutentech/chart-synthesizer" target="_blank">https://github.com/rakutentech/chart-synthesizer</a>. We used both the competition's extracted data and the ICDAR data as validation and tried to emulate as many patterns from them as possible. For example:</p>
<ul>
<li>Different fonts, tick orientations, tick styles, background colors, grid styles etc.</li>
<li>Add error bars to bar and line plots.</li>
<li>Generate histogram plots as a separate chart type.</li>
<li>Difficult line plot patterns, such as when the line starts very close to a tick mark but doesn't touch it; without additional training data the model will very likely include this tick value in the prediction.</li>
<li>Add blur and noise effects to reproduce the look of extracted plots.</li>
</ul>
<p>Some examples of our generated data:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F93bb591a9717e095b9f422c224b31c02%2Fline61.jpg?generation=1687298963057596&alt=media" alt="“”"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F6f24a6ba2660e0440b150164d6390ab6%2Fvbar3.jpg?generation=1687298981955314&alt=media" alt="“”"></p>
<p>With each round of training, we would analyze the validation set, locate the patterns that gave a low score, and add these patterns to our chart generation code. We rose from 0.74 to 0.82 on the public LB simply by repeating this process multiple times.</p>
<p>With our generation code we can create as many bar, line and scatter plots as necessary for each training stage. For dot plot we sampled 10k images from the great dataset provided by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> </p>
<p><strong>2. Three-stage training of matcha models</strong></p>
<p>Thanks to the kind sharing by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> , we started the competition training <code>matcha-base</code> models and quickly reached a good score.</p>
<p>The training is done in 3 stages:</p>
<ul>
<li>Train from <code>matcha-base</code> checkpoint as a chart classifier. The model is trained for 5 epochs, using little synthetic data and oversampling extracted data.</li>
<li>Use the weight from stage 1, continue training for 10 epochs to extract data series from all chart types. In this step we add a lot of our own synthetic data (~150k images) to combine with the original dataset.</li>
<li>Use the weight from stage 2, finetune separate models for 5 epochs with each chart type group:<ul>
<li>vertical bar/dot (add 50k synthetic vertical bar plots, 5k histograms, 5k dot plots)</li>
<li>horizontal bar (add 50k synthetic plots)</li>
<li>line (add 200k synthetic plots)</li>
<li>scatter (add 30k synthetic plots)</li></ul></li>
</ul>
<p>At inference time, the chart classifier is run first and then each group of charts will be handled by its dedicated model.</p>
<p>At the beginning our process had only steps 2 and 3, and we used the model from step 2 as the classifier. But then we observed that adding a lot of synthetic data decreases the performance of the classification task. As a result, training for the classifier was separated.</p>
<p>It is also possible to perform the classification task with a simple CNN; however, from our early experiments we saw that using matcha gave slightly better performance, so we stuck with it.</p>
<p><strong>Some training details:</strong></p>
<ul>
<li>Learning rate 3e-5 for 1st and 2nd stages, 2e-5 for 3rd stage, using cosine schedule with warmup.</li>
<li>Adafactor optimizer.</li>
<li>Max patches 2048, max length 512.</li>
<li>Freeze first 4 layers of encoder.</li>
<li>Prediction string: <code><chart_type><start>x1|y1;…;xn|yn<end></code></li>
<li>Histogram is treated as a separate chart type in training and mapped back to vertical bar at inference.</li>
<li>Dynamic rounding based on the range of data series. </li>
</ul>
<p><strong>3. Scatter plots processing</strong></p>
<p>While the mentioned above method helped us reach good results for bar, dot and line charts, we found scatter plots much harder for matcha to handle. As a result, we developed a separate scheme only for scatter:</p>
<ul>
<li>Plot area and tick label detection: use outputs from <a href="https://github.com/pengyu965/ChartDete" target="_blank">CACHED</a> .</li>
<li>Textline Rotation : use MobilenetV2 backbone to train angle classification task.</li>
<li>Textline OCR: use <a href="https://github.com/pbcquoc/vietocr" target="_blank">vietocr</a> ’s seq2seq model to train textline reading task. We modified the encoder to use ResNeXt50 with dilated convolution.</li>
<li>Scatter marker detection: use Mask-RCNN with <a href="https://github.com/mlpc-ucsd/CoaT" target="_blank">CoaT</a> backbone for this task. We trained the model with detectron2. </li>
<li>Final values of each marker are deduced from marker position, tick label position and tick values.</li>
</ul>
<p>This process helped us score 0.09 public and 0.28 private for scatter and was the key to surviving the private test set.</p>
<p>Thank you for reading and please let us know if you have any questions.</p>
<p>Edit: Inference notebook for our solution now available: <a href="https://www.kaggle.com/code/qdv206/benetech-5th-place-inference" target="_blank">https://www.kaggle.com/code/qdv206/benetech-5th-place-inference</a></p>
<p>Thank you very much! Without your sharing on matcha we probably would have had a much harder time with this competition.</p>
<p>Here is our result by chart type: (CV/Public/Private)</p>
<ul>
<li>Vertical bar: 0.95/0.39/0.26</li>
<li>Horizontal bar: 0.95/0.04/0.01</li>
<li>Dot: 0.98/0/0 (not really reliable as we only used a subset of generated plots as validation)</li>
<li>Line: 0.87/0.32/0.13</li>
<li>Scatter: 0.66/0.09/0.28</li>
</ul> |
Benetech - Making Graphs Accessible | 6th place solution - deplot & UNet postprocessing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to kaggle and benetech for hosting an interesting problem. It was a great team effort with <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> </p>
<h2>TLDR</h2>
<p>Our solution is a 4-seed blend of a single deplot model for all chart types. For scatter plots we ran additional post-processing using the number of scatter points predicted by a U-Net point segmentation model. </p>
<h2>Data preprocessing</h2>
<h3>Benetech dataset</h3>
<p>For training we used all of the Benetech extracted data and approx. 20% of the generated data, selected based on the hardest samples. This sped up the training pipeline significantly. <br>
Numerical values had a lot of digits in ground truth which did not match well with our loss function (cross entropy) in pix2struct based models. Therefore, it was important to round these numerical values. For this, we bucketed the labels to approx 5 buckets within the tick label axis. So, for example, the buckets in the below would be <code>0, 40000, 80000, 120000, 160000, 200000, 240000… etc.</code> and the y-axis label to learn would be <code>840000, 960000, 760000, 700000, 580000, 460000, etc.</code>. For ground truth labels which had no tick labels, similar rounding was performed based on the min-max range of the data series, using approximately 40 buckets. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fdd90fda19f8e33c767b75e32188a0c29%2FScreenshot%202023-06-20%20at%2021.37.57.png?generation=1687289889671066&alt=media" alt=""></p>
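<p>A minimal sketch of this kind of bucketing, assuming evenly spaced, already-parsed tick values; the helper name, the evenly-spaced assumption and the exact rounding rule are mine, based on the description above:</p>
<pre><code>def bucket_value(value, tick_values, buckets_per_interval=5):
    """Round a ground-truth value to the nearest bucket derived from the tick spacing."""
    ticks = sorted(tick_values)
    tick_step = (ticks[-1] - ticks[0]) / (len(ticks) - 1)   # assumes evenly spaced ticks
    bucket = tick_step / buckets_per_interval
    return round(value / bucket) * bucket

# with ticks every 40000 and 5 buckets per interval, the bucket size is 8000,
# so a raw value of 842137 is stored in the label as 840000
print(bucket_value(842_137, [0, 40_000, 80_000, 120_000, 160_000, 200_000, 240_000]))
</code></pre>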
<p>Within the pix2struct model, the labels were learned as key-value sequences, similar to the training of deplot. For the example above, that would be<br>
<code><pad>Malaysia;960000|Malawi;840000|Maldives;760000|Mali;700000|Mauritania;660000|Melanesia;580000|Malta;580000|Mauritius;540000|Martinique;460000|Mayotte;420000<\s></code></p>
<p>A separate decoder was used to learn the reversed key-value sequence. This was linked to the same encoder and added as an auxiliary loss so the model would not focus more on either side of the image. Since the heavy part of deplot is the encoder, a double decoder did not create much overhead.</p>
<h3>External data</h3>
<p>For this competition, we utilized a commercially available subset of PubMed articles to train our models. Approximately 80% of the labeled samples in the ICDAR dataset were found to originate from this commercial dataset. We ensured that the use of this dataset complied with the applicable licensing terms. To expand our training data, we applied pseudo-labeling techniques followed by manual review. Specifically, we pseudo-labeled and manually reviewed around 8,000 bar and line graph samples from PubMed articles. The pseudo-labeling results were quite accurate on bars, requiring only verification that the identified charts were the correct type (e.g. removing box plots). Lines needed some corrections. We also used our U-Net model to pseudo-label approximately 1,000 scatter plot samples from PubMed, and then manually added any missing data points. Using the VGG Image Annotator tool, this manual annotation process was surprisingly fast and efficient. </p>
<h3>Data generation</h3>
<p>We used the great <a href="https://www.kaggle.com/code/brendanartley/benetech-5-chart-types-generator" target="_blank">public kernel</a> by <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> in two ways</p>
<p>We pretrained our deplot models using the 500k images downloadable from the public kernel, which gave a small improvement compared to using the original checkpoint from huggingface.</p>
<p>We also tweaked the kernel a bit to generate more sophisticated scatter plots (more points, different markers, more overlap etc) and output x/y coordinates of each scatter point. We generated 50k scatter plots with that script and used them additionally for training the segmentation model</p>
<h2>Models</h2>
<p>Deplot was used for predicting bars, lines and scatter; for scatter, we additionally used a u-net model. </p>
<p>Deplot was trained for 20 epochs, with a learning rate of approx 5e-5 and batch size 8. <a href="https://github.com/albumentations-team/albumentations/blob/master/albumentations/augmentations/transforms.py#L2413" target="_blank">Pixeldropout augmentation</a> was found to prevent overfitting. No other image augmentations were used. For experimentation, 1024 patches were used, which decreased training time 2X with only a slight degradation in scores. The loss was a combination of cross entropy on the key-value sequences mentioned above and the chart type. A linear layer on top of the first token of deplot was used to predict the chart type as an auxiliary task.</p>
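<p>A minimal sketch of such an auxiliary head, assuming access to the encoder hidden states; the hidden size and the number of chart types below are placeholders, not the exact values used:</p>
<pre><code>import torch
import torch.nn as nn

class ChartTypeAuxHead(nn.Module):
    """Auxiliary chart-type classifier on top of the encoder's first token."""
    def __init__(self, hidden_size=768, n_chart_types=5):   # placeholder sizes
        super().__init__()
        self.fc = nn.Linear(hidden_size, n_chart_types)

    def forward(self, encoder_hidden_states):        # (batch, seq_len, hidden_size)
        return self.fc(encoder_hidden_states[:, 0])  # (batch, n_chart_types) logits

# the cross entropy on these logits is added to the sequence loss during training
head = ChartTypeAuxHead()
print(head(torch.randn(2, 2048, 768)).shape)         # torch.Size([2, 5])
</code></pre>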
<p>For scatter plots we used a U-Net (EfficientNet-B7 encoder) + non-maximum suppression (NMS) to predict the number of scatter points, and performed a simple yet effective postprocessing wherever deplot predicted a different number of points (a short sketch follows the list below):</p>
<ul>
<li>If deplot predicted too many points, disregard the last ones</li>
<li>If deplot predicted too few points, fill missing with the mean value</li>
</ul>
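<p>A minimal sketch of those two rules, applied per axis to deplot's predicted values given the U-Net point count (the function name is mine):</p>
<pre><code>def align_scatter_points(pred_values, n_unet):
    """Trim or pad deplot's scatter predictions to match the U-Net point count."""
    vals = list(pred_values)[:n_unet]                 # too many -> drop the last ones
    if len(vals) < n_unet:                            # too few  -> fill with the mean value
        mean = sum(vals) / len(vals) if vals else 0.0
        vals += [mean] * (n_unet - len(vals))
    return vals

print(align_scatter_points([1.0, 2.0, 3.0], 5))   # -> [1.0, 2.0, 3.0, 2.0, 2.0]
print(align_scatter_points([1.0, 2.0, 3.0], 2))   # -> [1.0, 2.0]
</code></pre>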
<p>Below you can see the segmentation mask for U-Net training. Using the scatter point annotation we created “smooth” points with 10 pixel gaussian radius and drew onto an empty image. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fe0cbc56d90a54892ee8a4365b03d2f59%2FScreenshot%202023-06-20%20at%2021.44.38.png?generation=1687290294572628&alt=media" alt=""></p>
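<p>A minimal sketch of building such a target mask, with one Gaussian blob per annotated point; the sigma-from-radius choice is my assumption and the team's exact recipe may differ:</p>
<pre><code>import numpy as np

def make_point_mask(height, width, points, radius=10):
    """Draw an isotropic Gaussian blob (peak = 1) at every (x, y) scatter point."""
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        blob = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * (radius / 3) ** 2))
        mask = np.maximum(mask, blob)        # overlapping points keep the max response
    return mask

mask = make_point_mask(256, 256, [(40, 50), (120, 200)])
print(mask.shape, mask.max())                # (256, 256) 1.0
</code></pre>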
<h2>Ensembling</h2>
<p>For deplot we did inner ensembling by averaging decoder prediction in each greedy decoding step. For Unet we simply averaged predicted masks before performing NMS.</p>
<h2>Ablation study (Public LB Scores)</h2>
<p>The following table gives a quick overview of how much each component contributed to our solution:</p>
<table>
<thead>
<tr>
<th>component</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline deplot@1024 tokens</td>
<td>0.71</td>
</tr>
<tr>
<td>using fullfit</td>
<td>+0.01</td>
</tr>
<tr>
<td>1024 -> 2048 tokens</td>
<td>+0.03</td>
</tr>
<tr>
<td>Scatter pp Unet@512x512</td>
<td>+0.05</td>
</tr>
<tr>
<td>Pixeldropout</td>
<td>+0.01</td>
</tr>
<tr>
<td>External data</td>
<td>+0.02</td>
</tr>
<tr>
<td>Pretrain on 500k</td>
<td>+0.01</td>
</tr>
<tr>
<td>Unet@512x512 -> 768x768</td>
<td>+0.01</td>
</tr>
<tr>
<td>3x Unet@768x768</td>
<td>+0.01</td>
</tr>
<tr>
<td>4x seed blend of deplots</td>
<td>+0.01</td>
</tr>
<tr>
<td><strong>final score</strong></td>
<td><strong>0.87</strong></td>
</tr>
</tbody>
</table>
<h2>Used tools/ repos</h2>
<ul>
<li>Unet:<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank"> Segmentation models pytorch</a> + <a href="https://github.com/huggingface/pytorch-image-models/tree/main/timm" target="_blank">timm encoder</a></li>
<li>Deplot: <a href="https://huggingface.co/google/deplot" target="_blank">huggingface</a></li>
<li>logging/ visualization: <a href="https://neptune.ai/" target="_blank">neptune.ai</a></li>
<li>augmentation: <a href="https://albumentations.ai/" target="_blank">albumentations</a></li>
<li>labelling: <a href="https://www.robots.ox.ac.uk/~vgg/software/via/" target="_blank">VGG Image Annotator tool</a></li>
</ul>
<p>Thank you for reading, questions welcome</p>
<p>edit:</p>
<p>June 21st: inference kernel made public: <a href="https://www.kaggle.com/code/christofhenkel/benetech-6th-place-dd/notebook" target="_blank">https://www.kaggle.com/code/christofhenkel/benetech-6th-place-dd/notebook</a></p>
<p>In the beginning matcha was way worse for us. But after we found that pretraining helps, matcha was roughly the same as deplot. We also tried Pix2Struct Large, but no luck there. </p>
<p>Say x_seg is your predicted segmentation mask. Run non-maximum suppression and count the number of remaining positive pixels:</p>
<pre><code>x_pred = x_seg.sigmoid()
# keep only local maxima of the probability map (non-maximum suppression)
x_pooled = F.max_pool2d(x_pred, kernel_size=self.nms_kernel_size, stride=1, padding=self.nms_padding)
x_pred[x_pred != x_pooled] = 0
# count surviving peaks above the threshold = number of scatter points (missing values reconstructed)
n_pred = (x_pred[:, 0] > self.n_threshold).sum(dim=(1, 2))
</code></pre>
<blockquote>
<p>Also, which timm backbone did you use?</p>
</blockquote>
<p>Stated in the modeling section: U-Net (EfficientNet-B7 encoder). Segmentation Models PyTorch enables using most encoders from timm, so for us it was tf_efficientnet_b7_ns.</p>
<p>An nms_kernel_size of 3 will keep the maximum within a distance of 3 pixels. As long as the center pixels of two scatter points are more than 3 pixels apart, it's fine.</p>
<p>A good practice is to check the model capability by using the gt segmentation mask. Performing NMS with kernel size 3 on those will get the number of scatter points correct with an accuracy of 99.7%. If you increase the NMS kernel to 5, the accuracy drops to 95% because of exactly the issue you mentioned: overlapping points.</p>
<p>Add these two lines </p>
<pre><code>xy_pixels = ax.transData.transform(np.vstack([xs, ys]).T)    # data coordinates -> pixel coordinates
np.save(config.out_file + "{}.npy".format(fnum), xy_pixels)  # file-name pattern reconstructed; original literal lost
</code></pre>
<p>after </p>
<pre><code>fig.savefig(config.out_file + "{}.jpg".format(fnum))  # extension reconstructed; original literal lost
</code></pre>
<p>thanks to <br>
<a href="https://stackoverflow.com/questions/13662525/how-to-get-pixel-coordinates-for-matplotlib-generated-scatterplot/13662938" target="_blank">https://stackoverflow.com/questions/13662525/how-to-get-pixel-coordinates-for-matplotlib-generated-scatterplot/13662938</a></p> |
Benetech - Making Graphs Accessible | 7th place solution - no external data | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For this competition, I chose to use a pipeline of multiple models instead of an end-to-end model (such as Donut, Deplot, or Matcha). I made this choice because the baseline end-to-end model I tested with Kaggle data only did not perform well, and I thought an end-to-end model might require a powerful machine to train and take time to generate data for.</p>
<h1>Models</h1>
<ul>
<li>Image classification: used to classify chart types.</li>
<li>Text detection: used to detect x/y labels.</li>
<li>Text recognition: used to read the crops (pretrained model only, no fine-tuning).</li>
<li>Object Detection: used to detect x/y ticks, scatter points, and vertical and horizontal bars.</li>
<li>Object Segmentation: used to segment lines on line charts and segment vertical bars (to classify if the vertical bar is a histogram or not).</li>
<li>Pretrained Deplot Model: used for cases where the pipeline cannot predict anything (applied only in one case out of 559 CV files).</li>
</ul>
<h1>Data</h1>
<ul>
<li>Use only Kaggle data.</li>
<li>Split the <code>extracted</code> data into a 50% training set and a 50% validation set.</li>
<li>Use all <code>generated</code> data for training set</li>
<li>Finally, use all the data to train the final weights for each type of model.</li>
</ul>
<h1>Result:</h1>
<p>Best CV - LB (with train data only): 0.871 - 0.86</p>
<p>Private LB: 0.67</p>
<p>Best public - private LB: 0.86 - 0.69</p>
<h1>Pipeline Details</h1>
<p><strong>Step 0: Chart type classification</strong></p>
<p>I used a v2s backbone model to classify the charts, with a CV accuracy of 99.2%.</p>
<h2>General components/steps applied for all chart types:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Fdc243c99a1031a1d8e4ddc2f2dbc0250%2F000b92c3b098.jpg?generation=1687320266860703&alt=media" alt=""></p>
<p><strong>Step 1</strong>: Detect and read x/y labels:</p>
<ul>
<li>First, use a text detection model to detect the polygons of x/y labels.</li>
<li>Then, use a text recognition model to read the text inside the polygons.</li>
<li>Post-processing:<ul>
<li>For x labels, sweep a horizontal line from the top to the bottom of the image and select the line that intersects the largest number of x-label polygons (see the sketch after this list). This is because the text detection model is not always accurate.</li>
<li>Repeat the process for y labels.</li></ul></li>
</ul>
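<p>A minimal sketch of that scan-line filter, assuming the detected polygons have been reduced to axis-aligned boxes (names are illustrative):</p>
<pre><code>import numpy as np

def filter_x_labels(boxes, image_height):
    """Keep only the x-label boxes crossed by the best horizontal scan line.

    boxes: list of (x_min, y_min, x_max, y_max) from the text detector.
    """
    counts = np.zeros(image_height, dtype=int)
    for _, y_min, _, y_max in boxes:
        counts[int(y_min):int(y_max) + 1] += 1   # scan lines crossing this box
    best_y = int(counts.argmax())                # line with the most intersections
    return [b for b in boxes if b[1] <= best_y <= b[3]]
</code></pre>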
<p><strong>Step 2</strong>: Use an object detection model to detect x/y ticks.</p>
<ul>
<li>Post-processing: Similar to x/y labels, I draw a horizontal line to filter x ticks and a vertical line to filter y ticks.</li>
</ul>
<p><strong>Step 3</strong>: Map x/y labels and x/y ticks.</p>
<ul>
<li>Based on the relative position between labels and ticks, I create a 1-1 mapping for these label-tick pairs based on IOU in Ox direction and ignore all other labels/ticks that can't be paired.</li>
<li>Because there are cases where the x labels are not straight, to make the mapping more accurate:<ul>
<li>Get the rectangles of x_label from polygons.</li>
<li>Draw a rhombus with vertices at the center of the rectangle edges.</li>
<li>Draw a new rectangle with the center at the highest vertex of the rhombus.</li>
<li>Then create a 1-1 mapping with x_boxes similar to mapping x labels and x ticks.</li></ul></li>
</ul>
<h2>For a vertical bar chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F729ce9a29230faa6093f8d795020f25a%2FScreenshot_2023-06-20_at_11.35.00.png?generation=1687320318358985&alt=media" alt=""></p>
<p><strong>Step 4:</strong> Detect the point on top of each bar using an object detection model. I will call these points "value boxes" because I predicted boxes instead of points.</p>
<p><strong>Step 5:</strong> Map x ticks and value boxes.</p>
<ul>
<li>This step is similar to mapping x labels and x ticks.</li>
</ul>
<p><strong>Step 6:</strong> Get the final value of each bar.</p>
<ul>
<li>Project the center of each value box to Oy, then get the final value by comparing it to the two nearest y ticks' values (see the helper sketch after this list).<ul>
<li><code>value_box_value = y1_value - abs((y2_value - y1_value) / (y2_pixel - y1_pixel) * (value_y_pixel - y1_pixel))</code></li></ul></li>
</ul>
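<p>The conversion above is plain linear interpolation between the two nearest y ticks; a small helper mirroring the formula (variable names match the formula, not the author's code) could look like this:</p>
<pre><code>def pixel_to_value(value_y_pixel, y1_pixel, y1_value, y2_pixel, y2_value):
    """Linearly interpolate a pixel coordinate into data units using two y ticks."""
    scale = (y2_value - y1_value) / (y2_pixel - y1_pixel)
    return y1_value - abs(scale * (value_y_pixel - y1_pixel))
</code></pre>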
<p><strong>How to check if a vertical bar is a histogram:</strong></p>
<ul>
<li>Because all columns of a histogram always sit next to each other without space, I use a bar segmentation model to detect the bar region.</li>
<li>Calculate the coverage in the Ox direction: if the segmented region (projected onto Ox) covers more than 95% of the distance between the first and last bars, then it is a histogram (a small sketch of this check follows the list).</li>
</ul>
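<p>A sketch of that coverage check, assuming a binary bar-segmentation mask and the pixel x-positions of the first and last bars (names are illustrative):</p>
<pre><code>import numpy as np

def is_histogram(bar_mask, first_bar_x, last_bar_x, threshold=0.95):
    """Histogram bars touch each other, so the mask should cover almost the whole x-range."""
    columns_with_bar = (bar_mask > 0).any(axis=0)              # projection onto Ox
    covered = columns_with_bar[first_bar_x:last_bar_x + 1].sum()
    return covered >= threshold * (last_bar_x - first_bar_x + 1)
</code></pre>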
<h2>Horizontal Bar Chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F6ade62f528b74599879f6833bc737a23%2FScreenshot_2023-06-20_at_11.32.59.png?generation=1687320337730044&alt=media" alt=""></p>
<p>I process a horizontal bar chart follow these steps:</p>
<ul>
<li>Rotate the chart 90 degrees and then flip it horizontally to make it similar to a vertical bar chart.</li>
<li>Use the same steps as for a vertical bar chart to process the horizontal bar chart.</li>
</ul>
<h2>Line Chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Fe6af38a3f7ec91c67630248db8800533%2FScreenshot_2023-06-20_at_11.18.59.png?generation=1687320402512900&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2F60bcf18082ab15baf5a1879668494d18%2FScreenshot_2023-06-20_at_11.19.19.png?generation=1687320415337921&alt=media" alt=""><br>
<strong>Step 4</strong>: Use a segmentation model to predict the lines in the chart. For dashed lines, connect them to make them continuous.</p>
<p><strong>Step 5</strong>: Project the line onto the Ox axis and keep only the x labels and ticks that can be projected onto the line in the Ox dimension. Then, get the corresponding value boxes for the remaining x labels.</p>
<p><strong>Step 6</strong>: Obtain the final value for each value box:</p>
<ul>
<li>Project the center of each value box onto the Oy axis.</li>
<li>Compare the projected value with the values of the two nearest y ticks to get the final value.</li>
</ul>
<h2>For scatter chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1713154%2Ffd84003f92f9f973e8019ba2564e1315%2FScreenshot_2023-06-20_at_11.16.21.png?generation=1687320430238074&alt=media" alt=""></p>
<p><strong>Step 4</strong>: Detect scatter points using object detection models. I find the box size of scatter points by getting the largest rectangle connected component that contains the scatter point, then average all the box size in the chart to get the final box size for all scatter points. These predicted points will be referred to as "value boxes" in the following steps, in order to maintain consistency with other chart types.</p>
<p><strong>Step 5</strong>: Map the x/y ticks and value boxes.</p>
<ul>
<li>Similar to mapping the x ticks and value boxes in a vertical bar chart, we do the same for the y ticks in the Oy dimension.</li>
</ul>
<p><strong>Step 6</strong>: Obtain the final value of each value box.</p>
<ul>
<li>Project the center of the value boxes to Ox/Oy, then obtain the final value by comparing them to the values of the two closest x/y ticks.</li>
</ul>
<h2>For dot chart</h2>
<p>Similar to vertical bar chart, except if the x labels are numerical, we use scatter postprocessing methods instead.</p>
<h3>And another post-processing functions using computer vision based algorithms, some highlights:</h3>
<ul>
<li>To avoid errors in text recognition and detection, keep only the y-label values that form the longest increasing sequence (a small sketch follows this list).</li>
<li>Splitting words inside polygons: since the pretrained text recognition model doesn't include spaces, I use another word detection model. Then sorting the words based on their geometric location can help with accuracy.</li>
</ul>
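<p>A standard longest-increasing-subsequence routine is enough for that filter; a minimal O(n²) sketch (the author's exact implementation may differ):</p>
<pre><code>def longest_increasing_subsequence(values):
    """Return the indices of the longest strictly increasing subsequence."""
    n = len(values)
    if n == 0:
        return []
    best_len = [1] * n    # LIS length ending at each index
    prev = [-1] * n       # back-pointers to rebuild the subsequence
    for i in range(n):
        for j in range(i):
            if values[j] < values[i] and best_len[j] + 1 > best_len[i]:
                best_len[i], prev[i] = best_len[j] + 1, j
    end = max(range(n), key=lambda i: best_len[i])
    keep = []
    while end != -1:
        keep.append(end)
        end = prev[end]
    return keep[::-1]

# y labels whose OCR value breaks the increasing order are dropped:
# kept_labels = [y_labels[i] for i in longest_increasing_subsequence(ocr_values)]
</code></pre>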
<p>More detail in the notebook. The inference notebook is public here: <a href="https://www.kaggle.com/code/thanhhau097a/bmga-submission-0-86" target="_blank">https://www.kaggle.com/code/thanhhau097a/bmga-submission-0-86</a></p>
<p>Cheers!</p>
<p>Thank you! </p>
<p>Using linear regression is a great idea! To handle the problem of potentially incorrect x/y label texts, I use a heuristic method to ensure that the x-labels sequence from left to right is a longest increasing sequence. Then, I only keep ticks/labels that are in that sequence. I'm surprised to see your linear regression solution – it's a really clever approach!</p>
<p>Thank you, my old boss! Congrats on your Competition Master title! :D </p>
<p>Sorry, the correct name is EfficientNetV2-S, V2S is just a short name I usually use :D<br>
<a href="https://arxiv.org/abs/2104.00298" target="_blank">https://arxiv.org/abs/2104.00298</a></p>
<p>Hi Nam,</p>
<p>Text detection: DBNet<br>
Segmentation: EfficientNetV2-S backbone<br>
Value box detection: YoloX</p>
<p>This is my CV score with model trained with training split only:</p>
<pre><code># chart_type = line            score = 0.8549755488485793
# chart_type = vertical_bar    score = 0.9413445015654374
# chart_type = scatter         score = 0.6703900987052557
# chart_type = horizontal_bar  score = 0.9510436031780835
</code></pre>
<p>Thank you!</p>
<ol>
<li>Agree, the mapping relies heavily on label and tick predictions. To avoid missing mapping pairs, I lowered the threshold to predict more labels and ticks. The predictions are quite accurate, so mistakes rarely happen.</li>
<li>Initially, I created the pre-annotation by finding the largest connected component in the chart region. Then I used a labeling tool to correct any incorrect annotations, and finally fine-tuned the model on the corrected data.</li>
</ol>
<p>Hope that answers your questions!</p> |
Bengali.AI Handwritten Grapheme Classification | 10th Place: Single Model Using Snapshot Ensemble w/ code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I updated the result of visualization and late submissions. The source code is now available on GitHub. <br>
I think Chris's magic is a really magic😂</p>
<p>Thanks.</p>
<p>Thanks!</p>
<p>Thanks!</p>
<p>Thanks <a href="/returnofsputnik">@returnofsputnik</a> !</p>
<p>I have a similar thought to yours, because each of my cycle models reached a silver or gold place.
My intuition is that using RandomErasing <strong>but not using CutMix/MixUp</strong> is more important than the sSE Block. </p>
<p>Thanks!</p>
<h2>UPDATED</h2>
<p>I added the additional study at the end of this post. <br>
* link to discussion about DataAugmentation
* links to discussion and notebook about visualization of sSE Block
* tables of my late submission results (w/o RandomErasing, w/o sSE Block, w/ common sSE Block, and w/ <strong><em>Chris's Magic</em></strong>)
* link to the source code repository on GitHub</p>
<hr>
<p>I was more surprised than pleased when the private leaderboard was uncovered. I didn't imagine such a big shake would happen.
I still don't understand why I got this place (sorry, since I had only 4 days, I was not able to do enough experiments), and I want to investigate it by late submission.</p>
<p>Although there remain some mysteries, this is my first solo gold medal. I'm so glad to share my solution as a gold place one and become Competitions Master!</p>
<p>Lastly, congrats to all the teams got the medal and participants finished this competition! And thanks to Kaggle and Bengali.AI for hosting this competition!</p>
<p><br>
Here, I'd like to share my solution's summary.</p>
<h3>Resources</h3>
<p>Kaggle notebooks and a local machine (GTX1080ti x 1)</p>
<h2>Approach</h2>
<p>At first, for tackling the unseen graphemes problem, I split train data by multi-label stratified group K-fold (regarding a grapheme as a group) and tried training models. However, I was not able to train models successfully. I suspect this is because some components are contained only in one or few grapheme(s).</p>
<p>My approach is very simple - training a model using all train data (because I want to include all components in training). This way, however, has a risk of overfitting. To prevent the generalization performance from worsening, I train a model using cosine annealing scheduling and do snapshot ensemble.</p>
<p>There is no special pre-processing nor post-processing.</p>
<h2>Model</h2>
<p>The model is based on SE-ResNext50 and not so special, but has something unique <strong>in global pooling step</strong>. The figure below is the model overview.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F473234%2F30dfe0e6ec593a3824841290c0b5df98%2F10th_place_model_overview_v2.png?generation=1584668448561081&alt=media" alt="model overview"></p>
<p>Instead of simply applying Global Average Pooling to the feature map and using its result as the <strong>common input</strong> of each component's head, I used an <strong>sSE Block</strong> and Global Average Pooling <strong>for each component</strong>.</p>
<p>If you want to know more details about sSE Block, read the original paper:</p>
<p><a href="https://arxiv.org/abs/1803.02579">Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks(Abhijit Guha Roy, et al., MICCAI 2018)</a></p>
<p><strong>NOTE</strong>: For passing 1-channel images to the pre-trained model(3-channel), I use a simple way. See <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136815#781819">the comment bellow</a> for more details.</p>
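<p>For reference, a minimal PyTorch-style sketch of this component-wise sSE pooling (the actual model was implemented in Chainer, so layer names and details here are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class SSEPool(nn.Module):
    """Spatial squeeze-and-excitation followed by global average pooling."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.gate = nn.Conv2d(in_channels, 1, kernel_size=1)   # spatial attention map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.gate(x))   # (B, 1, H, W)
        return (x * attn).mean(dim=(2, 3))   # (B, C) pooled feature

class ComponentHeads(nn.Module):
    """One sSE pooling + linear head per component (root / vowel / consonant)."""
    def __init__(self, in_channels: int, n_classes=(168, 11, 7)):
        super().__init__()
        self.pools = nn.ModuleList(SSEPool(in_channels) for _ in n_classes)
        self.fcs = nn.ModuleList(nn.Linear(in_channels, n) for n in n_classes)

    def forward(self, feature_map):
        return [fc(pool(feature_map)) for pool, fc in zip(self.pools, self.fcs)]
</code></pre>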
<h2>Training</h2>
<h3>Data Augmentation</h3>
<p>applying augmentations by the following order, implemented by <a href="https://albumentations.readthedocs.io/en/latest/">albumentations</a></p>
<p>[Original Image: 137x236]
-> Padding(to 140x245) -> Rotate(rotate limit=5, p=0.8)
-> Resize(to 128x224) -> RandomScale(scale limit=0.1, p=1.0)
-> Padding(to 146x256) -> RandomCrop(to 128x224, p=1.0)
-> <strong>RandomErasing(mask-ratio-min=0.02, mask-ratio-max=0.4, p=0.5)</strong>
-> Normalize(by channel-wise mean and std of train data)
-> [Model Input: 128x224]</p>
<p>As you see, <strong>I did not use CutMix/MixUp.</strong></p>
<h3>Loss</h3>
<p>calculating softmax cross entropy for each component, and averaging them by weights (<code>grapheme_root</code>:<code>vowel_diacritic</code>:<code>consonant_diacritic</code> = 2 : 1 : 1) </p>
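<p>A minimal sketch of that weighted loss, assuming the three logits and integer labels are available:</p>
<pre><code>import torch.nn.functional as F

def weighted_loss(logits_root, logits_vowel, logits_cons, y_root, y_vowel, y_cons):
    """Softmax cross entropy per component, averaged with weights 2 : 1 : 1."""
    l_root = F.cross_entropy(logits_root, y_root)
    l_vowel = F.cross_entropy(logits_vowel, y_vowel)
    l_cons = F.cross_entropy(logits_cons, y_cons)
    return (2 * l_root + 1 * l_vowel + 1 * l_cons) / 4
</code></pre>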
<h3>Optimization</h3>
<ul>
<li>Optimizer : SGD + NesterovAG(momentum: 0.9) with weight decay(1e-04)</li>
<li>batch size: 64</li>
<li>epoch: 105</li>
<li>learning schedule: cosine annealing (3 cycles; a rough sketch follows this list)
<ul><li>35 epoch per cycle</li>
<li>learning rate: max=1.5e-02, min=0.0</li></ul></li>
</ul>
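<p>The schedule can be reproduced roughly as follows. This is a PyTorch-flavoured sketch (the model was actually trained with Chainer), where <code>model</code>, <code>loader</code> and <code>train_one_epoch</code> are assumed helpers, and one snapshot is saved at the end of each 35-epoch cycle:</p>
<pre><code>import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1.5e-2, momentum=0.9,
                            nesterov=True, weight_decay=1e-4)
# one cosine cycle per 35 epochs, restarting at the maximum learning rate
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=35, eta_min=0.0)

for epoch in range(105):
    train_one_epoch(model, loader, optimizer)   # assumed training helper
    scheduler.step()
    if (epoch + 1) % 35 == 0:                   # end of a cycle: keep a snapshot
        torch.save(model.state_dict(), "snapshot_epoch{}.pth".format(epoch + 1))
</code></pre>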
<h2>Inference</h2>
<h3>transform</h3>
<p>applying the following transforms to images before feeding them into models</p>
<p>[Original Image: 137x236]
-> Padding(to 140x245) -> Resize(to 128x224)
-> Normalize(by channel-wise mean and std of train data)
-> [Model Input: 128x224]</p>
<h3>ensemble</h3>
<p>Obtaining the models' (35, 70, and 105 epoch) outputs using softmax by component, simply averaging them and applying argmax by component.</p>
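<p>A sketch of that averaging, assuming each snapshot returns (root, vowel, consonant) logits:</p>
<pre><code>import torch

@torch.no_grad()
def snapshot_predict(snapshots, x):
    """Average the per-component softmax over snapshots, then take the argmax."""
    probs = None
    for model in snapshots:                      # e.g. the epoch 35 / 70 / 105 models
        outs = [logit.softmax(dim=1) for logit in model(x)]
        probs = outs if probs is None else [p + o for p, o in zip(probs, outs)]
    return [p.argmax(dim=1) for p in probs]      # root, vowel, consonant predictions
</code></pre>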
<h2>Why did I get this place?</h2>
<p>I don't still understand, but have some hypotheses.</p>
<h3>RandomErasing is Good?</h3>
<p>Many participants used CutMix or MixUp. While these techniques seem effective in making new combinations of components, they do not seem effective in disentangling the interdependence of components within the original combinations, especially MixUp.</p>
<p>Because of this, a simple masking method like RandomErasing may be good.</p>
<h3>sSE Block is Good?</h3>
<p>I'm not sure whether or not the sSE Block is the best. But simply applying global average pooling does not look good to me, because I suspect each component has a spatial dependence.</p>
<p>I suppose a kind of weighted global average pooling <strong>for each component</strong> is good.</p>
<h3>Training a model by all data and Snapshot Ensemble is Good?</h3>
<p>In previous competitions, K-fold training and K-fold averaging was a very effective way for me. But in this competition it was very difficult to split K folds because of the unseen graphemes problem and rare components, so I trained a model using all train data.</p>
<p>Here are each cycle's and snapshot ensemble's scores.</p>
<p>| model | Public Score (Rank) | Private Score (Rank) |
|:-------------------:|:-------:|:--------|
| cycle 1 (35epoch) | 0.9754 (288th) | 0.9438 (23rd) |
| cycle 2 (70epoch) | 0.9821 (160th) | 0.9499 (14th) |
| cycle 3 (105epoch) | <strong><em>0.9843 (124th)</em></strong> | 0.9497 (14th) |
| snapshot ensemble | 0.9840 (131st) | <strong><em>0.9536 (10th)</em></strong> |</p>
<p>The cycle 3 model achieved the best public score, but it slightly overfitted.
Snapshot ensemble achieved the best private score, which is <strong>0.0037</strong> higher than the best single model (cycle 2).</p>
<p>I think snapshot ensemble is good when you want to train models <strong>using all train data</strong>.</p>
<p><br>
That's all. Thank you for reading!</p>
<p><br></p>
<hr>
<h2>Additional Study</h2>
<h3>RandomErasing is Good?</h3>
<ul>
<li>Discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/137029">CutMix/MixUp is <strong>Not</strong> All You Need?</a></li>
</ul>
<h3>sSE Block is Good?</h3>
<ul>
<li>Discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/137552">Key of the 10th solution? : Where sSE Block looks</a></li>
<li>Notebook: <a href="https://www.kaggle.com/ttahara/visualize-10th-place-model-where-sse-block-looks">Visualize 10th place model: Where sSE Block looks?</a></li>
</ul>
<h3>Late Submission</h3>
All the results
<ul>
<li>w/o RandomErasing : not using RandomErasing</li>
<li>w/o sSE Module : simply apply GAP to feature map extracted from SE-ResNeXt50 and feed it into each component's head (Dense -> ReLU -> Dropout -> Dense)</li>
<li>w/ <strong><em>Common</em></strong> sSE Module : apply one common sSE-Pooling to feature map and feed it into each component's head</li>
</ul>
<p>| model | cycle | Public Score | Private Score |
|:-------:|:-------------------:|:-------:|:--------|
| w/o RandomErasing | 1 (35epoch) | 0.9425 | 0.9187 |
| 〃 | 2 (70epoch) | 0.9479 | 0.9201 |
| 〃 | 3 (105epoch) | 0.9648 | 0.9345 |
| 〃 | Snapshot Ensemble | 0.9643 | 0.9373 |
| w/o sSE Module | 1 (35epoch) | 0.9746 | 0.9431 |
| 〃 | 2 (70epoch) | 0.9824 | 0.9490 |
| 〃 | 3 (105epoch) | 0.9836 | 0.9485 |
| 〃 | Snapshot Ensemble | <strong>0.9843</strong> | 0.9517 |
| w/ <strong><em>Common</em></strong> sSE Module | 1 (35epoch) | 0.9739 | 0.9433 |
| 〃 | 2 (70epoch) | 0.9809 | 0.9474 |
| 〃 | 3 (105epoch) | 0.9827 | 0.9496 |
| 〃 | Snapshot Ensemble | 0.9832 | <strong>0.9527</strong> |</p>
Compare results using Snapshot Ensemble (& <strong><em>Magical</em></strong> Post-Processing)
<p>| model | Public Score (rank) | Private Score (rank) |
|:--------:|:-------------:|:---------------|
| w/o RandomErasing | 0.9643 (1210th) | 0.9373 (66th) |
| w/o sSE Module | 0.9843 (124th) | 0.9517 (12th) |
| w/ <strong><em>Common</em></strong> sSE Module | 0.9832 (149th) | 0.9527 (12th) |
| final sub model (<strong><em>component-wise</em></strong> sSE Module) | 0.9840 (131st) | 0.9536 (10th) |
| w/ <strong>Chris's Magic</strong>(-0.6,-0.6,-0.4) | <strong><em>0.9848 (112th)</em></strong> | 0.9623 (4th) |
| w/ <strong>Chris's Magic</strong>(-0.8,-0.8,-0.7) | 0.9828 (153rd) | <strong><em>0.9653 (3rd)</em></strong> |</p>
<p>OMG! It's a really magic!</p>
<p>If you want to use this magic (it is very easy to use but effective!), check this discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">https://www.kaggle.com/c/bengaliai-cv19/discussion/136021</a></p>
<hr>
<h2>code</h2>
<p>I've published the source code on GitHub:
<a href="https://github.com/tawatawara/kaggle-bengaliai-cv19">https://github.com/tawatawara/kaggle-bengaliai-cv19</a></p>
<p>Thanks! I didn't think of that post-processing. I'll try it.</p>
<p>Thanks and congrats on 8th place! I'm really surprised at this result, too.</p>
<p>Although I still don't understand, I'm happy to get this place by very simple solution.</p>
<p>Thank you for trying to reproduce! Yes, I saved the model at epoch 35, 70, and 105.</p>
<p>For your reference, I show you my final model's training curve.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F473234%2Fc1a76e0dbf058bc81471e746027c8359%2FBengaliAI_CV2019_20200322045207.png?generation=1584820441866694&alt=media" alt="training curve"></p>
<ul>
<li>blue line (left axis) : training loss
<ul><li>It is jaggy because it represents the loss of the last iteration at each epoch.</li></ul></li>
<li>pale red line (right axis): learning rate
<ul><li>cosine anealing scheduling, lr-max=1.5e-02, lr-min=0.0</li></ul></li>
<li>There is no validation loss <strong>because of training by all train data</strong>.</li>
</ul>
<p>Regards.</p>
<p>Thanks!</p>
<p>No, I'm passing <strong><em>1-channel images</em></strong> to SE-ResNeXt50, but this is the pre-trained model(3-channel) on ImageNet.
I do this in a naive way.</p>
<p>The first conv-layer of SE-ResNeXt50 is <code>Conv2D(in_channels=3, out_channels=64, kernel_size=(7,7), stride=(2,2), pad=(3,3))</code>, whose weight has the shape <code>(64, 3, 7, 7)</code> as below. <br>
(<strong>NOTE</strong>: I think this shape depends on your framework. I'm using Chainer and ChainerCV.)</p>
<p><code>python
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; import chainercv
&gt;&gt;&gt; model = chainercv.links.SEResNet50(pretrained_model="imagenet")
&gt;&gt;&gt; model.conv1.conv.W.data.shape
(64, 3, 7, 7)
</code></p>
<p>I simply average it along <code>axis=1</code> which is corresponded to <code>in_channels</code>, then add a new-axis to this average weight.</p>
<p><code>python
&gt;&gt;&gt; w_mean = model.conv1.conv.W.data.mean(axis=1)
&gt;&gt;&gt; w_mean.shape
(64, 7, 7)
&gt;&gt;&gt; w_mean = w_mean[:, np.newaxis, :, :]
&gt;&gt;&gt; w_mean.shape
(64, 1, 7, 7)
</code></p>
<p>Finally, I use it as a weight of the first conv-layer of SE-ResNeXt50.</p>
<p><code>
&gt;&gt;&gt; model.conv1.conv.W.data = w_mean
&gt;&gt;&gt; model.conv1.conv.W.data.shape
(64, 1, 7, 7)
&gt;&gt;&gt; # # passing examples # #
&gt;&gt;&gt; x = np.random.rand(4, 1, 128, 224).astype("f")
&gt;&gt;&gt; x.shape # shape=(batch_size, channel, height, width)
(4, 1, 128, 224)
&gt;&gt;&gt; y = model(x)
&gt;&gt;&gt; y.shape # shape=(bs, output_class_num)
(4, 1000)
</code></p>
<p>I once pushed <code>publish topic</code> by mistake, sorry😅 .</p>
<p>Thanks!</p>
<p>Thanks! I'm glad that my solution gives you many impressions.</p>
<p>Thanks and congrats on your 6th place!</p>
<p>Yes, you are right. For the first two days, I validated models by validation set split in that manner and decided several settings such as parameters of data augmentation.</p>
<p>But the validation loss would not decrease any further after an early epoch (5–7). I was very troubled 😅 </p>
<p><a href="/returnofsputnik">@returnofsputnik</a> </p>
<p>As far as I know, these two have the following differences. (Sorry if there is any mistake.)</p>
<p>| property \ Method | Cutout | RandomErasing |
|:------------|:--------------------------|:---------------|
| mask size | fixed (hyper param) | <strong>random</strong> (decided from ranges of hyper param)|
| mask aspect ratio | fixed | <strong>random</strong> (decided from ranges of hyper param) |
| mask pixel value | fixed(mean value) | <strong>random</strong> (or fixed value) |
| mask location | random (if mask crosses edges, the overflown is cut ) | random (decided to not cross edges) |
| mask num| 1 or <strong>more than 1</strong> (hyperparam) | 1 |</p>
<p>if you want to know exact information, I recommend you to read the original papers and codes:</p>
<ul>
<li><code>Cutout</code>
<ul><li><a href="https://arxiv.org/abs/1708.04552">Improved Regularization of Convolutional Neural Networks with Cutout (Terrance DeVries, et al., arXiv:1708.04552) </a></li>
<li><a href="https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py">Original Implementation(GitHub) </a></li>
<li><a href="https://albumentations.readthedocs.io/en/latest/api/augmentations.html#albumentations.augmentations.transforms.Cutout">Albumentations</a></li></ul></li>
<li><code>RandomErasing</code>
<ul><li><a href="https://arxiv.org/abs/1708.04896">Random Erasing Data Augmentation(Zhun Zhong, et al., arXiv:1708.04896) </a></li>
<li><a href="https://github.com/zhunzhong07/Random-Erasing/blob/master/utils/transforms.py">Original Implementation(GitHub)</a></li></ul></li>
</ul>
<p><a href="/yl1202">@yl1202</a> Thanks! that's a very useful comment for me.</p>
<p>I think your intuition is right if RandomErasing/CutOut are <strong>always</strong> applied, but practically, these are <strong>stochastically</strong> applied (e.g. p=0.5).</p>
<p>Sorry, that is represented by <code>Padding + Resize</code> in the overview figure, but it is somewhat ambiguous.</p>
<p>I padded the original images [137x236] to 140x245, then resized them to 128x224 (and normalized them).</p>
<p>I added this information and corrected the figure. Thanks!</p> |
Bengali.AI Handwritten Grapheme Classification | 13th place brief writeup (public 107th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1.
root : vowel : consonant : sum-of-61-BCEs = 2:1:1:1/40</p>
<p>2.
well describeld in <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">Chris Writeup</a></p>
<p>Suppose you are given sample XX with class i (you do not know the correct class).
If you correctly predict class i for sample XX, you will get an additional score of <code>1 / (sample number of class i in test-set)</code>, which can be approximated by <code>1 / (sample number of class i in train-set)</code>.</p>
<p>The softmax() output for sample XX is a probability.
If the i-th softmax output for sample XX is <code>p_i</code>, the probability that sample XX is in class i is <code>p_i</code>.
If you predict class i for sample XX,
your <em>expected gain of score</em> is
<code>p_i * 1 / (sample number of class i in train-set)</code> = <code>softmax(logit)[i] / (sample number of class i in train-set)</code></p>
<p>So the best prediction for sample XX is
<code>argmax-of-i ( softmax(logit)[i] / (sample number of class i in train-set)</code> )</p>
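<p>In code, that decision rule is one line per component; a minimal numpy sketch, assuming <code>logits</code> has shape (n_samples, n_classes) and <code>class_count</code> is a numpy array of per-class sample counts in the train set:</p>
<pre><code>import numpy as np

def recall_aware_argmax(logits, class_count):
    """Pick, for each sample, the class that maximises the expected recall gain."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)              # softmax
    return (probs / class_count[None, :]).argmax(axis=1)   # weight by 1 / frequency
</code></pre>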
<p>3.
after layer0, 1, 2. same as <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/132898#763633">the post</a> I mentioned in the writeup.</p>
<p>root : vowel : consonant : sum-of-61-BCEs = 2:1:1:1/40</p>
<p>Thank you and Congrats to you too!
We followed a similar path in the competition :)</p>
<p>Not used for test, just for the auxiliary task.</p>
<p>Most of the augmentation and preprocessing code is from his implementation. (I implemented my own, but discarded it since his augmentation was better.)</p>
<p>updated 2020/03/19</p>
<hr>
<p>Thanks to the hosts and helpful discussions.
Especially, I really appreciated <a href="/hengck23">@hengck23</a>'s kind sharing.</p>
<p>I am so lucky to get gold.
I expected a random public/private split and did nothing special for unseen graphemes.</p>
<p>It seems the post-processing for the macro-recall hack is really important.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2796584%2Fd7d8dab30ab64980f8b92edbf1c29af6%2Fbengali2.png?generation=1584544815730850&alt=media" alt=""></p>
<h2>Model</h2>
<ul>
<li>modified SE-ResNeXt50. 2-stride conv is replaced by 1-stride-conv+maxblur-pool
basically following <a href="/hengck23">@hengck23</a> 's <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/123757#757358">advice</a></li>
<li>phase1: trained with stem-conv(stride=2), phase2: replace stem-conv with 3 convs(stride=1)</li>
<li>Adam + ReduceLROnPlateau, about 200 epochs</li>
<li><p>5 fold ensemble</p>
<h2>Loss</h2></li>
<li><p>CrossEntropy for root, vowel, consonant</p>
<ul><li>2 out of 5 folds are finetuned using ohem loss</li></ul></li>
<li>Multilabel BinaryCrossEntropy to classify whether each decomposed part exists or not
<ul><li>a grapheme (or root, vowel, or consonant) can be decomposed as shown in the figure. 61 possible decomposed parts exist (except the '0' part). Below is my exact code to make the multi-hot label.</li></ul></li>
</ul>
<pre><code># load data
train = pd.read_csv(C.datadir/'train.csv')
train_labels = train[['grapheme_root', 'vowel_diacritic', 'consonant_diacritic']].values.astype(np.int64)

# components label
parts_pre = pd.read_csv(C.datadir/'class_map.csv').component
parts = np.sort(np.unique(np.concatenate([list(e) for e in parts_pre])))
parts = parts[parts != '0']  # 0 has no meaning
print("parts:", parts)  # 61 parts shown

train_labels_comp = []
for grapheme in train['grapheme'].values:
    train_labels_comp.append([part in list(grapheme) for part in parts])
train_labels_comp = np.array(train_labels_comp).astype(np.int64)
print("train_labels_comp.shape", train_labels_comp.shape)
if True:  # debug
    print("train_labels_comp", train_labels_comp[0].tolist())
    print("train_labels_comp", train_labels_comp[1].tolist())
    print("train_labels_comp", train_labels_comp[2].tolist())
</code></pre>
<h2>Preprocess and Augmentation</h2>
<p>100% same as <a href="/hengck23">@hengck23</a> 's <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/132898#763633">advice</a>
Preprocess : just resizing with cv2.INTER_AREA
Augmentation : OneOf(basic augmentations) + DropBlock</p>
<h2>Postprocess</h2>
<ul>
<li>prediction for decomposed grapheme is not used</li>
<li><strong>maximize expected recall by
argmax( softmax(logits) / np.power(class_count, 1) )</strong>. note that final sub use factor of 1.15 for grapheme root(no difference for private lb).
<ul><li>there may be better threshold optimization since confusion matrix still looks asymmetry after above optimization. But I gave up further improvement due to low sample size.</li>
<li>below is plot of "macro recall scores V.S. factor n" for my model (which is not final sub). You can confirm that the peak is around factor=1. You can also find that the improvement is not huge unlike private LB.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2796584%2F26fd5a4a563451455a39333fe670aab4%2Fmodel033_recall_hack.png?generation=1584545521502984&alt=media" alt=""></li></ul></li>
</ul>
<hr>
<p><strong>What I learned during competition ( note to self )</strong>
<em>how to use pytorch lightning / how to freeze specified layer / how to ensemble / concept of snapshot ensemble / ohem loss, class-balanced loss / hengck23's model surgery method / maxblur-pool and its implementation / keep high resolution may be important for low resolution image / interpolation method for resizing is sometimes important / how to implement some augmentation methods (mixup, cutmix, gridmask) / public & private is not always split randomly / vast.ai is low-priced compared to GCP</em></p>
<p>Thank you!
I was really impressed by your creative posts!</p>
<p>Thanks!
I added details for decomposition. Please feel free to ask me questions!</p>
<p>I added details for decomposition. Please feel free to ask me questions!</p>
<p>I added details. Please feel free to ask me questions!</p>
<p>Thanks for your question!
decomposition is from <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/123757#765573">https://www.kaggle.com/c/bengaliai-cv19/discussion/123757#765573</a></p>
<p>I will add some explanation later.</p>
<p>Thank you! Unicode encoding is really informative.</p>
<p>Thank you!
Actually, I checked and used some of your SEResnext kernel codes for my first step!</p>
<p>Wow, arcface really works!
In my experiment, the contribution of decompose head was minor. The boost was around +0.0010:
CV(0.9804+0.0011), public LB(0.9751+0.0005), private LB(0.9422+0.0011).</p>
<p>I believe that adding decomposed label (which has richer information) will not harm training under any conditions even though the boost might be small.</p>
<hr>
<p>Actually, I anticipated that the decomposed head would serve as a kind of metric learning.
Combining decomposed head with arcface may yield better learning of inter-grapheme distance.</p>
<p>(Important note: I am not familiar with metric learning AT ALL. I will start learning acrface from now :) )</p> |
Bengali.AI Handwritten Grapheme Classification | 1416th place solution + alpha | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/cdeotte">@cdeotte</a> Wow, Really amazing!! I though your post process saturate if model itself predict well, but actually not. I learned how important to face with metric seriously!</p>
<p><a href="/yannmajewski">@yannmajewski</a> > It sucks that the public score doesnt reflect how the model will perform in the end
I agree, I could have done much better if public/private split was random... Anyway, in this competition I learned many things, so in total it was good experience:)</p>
<p>Congrats to all the winners and thanks for sharing your solutions!
I was shocked to drop 1,356 places, which means my model was good for seen classes but totally failed to predict unseen combinations.
By referring to the winners' solutions I fixed my models, and they finally achieved public 0.9925 / private 0.9552.
This was trained only on TPU using Keras, so it may be useful for someone who wants to use TPU and Keras.
<a href="https://www.kaggle.com/bamps53/private0-9552-tpu-keras-metric-learning">https://www.kaggle.com/bamps53/private0-9552-tpu-keras-metric-learning</a></p>
<p>| No | model | cv | public | private | rank |
|----|--------------------------------------|-------|--------|---------|--------|
| 1 | decode 1292 unique labels (original) | 0.999 | 0.9870 | 0.9035 | 1416th |
| 2 | 3 heads for each r, v, c | | 0.9891 | 0.9228 | 639th |
| 3 | 3 models for each r, v, c | | 0.9665 | 0.9383 | 56th |
| 4 | seen: No.1 unseen: No.3 | | 0.9925 | 0.9552 | 8th |
| 5 | No.4 with postprocess | | 0.9941 | 0.9704 | 2nd |</p>
<p><a href="/ipythonx">@ipythonx</a> Thanks, hope it's useful:)</p>
<p><a href="/ttahara">@ttahara</a> For me, cutmix didn't work only for consonant model. I assume if the key object is too small, cutmix doesn't make sense. But it's a different subject from how to deal with unseen classes. </p>
<p>Thanks, I was too naive to suspect public/private is not random...
I'm trying to reproduce your result too! I assume sSE is the key👀 </p> |
Bengali.AI Handwritten Grapheme Classification | 14th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Congratulations and Thank You</h2>
<p>This has been an interesting, challenging, and very educational experience for us. We would like to congratulate all the winners and everyone else who has worked so hard over the past three months on this competition. Hope that even those of you who have been burned by the shakeup at the end can come away from this competition with a lot of new things that you have learned. We certainly are.</p>
<p>We had lots of fun teaming up with each other. Keeping each other abreast of all that we were trying to do, and encouraging each other, especially in the last phase of this competition, has been an incredible experience for us. </p>
<p>We would like to thank the organizers, Bengali.ai and Kaggle, for putting this competition together. We hope that the results of this competition can be used in a meaningful and productive way for the advancement of the handwritten Bengali language. </p>
<p>Chris and Bojan would like to thank Nvidia for its ongoing generous support in terms of time and resources. </p>
<h2>Frameworks</h2>
<p>For our final solution, we have used both PyTorch and TensorFlow. We have also worked with Rapids on an alternative solution, which showed lots of promise but was not possible to incorporate with our best solution in the allotted submission time limits. </p>
<h2>Datasets</h2>
<p>Most of our models used just the competition dataset. One of our models (EfficientNet B7) trained partially with some of the external data. We had also tried using synthetic data, but that did not help our local models. It is possible that it did better on the private dataset, but we have not made any submissions with it yet.</p>
<h2>Image Resolutions</h2>
<p>Like everyone else, throughout the competition, we had experimented with different resolutions and aspect ratios. Our final solution was a combination of the following different resolutions: 128x128, 224x224, 137x236, 128x256. Our single best model used the 256x256 resolution, but the inference with it was taking too long, so we had to make a difficult decision to leave it out of the final ensemble in favor of a more diverse blend. </p>
<h2>Validation Scheme</h2>
<p>Our main validation scheme has been a 90/10 train/validation split. For most of the models, we have used the same 90/10 split, but in the latter stages of the competition, we had added a few with different splits for the sake of diversity.</p>
<h2>Network Architectures</h2>
<p>We have tried several different architectures and discovered that for the cutmix augmentation as we had implemented it, bigger networks were better, but they required a longer time to train and larger batch sizes. All but one network were trained with a single head. In hindsight, this might have seriously limited our post-processing as we had implemented it. Our final blend consisted of the following seven network architectures:</p>
<ul>
<li>EfficientNet B4</li>
<li>EfficentNet B6</li>
<li>EfficientNet B7</li>
<li>DenseNet 201</li>
<li>SeNet 154</li>
<li>2 SE-ResNext 101</li>
</ul>
<h2>Augmentations</h2>
<p>Two main augmentations that we used were CutMix and CoarseDropout. We had also used a mild shift, scale, rotate augmentations. We also used CAM CutMix, Chris describes it <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">here</a>.</p>
<h2>Training Regimen(s)</h2>
<p>Our main training consisted of one cycle with starting lr of 0.0001, and a decrease at the plateau. We would train between 100 and 200 epochs, with batch sizes between 150 and 300. After the one main cycle, we would retrain for another 70 epochs, which would usually bump our validation for another 0.001. Depending on the machine, network, and batch sizes, one epoch would take between 10 and 30 minutes to train. Our single best network achieved 0.9980 for local validation, and 0.9892 on public leaderboard. </p>
<h2>Ensembling</h2>
<p>We’ve tried to build some advanced second level models, but our best ensembling ended up being just a weighted average of our models. Since our models were trained to predict logits and not probabilities, finding a good blend was tricky. In the end, we managed to get a pretty decent set of weights, that was the best for both oof predictions as well as the public leaderboard. </p>
<h2>Post Processing</h2>
<p>This was one of the most important steps in our model, and in most of the models that finished at the top of this competition. It prevented us from dropping like a stone after the final shakeup. In a nutshell, it turns out that the recall metric is very sensitive to the distribution of the various classes. In order to exploit this insight, it was necessary to scale the predictions in such a way that the probabilities of less frequent items got proportionally higher weight. The most mathematically reasonable way of doing this is to multiply the probabilities by their inverse frequencies. In practice, we had to deviate slightly from the perfect inverse law. We had spent a lot of time in the last days trying to improve this post processing, but in the end, a very simple set of exponents worked best for our models. You can read more about this and other aspects of our solution in <a href="https://www.kaggle.com/cdeotte">Chris Deotte</a>’s <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">wonderful writeup</a>. </p>
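<p>The idea can be captured in a two-line numpy sketch; the exponent that softens the perfect inverse law is a tunable value, and the names here are illustrative rather than the team's code:</p>
<pre><code>import numpy as np

def scale_by_inverse_frequency(probs, class_freq, power=1.0):
    """Upweight rare classes before the argmax; power below 1 softens the inverse law."""
    # probs: (n_samples, n_classes), class_freq: (n_classes,) train-set frequencies
    return probs * class_freq[None, :] ** (-power)

# predictions = scale_by_inverse_frequency(probs, class_freq, power=0.7).argmax(axis=1)
</code></pre>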
<h2>Hardware</h2>
<p>The main workhorses for our model building have been two Nvidia DGX Stations which have been generously allocated by Nvidia for the purposes of Kaggle competitions. We have also relied on our own private resources, including a dual RTX Titan workstation, and several 2080 Ti workstations. </p> |
Bengali.AI Handwritten Grapheme Classification | 167th Place Solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much to the competition organizers at Kaggle, to Bengali.ai community for organizing the competition. Many thanks to everyone who discussed the competition on the Forum. With each competition, I have been learning a lot from the forum and notebook kernels. And finally congrats to all the winners and teams that participated in this competition! My solution summary is given below:</p>
<p><strong>Dataset:</strong>
• Image Size: 137x236 ( No preprocessing )
<strong>Augmentation:</strong>
• CutMix
<strong>Model:</strong>
• EfficientNet-B5 with three heads</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532763%2F9fdbc2230f1ca5639dd271f093bada56%2Fmodel_diagram.PNG?generation=1587308567296419&alt=media" alt=""></p>
<p><strong>Training:</strong>
• 5 fold Configuration
• Data split on the basis of grapheme root labels
• Loss: Cross Entropy Loss
• Optimizer: Over9000
• Scheduler: Reduce On Plateau
• Gradient Accumulation
• Batch Size 100
• Initial Learning Rate 0.03
<strong>Inference:</strong>
• Best Average recall checkpoints were used
• Simple Average of the outputs from 5 folds
• Inference kernel: <a href="https://www.kaggle.com/mohammadzunaed/efficientnet-b5-inference-kernel-pytorch?scriptVersionId=32245517">https://www.kaggle.com/mohammadzunaed/efficientnet-b5-inference-kernel-pytorch?scriptVersionId=32245517</a>
<strong>Things that did not work for me:</strong>
• Preprocessing
• GridMask, Cutout, AugMix
• Label Smoothing Criterions
• Single head instead of three heads
• Activation functions and Convolutional layers in the heads
<strong>Github Link:</strong>
<a href="https://github.com/Rafizunaed/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification.git">https://github.com/Rafizunaed/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification.git</a></p> |
Bengali.AI Handwritten Grapheme Classification | 1st place solution /w code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is the first time for me to use GAN. So, I'm not so familiar with GAN. Furthermore, since GAN learning is inherently unstable, it is difficult to analyze whether normal learning is taking place. Therefore, we did not introduce mixed precision.</p>
<ol>
<li>I was able to produce the high CVs that were posted in the Best Single Model discussion early on, and predicted that the gap between the high CVs and LBs was due to the Unseen class. At the same time, I predicted that the identification accuracy of the Unseen class would have a significant impact on the final LB.</li>
<li>I thought it was necessary to generate some form of Unseen class to identify the Unseen class.
In the beginning I created a model that was trained only with a composite image from a TTF file, and a model that mixed it with handwriting from the training data, but neither of these models performed well enough.</li>
<li>So, I thought about the strategy of generating Unseen class handwritten characters, and as a method of generation, I decided to create a CycleGAN that does style conversion from a ttf image instead of a normal GAN.</li>
<li>However, the generation of data sets by CycleGAN makes the process of verification and tuning difficult.</li>
<li>Furthermore, throughout the experiment, it was observed that the generation of handwritten characters from ttf images by CycleGAN did not result in any change in shape (texture-only transformation). This phenomenon indicates that the generated handwriting may be dissociated from the actual handwriting.</li>
<li>For these reasons, I switched to a strategy of generating TTF-style images from handwritten text at the expense of time at submission.</li>
</ol>
<p>The result of this strategy is the current solution.</p>
<p>The CycleGAN training code is now available. If you have any bugs or questions, please don't hesitate to ask.</p>
<p>Yes, I used augmentation policy that is optimized to SVHN Dataset.</p>
<p>When I get auroc score, Yes. But when create submission model, I use all data.</p>
<p>It is torch.nn.BCEWithLogitsLoss.</p>
<p>This is my first time working on zero shot learning. Although the performance has been quite good for recent zero shot learning, it is still insufficient for application to real tasks. Actually, the local evaluation of the Unseen class was about 80% accuracy. It will be useless unless you devise another idea.</p>
<p>The training dataset used in this competition contained 1295 types of Grapheme. However, there can be up to 168 * 11 * 8 types of Grapheme in this competition. Graphemes that are not included in the training dataset are called Unseen classes. This designation follows <a href="/haqishen">@haqishen</a>'s post. <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134035">https://www.kaggle.com/c/bengaliai-cv19/discussion/134035</a></p>
<p>I understand that looking at what the other generator produces is important to analyze this problem. However, in this competition, no snapshot was created because the opposite generator was not needed. I'm going to do some training to analyze this problem.</p>
<p>First</p>
<p>That interpretation is almost correct. However, if the output of the pipeline when it is not "seen" is the "seen" class, the inference is performed again in the pipeline when it is "seen".</p>
<p>Second</p>
<p>ttf is an abbreviation of true type font and is a file that defines font images to be output on a computer. This time, I downloaded several free fonts and synthesized all possible Grapheme font images.</p>
<h1>1st Place Solution --- Cyclegan Based Zero Shot Learning</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F5231a0ab7a0bf93611fb6fd3d72c6295%2F1.png?generation=1584591840196431&alt=media" alt=""></p>
<h2>Classes and Labeling</h2>
<p>I did not make inferences about the parts of the character. In other words, all my models classify against the 14784 (168 * 11 * 8) classes.
Therefore, I needed to know which combination of labels makes up which Grapheme.</p>
<p>I predicted the relationship between the combination of labels and Grapheme from the label of the given train data, and created the following code.</p>
<pre><code>class_map = pd.read_csv('../input/bengaliai-cv19/class_map.csv')
grapheme_root = class_map[class_map['component_type'] == 'grapheme_root']
vowel_diacritic = class_map[class_map['component_type'] == 'vowel_diacritic']
consonant_diacritic = class_map[class_map['component_type'] == 'consonant_diacritic']
grapheme_root_list = grapheme_root['component'].tolist()
vowel_diacritic_list = vowel_diacritic['component'].tolist()
consonant_diacritic_list = consonant_diacritic['component'].tolist()

def label_to_grapheme(grapheme_root, vowel_diacritic, consonant_diacritic):
    if consonant_diacritic == 0:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root]
        else:
            return grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 1:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic] + consonant_diacritic_list[consonant_diacritic]
    elif consonant_diacritic == 2:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[consonant_diacritic] + grapheme_root_list[grapheme_root]
        else:
            return consonant_diacritic_list[consonant_diacritic] + grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 3:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[consonant_diacritic][:2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic][1:]
        else:
            return consonant_diacritic_list[consonant_diacritic][:2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic][1:] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 4:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            if grapheme_root == 123 and vowel_diacritic == 1:
                return grapheme_root_list[grapheme_root] + '\u200d' + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 5:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 6:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 7:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[2][::-1]
        else:
            return consonant_diacritic_list[2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[2][::-1] + vowel_diacritic_list[vowel_diacritic]
</code></pre>
<p>The generation of synthetic data and the conversion of the prediction results into three components are based on this correspondence.</p>
<h2>Split Data for Unseen Cross Validation</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F2fd56df85b7442295bc605b10053aac9%2F2.png?generation=1584591905364753&alt=media" alt=""></p>
<p>All are randomly selected and split. Ashamedly, I split the data without thinking, so a non-existent grapheme root class was created at evaluation time, and a proper evaluation could not be performed.
However, since the training cost was very high, it could not easily be recreated, and all local CVs were evaluated as they were.
😭</p>
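<p>An illustrative sketch (not the code actually used) of the kind of sanity check that would have caught this; column names follow the competition's train.csv:</p>
<p><code>python=
import numpy as np
import pandas as pd

# Hold out whole grapheme classes as the "unseen" validation set, then check
# whether every grapheme root still has samples there (a "non-existent" root
# class in the evaluation split breaks the macro recall).
train = pd.read_csv("train.csv")
rng = np.random.default_rng(0)

graphemes = train["grapheme"].unique()
unseen = set(rng.choice(graphemes, size=int(0.1 * len(graphemes)), replace=False))
val_mask = train["grapheme"].isin(unseen)
train_df, val_df = train[~val_mask], train[val_mask]

missing_roots = sorted(set(train["grapheme_root"]) - set(val_df["grapheme_root"]))
if missing_roots:
    print("grapheme roots absent from the unseen split:", missing_roots)
</code></p>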
<h2>(1) Out of Distribution Detection Model</h2>
<p>This model distinguishes whether an input image belongs to a Seen class or an Unseen class.
It outputs a confidence for each of the 1295 seen classes independently. If all confidences are low, the image is judged as Unseen; if at least one confidence is high, it is judged as a Seen class.</p>
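<p>A minimal inference sketch of this decision rule (illustrative only; the confidence threshold value is an assumption and is not stated here):</p>
<p><code>python=
import torch

@torch.no_grad()
def is_unseen(model, image, threshold=0.5):
    # One logit per seen class (1295 of them); sigmoid gives independent
    # confidences. If none of them is high enough, call the image Unseen.
    conf = model(image).sigmoid()      # shape: (1, 1295)
    return bool(conf.max() &lt; threshold)
</code></p>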
<ul>
<li>No resize and crop</li>
<li>Preprocess --- AutoAugment Policy for SVHN (<a href="https://github.com/DeepVoltaire/AutoAugment">https://github.com/DeepVoltaire/AutoAugment</a>)</li>
<li>CNN --- EfficientNet-b7(ImageNet Pretrained)</li>
<li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> (default values)</li>
<li>LRScheduler --- WarmUpAndLinearDecay
<code>python=
def warmup_linear_decay(step):
if step &lt; WARM_UP_STEP:
return step/WARM_UP_STEP
else:
return (train_steps-step)/(train_steps-WARM_UP_STEP)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
</code></li>
<li>Output Layer --- LayerNorm-FC(2560 -> 1295)-BCEWithLogitsLoss + OHEM-style hard-example selection
<code>python=
out = model(image)
sig_out = out.sigmoid()
loss = criterion(out, one_hot_label)
train_loss_position = (1-one_hot_label)*(sig_out.detach() &gt; 0.1) + one_hot_label
loss = ((loss*train_loss_position).sum(dim=1)/train_loss_position.sum(dim=1)).mean()
</code></li>
<li>Epoch --- 200</li>
<li>Batch size --- 32</li>
<li>dataset split --- 1:0 (no validation holdout)</li>
<li>single fold</li>
<li>Machine Resource --- 1 Tesla V100 6 days</li>
</ul>
<p>| 1168 class Local CV(auroc) |
| -------- |
| 0.9967 |</p>
<h2>(2) Seen Class Model</h2>
<p>This model classifies 1295 classes included in the training data.</p>
<ul>
<li>No resize and crop</li>
<li>Preprocess --- AutoAugment Policy for SVHN (<a href="https://github.com/DeepVoltaire/AutoAugment">https://github.com/DeepVoltaire/AutoAugment</a>)</li>
<li>CNN --- EfficientNet-b7(ImageNet Pretrained)</li>
<li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> (default values)</li>
<li>LRScheduler --- WarmUpAndLinearDecay
<code>python=
def warmup_linear_decay(step):
if step &lt; WARM_UP_STEP:
return step/WARM_UP_STEP
else:
return (train_steps-step)/(train_steps-WARM_UP_STEP)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
</code></li>
<li>Output Layer --- LayerNorm-FC(2560 -> 14784)-SoftmaxCrossEntropy</li>
<li>Epoch --- 200</li>
<li>Batch size --- 32</li>
<li>dataset split --- 9:1 random split</li>
<li>single fold</li>
<li>Machine Resource --- 1 Tesla V100 6 days</li>
</ul>
<p>| CV Score | LB Score |
| -------- | -------- |
| 0.9985 | 0.9874 |</p>
<p>A large gap between CV and LB is expected because of the Unseen classes present in the test set.</p>
<h2>(3) Unseen Class Model</h2>
<p>Training this model is done in two stages. The first stage trains a classifier on images synthesized from ttf files. The second stage trains a generator that converts handwritten characters into images that look like the synthesized data. To do this, first select one ttf and generate a synthetic dataset.
The image size of the synthesized data was 236x137, the same as the training data. Using Pillow and raqm, I drew each grapheme at four font sizes, <code>[84, 96, 108, 120]</code>, giving a dataset of 59136 images.</p>
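<p>A minimal rendering sketch (illustrative only; the font path, centering and exact Pillow calls are assumptions, <code>all_graphemes</code> refers to the enumeration sketch earlier, and Pillow must be built with libraqm so the diacritics are shaped correctly):</p>
<p><code>python=
from PIL import Image, ImageDraw, ImageFont

def render_grapheme(text, font_path="kalpurush.ttf", font_size=108, canvas=(236, 137)):
    # layout_engine=LAYOUT_RAQM asks Pillow to use raqm for complex text shaping
    # (older Pillow versions; newer ones use ImageFont.Layout.RAQM instead)
    font = ImageFont.truetype(font_path, font_size, layout_engine=ImageFont.LAYOUT_RAQM)
    img = Image.new("L", canvas, color=255)            # white, grayscale canvas
    draw = ImageDraw.Draw(img)
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = (canvas[0] - (right - left)) // 2 - left       # roughly centre the grapheme
    y = (canvas[1] - (bottom - top)) // 2 - top
    draw.text((x, y), text, font=font, fill=0)
    return img

# one image per (grapheme, font size) pair: 14784 graphemes x 4 sizes = 59136 images
samples = [render_grapheme(g, font_size=s) for g in all_graphemes[:3] for s in (84, 96, 108, 120)]
</code></p>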
<h3>Font Classifier Pre-training</h3>
<ul>
<li>crop and resize to 224x224</li>
<li>preprocess --- random affine, random rotate, random crop, cutout</li>
<li>CNN --- EfficientNet-b0</li>
<li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> (default values)</li>
<li>LRScheduler --- LinearDecay
<code>python=
WARM_UP_STEP = train_steps*0.5
def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return 1.0
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
</code></li>
<li>Output Layer --- LayerNorm-FC(2560 -> 14784)-SoftmaxCrossEntropy</li>
<li>Epoch --- 60</li>
<li>Batch size --- 32</li>
<li>Machine Resource --- 1 Tesla V100 4 hours</li>
</ul>
<h3>CycleGan Training</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fac780d7f0e17487c7ac2338346da5e93%2F3.png?generation=1584591955177298&alt=media" alt=""></p>
<ul>
<li>crop and resize to 224x224</li>
<li>preprocess --- random affine, random rotate, random crop (smaller than pre-training one), and no cutout</li>
<li>Model --- Based on <a href="https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix">https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix</a> + Pre-trained Font Classifier(fixed parameter and eval mode)</li>
<li>Optimizer --- <code>torch.optim.Adam(params, lr=0.0002, betas=(0.5, 0.999))</code></li>
<li>LRScheduler --- LinearDecay
<code>python=
WARM_UP_STEP = train_steps*0.5
def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return 1.0
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)
generator_scheduler = torch.optim.lr_scheduler.LambdaLR(generator_optimizer, warmup_linear_decay)
discriminator_scheduler = torch.optim.lr_scheduler.LambdaLR(discriminator_optimizer, warmup_linear_decay)
</code></li>
<li>Epoch --- 40</li>
<li>Batch size --- 32</li>
<li>Machine Resource --- 4 Tesla V100 2.5 days</li>
<li>HyperParameter --- lambda_consistency=10, lambda_cls=1.0~5.0</li>
</ul>
<p>Please see the CycleGAN paper for the basic method. The difference from a normal CycleGAN is that, when calculating the loss of the handwritten2font generator, the supervised loss of the pre-trained font classifier is added on top of the discriminator-based adversarial loss.</p>
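<p>A rough sketch of that modification (illustrative only, based on the description above and the default losses of the referenced pix2pix/CycleGAN repository; only the handwritten-to-font direction is shown):</p>
<p><code>python=
import torch
import torch.nn as nn
import torch.nn.functional as F

gan_criterion = nn.MSELoss()           # LSGAN objective, as in the reference repo
cls_criterion = nn.CrossEntropyLoss()  # supervised loss of the frozen font classifier

def h2f_generator_loss(fake_font, rec_handwritten, real_handwritten,
                       discriminator, font_classifier, class_label,
                       lambda_consistency=10.0, lambda_cls=4.0):
    # adversarial term: the generated image should fool the font-domain discriminator
    pred_fake = discriminator(fake_font)
    loss_gan = gan_criterion(pred_fake, torch.ones_like(pred_fake))
    # cycle-consistency term: the reconstruction of the handwritten input should match it
    loss_cyc = F.l1_loss(rec_handwritten, real_handwritten)
    # extra term: the frozen, pre-trained font classifier should still recognise
    # the generated font-like image as the correct grapheme class
    loss_cls = cls_criterion(font_classifier(fake_font), class_label)
    return loss_gan + lambda_consistency * loss_cyc + lambda_cls * loss_cls
</code></p>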
<p>The CV of the configuration that gave the highest score (lambda_cls = 4.0) is as follows. To avoid the influence of non-existent grapheme root classes, the macro average recall is calculated after excluding those classes, rather than counting a non-existent class's recall as 0.0 or 1.0.</p>
<p>| Local CV Score for Unseen Class | Local CV Score for Seen Class |
| ------------------------- | --- |
| 0.8804 | 0.9377 |</p>
<p>After the hyperparameter tuning, I trained models on two other ttf fonts without evaluating local CV. They gave a higher LB score than the model trained on the first ttf, so the CV score of the configuration used for the private submission may be higher than the one reported above.</p>
<p>For submission I created two models using different ttf fonts, and inference was done with an ensemble of these two models.</p>
<ul>
<li><a href="https://www.omicronlab.com/download/fonts/kalpurush.ttf">https://www.omicronlab.com/download/fonts/kalpurush.ttf</a></li>
<li><a href="https://www.omicronlab.com/download/fonts/NikoshLightBan.ttf">https://www.omicronlab.com/download/fonts/NikoshLightBan.ttf</a></li>
</ul>
<p>The CycleGAN training code will be released once it has been modified to run on a single GPU; see the Code section below.</p>
<h2>Strange Points</h2>
<p>To be honest, I can't fully justify why this method improves generalization to the Unseen classes.
I expected that generating images very similar to the synthetic data, driven by the consistency loss and the discriminator, would be what gives generalization to the Unseen classes. However, as the hyperparameters were tuned, the generalization performance improved even as the generated images became less natural.</p>
<p>| lambda_cls | Input Image | Generated Image | True Synthesis Image | Local CV |
| ---------- | ----------- | --------------- | -------------------- | -------- |
| 1.0 | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F711934da5f230beddb44a8739df91485%2Fa.png?generation=1584592112177362&alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fe191a2b21f412a494f069c1f20a658bb%2Fc.png?generation=1584592112054776&alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Ff8a1a40285305ea497c1602d149c4177%2Fb.png?generation=1584592116047842&alt=media" alt="">| 0.8618 |
| 4.0 | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fb3f573d385e3b9469d0ce6324e51b480%2Fe.png?generation=1584592113807041&alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F7a2f830547a3a2b8f5a4daec2e0382f3%2Fg.png?generation=1584592111614129&alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fb40309c1b75ade826b394b326fba9201%2Ff.png?generation=1584592114057138&alt=media" alt=""> | <strong>0.8804</strong> |</p>
<h2>Code</h2>
<p>The reduced version of CycleGAN code is now available.</p>
<ol>
<li><a href="https://www.kaggle.com/linshokaku/cyclegan-classifier">https://www.kaggle.com/linshokaku/cyclegan-classifier</a>
Font image classifier training code (7h)</li>
<li><a href="https://www.kaggle.com/linshokaku/cyclegan-training">https://www.kaggle.com/linshokaku/cyclegan-training</a>
Training code for a generator that converts a handwritten image to a font image (8h)</li>
<li><a href="https://www.kaggle.com/linshokaku/cyclegan-submission">https://www.kaggle.com/linshokaku/cyclegan-submission</a>
Submission code</li>
</ol>
<p>Running notebooks 1 to 3 in order is equivalent to the method I used in this competition.
Because of execution-time and hardware limits, the batch size, number of training epochs, etc. are set very small there, but if you run it with the hyperparameters given above, you should get results comparable to mine.
I hope you find it helpful.</p>
<p>In order to calculate the OHEM-style loss we need the loss per element. Since PyTorch averages over all elements by default, we need to pass 'none' to the 'reduction' argument of BCEWithLogitsLoss.</p>
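<p>A self-contained sketch of that point (shapes are illustrative):</p>
<p><code>python=
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(reduction='none')   # keep the per-element losses

out = torch.randn(32, 1295)                          # dummy logits: batch of 32, 1295 seen classes
one_hot_label = torch.zeros(32, 1295)
one_hot_label[torch.arange(32), torch.randint(0, 1295, (32,))] = 1.0

sig_out = out.sigmoid()
loss = criterion(out, one_hot_label)                 # shape (32, 1295), no averaging yet
# keep the positive class plus the "hard" negatives whose confidence exceeds 0.1
train_loss_position = (1 - one_hot_label) * (sig_out.detach() &gt; 0.1) + one_hot_label
loss = ((loss * train_loss_position).sum(dim=1) / train_loss_position.sum(dim=1)).mean()
</code></p>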
<p><a href="/drhabib">@drhabib</a>
When I tried an approach that classifies the grapheme directly, I realized that some samples had the same component labels but different graphemes. This made it difficult to build a reasonable mapping between graphemes and labels, so I solved it by adding an extra class to the consonant diacritics.
This issue was raised in the discussion forum in the middle of the competition and was eventually answered in <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134258">https://www.kaggle.com/c/bengaliai-cv19/discussion/134258</a>. I was fortunate that its content was consistent with my solution.</p>
<p><strong>About seen/unseen detection</strong>
Since inference time was not a concern at the experimental stage, the model for detecting the Unseen class was trained separately from the standard CNN models.
I also thought that improving the model that recognizes the Unseen class was more important than detecting it, so I deferred those experiments.
I first learned about the ArcFace technique from the solution write-ups posted after the competition. It seems to have been used in some of the top solutions, but was it mentioned in the paper or in a post that it works well for seen/unseen detection?</p>
<p><strong>About unseen model</strong>
I used efn-b0 as the classifier for synthesized images because I thought it had enough capacity to identify synthesized images, which have far fewer variations than handwritten characters. Also, CycleGAN uses a lot of memory because it trains multiple models at the same time, so the small model was chosen because of constraints during training rather than during inference.
I wanted to reduce the cost of tuning and bug fixing by reusing as much of the publicly available implementation as possible, in order to score efficiently with limited time and resources. Modifying the model was therefore a low priority, and in the end I made no changes to the published model.
As for whether improving the generator or the classifier matters more, it is definitely the generator. I feel that 40 epochs of training was not enough; with 1.5 to 2 times the training time for CycleGAN, the accuracy might have improved. That would have required optimizing and speeding up the training code, and not managing to do so is one of my regrets from this competition.</p>
<p>I wasn't able to run many experiments, so I can't answer everything with direct comparisons, but I hope this is helpful.</p>